magnetic fields in diffuse media are extremely important for many astrophysical phenomena (see ). magnetic fields matter for star formation, the propagation of cosmic rays, the transport of heat, etc. the observational study of magnetic fields, however, is limited by the ability of the available techniques to trace them. for instance, in spite of both the progress of observational techniques and instrumentation for measuring emission (see ) and absorption polarization and the recent progress in understanding grain alignment (see for recent reviews), reliable high resolution tracing of magnetic fields is not readily available and requires a lot of resources (e.g. the planck example). in this situation it is very advantageous to have alternative and synergetic ways of probing magnetic fields. astrophysical magnetic fields are turbulent, and observations testify that turbulence is ubiquitous in astrophysics. the magnetic field makes turbulence anisotropic, with turbulent eddies elongated along the magnetic field (see , for a review). as a result, observed velocity correlations are expected to be elongated along the magnetic field, and this property was demonstrated with synthetic observations ( , henceforth el05). later the study of anisotropies was performed using principal component analysis (pca) and applied to observations. there is another way to employ the properties of mhd turbulence in order to study magnetic fields. the aforementioned turbulent eddies aligned with the magnetic field entail that the velocity gradients should have larger values when calculated perpendicular to the magnetic field. this property of magnetic turbulence was employed in ( , henceforth gl16), which proposed tracing magnetic fields using velocity centroid gradients (vcgs). the technique was elaborated in ( , henceforth yl17), and the new way of magnetic field tracing was successfully compared with observations of planck polarization. these papers introduced a new way of magnetic field studies with spectroscopic data. velocity centroids present one way of representing velocities. however, centroids have a contribution not only from velocity but also from density (see el05), and the density fluctuations are not as well aligned with the magnetic field as the velocity fluctuations (see ). at the same time, the analytical study in revealed that channel maps are sensitive only to velocity fluctuations if the corresponding turbulent density fluctuations are dominated by large scale contributions. this motivates our present study to explore the ability of velocity channel gradients (vchgs) to trace magnetic fields. in what follows, we discuss in [sec:theory] the theoretical motivation of this work. [sec:numerics] discusses the numerical methods, the simulations and the ways of analysis. we explore the gradients in the channel maps from turbulent velocities in [sec:gradients-in-fsa]. in [sec:fluctuations] we compare the performance of density fluctuations and velocities from channel maps. we examine the performance of channel map gradients and correlation anisotropies in [sec:anisotropy]. we explore the reduced centroid gradient in [sec:reducedcentroid]. we test our method on observations in [sec:obs]. we discuss our results in [sec:discussion], and conclude in [sec:conclusion].

mhd turbulence theory is an old subject that has been boosted recently by the ability to perform high resolution 3d numerical simulations.
before that there was no way of testing theoretical constructions, and many competing theories describing mhd turbulence could co-exist. for instance, the original studies of alfvenic turbulence (see ) were based on a hypothetical model of isotropic mhd turbulence. some of the later studies (see ) pointed to the anisotropic nature of the mhd cascade. however, the modern theory of mhd turbulence originates from the prophetic work of ( , henceforth gs95). given its originally rather lukewarm acceptance by the mhd turbulence pundits, this theory nevertheless was supported by further theoretical and numerical studies ( , henceforth lv99, , see for a review) that extended the theory and provided its rigorous testing. our present study is based on the modern understanding of the mhd turbulence cascade and the statistical properties of turbulence confirmed numerically. the original gs95 theory is the incompressible mhd theory, i.e. the theory of alfvenic turbulence and of the turbulence of pseudo-alfven waves. the latter represent the fundamental slow waves in the incompressible limit. the third fundamental component of mhd perturbations, namely the fast waves, does not exist in the incompressible limit where the sound velocity is infinitely high. the incompressible mhd turbulence was successfully tested in and . the numerical studies of the applicability of the gs95 ideas and of their generalizations to the compressible case (see ) were first performed in . they testified that the gs95 incompressible scaling is applicable to the alfven and slow modes, while revealing that the fast modes have an isotropic spectrum. in fact, these studies revealed that the fundamental modes can be considered separately, as for non-relativistic mhd turbulence the coupling between the different types of fundamental modes is a subdominant effect. therefore, in what follows we can consider three distinct cascades, namely the cascades of alfven, slow and fast modes. numerical simulations show that the alfven and slow modes are the dominant components of the cascade, and those correspond to anisotropic perturbations aligned with the magnetic field. this provides the theoretical justification for relating gradients and magnetic fields in the observational studies. in the subalfvenic regime, i.e. for the injection velocity being less than the alfven velocity, the alfven modes initially evolve by increasing the perpendicular wavenumber while the parallel wavenumber stays the same (see lv99, ). the increase of the perpendicular wavenumber makes the alfvenic wavevectors more and more perpendicular to the magnetic field. therefore the gradients of the magnetic field and velocity get aligned perpendicular to the magnetic-field direction. the increase of the perpendicular wavenumbers at fixed parallel wavenumber increases the efficiency of the energy cascading. when the fraction of energy cascaded per wave period becomes of the order of unity, the gs95 cascade takes over. this happens at the scale $l_{\rm trans} \approx L M_A^2$, where $L$ is the turbulence injection scale and $M_A \equiv v_L/v_A$ is the alfven mach number (see lv99, ). in the gs95 cascade both the parallel and perpendicular wavenumbers of alfvenic perturbations increase. to quantify this one should adopt the system of reference aligned with the local magnetic field.
in this system of reference the so-called critical balance condition should be satisfied, which states that the time of the interaction of the oppositely moving wavepackets/eddies, $\ell_\parallel / v_A$, where $\ell_\parallel$ is the parallel-to-magnetic-field scale of the wavepacket/eddy, is equal to the perpendicular shearing time of the wavepacket/eddy, $\ell_\perp / v_\ell$, where $\ell_\perp$ is the perpendicular scale of the wavepacket/eddy and $v_\ell$ is the turbulent velocity associated with this scale. this is how the cascade proceeds in the strong regime, with the wavepackets getting more and more elongated according to (lv99):
$$\ell_\parallel \approx L \left( \frac{\ell_\perp}{L} \right)^{2/3} M_A^{-4/3}, \qquad {\rm [lpar]}$$
which testifies that for $\ell_\perp$ much smaller than the injection scale $L$ the perturbations are strongly elongated and aligned with the local magnetic field. therefore the gradients of the velocities are expected to be aligned with the _local_ magnetic field, and by measuring them one can trace the local variations of the magnetic field within a turbulent volume. within the gs95 picture and its generalization to compressible media (gs95, ) the slow modes are slaved by the alfven modes. indeed, the alfven modes shear and cascade the slow mode perturbations, a process that is confirmed by numerical simulations. thus, in agreement with numerical studies, the anisotropic scaling given by eq. ([lpar]) is valid for the slow mode eddies. as a result, both the gradients of alfven modes and of slow modes are expected to be perpendicular to the local direction of the magnetic field. this is the key idea behind tracing magnetic-field directions with velocity gradients. while velocity gradients are not directly available from astrophysical observations of diffuse media, other measures can be constructed using observational data. velocity channel maps can be constructed using spectroscopic observations of doppler shifted lines. the statistics of these maps has been described in ( , henceforth lp00) for the optically thin data and in for the observations in the presence of absorption. in what follows we concentrate on the optically thin case and only mention some of the possible effects of optically thick data. we note that when we discuss thin and thick slices of data, we mean not the effects of absorption but the thickness of the channel maps. the minimal thickness of the latter is determined by the spectral resolution of the instrument, but it can be increased by integrating the spectroscopic data over a larger velocity interval. an important prediction in lp00 is that velocity caustics create fluctuations of the intensity of the channel maps, and the relative importance of the velocity and density fluctuations changes with the thickness of the channel maps. in particular, lp00 identified a regime of ``thin velocity slices'' and found that in this regime the intensity fluctuations in the slice are dominated by the velocity fluctuations, provided that the density fluctuations have a 3d spectrum that is steep, i.e. most of the fluctuating energy is concentrated at the large scales. the aforementioned statement about the steep density spectrum can be expressed in terms of the 3d power spectrum $P(k) \propto k^{-\alpha}$. in terms of this spectrum the kolmogorov cascade corresponds to $\alpha = 11/3$ and it is steep. the borderline spectrum is $k^{-3}$, with turbulence having a spectrum shallower than $k^{-3}$ containing more energy at the small scales and therefore being shallow.
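whether a given density field is steep or shallow in the above sense can be checked directly on simulation cubes. the following python sketch (an illustration only, not part of the analysis pipeline of this paper; the function name and the fitting range are assumptions) shell-averages the 3d power spectrum of a density cube and fits the slope $\alpha$ in $P(k) \propto k^{-\alpha}$, so that $\alpha > 3$ corresponds to a steep spectrum and $\alpha < 3$ to a shallow one.

```python
import numpy as np

def spectral_slope(density, k_fit=(4, 32)):
    """Estimate the isotropic 3D power-spectrum slope of a periodic cube.

    Returns alpha such that P(k) ~ k**(-alpha); alpha > 3 means the
    spectrum is 'steep' in the LP00 sense, alpha < 3 means 'shallow'.
    """
    n = density.shape[0]                       # assume a cubic box n^3
    dk = np.fft.fftn(density - density.mean())
    power = np.abs(dk) ** 2

    # radial wavenumber of every Fourier cell
    freq = np.fft.fftfreq(n) * n
    kx, ky, kz = np.meshgrid(freq, freq, freq, indexing="ij")
    kr = np.sqrt(kx**2 + ky**2 + kz**2)

    # shell-average the 3D power to get P(k)
    kbins = np.arange(1, n // 2)
    pk = np.array([power[(kr >= k) & (kr < k + 1)].mean() for k in kbins])

    # least-squares fit of log P(k) vs log k over an assumed inertial range
    lo, hi = k_fit
    sel = (kbins >= lo) & (kbins <= hi)
    slope, _ = np.polyfit(np.log(kbins[sel]), np.log(pk[sel]), 1)
    return -slope

# usage: alpha = spectral_slope(rho); "steep" if alpha > 3 else "shallow"
```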
for subsonic flows the density spectra in mhd turbulence are steep (see ). thus for such flows the intensity fluctuations in thin velocity slices of position-position-velocity (ppv) data are influenced only by the turbulent velocity statistics. naturally, the thermal broadening interferes with the minimal slice of the ppv for which fluctuations can be studied. this means that for studying thin slices of subsonic flows one should use heavier species than those of the main hydrogen astrophysical flow. the heavier species, usually referred to in the interstellar literature as ``metals'', can be used. if the density spectrum is shallow, i.e. shallower than $k^{-3}$, the contribution of density and velocities to the statistics of the intensity fluctuations of the velocity slices is as evaluated in lp00: for thin slices both velocity and density are important, while the contribution of velocity decreases as the slice thickness increases. when the integration is performed over the entire line, only density fluctuations determine the fluctuations of the resulting intensity distribution. the anisotropy of mhd turbulence results in the anisotropy of the intensity distribution within the ppv slices. this was first demonstrated with the observational data in , where a new technique of studying the magnetic field direction was suggested. later a systematic numerical study of the change of the anisotropy of intensity fluctuations in ppv slices was performed in . an analytical study of the effect of the alfven, slow and fast modes on the anisotropy of intensity fluctuations in the ppv slices was performed in ( , henceforth klp16). this study opens an avenue for separating the contributions of the different modes using the anisotropies of the measured correlation functions of the slice intensities. in view of the above, it is important to perform the study of the gradients of intensities within ppv slices. in terms of separating the compressible and incompressible components as in klp16, such a study provides an additional test that the anisotropies are actually caused by turbulence. by varying the thickness of the slice one can study the variation of gradients as the relative contribution of density and velocity changes. this may be important as the density and velocity, in general, have different statistics and the fluctuations of these fields are aligned with the magnetic field to a different degree. for instance, strongly supersonic flows demonstrate isotropic spectra of density. in addition, for studies of galactic flows the regular shear opens a way to study different turbulent regions separately, and therefore different channel maps can be associated with different locations within the galactic volume. in what follows the advancements in the understanding of the theory of ppv anisotropies in klp16 guide us in studying gradients in the ppv velocity slices. note that the criterion for a velocity slice being thin or thick, as given in lp00, is as follows: the velocity slice is thin if the square root of the turbulent velocity dispersion on the scales at which the slice is being studied is _greater_ than the velocity thickness of the slice, i.e.
$$\sqrt{D_z(R)} > \delta v, \qquad {\rm [criterion]}$$
where $D_z(R)$ is the structure function of the line-of-sight turbulent velocity, $\delta v$ is the velocity width of the slice, and $R$ is the separation of the correlating points over the plane of the sky, i.e. the pp separation. in the following we characterize the channel thickness by whether the ratio $\delta v / \sqrt{D_z(R)}$ passes through unity.
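the thin/thick criterion above is straightforward to evaluate on simulated data. the sketch below is a minimal python illustration, assuming a line-of-sight velocity cube vz and a channel width dv_channel; these names, the monte-carlo estimator and the periodic-box assumption are choices made for the example, not taken from lp00 or from this paper.

```python
import numpy as np

def vz_structure_function(vz, R, n_pairs=200000, rng=None):
    """Monte-Carlo estimate of D_z(R): mean squared difference of the
    line-of-sight velocity between points separated by R in the POS plane."""
    rng = np.random.default_rng(rng)
    nx, ny, nz = vz.shape                       # axes: (x, y, los)
    x = rng.integers(0, nx, n_pairs)
    y = rng.integers(0, ny, n_pairs)
    z = rng.integers(0, nz, n_pairs)
    phi = rng.uniform(0.0, 2.0 * np.pi, n_pairs)
    x2 = (x + np.rint(R * np.cos(phi)).astype(int)) % nx    # periodic box
    y2 = (y + np.rint(R * np.sin(phi)).astype(int)) % ny
    dv = vz[x2, y2, z] - vz[x, y, z]
    return np.mean(dv ** 2)

def channel_is_thin(vz, dv_channel, R):
    """LP00-style check: the slice of width dv_channel is 'thin' at scale R
    if dv_channel < sqrt(D_z(R))."""
    return dv_channel < np.sqrt(vz_structure_function(vz, R))
```

whether a channel counts as thin therefore depends on the plane-of-sky separation $R$, i.e. on the size of the eddies (or of the sub-blocks) being studied, which is the point made repeatedly below.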
the numerical data were obtained from simulations using a single fluid, operator-split, staggered-grid mhd eulerian code, zeus-mp/3d, to set up a three-dimensional, uniform, isothermal turbulent medium. periodic boundary conditions are applied to emulate a part of an interstellar cloud. solenoidal turbulence injection was employed. our simulations cover various alfvenic mach numbers $M_A = v_{\rm inj}/v_A$ and sonic mach numbers $M_s = v_{\rm inj}/v_s$, where $v_{\rm inj}$ is the injection velocity, while $v_A$ and $v_s$ are the alfven and sonic velocities respectively; the runs are listed in table [tab:simulationparameters]. the domain $\beta = 2 M_A^2 / M_s^2 < 1$ corresponds to simulations of plasmas with the magnetic pressure larger than the thermal pressure, i.e. plasmas with low $\beta$, while the domain $\beta > 1$ corresponds to pressure dominated plasmas with high $\beta$. to investigate the detailed structure of the gradients in the different regimes, we employ the wave mode decomposition method in to extract the alfven, slow and fast modes from the velocity data. the corresponding equations determining the basis for the decomposition into modes are:
$$\hat{\zeta}_f \propto \left(1 + \tfrac{\beta}{2} + \sqrt{D}\right) k_\perp \hat{k}_\perp + \left(-1 + \tfrac{\beta}{2} + \sqrt{D}\right) k_\parallel \hat{k}_\parallel,$$
$$\hat{\zeta}_s \propto \left(1 + \tfrac{\beta}{2} - \sqrt{D}\right) k_\perp \hat{k}_\perp + \left(-1 + \tfrac{\beta}{2} - \sqrt{D}\right) k_\parallel \hat{k}_\parallel,$$
$$\hat{\zeta}_a = \hat{k}_\perp \times \hat{k}_\parallel, \qquad {\rm [eq:fsa\mbox{-}decompositions]}$$
where $D = \left(1 + \tfrac{\beta}{2}\right)^2 - 2\beta\cos^2\theta$, $\theta$ is the angle between the wavevector and the mean magnetic field, and $k_\parallel$, $k_\perp$ are the components of the wavevector parallel and perpendicular to the mean field. we use only the los component of the decomposed velocities for the velocity channel calculations. that is to say, each of the three velocity modes is obtained by projecting the fourier transform of the velocity field, $\mathcal{F}[\mathbf{v}]$, onto the corresponding basis vector $\hat{\zeta}_{f,s,a}$, transforming back to real space, and taking the line-of-sight component through the factor $(\hat{\zeta}_{f,s,a} \cdot \hat{\zeta}_{los})$, where $\mathcal{F}$ is the fourier transform operator. the upper panel of figure [fig:fsa] illustrates the decomposition procedure that takes place in fourier space. the resulting 3 data cubes are dominated by the alfven, slow and fast modes, respectively. in the middle and lower panels of figure [fig:fsa] we show the decomposed velocity cubes projected along the x-axis. we illustrate the decomposition method using two cubes with low and high $\beta$. the properties of the fast and slow modes differ in low and high $\beta$ plasmas, therefore we study these two cases separately in the following sections.

[tab:simulationparameters]
model    $M_s$    $M_A$    $\beta$    resolution
a1       0.2      0.2      0.02
a2       0.2      0.2      0.2178
a3       0.2      0.2      2
a4       0.2      0.2      21.78
a5       0.2      0.2      200
b11      0.4      0.04     0.02
b12      0.8      0.08     0.02
b13      1.6      0.16     0.02
b14      3.2      0.32     0.02
b15      6.4      0.64     0.02
b21      0.4      0.132    0.2178
b22      0.8      0.264    0.2178
b23      1.6      0.528    0.2178
b31      0.4      0.4      2
b32      0.8      0.8      2
b41      0.132    0.4      18.3654
b42      0.264    0.8      18.3654
b51      0.04     0.4      200
b52      0.08     0.8      200

gradients were calculated following the procedures described in yl17. for determining whether the gradients are probing a thin or a thick channel map we use the following criterion: the channel is thin if the gradient is calculated over a patch of size $R$ for which the criterion given by eq. ([criterion]) is satisfied; otherwise, the channel is thick. according to the thickness conditions, we construct the _column density channel map_ about some velocity slice by integrating the ppv intensity built from the actual density field over that slice; the respective _velocity channel map_ is built in the same way but from a ppv cube in which the actual density is replaced by a constant density, so that only the velocity fluctuations contribute. to extract the statistics from the gradient maps, we use the sub-block averaging method introduced in yl17. we also use a way to produce channel maps that contain only spatial frequencies for which the slice is thin. for this purpose we filter out the high spatial frequencies for which the criterion given by eq. ([criterion]) is not satisfied. as the velocity dispersion entering eq. ([criterion]) depends on the block size, we use the lowest velocity dispersion value among all blocks for the production of these channel maps.
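the gradient and sub-block averaging steps can be summarised in a short sketch. the version below is a generic python illustration of the recipe (per-pixel gradient orientations of a channel map, followed by taking the histogram peak of orientations within each block); the block size, the number of histogram bins and the function names are assumptions for the example and do not necessarily reproduce the exact yl17 procedure.

```python
import numpy as np

def gradient_angles(channel_map):
    """Per-pixel gradient orientation of a 2d channel map, in radians."""
    gy, gx = np.gradient(channel_map)          # numpy returns d/axis0, d/axis1
    return np.arctan2(gy, gx)

def subblock_average(angles, block=32):
    """Sub-block averaging: within each block, take the peak of the
    histogram of gradient orientations (folded onto [-pi/2, pi/2),
    since an orientation and its opposite are equivalent)."""
    ny, nx = angles.shape
    out = np.full((ny // block, nx // block), np.nan)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = angles[i*block:(i+1)*block, j*block:(j+1)*block]
            folded = np.mod(patch + np.pi/2, np.pi) - np.pi/2
            hist, edges = np.histogram(folded, bins=36,
                                       range=(-np.pi/2, np.pi/2))
            centers = 0.5 * (edges[:-1] + edges[1:])
            out[i, j] = centers[np.argmax(hist)]
    return out

# usage sketch: rotate the block-averaged gradients by 90 degrees to obtain
# the predicted magnetic-field orientation in each block, e.g.
# b_pred = subblock_average(gradient_angles(ch_map)) + np.pi / 2
```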
as the velocities, unlike the densities, are directly related to mhd turbulence, we first consider intensity fluctuations that arise from the turbulent velocity only. for this purpose we create data cubes using the velocity field obtained from our 3d numerical simulations but substitute the actual densities by a constant density. figure [fig:illus-1] illustrates the relative orientation of the ppv velocity slice intensity gradients, or velocity channel gradients (vchgs), and the projected magnetic field in a _thin_ slice setting. the gradients are all rotated by 90 degrees, and will be annotated as _rotated_ gradients. according to the theoretical considerations above, the rotated gradients correspond to the magnetic field directions. in the following sections, gradients are assumed to be rotated unless emphasized specifically.

figure [fig:illus-1]: the intensity (left, green) and velocity (right, red) channel _rotated_ gradients from a thin slice map with respect to the projected magnetic field (blue). notice that the velocity field has not been decomposed by the cl03 method yet.

the fact that the vchgs for thin maps are strongly influenced by the velocity field shows the promise of the approach for tracing magnetic fields using thin channel maps. below we consider the gradients arising from the alfven, slow and fast modes separately. the theoretical considerations giving the guidance for this study are provided in klp16, where the anisotropies of the correlations of the intensities arising from velocities were studied. the dynamics of alfvenic modes in strong mhd turbulence is very different from the dynamics of freely propagating alfven waves. the modes cascade on a timescale of the order of one wave period, and the wavevectors of the alfvenic perturbations in strong turbulence are nearly perpendicular to the local direction of the magnetic field. the iso-contours of the intensity correlation are elongated _parallel_ to the magnetic field. the latter is essential for the tracing of magnetic fields, as the gradients that these modes induce are also perpendicular to the local direction of the magnetic field, which is employed in gl16, yl17 and the subsequent studies. for alfven modes the theoretical expectation is very natural: the anisotropy of the intensity correlations in a thin slice increases with the decrease of the alfven mach number, i.e.
the smaller the velocity perturbations, the more anisotropic is the velocity distribution. figure [fig:alf-1] illustrates the velocity channels for low and high $M_A$ and their gradients within a thin slice that are induced by the alfvenic modes. the right panel of figure [fig:alf-2] shows the change of the alignment measure with respect to $M_A$. the change of the alfven mode alignment with $\beta$ is marginal, which is shown in the left panel of figure [fig:alf-2].

figure [fig:alf-1]: the (constant density) velocity channel map for the low (left) and high (right) $M_A$ alfven mode in a thin slice. the respective gradients (red for vchgs, green for igs) are overplotted together with the projected magnetic field (blue).

as we increase the thickness of the channel maps, the alignment measure of the alfven modes decreases, which is shown by the blue curves in figure [fig:alf-3]. at the same time the contrast of the gradients is decreasing. due to this decrease the role of the velocity fluctuations in the channel map fluctuations diminishes, and the thick channels get dominated by density effects. this is illustrated in figure [fig:alf-4]. a more detailed discussion of the contribution of the density effects is given in section 5. one should remember, however, that whether the channels are thin or thick depends on the size of the eddies that are being studied (lp00). therefore, if we calculate the gradients for large blocks, we are sampling large eddies, and for such large eddies the slices can still be thin.

figure [fig:alf-3]: variation of the alignment measure with respect to the channel map thickness.

slow waves present perturbations of magnetic field and density that propagate along magnetic field lines. in the limit of incompressible media, slow waves are pure magnetic compressions that propagate along magnetic field lines. formally the incompressible case corresponds to $\beta \rightarrow \infty$, and in this limit the slow modes are frequently called pseudo-alfven modes. for $\beta \ll 1$ the slow waves are density perturbations propagating along magnetic field lines. in the presence of alfvenic turbulence, slow modes do not evolve on their own, but are sheared by the alfven modes. as a result, the features of alfvenic turbulence, e.g. the spectrum and anisotropies, are imprinted on the slow modes (see gs95, ). that also means the perpendicular velocity gradients of the alfven modes will be inherited by the slow modes. figure [fig:slow-1] illustrates the channel maps for a thin slice for the case of low $\beta$ (left panel) and high $\beta$ (right panel). it is obvious that the low $\beta$ vchgs perform better, which is also shown in figure [fig:alf-3]. in fact, when $\beta$ is small, the respective channel maps behave more like those of the alfven mode channel maps.
on the other hand, channel maps from high $\beta$ systems are not as highly structured as the maps with low $\beta$. as the sonic mach number and the plasma $\beta$ are complementary measures of the physical conditions in mhd turbulence, we can study the change of the alignment measure by varying both of them. figure [fig:slow-2] shows the change of the alignment measure of the vchgs in the thin channels induced by slow modes as these parameters are varied. indeed, while according to klp16 the anisotropies of the correlation functions of the intensities in a thin slice for slow modes are different for the case of low-$\beta$ and high-$\beta$ plasmas, the properties of turbulence are also changing with the sonic mach number. indeed, for supersonic driving we expect the formation of shocks in the media.

figure [fig:slow-1]: the _thin_ slice (constant density) velocity channel maps for low (left) and high (right) $\beta$ for slow modes.

figure [fig:slow-2]: the change of the alignment measure of the slow mode vchgs as the sonic mach number and $\beta$ are varied.

regarding the change of the channel width, the green curves in figure [fig:alf-3] show the superiority of the alfven modes when compared to the slow modes. moreover, the change of the alignment measure of the slow modes due to a change of the channel width is much more significant than that of the alfven modes. the individual modes are not direct observables, and both modes contribute to the observed velocity gradient alignment. the result of figure [fig:alf-3] suggests that velocity centroid maps without a proper selection of the channel width may have only fair performance due to the pollution from the slow modes. this in turn suggests that a reduction of the channel width can enhance the performance of tracing magnetic fields using the gradient technique.

similar to the properties of the slow modes, the properties of the fast modes are different in low and high $\beta$ plasmas. for high $\beta$ plasmas the fast waves are similar to sound waves that propagate with the sound speed irrespective of the magnetic field direction. similar to acoustic turbulence, the corresponding turbulence is expected to be isotropic. in low $\beta$ plasmas fast modes correspond to magnetic field compressions that propagate with the alfven velocity. in terms of the correlation function anisotropy of fast modes, the iso-contours of the correlation are elongated _perpendicular_ to the magnetic field. the alignment of undecomposed velocity gradients will contain contributions from all three modes. therefore, for fast mode dominated environments, one should expect the gradients to be parallel instead of perpendicular to the local magnetic field.

figure [fig:fast-1]: the _thin_ slice (constant-density) velocity channel maps with _unrotated_ gradients for low (left) and high (right) $\beta$ for fast modes. the red vectors are the _unrotated_ vchgs, and the blue vectors show the projected magnetic field direction.

figure [fig:fast-1] shows the channel maps with low and high $\beta$ respectively. the low $\beta$ map carries more structure than the high $\beta$ map, but the two obviously carry different anisotropy directions, as can be seen from the contours in the maps.
in the language of fast mode gradients, the maximal gradient is _parallel_ to the local magnetic field direction. figure [fig:fast-2] illustrates the alignment measure of the fast mode vchgs for which the gradient vectors are not rotated. in contrast to the decreasing alignment for the rotated alfven and slow mode vchgs, the alignment of the unrotated fast mode vchgs increases significantly when the ratio of the channel width to the velocity dispersion passes through unity, for both the low and the high $\beta$ case. this suggests that a decrease of the channel width can suppress the contribution of fast modes to velocity gradient calculations. numerical simulations (e.g. ) indicate that fast modes are subdominant, at least for the cases of incompressible driving of turbulence. our study indicates that in terms of vchgs one can expect a further suppression of the fast mode contribution in thin channels. this is important as the directions of the gradients induced by fast modes are orthogonal to those induced by the alfven and slow modes and, therefore, potentially their interference could make magnetic field tracing less reliable. at the same time, these peculiar properties of the gradients arising from fast modes can be used to separate their contribution. note that fast modes are identified in as the dominant source of cosmic ray scattering within galactic environments.

to study the relative importance of density and velocity fluctuations, we create synthetic maps by transposing the velocity data cube within the plane-of-sky plane: with the line of sight along the x-direction, the two plane-of-sky axes of the line-of-sight velocity cube are interchanged. computationally, we calculate the correlation function anisotropy of a quantity called the _transposed centroid_, i.e. the velocity centroid computed with the transposed velocity cube. the reason behind this is that both densities and velocities are anisotropic and trace the local magnetic field direction. after the transpose procedure, the velocity anisotropies are orthogonal to those from the density data, and the relative influence of the two contributions to the centroid anisotropy can be evaluated. when examining the anisotropy using the correlation function of the transposed centroid map, the ratio between the axis perpendicular to the magnetic field direction (the elongation direction of the transposed velocity contribution) and the parallel one (the elongation direction of the density contribution) will reflect the relative importance of velocity to density fluctuations.

figure [fig:fluc-1]: a scatter plot showing the change of the axis ratio relative to the velocity slice thickness.

figure [fig:fluc-1] shows the decreasing trend of the aforementioned axis ratio as the velocity slice thickness increases. it is evident that even though the density information can contaminate the velocity contribution to the alignment with the magnetic field, the overall anisotropy of the velocity centroid, which is essentially a weighted product of density and velocity data, still follows the local magnetic field directions. nevertheless, using a thin slice can reduce the contribution from the density data, which in turn provides a more reliable detection of the local magnetic field.

it was demonstrated in that the anisotropies in channel maps can trace magnetic fields. while these anisotropies can be very informative in terms of determining the relative contribution of compressible versus incompressible modes (see kandel et al. 2016, 2017), the tracing of the detailed magnetic field structure with them is problematic.
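the comparison that follows is phrased in terms of an alignment measure between two orientation fields. a convention widely used in the gradient literature is $AM = 2\langle\cos^2\theta\rangle - 1$, where $\theta$ is the relative angle between the two orientations; a minimal python sketch is given below, with illustrative function and variable names.

```python
import numpy as np

def alignment_measure(angle_a, angle_b):
    """Alignment measure between two orientation maps (radians).

    AM = 2 <cos^2(theta)> - 1, where theta is the relative angle at each
    pixel; AM = 1 for perfect alignment, AM = -1 for mutually perpendicular
    orientations. cos^2 is pi-periodic, so orientations are headless.
    """
    theta = angle_a - angle_b
    return 2.0 * np.mean(np.cos(theta) ** 2) - 1.0

# usage: compare 90-degree-rotated, sub-block-averaged gradient angles with
# the projected magnetic-field angle map, e.g.
# am = alignment_measure(rotated_gradient_angles, b_field_angles)
```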
in figure[ fig : cfa-1 ] we show the vchgs alignment measure versus the alignment measure of the directions of the maximal anisotropy ( elongation ) of the correlation functions of the channel map intensities .we clearly see that the channel map intensities are not good for detailed magnetic field tracing , which is consistent to our previous studies on the comparisons of alignments from correlation function anisotropies and different types of gradients ( see yl17 , , yl17b ) . a comparison of on a thin channel map when applying the gradient technique and correlation function anisotropy techniques to alfven mode vchgs ., scaledwidth=48.0% ] the reason of this is easy to understand .the velocity gradients correspond to the angular velocities of turbulent eddies .these individual eddies are aligned with the local magnetic field . on the contrary ,the correlation functions are the measures that are well defined only after the ensemble averaging .thus the averaging over small patches does not produce good statistics necessary for the correlation functions to be well defined .our study above suggests that the decrease of channel width improves the alignment the gradient and magnetic field alignment .however , it is important to understand whether one can use only a part of the spectral line for the analysis . given the channel map data one can form _ reduced centroids _ that contain only part of the line : where the index determines the order of the centroid .increasing may enhance the effects of velocity , but it also can increase the noise .these measures are useful both in the case of studying gradients from extended galactic disk data and also for the data from the wings of the absorption lines . with the dispersion of velocities from equation [ criterion ] and the average width of all the doppler shifted lines , we can classify the line into three regions , namely the central part , the middle part and the wing .we would like to illustrate the alignment of gradients for the reduced centroid from each of these three portion of the lines using the alfven mode .figure [ fig : rc-1 ] illustrates the reduced centroid from a selected alfven mode velocity data cube .the central portion is very much the same as the thin slice result , giving an excellent alignment in respect to the magnetic field .the middle portion of the reduced centroid also shows a fair alignment , but the number of data points gets limited due to the limitations of the numerical resolution and the shot noise increase . nevertheless , the gradient map after sub - block average still provides a nice fit to magnetic field direction .the wing portion for this illustration is very much limited by the discrete data : it does not contain enough data points even for showing the distribution of centroids .in fact , in mhd turbulence we expect that the overall profile to be determined by the largest eddies that produce most of the dispersion .the gradients are not expected to depend on this .therefore all three parts of the line should be influenced by the small scale eddies that are aligned in respect to the magnetic fields local to them . by looking at the contour structure of the three maps ,we actually see the correspondence of the elongation to magnetic field directions obtained with different parts of the line .for observations of galactic hi the use of different portions of the line is not any more just a test of our theoretical concepts . 
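before turning to the galactic application, a minimal sketch of the reduced-centroid construction defined above is given here. the exact normalisation used in the paper is not recoverable from the text, so the unnormalised velocity moment over a sub-range of the line is shown, with illustrative names.

```python
import numpy as np

def reduced_centroid(ppv, vaxis, v_lo, v_hi, order=1):
    """Reduced centroid: a velocity-moment map computed over only part of
    the spectral line, between velocities v_lo and v_hi.

    ppv   : array (nx, ny, nv), intensity in position-position-velocity space
    vaxis : array (nv,), velocity of each channel
    order : power of v in the moment (order=1 gives a centroid-like map)
    """
    sel = (vaxis >= v_lo) & (vaxis < v_hi)
    dv = np.abs(np.median(np.diff(vaxis)))
    weights = vaxis[sel] ** order
    return np.sum(ppv[:, :, sel] * weights, axis=2) * dv

# example: split the line into central, middle and wing parts using the
# turbulent dispersion sigma around the mean velocity v0, then take gradients
# of each reduced-centroid map separately, e.g.
# central = reduced_centroid(ppv, vaxis, v0 - sigma, v0 + sigma)
```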
due to the galactic rotation, different parts of the galactic hi have different velocities with respect to the observer. therefore one can trace the variations of the magnetic field using the reduced centroids. to demonstrate the effect we use the galfa data and compare our results with the magnetic field directions averaged along the line of sight, as they are traced by planck polarization. unlike in the study with the full centroids in yl17, we now do not expect good tracing if the perpendicular direction of the magnetic field is changing along the line of sight. following the same procedure as in yl17 and selecting the same region with the same resolution, we calculate the reduced centroid by varying the velocity thickness. figure [fig:obs-1] shows the resulting maps from the central, middle and wing parts of the line that are obtained using the reduced centroids. we see notable variations of the alignment, which may be indicative of a variation of the magnetic field direction along the line of sight. this conclusion can be tested by using starlight polarization from stars with known distances.

turbulence can be studied using both velocity centroids and channel maps (see ). in terms of the properties of correlation functions, both techniques have a full analytical treatment based on the statistics of the ppv developed in lp00 and . the anisotropies of the statistics of the channel maps are described in klp17, and those of the velocity centroids in . the two techniques have proven to be complementary. for instance, the turbulence spectra recovered from velocity centroids are heavily distorted for supersonic turbulence. at the same time, supersonic turbulence is the natural domain for the velocity channel analysis (vca) suggested in lp00 (see e.g. ). in view of that, we feel it is advantageous to use gradients for both velocity centroids and channel maps. the most interesting case for the gradient technique that we explore is the case of gradients in thin channel maps. in this regime the gradients of the channel map intensities are affected mostly by velocity fluctuations, the latter tracing the magnetic field better than the densities. on the contrary, the gradients in the velocity channels in the limiting case when the channels are wide enough to encompass the entire line correspond to the intensity gradients. the latter case was studied in . the statistics of channels of intermediate thickness reflect both the density and the velocity statistics. for the studies of magnetic fields in the disk of the galaxy, gradients within channel maps allow one to investigate the line-of-sight distribution of magnetic fields using the galactic rotation curve. however, the interpretation of the results for the diffuse distribution should include the dispersion induced by turbulence. it is not possible to increase the 3d spatial resolution of the magnetic field directions by going to thinner velocity channels. the change of the channel thickness, nevertheless, changes the relative contributions of the velocity and density fluctuations to the channel map. in terms of resolving the spatial structure of the magnetic field, for atomic hydrogen studies the spatial resolution depends on the direction of study and the velocity range, as well as on the turbulent velocity dispersion. the latter provides the grid size that should be combined with the regular velocity shear along the line of sight in the direction of the observations. individual clouds with high density contrast can be studied separately. this can be relevant to e.g.
co .if we imagine a situation that direction of magnetic field is changing along the line of sight the techniques that are discussed here can provide a coarse graded picture of the changes of the magnetic field direction perpendicular to the line of sight . in this way we can distinguish the variations of regular magnetic field over the cells that are larger than . choosing velocity slice thickness much less than we are not getting higher spacial resolutions , but changing the relative contributions of the velocity and density fluctuations in the channels . if we choose the velocity slice thickness of the order of the turbulent velocity dispersion it produces a rough structure of the column densities over the size of the large scale eddies .therefore the corresponding channel map gradients can be associated with the igs that trace magnetic field with the limitations relevant to the ig tracing .in the situation when the effects of velocities should be stressed , the reduced centroids can also be used .in fact , or the hi line broadened through galactic rotation if the averaging over the entire turbulent velocity dispersion and taking gradients brings us back to the well known igs , the calculation of the reduced centroids over the velocity dispersion provides the vcg measure .in the paper above we have shown that the gradients calculated within velocity channel maps , i.e. vchgs , can trace magnetic field in diffuse media .these measures are complementary to the velocity centroids gradients , i.e. vcgs .our decomposition of the turbulent velocities into alfven , slow and fast modes shows that these modes provide different signatures in terms of the vchgs . in particular , 1 .we tested that the alignments of alfven and slow modes increases as the slice gets thinner .we showed that fast modes , whose gradients are parallel to magnetic field , can be suppressed by using thin slices .we showed that density contributions can be suppressed also by using thin slices .we constructed observable measures termed `` reduced centroids '' and explored their ability to trace magnetic fields .* acknowledgments .* elucidating discussions with chris mckee and susan clarke are acknowledged .al acknowledges the support the nsf grant ast 1212096 , nasa grant nnx14aj53 g as well as a distinguished visitor pve / capes appointment at the physics graduate program of the federal university of rio grande do norte , the inct inespao and physics graduate program / ufrn .the stay of khy at uw - madison is supported by the fulbright - lee fellowship .clemens , d. p. , cashman , l. , hoq , s. , montgomery , j. , & pavel , m. d. 2014 , american astronomical society meeting abstracts # 224 , 224 , 220.06 correia , c. , lazarian , a. , burkhart , b. , pogosyan , d. , & de medeiros , j. r. 2016 , , 818 , 118 , a. , & lazarian , a. 2005 , http://dx.doi.org/10.1086/432458 [ , 631 , 320 ] esquivel , a. , & lazarian , a. 2005 , , 631 , 320 esquivel , a. , lazarian , a. , horibe , s. , et al . 2007 , , 381 , 1733 fernandez , e. r. , zaroubi , s. , iliev , i. t. , mellema , g. , & jeli , v. 2014 , , 440 , 298 esquivel , a. , & lazarian , a. 2011 , , 740 , 117 gaensler , b. m. , haverkorn , m. , burkhart , b. , et al .2011 , http://dx.doi.org/10.1038/nature10446 [ , 478 , 214 ] galtier , s. , pouquet , a. , & mangeney , a. 2005 , physics of plasmas , 12 , 092310 , c. a. , burkhart , b. , lazarian , a. , gaensler , b. m. , & mcclure - griffiths , n. m. 2016 , http://dx.doi.org/10.3847/0004-637x/822/1/13 [ , 822 , 13 ] hayes , j. c. 
, norman , m. l. , fiedler , r. a. , et al .2006 , , 165 , 188 heyer , m. , gong , h. , ostriker , e. , & brunt , c. 2008 , , 680 , 420 - 427 , j. c. 1984 , http://dx.doi.org/10.1086/162481 [ , 285 , 109 ] lazarian , a. , pogosyan , d. , & esquivel , a. 2002 , seeing through the dust : the detection of hi and the exploration of the ism in galaxies , 276 , 182 lazarian , a. http://dx.doi.org/10.1086/505796[2006 , 4 ] . 2016 , http://dx.doi.org/10.3847/1538-4357/833/2/131 [ , 833 , 131 ] lazarian , a. , & pogosyan , d. 2000 , , 537 , 720 lazarian , a. , & pogosyan , d. 2004 , , 616 , 943 lazarian , a. , & pogosyan , d. 2006 , , 652 , 1348 , a. , & pogosyan , d. 2012 , http://dx.doi.org/10.1088/0004-637x/747/1/5 [ , 747 , 5 ] , k. c. 1959 , http://dx.doi.org/10.1086/146713 [ , 130 , 241 ] yan , h. , & lazarian , a. 2002 , physical review letters , 89 , 281102 yan , h. , & lazarian , a. 2012 , numerical modeling of space plasma slows ( astronum 2011 ) , 459 , 40 yuen , k. h. , & lazarian , a. 2017 , arxiv:1701.07944 | we explore the ability of gradients of velocity channel map intensities to trace magnetic fields in turbulent diffuse media . this work capitalizes both on the modern theory of mhd turbulence that predicts the magnetic eddies tend to be aligned with the local direction of magnetic field , and the theory of position - position - velocity ( ppv ) statistics that describes how the velocity and density fluctuations in real space are being mapped into the ppv space . we show that for steep , e.g. kolmogorov - type density spectrum , the velocity channel gradients ( vchgs ) in thin velocity channels are dominated by velocity contributions . while for the velocity channel thickness comparable to turbulent injection velocities , the vchgs are dominated by the properties of turbulent densities . as turbulent velocity structures are better aligned with magnetic fields , the tracing with thin channels has the ability of representing the magnetic field better . we decompose the results of 3d mhd simulations into alfven , slow and fast modes and analyze synthetic maps produced with these modes . we show that alfven and slow modes act in unison to trace magnetic field , while the velocity gradients produced by the fast mode are orthogonal to those produced by the first two modes . however , for thin channel maps the contributions from the alfven and slow modes are shown to dominate which allows a reliable magnetic field tracing . we also introduce centroids that use only part of the spectral line rather the entire spectral line and apply them to galfa 21 cm data . we compare the directions obtained with the gradients of these `` reduced centroids '' and the magnetic field directions as they are traced by the planck polarization . we believe that the observed deviations can potentially reveal the variations of the magnetic field along the line of sight . |
the _ local network effect _ , which is a relatively new idea in economics , means that the decision of one entity can influence those by whom that entity is connected to .particularly , the network effect manifests through inactivation as the value of exiting the enterprise is enhanced through the catalytic action of the connections between participants .homophily spells that a contact between similar people occurs at a higher rate than among dissimilar people" , and strongly influences contagions that diffuse through social links .mlm participants are thus more likely to connect with others of the same ses , consequently elevating homogeneity ( or depressing diversity ) in the firm .diversity here is measured by the simpson index serving as a dimensionless potential function minimised when ( at highest diversity ) .the network effect is constituted as , for ; hence , the network effect is stronger at less diversity . the generalized hurst method has been coded by one of its authors , t. aste .the code was downloaded from matlab file exchange website , http://www.mathworks.com/matlabcentral/fileexchange/30076 , and was used with default settings in the calculation of and .characteristic timescale is chosen at days ( i.e. , month days ) .assuming that the system - size parameter is of the order , individuals , and the unit individuals , then setting implies an average per - capita encounter rate between and per month , which is a reasonable estimate .100 beamish , t. d. & biggart n. w. mesoeconomics : business cycles , entrepreneurship , and economic crisis in commercial building markets . in lounsbury , m & hirsch , p. m. ( ed . ) , markets on trial : the economic sociology of the u.s .financial crisis : part b _ res .org . _ * 30 * , 245 - 280 ( 2010 ) .koellinger , p. d. & thurik , a. r. entrepreneurship and the business cycle ._ , doi:10.1162/rest_a_00224 ( 2011 ) .eagle , n. , macy , m. & claxton , r. network diversity and economic development ._ science _ * 328 * , 1029 - 1031 ( 2010 ) .granovetter , m. the impact of social structure on economic outcomes . _ j. econ .perspect . _* 19 * , 33 - 50 ( 2005 ) .minsky , h. p. monetary systems and accelerator models ._ * 47 * , 860 - 883 ( 1957 ) .sims , c. a. macroeconomics and reality ._ econometrica _ * 48 * , 1 - 48 ( 1980 ) .colander , d. , howitt , p. , kirman , a. , leijonhufvud , a. & mehrling , p. beyond dsge models : toward an empirically based macroeconomics .papers & proc . _ * 98 * , 236 - 240 ( 2008 ) .elsner , w. why meso ?on aggregation " and emergence " , and why and how the meso level is essential in social economics . _ for .econ . _ * 36 * , 1 - 16 ( 2007 ) .albaum , g. & peterson , r. a. multilevel ( network ) marketing : an objective view . _ the market .* 11 * , 347 - 361 ( 2011 ) .morales , r. , di matteo , t. , gramatica , r. & aste , t. dynamical generalized hurst exponent as a tool to monitor unstable periods in financial time series . _physica a _ * 391 * , 3180 - 3189 ( 2012 ) .di matteo , t. multi - scaling in finance ._ quant . fin ._ * 7 * , 21 - 36 ( 2007 ) . burns , a. f. _ the frontiers of economic knowledge _ ( princeton u. p. , 1954 ) .dopfer , k. the origins of meso economics : schumperter s legacy and beyond ._ j. evol .* 22 * , 133 - 160 ( 2012 ) .coughlan , a. t. & grayson , k. network marketing organizations : compensation plans , retail network growth , and profitability . _ int ._ * 15 * , 401 - 426 ( 1998 ) .borland , l. option pricing formulas based on a non - gaussian stock price model .lett . 
_ * 89 * , 098701 ( 2002 ) .gimeno , j. competition within and between networks : the contingent effect of competitive embeddedness on alliance formation .j. _ * 47 * , 820 - 842 ( 2004 ) .simpson , e. h. measurement of diversity ._ nature _ * 163 * , 688 ( 1949 ) .sundararajan , a. local network effects and complex network structure ._ * 7 * , doi:10.2202/1935 - 1704.1319 ( 2008 ) .galeotti , a. , goyal , s. , jackson , m. o. , vega - redondo , f. & yariv , l. network games .stud . _ * 77 * , 218 - 244 ( 2010 ) .jackson , m. o. & yariv , l. diffusion of behavior and equilibrium properties in network games .econ . rev ._ * 97 * , 92 - 98 ( 2007 ) .goeree , j. k. , mcconnell , m. a. , mitchell , t. , tromp , t. & yariv , l. the law of giving .j. microecon . _* 2 * , 183 - 203 ( 2010 ) .apicella , c. l. , marlowe , f. w. , fowler , j. h. & christakis , n. a. social networks and cooperation in hunter - gatherers ._ nature _ * 481 * , 497 - 501 ( 2012 ) .mcpherson , m. , smith - lovin , l. & cook , j. m. birds of a feather : homophily in social networks ._ * 27 * , 415 - 444 ( 2001 ) .aral , s. , muchnik , l. & sundararajan , a. distinguishing influence - based contagion from homophily - driven diffusion in dynamic networks .u s a _ * 106 * , 21544 - 21549 ( 2009 ) .campbell , k. e. , marsden , p. v. & hulbert , j. s. social resources and socioeconomic status . _ social networks _ * 8 * , 97 - 117 ( 1986 ) .van kampen , n. g. _ stochastic processes in physics and chemistry _( north - holland , 2007 ) .gillespie , d. t. a general method for numerically simulating the stochastic time evolution of coupled chemical reactions ._ j. comput .* 22 * , 403 - 434 ( 1976 ) .blume , l. e. & easley , d. optimality and natural selection in markets ._ j. econ .theor . _ * 107 * , 95 - 135 ( 2002 ) .webb , j. n. _ game theory : decisions , interaction and evolution _ ( springer - verlag , 2007 ) .koehn , d. ethical issues connected with multi - level marketing schemes .* 29 * , 153 - 160 ( 2001 ) . | business - cycle phenomenon has long been regarded as an empirical curiosity in macroeconomics . regarding its causes , recent evidence suggests that economic oscillations are engendered by fluctuations in the level of entrepreneurial activity . opportunities promoting such activity are known to be embedded in social network structures . however , predominant understanding of the dynamics of economic oscillations originates from stylised pendulum models on aggregate macroeconomic variables , which overlook the role of social networks to economic activity echoing the so - called _ aggregation problem _ of reconciling macroeconomics with microeconomics . here i demonstrate how oscillations can arise in a networked economy epitomised by an industry known as multi - level marketing or mlm , the lifeblood of which is the profit - driven interactions among entrepreneurs . quarterly data ( over a decade ) which i gathered from public mlms reveal oscillatory time - series of entrepreneurial activity that display nontrivial scaling and _ _ persistence__ . i found through a stochastic population - dynamic model , which agrees with the notion of profit maximisation as the organising principle of capitalist enterprise , that oscillations exhibiting those characteristics arise at the brink of a critical balance between entrepreneurial activation and inactivation brought about by a homophily - driven _ network effect_. 
oscillations develop because of stochastic tunnelling permitted through the destabilisation by noise of an evolutionarily stable state. the results fit together as evidence for the burns-mitchell conjecture that economic oscillations must be induced by the workings of an underlying network of free enterprises searching for profit". i anticipate that the findings, interpreted under a mesoeconomic framework, could open a viable window for scrutinising the nature of business oscillations through the lens of the emerging field of network science. enquiry along these lines could shed further light on the network origins of the business-cycle phenomenon.

known widely in the literature as _network marketing_, mlm executes through embedded social networks its essential business functions such as goods distribution, consumption, marketing, and direct selling. that makes mlm a stark microcosm of a networked economy. one salient yet so far overlooked feature of mlm dynamics is the aperiodic oscillations in firm size, quantified by the number of participating entrepreneurs (fig. 1). empirical quarterly firm-size data have been collected from four public mlms (supplementary data; supplementary methods, s1): nuskin enterprises (nus), nature sunshine (natr), usana health sciences (usna), and mannatech inc. (mtex). publicly-listed firms are chosen because they are required to disclose accurate business data on a regular basis. the average revenue (i.e., the total revenue divided by the firm size for any given quarter) does not rise proportionately with the firm size (fig. 1c, d), implying that firm-size expansion does not inevitably translate into revenue growth. firm size is thus a more reliable quantifier of entrepreneurial activity than total revenue. the scaling property of the time-series is examined via hurst analysis (methods; supplementary methods, s1). the hurst exponents $H(1)$ and $H(2)$ quantify the scaling of the absolute increments and of the power spectrum, respectively. if the time series were generated by a wiener process, such as in the black-scholes model, then $H(1) = H(2) = 0.5$. but $H > 0.5$ indicates _persistence_, i.e., changes in one direction usually occur in consecutive periods; whereas $H < 0.5$ suggests _anti-persistence_, i.e., changes in opposite directions usually appear in sequence. ideally, a single scaling regime means $H(1) = H(2)$, which applies to time-series generated by unifractal processes such as the wiener process and fractional brownian motion. table 1 presents the hurst exponents for different mlms. generally the series are persistent, with $H(1) > 0.5$, except for nus north asia and natr (within standard deviation); and $H(1) \approx H(2)$ within standard deviation. overall, these features of the time-series suggest that mlm firm dynamics is a non-wiener unifractal process. unifractality implies self-similarity, such that conclusions drawn at one timescale remain statistically valid at another timescale.

[tab:hursts] hurst exponents $H(1)$ and $H(2)$ for different mlms estimated using the generalised hurst method. $H(1) > 0.5$ indicates persistence, whereas $H(1) < 0.5$ indicates anti-persistence. $H(2)$ values, which are closely related to the scaling of the power spectrum, are also shown. generally $H(1) \approx H(2)$ within standard deviation. the standard deviation values are determined from a pre-testing procedure (supplementary methods, s1). [fig:empirical]

an mlm firm is considered as a population of profit-seeking entrepreneurs.
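the generalised hurst estimation quoted in table 1 can be illustrated with a short sketch before the population model is described. the actual analysis used the matlab routine referenced in the methods; the python version below only conveys the idea, and the default fitting range and function names are assumptions.

```python
import numpy as np

def generalized_hurst(x, q=1, tau_max=19):
    """Estimate the generalised Hurst exponent H(q) of a 1d series x.

    Uses the scaling of the q-th order structure function,
      K_q(tau) = <|x(t+tau) - x(t)|^q>  ~  tau^(q*H(q)),
    and returns H(q) from a log-log fit over tau = 1 .. tau_max.
    """
    x = np.asarray(x, dtype=float)
    taus = np.arange(1, tau_max + 1)
    kq = np.array([np.mean(np.abs(x[t:] - x[:-t]) ** q) for t in taus])
    slope, _ = np.polyfit(np.log(taus), np.log(kq), 1)
    return slope / q

# usage on a quarterly firm-size series n(t):
# h1 = generalized_hurst(n, q=1)   # scaling of absolute increments
# h2 = generalized_hurst(n, q=2)   # related to the power-spectrum scaling
```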
this population exhibits disorder through the presence of two entrepreneur types distinguished by socio - economic status ( ses ) . let and denote these types , wherein has higher ses than , and and denote their subpopulation size , respectively . the total population at any given time is thus . three major processes run the population dynamics : entrepreneurial activation by recruitment ; competitive inactivation ; and catalytic inactivation due to a network effect . recruitment is expressed in the following reaction equations : wherein and are per - capita rates of recruitment of types and , respectively , and because s higher ses implies a faster rate of entrepreneurial activation through the support of bigger capital and vaster social resources . competitive inactivation occurs due to market overlap , or _ _ niche overlap__ , as participants can go head - to - head over the same clientele or market . an encounter rate , which can be related to the density of the embedding social network , quantifies the probability of market overlaps . thus , competitive inactivation is : lastly , catalytic inactivation , which denotes the _ network effect _ ( methods ) , is expressed as : the network structure of the mlm catalyses inactivation of existing participants at the rate , where is a measure of the probability that two members drawn randomly from the mlm belong to the same type . it has been widely used in literature as a diversity index . due to interconnectedness and homophily , the inactivation of one could ( like a contagion ) infect another to follow suit . combining equations ( [ eq : recruitment] [ eq : catalytic ] ) results to a master equation ( supplementary equation 1 ) for the state probability density . perturbation analysis accounts for the fluctuations arising from demographic stochasticity . in terms of the system size ( roughly the size of that part of the overall population considered _ fit _ for entrepreneurial activities ) the following ansatz is made : and , where and are the average concentrations , and and are the magnitude of the fluctuations of the stochastic variables and , respectively ( supplementary methods , s2 ) . the highest order in the expansion expresses the macroscopic rate equations for and : meanwhile , the next highest order term gives the fokker - planck equation , or fpe ( supplementary equation 2 ) , governing the dynamics of the probability density for the magnitude of the fluctuations . from the fpe , the expectation values of the stationary fluctuations are , which supports the interpretation that the deterministic solutions to ( [ eq : macroscopic ] ) are the correct average values . the model is nondimensionalised by setting the characteristic timescale at days ( methods ) . consequently , the rates can be squarely related to empirical data by rescaling to appropriate units . dimensionless rates take on simplified yet meaningful values : ; ; and . bifurcation analysis ( supplementary methods , s4 ) of the nondimensionalised equation ( [ eq : macroscopic ] ) unveils a bifurcation manifold , where equation ( [ eq : bifurcation ] ) coincides with an evolutionarily stable state ( ess ) of a population game between the types ( supplementary methods , s3 s4 ) . the fraction of entrepreneurs is , where , and is the fluctuation component . analysis of the second moments from the fpe shows that the variance diverges as as ( supplementary methods , s5 ) . 
that is a signature of criticality through which the ess , where and , is ( quite counterintuitively ) destabilised as the bifurcation manifold is approached . this mechanism is hereby referred as _ stochastic tunnelling _ wherein noise enables the state trajectory to cross a phase barrier that could not have been otherwise traversed without actively tuning the bifurcation parameter ( supplementary methods , s4 ) . stochastic tunnelling drives the business oscillations ( fig . 2 ) . time series is generated by solving the model using a numerical technique , known as gillespie s stochastic simulation algorithm , which directly integrates the master equation ( supplementary methods , s6 ) . diverging variance indeed allows the solution to wander far enough from the ess and closer to an unstable point ( uep ) which pushes that solution toward the boundary state , where and ( fig . 2a ) . that noise also enables the solution to sling back to the ess consequently forming loops in the phase portrait , hence , oscillations in the time series ( fig . 2b ) . the time series consist of upswings associated with increasing diversity and downswings with decreasing diversity , i.e. , as . the remarkable observation is that recovery from low points of the series coincide with periods when is dominant a case of the fitter entrepreneurs surviving through recessions" . profit maximisation is an axiom of capitalist enterprise . mlm may enhance profitability by maximising the proportion of entrepreneurs ( supplementary methods , s7 ) . thus , the time - average value is examined for various pairs of and which consequently depicts the phase diagram of the model ( fig . 3a ) . business oscillations come about as a result of stochastic tunnelling through the critical boundary . phase ii , where stably dominates ( supplementary fig . s2b ) , can be considered pareto - optimal as the mlm maximises profitability as a whole . but high levels of targeted recruitment , i.e. , , are required . entrepreneurial activation , however , might in reality be less discriminatory and thus , which denotes higher entropy ( supplementary discussion ) . the critical boundary delineates , for any magnitude of the network effect , the minimum that promotes long - run dominance of entrepreneurs . nevertheless , a stronger network effect tends to frustrate that dominance as catalytic inactivation increasingly outpaces activation , leading to degradation of entrepreneurial activity ( supplementary fig . s2d ) . the ess at the iii - iv boundary ( fig . 3a ) is therefore pareto - dominated . the hurst maps ( fig . 3b , c ) locate where the real mlms are on the phase diagram . the hurst exponents are determined from the same exact method . clusters appear in the vicinity of the iii - iv boundary . on these clusters and are approximately between and , about the same range of values found in real mlms ( see table 1 ) . a correlative plot ( fig . 3d ) between and further confirms not only agreement between model and empirical data , but also their unifractality . overall , these findings suggest that real - world mlms are pareto - dominated economic systems , which are operating in an environment characterised by high entropy , i.e. , , and by a strong network effect ( i.e. , ) . the study paints an illuminating insight about the nature of mlm operations . mlms have been accused in several instances by discontent participants for ethical violations concerning its business practices . the model justifies such disgruntlement for two reasons . 
first , that profit is closely associated with recruitment implies less selective entrepreneurial activation . second , that recruitment proceeds through embedded networks connotes strong network effects . less - fit entrepreneurs can join the market in droves but are weeded out too soon because of the pareto - dominated nature of the venture . the feeling of being victimised is thus not at all surprising . the mesoeconomic framework ( i.e. , linking microeconomic foundations with macroeconomic phenomena ) puts the present study in a broader economic context . a more network - dynamic approach to viewing business cycles is hereby encouraged . lastly , the mathematical model could be extended or refined , such as by generalising the network effect using the hölder mean such that $\sqrt[q]{x_a^q+x_b^q},\;\forall q>1$ ( supplementary discussion , supplementary fig . s1 ) ; meanwhile , empirical data of higher temporal resolution may become available in the future to further test the implications that have come forth . [ fig : phaseportrait ] , , for a total time of periods year . * a , * phase portrait showing one stochastic realization for ( in units ) versus ; marks the initial condition : and . the dashed curves and lines are the nullclines of the replicator equations from the evolutionary game ( supplementary methods , s3 ) , which intersect at the evolutionarily stable state ( ess , ) and at an unstable equilibrium point ( uep , ) . the trajectory of the solution forms a loop indicating the oscillations . * b , * time series for and . hurst analysis gives and for the series . [ fig : phasediagram ] for different pairs of and ( resolution : ) . phase i represents the regime where the critical manifold is inaccessible due to the constraint . phase ii denotes the pareto - optimal region and where . phase iii , where is similar to phase i except that here the critical manifold is accessible . phase iv is where but resulting in degradation of due to ( supplementary fig . 2d ) . _ phase boundaries _ : i ii , iii , ; iii ii , iv , ; and ii iv , . * b , c , * map of the hurst exponents and , respectively . the phase boundaries are superimposed . the data are generated by simulating the model for periods years with initial population for different pairs of and ( resolution : ) . each data pixel is an average of four stochastic realisations . * d , * correlative plot between and . empirical data are those listed in table 1 . the dashed line denotes unifractality of the time series .
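For readers who want to reproduce time series of the kind shown in fig. 2, the sketch below implements Gillespie's direct-method stochastic simulation algorithm for the two-type population. The propensity forms (activation proportional to the current subpopulation, competitive inactivation proportional to encounters, catalytic inactivation proportional to a Simpson-type index) are assumptions standing in for the reaction equations whose symbols are given in the supplementary material, and the rate values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def simpson(xa, xb):
    """probability that two randomly drawn members belong to the same type."""
    n = xa + xb
    return (xa**2 + xb**2) / n**2 if n > 0 else 0.0

def gillespie(xa, xb, r_a, r_b, c, lam, t_max):
    """direct-method SSA; all propensity forms below are assumptions for this sketch:
       activation   X_i -> X_i + 1  at rate r_i * X_i
       competitive  X_i -> X_i - 1  at rate c * X_i * (X_a + X_b)
       catalytic    X_i -> X_i - 1  at rate lam * simpson * X_i
    """
    t, out = 0.0, [(0.0, xa, xb)]
    while t < t_max and xa + xb > 0:
        q = simpson(xa, xb)
        a = np.array([r_a * xa, r_b * xb,
                      c * xa * (xa + xb), c * xb * (xa + xb),
                      lam * q * xa, lam * q * xb])
        a0 = a.sum()
        if a0 == 0.0:
            break
        t += rng.exponential(1.0 / a0)           # waiting time to the next event
        j = rng.choice(6, p=a / a0)              # which reaction fires
        xa += (+1, 0, -1, 0, -1, 0)[j]
        xb += (0, +1, 0, -1, 0, -1)[j]
        out.append((t, xa, xb))
    return np.array(out)

traj = gillespie(xa=500, xb=500, r_a=1.2, r_b=1.0, c=1e-3, lam=0.5, t_max=50.0)
```

The fraction of high-status entrepreneurs and its fluctuations can then be read off the trajectory, exactly as done for the phase portraits and Hurst maps of figs. 2 and 3.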
adhesion between elastic bodies was relatively unexplored until the last few decades , and this is reflected in the very marginal role it has in the otherwise very comprehensive book of k.l . johnson ( 1985 ) , even though johnson himself is one of the authors of one of the most important papers on adhesion ( on adhesion of elastic spheres , the jkr theory , johnson et al . , 1971 , which has over 5000 citations at present ) . this is obviously because until sufficiently accurate and high - resolution techniques were developed , adhesion was hard to measure , because roughness , as was commonly observed and explained , destroys the otherwise very strong field of attraction between bodies , which should in principle make them stick to each other at the theoretical strength of the material . jkr theory itself was developed having in mind the special cases where adhesion can indeed be measured at the macroscopic scales , using very soft materials like rubber and gelatin spheres , clean and with extremely smooth surfaces . today , however , there is interest in both scientific and technological areas also at small scales , where very smooth surfaces , for example in information storage devices , result in adhesive forces playing a more crucial role than in more conventional tribological applications . on the other hand , when people started to study adhesion in geckos , which adhere to just about any surface , be it wet or dry , smooth or rough , hard or soft , with a number of additional extraordinary features ( self - cleaning , mechanical switching ) , interest emerged in how to reproduce these capabilities in `` gecko inspired synthetic adhesives '' . the interest stems from the fact that adhesion can not be produced on hard rough surfaces , and therefore only the strikingly complex hierarchical structure of the gecko attachment can produce the macroscopic values of load that geckos can sustain . the hierarchical structure of the gecko attachment ( about three million microscale hairs ( setae ) which in turn each branch off into several hundred nanoscale spatulae , totalling perhaps a billion spatulae ) makes one wonder why the multiscale nature of surface roughness could not also show an effect of adhesion enhancement . indeed , at least one model of adhesion of solid bodies ( that of persson and tosatti , 2001 , pt in the following ) does show adhesion persistence and even enhancement . there seems to be a qualitative difference for surfaces with fractal dimensions below , which turns out to be the case in most if not all of the commonly observed rough surfaces ( persson , 2014 ) . in general , it is hard to measure strong adhesion , even though the van der waals interactions are in principle orders of magnitude larger than atmospheric pressure . this adhesion paradox ( pastewka and robbins , 2014 , persson _ et al_.
, 2005 ) has been linked to surface roughness , but the explanations of the paradox have been different , the latest very interesting one being due to pastewka and robbins ( 2014 ) , which is a very promising parameter - free theory that shows how adhesion changes the contact area and when surfaces are sticky , but mostly in a regime of small contact areas . pastewka and robbins ( 2014 ) conclude that " _ for most materials , the internal cohesive interactions that determine elastic stiffness are stronger than adhesive interactions , and surfaces will only stick when they are extremely smooth . tape , geckos , and other adhesives stick because the effect of internal bonds is diminished to make them anomalously compliant"_. this conclusion seems in qualitative agreement with the classical asperity theory , except that pastewka and robbins use in their model quantities related to slopes and not to heights , and therefore are in quantitative disagreement . persson ( 2002a , 2002b ) introduced a more elaborate version of the theory , which also solves the partial contact problem , and the coupling of the two aspects ( effective energy due to roughness in full contact , and its use in a partial contact with a diffusion model ) makes the limit behaviour for very short wavelengths difficult to capture , which motivated us to search for a possibly simpler , more traditional picture . the traditional asperity model of fuller and tabor ( 1975 ) , which today is not considered to be adequate because of its many assumptions on geometry and its absence of interaction , showed that adhesion and pull - off force are reduced very easily at the macroscopic scale by roughness . even extremely tiny amounts of roughness , of the order of the pull - off distance for the highest asperities in contact , make the pull - off force orders of magnitude lower than the nominal value . ft theory seemed to be in good agreement with the experiments , within the limits of their accuracy . the only case where it seemed contradicted by some experimental evidence was in some measurements of adhesion in highly viscoelastic solids ( fuller and roberts , 1981 , briggs and briscoe 1977 ) . these experiments indeed showed an enhancement of adhesion with roughness , which was not expected in the purely elastic ft model . more recent evidence comes from the cleverly designed experiments using a two - scale axisymmetric problem with roughness between gelatin and perspex flat rough plates , by guduru and his group ( guduru ( 2007 ) , guduru _ et al _ ( 2007 ) , waters _ et al _ ( 2009 ) ) . they showed clearly that an elastic jkr analysis explains the strong increment of pull - off forces observed ( an order of magnitude increase ) , and that this comes with irreversible energy dissipated in many jumps of the force - area curve .
in this paper , we shall try therefore a new model for a rough surface , completely different from either asperity models , and pt model ( or persson , 2002a , 2002b ) .the model is based on the very simple idea johnson used several times in analyzing contacts near full contact , and which in turn could be attributed to bueckner ( 1958 ) : namely , that the gaps in an otherwise full contact are cracks that can not sustain finite stress intensity factors in the case of pure mechanical contact without adhesion , or that can sustain the appropriate stress intensity factor corresponding to the toughness (in terms of surface energy , ) , in the case of adhesion .further , it was used more recently by xu _( 2014 ) ( xjm theory ) for a random rough surface near full contact but without adhesion , whose model in fact inspired the present extension to the case with adhesion .before embarking into the full rough surface case , it is crucially important to understand qualitatively the mechanics of adhesion near full contact . the best strategy is to start from the relatively simple behavior of a single sinusoidal contact , as studied quantitatively by ken johnson under the jkr regime assumption ( johnson , 1995 ) .taking therefore a sinusoid ( in either 1d or full 2d ) with wavelength , amplitude , and considering the limit case without adhesion is the compressive mean pressure to flatten the sinusoid and achieve full contact , the adhesive case follows curves of area - load described in fig.1 , where we have considered the case of a 1d profile for simplicity because it is fully analytical , whereas probably the 2d case can not be solved in closed form , except near full contact and near pull - off . starting from the case of `` low adhesion '' , , we can describe the behavior during loading as follows .the curve has two extremes , a minimum and a maximum : under zero load , the contact jumps into a state of contact given by the intersection of the curve with the load axis . upon further increase of the load, it follows the stable curve , until it jumps into full contact at the maximum . at this pointthe strength is theoretically infinite ( more precisely , the theoretical stress of the material , which is very high ) unless we postulate the existence of some flaw of trapped air , as johnson suggests , and which gives a bounded tensile pressure for returning on the curve at the maximum . upon further unloading ,the curve is followed stably until the minimum is reached ( therefore we have a new part of the curve that is now stable , that under negative loads ) , where pull - off , or jump out - of - contact , is obtained . for a `` critical '' value of adhesion , which depends on modulus of the material as well as the two length scales in the problem, the surfaces will spontaneously snap into contact at zero load .this occurs for johnson s parameter where is the surface energy .what matters in particular to the present investigation is that the original contact curve ( that without adhesion ) changes sharply shape when adhesion is introduced , since the negative pressure region appears which is crucial to understand pull - off loads , and the transition towards infinite tension is also introduced , rather than having full contact at specific value of ( compressive ) pressure . 
however ,for what concerns the condition of jump into full contact , this is essentially given by the maximum of the curve which is a perturbation of the contact solution the curve for negative tension and pull - off requires a detailed analysis of the regime of low contact , and we believe for this part the model of fuller and tabor ( 1975 ) is a good starting point , as the hertzian jkr solution is a good starting point to study the sinusoidal indenter .notice finally that for we simply have that the pull - off force will be load dependent .* figure 1 .the relationship between * * * and contact area ratio * * * * for johnson s jkr solution of the single 1d sinusoidal adhesive contact problem .the change from the pure contact case * * , * to adhesive case * .in the classical random process theory , the pressure to cause full contact , , is a random variable , whose variance is easily related in terms of the power spectrum density ( psd ) of the profile or of the surface ( see manners and greenwood , 2006 , persson , 2001 , persson and tosatti , 2001 ) where is the variance of the slopes . here is the combined plane strain modulus of the contact materials .the distribution of pressures in full contact is also a gaussian distribution , namely \label{pfullcontact}\ ] ] this means that , strictly speaking , there is always a tail of negative ( tensile ) pressures , and indeed persson s solution for the adhesionless contact problem simply truncates this distribution by subtracting from it a specular distribution , with negative mean pressure , obtaining in fact a rayleigh distribution . for typical self - affine surfaces, eliminating the so called `` roll - off '' wavelength , the power spectrum is a power law above a certain long wavelength cut - off ( wavenumber ){cc}0 & \text{for } q < q_{0}\\ \frac{h}{2\pi}\left ( \frac{h_{0}}{\lambda_{0}}\right ) ^{2}\left ( \frac { q}{q_{0}}\right ) ^{-2\left ( h+1\right ) } & \text{for } q > q_{0}\end{array } \right.\ ] ] for any finite short wavelength cutoff , the moments are bounded , but in the limit , only is bounded ( the variance of the heights ) , whereas diverges as well as the other higher order ones .for this reason , it is already well known that full contact can not occur for any finite pressure , in the fractal limit , as it is for determinist fractal profiles like those defined with the weierstrass series in ciavarella et al .( 2000 ) . in the random case , persson s solution ,which is approximate but qualitatively correct ( wolf et al , 2014 ) shows that the contact area is only complete for infinite applied compression . according to bueckner s principle ( 1958 ) , as used by johnson ( 1995 ) and xjm ( xu _ et al ._ 2014 ) , we need to look carefully at the tensile stresses of the full contact pressure solution , which are applied by superposition to the gaps area , and compute the stress intensity factors .considering this contact pressure solution as a random process , it is clear that its moments will correspond to the original surface moments , shifted by a factor 2 and multiplied by according to taylor series expansion therefore , as manners and greenwood ( 2006 ) , and xjm ( xu _ et al ._ 2014 ) also suggest , the tensile part of the full contact solution can be approximated with quadratic equations near the pressure tensile summits , similarly to what is done for asperity theories for the real geometry of the surface .this generates a model of isolated `` non - contact '' areas . 
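A quick numerical check of the statement about the moments is given below: for the self-affine spectrum defined above, the zeroth moment saturates as the short-wavelength cut-off is pushed out, while the second and higher moments grow without bound for H < 1. The 2π normalisation factors of the isotropic spectrum are omitted and the roughness parameters are illustrative, so only the trends, not the absolute values, are meaningful here.

```python
import numpy as np
from scipy.integrate import quad

H, h0, lam0 = 0.8, 1.0, 1.0             # hurst exponent and long-wavelength roughness (illustrative)
q0 = 2.0 * np.pi / lam0                  # long-wavelength cut-off wavenumber

def C(q):
    # self-affine power-law psd above q0, with the prefactor quoted in the text
    return (H / (2.0 * np.pi)) * (h0 / lam0) ** 2 * (q / q0) ** (-2.0 * (H + 1.0))

def moment(n, zeta):
    # radial moment m_n up to the short-wavelength cut-off q1 = zeta * q0
    # (the 2*pi factors of the isotropic-surface definition are dropped)
    val, _ = quad(lambda q: q ** (n + 1) * C(q), q0, zeta * q0)
    return val

for zeta in (10, 100, 1000, 10000):
    print(zeta, moment(0, zeta), moment(2, zeta), moment(6, zeta))
# m0 saturates, while m2 and m6 keep growing with magnification, as stated in the text
```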
the xjm model ( xu _ et al ._ 2014 ) shows that this leads to the following:- * the individual tensile area is where is the radius of the asperity full contact pressure distribution , and is the pressure on the asperity ( the full contact solution in the tensile regions , with a change in sign due to the bueckner s superposition ( 1958 ) ) ; * the non - contact area can be exactly shown , according to bueckner s principle , to correspond to a pressurized crack . * for the pure contact case , the stress intensity factor ( sif ) has to be zero along the boundary , axysimmetric by assumption ( consideration of elliptical form do not change the results significantly)= 0\ ] ] leading to the conclusion that the non - contact area is larger than the tensile stress area , and in particular , it is in size of the original tensile area , notice however that the condition is also satisfied trivially by the solution since clearly the fact that the size of the gap goes to zero is also a solution is problematic for studying that limit .the point is we also have another condition and not just lefm : namely , that there can not be any tension , which means actually the strength is zero .if we consider adhesion , there can be tension , up to the theoretical strength .* for the case where there is in fact surface energy , the condition becomes more elaborate = k_{ic } \label{kic}\ ] ] and for example if , we would have an open crack that would tend to propagate .we expect naturally as an effect of adhesion .* there can not be solutions below the size where we need to take into account the transition towards a strength criterion .however , this is of concern only if there is a solution of full contact with finite pressure , and in any case , the suggestion of johnson to consider at this point the presence of trapped air and we shall return later on this point . substituting ( [ area_tension ] ) , into ( [ ct_noadhesion ] ) , we have an equation for = k_{ic } \label{kic2}\ ] ] it is clear that we can not solve this equation easily in a rigorous sense , since is a random variable .this equation ( [ kic2 ] ) is an implicit equation which defines and acts like the basic function defining the local area as a function of the `` separation '' in the equivalent asperity model created by the tensile full contact pressure `` surface '' .it is useful to derive separately this case , as done by xu et al ( 2014 ) . from ( [ kic2 ] ) , with there is no minimum tensile tension that can be sustained ( unlike with adhesion ) and the integration proceeds simply as suggested by eqt.38 of xu et al ( 2014) where is the asperity density of the full contact pressure surface , is the distribution of the pressures summits in this surface , and hence it can be solved easily . 
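The integration referred to at the end of the list is, at its core, a Gaussian-tail integral of the Greenwood–Williamson type. The snippet below verifies the closed form of that elementary integral against numerical quadrature; the paper's prefactors (asperity density, the shift by the mean pressure, and the adhesive offset) are deliberately left out, so this is a check of the mathematical building block only, not of the full non-contact-area formula.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

def gw_integral(t, V):
    """closed form of  int_t^inf (p - t) * N(p; 0, V) dp  used in asperity-type integrations."""
    return np.sqrt(V / (2.0 * np.pi)) * np.exp(-t**2 / (2.0 * V)) - 0.5 * t * erfc(t / np.sqrt(2.0 * V))

def gw_integral_numeric(t, V):
    f = lambda p: (p - t) * np.exp(-p**2 / (2.0 * V)) / np.sqrt(2.0 * np.pi * V)
    val, _ = quad(f, t, np.inf)
    return val

for t in (0.0, 0.5, 2.0):
    print(t, gw_integral(t, V=1.0), gw_integral_numeric(t, V=1.0))   # the two agree
```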
notice the integral is simply of the same mathematical form as in the standard greenwood and williamson s ( 1966 ) theory , where mean separation is replaced by mean pressure , and the geometrical surface is replaced by the pressure surface .in the present form , we obtain -\frac{1}{2}\frac{\overline{p}}{\sqrt{v}}erfc\left ( \frac{\overline{p}}{\sqrt{2v}}\right ) \right)\ ] ] further , at sufficiently large magnifications , , and , while also the density of asperities , which gives where is a prefactor of the order 1 ( the exact prefactor to make the area of contact zero at zero pressure is , but this is not the correct value at large pressures where we are concentrating our efforts) -\frac{1}{2}\frac{\overline{p}}{\sqrt{v}}erfc\left ( \frac { \overline{p}}{\sqrt{2v}}\right)\ ] ] this suggests that if we want to keep a constant value of given area of gap , upon increasing magnification , we need to keep constant , i.e. increase the mean pressure without limit .this is the well know behavior of pure contact problem , and it is confirmed here .it is important to discuss in details equation ( [ kic2 ] ) , as it governs the basic behavior of the gaps during the loading process .first of all , it is easier to manipulate it in terms of the tension on the `` pressurized cracks '' , as ( notice that being the radius of the pressure surface , has dimensions ) , by rewriting it as where the second term cancels out obviously in the adhesion - less case , but also with adhesion at very large .a few curves could be plotted to show the general trend of introducing a minimum which clearly corresponds to the maximum in the case in johnson s sinusoidal case ( see fig.1 ) near the ( unstable ) transition to full contact .we prefer however to arrive at a cleaner plot , which will be in fig.2 , to include in a unique curve .notice that as we increase the mean compression in the contact , the actual value of tension in the gaps decreases therefore , the loading progresses here by reducing the pressure on the y - axis .the minimum occurs at , which gives in which case , for where is a radius of pressure `` asperity '' .hence , we can rewrite ( [ kic3 ] ) as the result comes clean defining hence , we can rewrite ( [ eqz - intermedia ] ) as and this is indeed the curve plotted in fig .2 . 
in the adhesion - less case , , and and hence the curve diverges to infinity and hence this simple unique curve for the adhesive case represents a discontinuity and the dimensional quantities should be plotted to see more clearly the transition from the adhesion - less case to the adhesive one .* figure 2 .the relationship between peak dimensionless tension * * applied on a pressurized isolated gap and the size of the dimensionless gap radius squared * * * , together with a linear asymptotic approximation which turns out to be reasonably accurate also at low tensions .the minimum in the curve , which leads to the transition to full contact , is obviously overestimated by the approximate curve , so there will be a minor spurious effect of increase of area of the gaps .the way the plot is constructed does nt permit to see the limit of pure contact ( adhesive - less case ) , since both * * * * and * * * * are zero in that limit , and hence the adhesive - less curve becomes the very tail of the present one at infinity * * further , considering that we want to make estimates of the area of the gap that remains in contact , it is clear that a very good approximate solution could be {cc}\frac{1}{5}\widehat{c}_{t}^{2}+\frac{1}{2 } & \text{for } \widehat{c}_{t}^{2}>\frac{5}{2}\\ 1 & \text{for } 0<\widehat{c}_{t}^{2}<\frac{5}{2}\end{array } \right .\label{p - approx}\ ] ] since the other branch of the equation is unstable .the only significant approximation this introduces , it is to overestimate the size of the gap region radius just before jumping into contact . however , while this will only change some prefactors by small amounts , it simplifies the study of the problem enormously .now , supposing we start from a contact which does nt jump into contact completely and with an applied compressive load but that gaps area do exist .it is clear that the ( [ p - approx ] ) solution holds for each gap , depending on the local tension arising from cancelling the tension in the full contact pressure , and at any given mean applied load , each gap will have a certain level of pressure and therefore a certain equilibrium size as per ( [ p - approx ] ) , which may include some gaps closing .we need an integration process to establish the total area of gaps , and therefore the complementary remaining contact area . also , upon loading , the mean compressive pressure increases , and hence the tensile pressure on each gap decreases , so that more gaps will tend to reach the condition .therefore , a certain number of gaps will close , and the others will reduce their size . to understand if the final state is full contact or not , we should consider the adhesion - less case for reference . in this case , approximately , persson s solution indicates that only is the parameter ruling the contact area size .if we increase the short wavelength content , increasing , for a given contact area , we have to increase the mean pressure in proportion .given grows unbounded , the pressure to obtain any value of contact area ( in fact , not just full contact ) , grows unbounded . repeating this reasoning here , we need to observe if , for a given condition with adhesion , the contact area depends only on the ratio or not . 
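The piecewise approximation (p-approx) and its inverse, used in the next step of the integration, can be coded directly from the expressions quoted above; the short sketch below also checks that the two are consistent on the open-gap branch. Only the dimensionless forms appear here, so no assumption about the physical prefactors is needed.

```python
import numpy as np

def p_hat(c2):
    """approximate stable branch: dimensionless tension as a function of the gap radius squared."""
    c2 = np.asarray(c2, dtype=float)
    return np.where(c2 > 2.5, c2 / 5.0 + 0.5, 1.0)

def c2_of_p(p):
    """inverse of the approximation: dimensionless gap radius squared as a function of the tension."""
    p = np.asarray(p, dtype=float)
    return np.where(p > 1.0, 5.0 * (p - 0.5), 0.0)

# round-trip consistency on the open-gap branch (c_t^2 > 5/2, i.e. p_hat > 1)
c2 = np.linspace(2.5, 50.0, 6)
assert np.allclose(c2_of_p(p_hat(c2)), c2)
```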
rewriting and inverting the equation ( [ p - approx ] ) in terms of the gap radiuses{cc}5\left ( \widehat{p}-\frac{1}{2}\right ) & \text{for } \widehat{p}>1\\ 0 & \text{for } \widehat{p}<1 \end{array } \right.\ ] ] and in dimensional terms{cc}5\left ( \frac{\left ( p-\overline{p}\right ) } { p_{0}}-\frac{1}{2}\right ) & \text{for } \frac{\left ( p-\overline{p}\right ) } { p_{0}}>1\\ 0 & \text{for } \frac{\left ( p-\overline{p}\right ) } { p_{0}}<1 \end{array } \right .\label{ct - final}\ ] ] or \text { , for } \left ( p-\overline{p}\right ) > p_{0}\ ] ] we shall neglect the variation of with height , as otherwise the integration becomes too cumbersome .we simply assume a mean value given by random process theory , and develop an integration of the type where the function for is now given in ( [ ct - final ] ) , which results in -\frac{\overline { p}+p_{0}}{2\sqrt{v}}erfc\left ( \frac{\overline{p}+p_{0}}{\sqrt{2v}}\right ) \right]\ ] ] which agrees with the adhesiveless case when . in this format, it is clear that the effect of is exactly analogous to an increase of the mean pressure .the non - contact area would tend to stay constant with magnification now if we keep increase , instead of the applied pressure proportionally to , only , and this is simple to study . at sufficiently large magnifications , looking at the parameter ( [ pmin ] ) , and considering the usual scaling arguments on the psd and its moments where the exponent is positive for ( low fractal dimensions ) , when the dimensionless increases with magnification , or negative otherwise .hence , for low fractal dimensions , and a given applied mean pressure ( including zero ) , the non - contact area tends to decrease without limit , implying the tendency to full contact .naturally , the tendency will be stronger the higher i.e. the farther from the limit case of .there seems to be some connection to the conclusions and the parameters involved in the `` effective adhesion energy '' in persson and tosatti ( 2001 ) , which leads them to suggest that adhesion persists for low fractal dimensions ( the real range of surfaces , see persson ( 2014 ) ) . in their theory, they obtain from the elastic energy associated to the deformation in balance with surface energy of a full spectrum of frequencies , a parameter which seems related to where is the first order moment of the psd .this has the same qualitative power law behaviour ( besides the ) , but it should be emphasized different from in pt coming from the first moment of the surface , and in our case from a combination of roots of the second and 6-th moment . * figure 3 .the area of gaps * * for zero applied pressure decreases rapidly with the dimensionless adhesion pressure term * * * and when * * **we can assume full contact holds . * * fig.3 shows a plot of the non contact area at zero applied mean pressure , , which decays very rapidly with the dimensionless ratio , indicating there is chance of spontaneous full contact for , although the model is clearly approximate in that range since we are far from full contact . for and hence large fractal dimensions, the dimensionless ratio will decrease with magnification , and hence we return at large magnifications to the case of pure adhesion - less contact , for which we expect at zero load simply zero area . 
the situation is clear therefore also with applied compressive forces to the contact . contrary to the case without adhesion , where no mean pressure is sufficient to squeeze the contact flat , here for large hurst exponent the non - contact area decreases asymptotically to zero and the trends found at zero applied pressure are confirmed , as clearly seen in fig . 4 , where we assume the power law scaling with . [ figure 4 : panels a , b , c ] * figure 5 . the real contact area for different * * * as a function of the dimensionless applied compression * * * * shows that the contact becomes practically full at finite values of the * * * * ( and even zero load for * * * * , within the limits of our model ) . for * * * * , the magnification effect is neutral , whereas for * * * there is no significant difference with the adhesive - less case , and the curves are close to the adhesive - less case . we assume power law scaling for * * with * * * let us now discuss the case of pull - off in qualitative terms . this may occur in a range with small contact area , if either the contact does not proceed spontaneously into large fractions of the nominal contact area , or if the applied loading does not push toward this range . in that case contact is so clearly isolated on asperities that theories like fuller and tabor ( 1975 ) would be approximately true . the problem and limit of the present theory is , instead , that it can not deal with the low contact areas , where gaps are interacting and certainly the bends in the area - load curves which would appear in a set of isolated asperity theories are not reproduced here . the only estimate we can make is therefore to extrapolate pull - off from the point where the contact area is expected to be zero , and this gives , which leads to a new meaning for the pressure , which always increases with magnification . notice however that in absolute terms , is always increasing for all parameters , and this result is not easy to believe , and is in direct contrast not only to asperity theories but even to pastewka and robbins ( 2014 ) , who find that the only sticky surfaces are those that are smooth enough ( in terms of surface slopes ) to have the cohesive energy in the bulk giving up against the adhesion forces at the interface . a new model of adhesion has been discussed and shown to lead to very simple and clear results : there can not be spontaneous jump into contact for any surface having sufficiently multiscale content , no matter its fractal dimension . the model is devised near the full contact regime , so that the contact consists of a set of isolated gaps whose surfaces are then loaded , by bueckner's principle , by the tensile pressures of the `` linear '' full contact solution , which are approximated by parabolas since , by taylor's expansion , they must have this form near full contact when gaps are closing , and it is easy to write this for a gaussian surface .
the stable branch of the curve of the gap radius vs applied pressure in the gapsare then found imposing the stress intensity factor to be constant along the edge , and a very good approximation turns out to be a linear law with an offset , which permits extremely simple integration , which resemble those of asperity models , and indeed can be considered as a `` pressure asperity '' model for the pressurized gaps . in the case of no adhesion ,as already noticed by xu et al ( 2014 ) , the result turn out to be extremely similar to those of persson s contact theory ( 2001 ) which is a widely recognized as a good approximate solution near full contact , and which gives us confidence the results are also very similarly accurate where it tends to be exact in the limit of full contact ( for infinite mean pressure applied ) .it is shown that a dimensionless ratio governs the contact and a pressure can be defined which is scale dependent and includes the energy of adhesion .this pressure has a role equivalent to the mean applied pressure in the equation of the non - contact area and hence since it grows without limit for low fractal dimensions , permits full contact to be achieved for those surfaces .the conclusions can not be complete of unloading and pull - off force , which require further investigation .an equation which looks like an extension of persson s theory for contact mechanics has been derived .briggs g a d and briscoe b j 1977 the effect of surface topography on the adhesion of elastic solids j. phys .phys . 10 24532466 ciavarella , m. , et al .`` linear elastic contact of the weierstrass profile . '' proceedings of the royal society of london .series a : mathematical , physical and engineering sciences 456.1994 ( 2000 ) : 387 - 405 .persson , b n j , albohr , o , tartaglino , u , volokitin a i and tosatti .e _ _ _ _ 2005 . on the nature of surface roughness with application to contact mechanics , sealing , rubber friction and adhesion j. phys . : condens .matter 17 r1 , _yastrebov , v.a . ,anciaux , g. , molinari , j.f . , 2014 .from infinitesimal to full contact between rough surfaces : evolution of the contact area .j. solids struct .available from : ://arxiv.org / abs/1401.3800 . | recently , there has been some debate over the effect of adhesion on the contact of rough surfaces . classical asperity theories predict , in agreement with experimental observations , that adhesion is always destroyed by roughness except if the amplitude of the same is extremely small , and the materials are particularly soft . this happens for all fractal dimensions . however , these theories are limited due to the geometrical simplification , which may be particularly strong in conditions near full contact . we introduce a simple model for adhesion , which aims at being rigorous near full contact , where we postulate there are only small isolated gaps between the two bodies . the gaps can be considered as `` pressurized cracks '' by using ken johnson s idea of searching a corrective solution to the full contact solution . the solution is an extension of the adhesive - less solution proposed recently by xu , jackson , and marghitu ( xjm model ) ( 2014 ) . this process seems to confirm recent theories using the jkr theory , namely that the effect of adhesion depends critically on the fractal dimension . for , the case which includes the vast majority of natural surfaces , there is an expected strong effect of adhesion . 
only for large fractal dimensions , , seems for large enough magnifications that a full fractal roughness completely destroys adhesion . these results are partly paradoxical since strong adhesion is not observed in nature except in special cases . a possible way out of the paradox may be that the conclusion is relevant for the near full contact regime , where the strong role of flaws at the interfaces , and of gaps full of contaminant , trapped air or liquid in pressure , needs to be further explored . if conditions near full contact are not achieved on loading , probably the conclusions of classical asperity theories may be confirmed . roughness , adhesion , fuller and tabor s theory , fractals |
let be a random vector of dimension for some integer , and be its observed value .our argument is based on the multivariate normal model with unknown mean vector and covariance identity , where the probability with respect to ( [ eq : ynorm ] ) will be denoted as .let be an arbitrary - shaped region .the subject of this paper is to compute measures of confidence for testing the null hypothesis . observing ,we compute a frequentist -value , denoted , and also a bayesian posterior probability with a noninformative prior density of .this is the _ problem of regions _ discussed in literature ; , , and .the confidence measures were calculated by the bootstrap methods for complicated application problems such as the variable selection of regression analysis and phylogenetic tree selection of molecular evolution .these model selection problems are motivating applications for the issues discussed in this paper , and the normal model of ( [ eq : ynorm ] ) is a simplification of reality .let be a sample of size in application problems .we assume there exists a transformation , depending on , from to so that is approximately normalized .we assume only the existence of such a transformation , and do not have to consider its details .since we work only on the transformed variable in this paper for developing the theory , readers may refer to the literature above for the examples of applications . before the problem formulation is given in section [ sec : formulation ] , our methodology is illustrated in simple examples below in this section .the simplest example of would be the half space of , where the notation , instead of , is used to distinguish this case from another example given in ( [ eq : h0s3 ] ) .only is involved in this , and one - dimensional normal model is considered .taking as an alternative hypothesis and denoting the cumulative distribution function of the standard normal as with density , the unbiased frequentist -value is given as .a slightly complex example of is for .the rejection regions are and with a critical constant , which is obtained as a solution of the equation for a specified significance level .the left hand side of ( [ eq : defc ] ) is the rejection probability when is on the boundary of , i.e. , or .the frequentist -value is defined as the infimum of such that can be rejected .this becomes for and for .considering the case , say , we obtain and .these two simple cases of and exhibit what called _ paradox _ of frequentist -values .our simple examples of ( [ eq : h0s2 ] ) and ( [ eq : h0s3 ] ) suffice for this purpose , although they had actually used the spherical shell example explained later in section [ sec : three ] . indicated that a confidence measure should be monotonically increasing in the order of set inclusion of the hypothesis .noting , therefore , it should be , but it is not .this kind of `` paradox '' can not occur with bayesian methods , and holds always .considering the flat prior const , say , the posterior distribution of given becomes and the posterior probabilities for the case ( [ eq : numex1 ] ) are and .the `` paradox '' of frequentist -values may be nothing surprise for a frequentist statistician , but a natural consequence of the fact that is for a one - sided test and is for a two - sided test ; the power of testing is higher , i.e. , -values are smaller , for an appropriately formulated one - sided test than a two - sided test . 
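The "paradox" can be reproduced numerically in a few lines. The interval half-width and the observed value used in the text are not repeated here, so the constants below are purely illustrative; the formulas follow directly from the one-dimensional normal model and the flat prior described above (and assume the observed value is positive).

```python
from scipy.stats import norm

a, y = 0.5, 1.8     # hypothetical interval half-width and observed value (not those of the text)

# frequentist p-values
p_half = norm.cdf(a - y)                          # one-sided test of the half-line {mu <= a}
p_slab = norm.cdf(-y - a) + norm.cdf(a - y)       # two-sided test of the interval {-a <= mu <= a}

# bayesian posterior probabilities under the flat prior, mu | y ~ N(y, 1)
post_half = norm.cdf(a - y)
post_slab = norm.cdf(a - y) - norm.cdf(-a - y)

print(p_half, p_slab)        # p_slab > p_half although the interval is contained in the half-line
print(post_half, post_slab)  # the posterior probabilities respect the set inclusion
```

Running this shows the frequentist measure violating monotonicity with respect to set inclusion, while the Bayesian measure does not, which is exactly the point of the example.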
in this paper, we do not intend to argue the philosophical question of whether to be frequentist or to be bayesian , but discuss only computation of these two confidence measures .computation of the confidence measures is made by the bootstrap resampling of .let be a bootstrap sample of size obtained by resampling with replacement from .the idea of bootstrap probability , which is introduced first by to phylogenetic inference , is to generate many times , say , and count the frequency that a hypothesis of interest is supported by the bootstrap samples . the bootstrap probability is computed as .recalling the transformation to get from , we get by applying the same transformation to . for typical problems ,the variance of is approximately proportional to the factor as mentioned in .although we generate in practice , we only work on in this paper .more specifically , we formally consider the parametric bootstrap which is analogous to ( [ eq : ynorm ] ) but the scale is introduced for multiscale bootstrap .the bootstrap probability is defined as where denotes the probability with respect to ( [ eq : yboot ] ) . for computing a crude confidence measure , we set , or in terms of , so that the distribution ( [ eq : yboot ] ) for is equivalent to the posterior ( [ eq : postmuflat ] ) for .this gives an interpretation of the bootstrap probability that for any under the flat prior . in the multiscale bootstrap of ,however , we may intentionally alter the scale from , or to change from in terms of for computing .let be different values of scale , which we specify in advance . in our numerical examples , scales are equally spaced in log - scale between and . for each , we generate with scale for times , and observe the frequency .the observed bootstrap probability is .how can we use the observed for computing ?let us assume that can be expressed as ( [ eq : h0s3 ] ) but we are unable to observe the values of and . nevertheless , by fitting the model to the observed , we may compute an estimate of the parameter vector with constraints and .the confidence measures are then computed as and . in casewe are not sure which of ( [ eq : h0s2 ] ) and ( [ eq : h0s3 ] ) is the reality , we may also fit to the observed s and compare the aic values for model selection . in practice ,we prepare collection of such models describing the scaling - law of bootstrap probability , and choose the model which minimizes the aic value .the examples in section [ sec : intro ] were very simple because the boundary surfaces of the regions are flat . in the following sections , we work on generalizations of ( [ eq : h0s2 ] ) and ( [ eq : h0s3 ] ) by allowing curved boundary surfaces . for convenience ,we denote with and .similarly , we denote with . as shown in fig .[ fig : h03 ] , we consider the region of the form , where and are arbitrary functions of .this region will reduce to ( [ eq : h0s3 ] ) if for all .the region may be abbreviated as two other regions and as well as two boundary surfaces and are also shown in fig . [fig : h03 ] .we define , or equivalently as the boundary surfaces of the hypotheses are for the region , and for the region .we do not have to specify the functional forms of and for our theory , but assume that the magnitude of and is very small. technically speaking , and are _ nearly flat _ in the sense of . 
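A minimal sketch of the counting step is given below, with the normal model (eq. yboot) standing in for actual resampling. The membership test, the number of scales, and their range are placeholders; in applications the scales are tied to the resample sizes through sigma^2 = n/n', and the counts are obtained from real bootstrap replicates rather than Gaussian draws.

```python
import numpy as np

rng = np.random.default_rng(0)

def in_region(mu):
    # membership test for the hypothesis region H; here a hypothetical slab in the
    # first coordinate, standing in for whatever region the application defines
    return np.abs(mu[..., 0]) <= 0.5

def multiscale_bp(y, sigmas, B=10000):
    """observed bootstrap probabilities C_sigma / B for each scale sigma."""
    out = []
    for s in sigmas:
        ystar = y + s * rng.standard_normal((B, y.size))   # y* ~ N(y, sigma^2 I)
        out.append(in_region(ystar).mean())
    return np.array(out)

sigmas = np.exp(np.linspace(np.log(1/3), np.log(3), 13))   # log-spaced scales (range illustrative)
y = np.array([1.2, 0.3])
bp = multiscale_bp(y, sigmas)
```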
introducing an artificial parameter , a function is called nearly flat when and -norms of and its fourier transform are bounded .we develop asymptotic theory as , which is analogous to with the relation .the whole parameter space is partitioned into two regions as or three regions as .these partitions are treated as disjoint in this paper by ignoring measure - zero sets such as .bootstrap methods for computing frequentist confidence measures are well developed in the literature as reviewed in section [ sec : two ] .the main contribution of our paper is then given in section [ sec : three ] for the case of three regions . in section [ sec : bayes ] , this new computation method is used also for bayesian measures of . note that the flat prior const in the previous section was in fact carefully chosen so that for ( [ eq : h0s2 ] ) . this same led to for ( [ eq : h0s3 ] ) .our definition of given in ( [ eq : h03 ] ) is a simplest formulation , yet with a reasonable generality for applications , to observe such an interesting difference between the two confidence measures .multiscale bootstrap computation of the confidence measures for the three regions case is described in section [ sec : model ] .simulation study and some discussions are given in section [ sec : simulation ] and [ sec : discussion ] , respectively .mathematical proofs are mostly given in appendix .is the shaded area between surfaces and .,scaledwidth=75.0% ]in this section , we review the multiscale bootstrap of for computing a frequentist -value of `` one - sided '' test of . let be the inverse function of .the bootstrap -value of , defined as , is convenient to work with . by multiplying to it , is called the normalized bootstrap -value .theorem 1 of , as reproduced below , states that the -value of is obtained by extrapolating the normalized bootstrap -value to , or equivalently in terms of .[ thm : onesided ] let be a region of ( [ eq : h02 ] ) with nearly flat . given and ,consider the normalized bootstrap -value as a function of ; we denote it by .let us define a frequentist -value as and assume that the right hand side exists .then for and , meaning that the coverage error , i.e. , the difference between the rejection probability and , vanishes asymptotically as , and that the -value , or the associated hypothesis testing , is `` similar on the boundary '' asymptotically .[ [ proof ] ] proof + + + + + here we show only an outline of the proof by allowing the coverage error of , instead of , in ( [ eq : eh02 ] ) .this is a brief summary of the argument given in .first define the expectation operator for a nearly flat function as where on the right hand side denotes the expectation with respect to ( [ eq : yboot ] ) , that is , for with using the expectation operator , we next define two quantities and work on the bootstrap probability as the third equation is obtained by the taylor series around , and the last equation is obtained by . rearranging ( [ eq : bph02 ] ) , we then get the scaling - law of the normalized bootstrap -value as on the other hand , eq .( 5.10 ) of shows , by utilizing fourier transforms of surfaces , that ( [ eq : eh02 ] ) holds with coverage error for a -value defined as the proof completes by combining ( [ eq : scalelawh02 ] ) and ( [ eq : phinvh02 ] ) . 
a hypothesis testing is to reject when observing for a specified significance level , say , , and otherwise not to reject .the left hand side of ( [ eq : eh02 ] ) is the rejection probability , which should be for and for to claim the unbiasedness of the test . on the other hand ,the test is called similar on the boundary when the rejection probability is equal to for . in this paper, we implicitly assume that is decreasing as moves away from .the rejection probability increases continuously as moves away from .this assumption is justified when is sufficiently small so that the behavior of is not very different from that for ( [ eq : h0s2 ] ) .therefore , ( [ eq : eh02 ] ) implies that the -value is approximately unbiased asymptotically as .we can think of a procedure for calculating based on ( [ eq : ph02 ] ) . in the procedure ,the functional form of should be estimated from the observed s using parametric models .then an approximately unbiased -value is computed by extrapolating to .our procedure works fine for the particular of ( [ eq : h0s2 ] ) , because and .our procedure works fine also for any of ( [ eq : h02 ] ) when the boundary surface is smooth .the model is given as using parameters , and thus an approximately unbiased -value can be computed by .it may be interesting to know that the parameters are interpreted as geometric quantities ; is the distance from to the surface , is the mean curvature of the surface , and , , is related to -th derivatives of .however , the series expansion above does not converge , i.e. , does not exist , when is nonsmooth .for example , serves as a good approximating model for cone - shaped , for which does not take a value of .this observation agrees with the fact that an unbiased test does not exist for cone - shaped as indicated in the argument of . instead of ( [ eq : ph02 ] ) ,the modified procedure of calculates a -value defined as for an integer and a real number .this is to extrapolate back to by using the first terms of the taylor series around . the coverage error in ( [ eq : eh02 ] )should reduce as increases , but then the rejection region violates the desired property called monotonicity in the sense of and . for taking the balance , we chose and for numerical examples in this paper .the following theorem is our main result for computing a frequentist -value of `` two - sided '' test of . the proof is given in appendix [ app : proof - twosided ] .[ thm : twosided ] let be a region of ( [ eq : h03 ] ) with nearly flat and . given and ,consider the approximately unbiased -value by applying theorem [ thm : onesided ] to for .assuming these two -values exist , let us define a frequentist -value of as for example , ( [ eq : ph03 ] ) holds for the exact -value of ( [ eq : h0s3 ] ) defined in section [ sec : intro ] .then for and , meaning that is approximately unbiased asymptotically as . for illustrating the methodology ,let us work on the spherical shell example of , for which we can still compute the exact -values to verify our methods .the region of interest is as shown in panel ( a ) of fig .[ fig : exregions ] .we consider the case , say , so that this region is analogous to ( [ eq : numex1 ] ) except for the curvature .the exact -value for is easily calculated knowing that is distributed as the chi - square distribution with degrees of freedom and noncentrality .writing this random variable as , the exact -value is , that is , the probability of observing for .similarly , the exact -value for is . 
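The exact one-sided p-values for the spherical shell can be evaluated with the noncentral chi-square distribution, since ||y||^2 has k degrees of freedom and noncentrality ||mu||^2 under the normal model. The shell radii, the dimension, and the observed point below are assumed illustrative values, and the assignment of the two one-sided regions to the notation of the text is my reading of the example rather than a quotation.

```python
import numpy as np
from scipy.stats import ncx2

k, a, b = 2, 1.0, 3.0            # hypothetical shell  a <= ||mu|| <= b  in R^k
y = np.array([2.6, 0.0])         # hypothetical observation
t = np.sum(y**2)                 # test statistic ||y||^2

# one-sided region { ||mu|| >= a }: reject for small ||y||^2, evaluated on the boundary ||mu|| = a
p_inner = ncx2.cdf(t, df=k, nc=a**2)
# one-sided region { ||mu|| <= b }: reject for large ||y||^2, evaluated on the boundary ||mu|| = b
p_outer = ncx2.sf(t, df=k, nc=b**2)
print(p_inner, p_outer)
```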
in a similar way as for ( [ eq : h0s3 ] ) , the exact -value for is computed numerically as , although the procedure is a bit complicated as explained below .we first consider the critical constants and for the rejection regions and . by equating the rejection probability to for ,that is , for , we may get the solution numerically as and for , say .the -value is defined as the infimum of such that can be rejected . to check if theorem [ thm : twosided ] is ever usable , we first compute ( [ eq : ph03 ] ) using the exact values of and . then we get , which agrees extremely well to the exact .the spherical shell is approximated by ( [ eq : h03 ] ) only locally in a neighborhood of but not as a whole .nevertheless , theorem [ thm : twosided ] worked fine .we next think of the situation that bootstrap probabilities of and are available but not their exact -values .we apply the procedure of section [ sec : two ] separately to the two regions for calculating the approximately unbiased -values . to work on the procedure ,here we consider a simple model with parameters for let be the normalized bootstrap -value of for . by assuming the simple model for , we fit to the observed multiscale bootstrap probabilities of for estimating the parameters . the actual estimation was done using the method described in section [ sec : highjointbp ] , but we would like to forget the details for the moment .we get , for , and similarly , for . s are interpreted as the distances from to the boundary surfaces , and the estimates agree well to the exact values for and for . then the approximately unbiased -values are computed by ( [ eq : ph02 ] ) as and , and thus ( [ eq : ph03 ] ) gives , which again agrees well to the exact .we finally think of a more practical situation , where the bootstrap probabilities are not available for and , but only for .this situation is plausible in applications where many regions are involved and we are not sure which of them can be treated as or in a neighborhood of ; see for an illustration .we consider a simple model , with parameters for by assuming that the two surfaces are curved in the same direction with the same magnitude of curvature . for estimating , ( [ eq : fh0 ] ) is fitted to the observed multiscale bootstrap probabilities of with constraints and , and is obtained as , , . then the approximately unbiased -values are computed by ( [ eq : ph02 ] ) as and and thus ( [ eq : ph03 ] ) gives .this is not very close to the exact , partly because the model is too simple .however , it is a great improvement over . 
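The "bit complicated" numerical procedure for the two-sided exact p-value sketched at the beginning of this example can be organised as two nested root-finding problems: solve for the pair of critical constants at a given significance level, then search for the level at which the observed point first enters the rejection region. The sketch below does this with off-the-shelf solvers; the shell parameters are the same illustrative ones as before, and no attempt is made to reproduce the numbers quoted in the text.

```python
import numpy as np
from scipy.stats import ncx2
from scipy.optimize import fsolve, brentq

k, a, b = 2, 1.0, 3.0                      # hypothetical shell  a <= ||mu|| <= b  in R^k
y = np.array([2.6, 0.0])                   # hypothetical observation
t = np.sqrt(np.sum(y**2))

def reject_prob(c1, c2, r):
    """P( ||Y|| < c1 or ||Y|| > c2 ) when ||mu|| = r."""
    d = ncx2(df=k, nc=r**2)
    return d.cdf(c1**2) + d.sf(c2**2)

def critical_constants(alpha):
    # the rejection probability equals alpha on both boundary spheres ||mu|| = a and ||mu|| = b
    eqs = lambda c: [reject_prob(c[0], c[1], a) - alpha,
                     reject_prob(c[0], c[1], b) - alpha]
    return fsolve(eqs, x0=[0.5 * a, 2.0 * b])

def exact_pvalue():
    # infimum of alpha at which the observed point is rejected; the bracket assumes a
    # moderate p-value so that brentq finds a sign change
    def h(alpha):
        c1, c2 = critical_constants(alpha)
        return max(c1 - t, t - c2)          # positive exactly when t lies in the rejection region
    return brentq(h, 1e-4, 0.9)

print(exact_pvalue())
```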
)choosing a good prior density is essential for bayesian inference .we consider a version of noninformative prior for making the posterior probability acquire frequentist properties .first note that the sum of bootstrap probabilities of disjoint partitions of the whole parameter space is always 1 .for the two regions case , , and thus .therefore for the approximately unbiased -values computed by ( [ eq : ph02 ] ) , suggesting that we may think of a prior so that .this was the idea of to define a bayesian measure of confidence of .since each of and can be treated as by changing the coordinates , we may assume a prior satisfying it follows from that priors satisfying ( [ eq : pmatch2 ] ) are called probability matching priors .the theory has been developed in literature for posterior quantiles of a single parameter of interest .the examples are the flat prior const for the flat boundary case in section [ sec : intro ] , and for the spherical shell case in section [ sec : three ] .our multiscale bootstrap method provides a new computation to . we may simply compute ( [ eq : pp03 ] ) with the and used for computing of ( [ eq : ph03 ] ) .although we implicitly assumed the matching prior , we do not have to know the functional form of .for the spherical shell example , we may use the exact and to get , or more practically , use only bootstrap probabilities of to get .we first recall the estimation procedure of before describing our new proposals for improving the estimation accuracy in the following sections .let be a parametric model of bootstrap probability such as ( [ eq : fh1 ] ) for or ( [ eq : fh0 ] ) for . as already mentioned in section [ sec : intro ] , the model is fitted to the observed , .since is distributed as binomial with probability and trials , the log - likelihood function is .the maximum likelihood estimate is computed numerically for each model .let denote the number of parameters .then may be compared for selecting a best model among several candidate models . has devised the multistep - multiscale bootstrap as a generalization of the multiscale bootstrap .the usual multiscale bootstrap is a special case called as the one - step multiscale bootstrap .our new proposal here is to utilize the two - step multiscale bootstrap for improving the estimation accuracy of , although the two - step method was originally used for replacing the normal model of ( [ eq : ynorm ] ) with the exponential family of distributions .recalling that is obtained by resampling from , we may resample again from , instead of , to get a bootstrap sample of size , and denote it as .we formally consider the parametric bootstrap where is a new scale defined by . in , only the marginal distribution is considered to detect the nonnormality .for the second step , should have the same functional form as for the normal model .here we also consider the joint distribution of given .it is -dimensional multivariate normal with .we denote the probability and the expectation by and , respectively .then , the joint bootstrap probability is defined as let be a parametric model of or . to work on specific forms of , we need some notations .let be distributed as bivariate normal with mean , variance , and covariance .the distribution function is denoted as , where the joint density is explicitly given as .we also define . 
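Once the two one-sided approximately unbiased p-values are available, the two confidence measures can be assembled from them. The combinations below are the ones consistent with the flat-boundary interval example of the introduction (they reproduce its closed forms exactly) and with the additivity argument for the matching prior sketched above; they are stated here as that consistent reading, not as quotations of the paper's numbered equations.

```python
from scipy.stats import norm

def two_sided_measures(p1, p2):
    """combine two one-sided AU p-values into the two confidence measures for the middle region.

    these forms reproduce the interval example of the introduction; results may need
    clipping to [0, 1] far from the region."""
    p_freq  = 1.0 - abs(p1 - p2)       # two-sided frequentist measure
    p_bayes = p1 + p2 - 1.0            # posterior probability under the matching prior
    return p_freq, p_bayes

# check against the closed forms of the interval example (a, y illustrative, y > 0)
a, y = 0.5, 1.8
p1 = norm.cdf(y + a)     # one-sided p for the region { mu >= -a }
p2 = norm.cdf(a - y)     # one-sided p for the region { mu <= a }
print(two_sided_measures(p1, p2))
print(norm.cdf(-y - a) + norm.cdf(a - y),     # exact two-sided p-value of the interval
      norm.cdf(a - y) - norm.cdf(-a - y))     # exact posterior probability of the interval
```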
then a generalization of ( [ eq : scalelawh02 ] ) is given as follows .the proof is in appendix [ app : proof - jointbp ] .[ lem : jointbp ] for sufficiently small , the joint bootstrap probabilities for and are expressed asymptotically as where , , , , and .thus is specified for as ( [ eq : gh1 ] ) with , using the function of ( [ eq : fh1 ] ) .similarly , is specified for as ( [ eq : gh0 ] ) with , , , using and functions of ( [ eq : fh0 ] ) .we may specify sets of , denoted as . in our numerical examples , are specified as mentioned in section [ sec : intro ] and s are specified so that holds always , meaning . for each , we generate with many times , say , and observe the frequencies , , and .note that only one is generated from each here , whereas thousands of s may be generated from each in the double bootstrap method .the log - likelihood function becomes .in fact , we have used this two - step multiscale bootstrap , instead of the one - step method , in all the numerical examples .the one - step method had difficulty in distinguishing with very small from with moderate but heavily curved .the two - step method avoids this identifiability issue because a small value of indicates that is small ; it is automatically done , of course , by the numerical optimization of .the asymptotic errors of the scaling law of the bootstrap probabilities in ( [ eq : bph02 ] ) and ( [ eq : gh1 ] ) are of order .as shown in the following lemma , the errors can be reduced to by introducing correction terms of for improving the parametric model of . the proof is given in appendix [ app : proof - jointbph ] .[ lem : jointbph ] for sufficiently small , the bootstrap probabilities for are expressed asymptotically as where , , and are those defined in lemma [ lem : jointbp ] , and the higher order correction terms are defined as , , and using for deriving a very simple model for , we think of a situation and , and consider asymptotics as .this formulation is only for convenience of derivation .the two values and will be specified later by looking at the functional form of . a straightforward , yet tedious , calculation ( the details are not shown ) gives and this correction term was in fact already used for the simple model of the spherical shell example in section [ sec : three ] , where the parameter was actually instead of .we did not change the for adjusting and , meaning that , instead of , was modelled as . comparing the coefficients of , we get and , and thus . when ( [ eq : fh1 ] ) was fitted to , the estimated parameter was close to the true value .for the numerical example mentioned above , we have also fitted the same model but being fixed .the estimated parameters are , , and the -value is .these values are not much different from those shown in section [ sec : three ] . however , the aic value improved greatly by the introduction of , and the aic difference was 96.67 , mostly because improved fitting for the joint bootstrap probability of ( [ eq : achjoint ] ) .my experience suggests that consideration of the term is useful for choosing a reasonable model of .let us consider a cone - shaped region in with the angle at the vertex being as shown in panel ( b ) of fig .[ fig : exregions ] .this cone can be regarded , locally in a neighborhood of with appropriate coordinates , as of ( [ eq : h03 ] ) when is close to one of the edges but far from the vertex , or as of ( [ eq : h02 ] ) when is close to the vertex . 
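the two-step resampling that produces the joint bootstrap probabilities used in the lemma can be mimicked directly under the normal model, as in the sketch below: a single second-step sample is drawn from each first-step sample and the joint in/out frequencies are recorded. the half-space region and the numbers are placeholders for the hypothesis region of interest.

```python
import numpy as np

rng = np.random.default_rng(0)

def in_region(y):
    return y[..., 0] <= 0.0                     # illustrative half-space region

def two_step_frequencies(y, sigma2, sigma2p, n_boot=10000):
    d = y.shape[0]
    y1 = y + np.sqrt(sigma2) * rng.standard_normal((n_boot, d))    # first-step samples
    y2 = y1 + np.sqrt(sigma2p) * rng.standard_normal((n_boot, d))  # one second-step sample each
    a, b = in_region(y1), in_region(y2)
    return {"bp(sigma2)": a.mean(),             # ordinary one-step bootstrap probability
            "joint_in_in": (a & b).mean(),
            "joint_in_out": (a & ~b).mean(),
            "joint_out_in": (~a & b).mean()}

y_obs = np.array([0.8, 0.0, 0.0])               # observed point, distance 0.8 from the boundary
print(two_step_frequencies(y_obs, sigma2=1.0, sigma2p=1.0))
```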
in this section , the coneis labelled either by or depending on which view we are taking .cones in appear in the problem of multiple comparisons of three elements , say , and corresponds to the hypothesis that the mean of is the largest among the three .the angle at the vertex is related to the covariance structure of the elements .although an unbiased test does not exist for this region , we would like to see how our methods work for reducing the coverage error .contour lines of confidence measures , denoted in general , at the levels 0.05 and 0.95 are drawn in fig .[ fig : rejregion10 ] .the rejection regions of the cone and the complement of the cone are and , respectively , at .we observe that decreases as moves away from the cone in panels ( a ) , ( b ) , and ( c ) ; see appendix [ app : sim ] for the details of computation . on the other hand , figs .[ fig : rejprob10 ] and [ fig : rejprob20 ] show the rejection probability . for an unbiased test, it should be 5% for all the so that the coverage error is zero . in panel ( a ) of fig .[ fig : rejregion10 ] , is computed by the bootstrap samples of .this bootstrap probability , labelled as bp in fig .[ fig : rejprob10 ] , is heavily biased near the vertex , and this tendency is enhanced when the angle becomes in fig .[ fig : rejprob20 ] . in panel ( b ) of fig .[ fig : rejregion10 ] , is computed by regarding the cone as of ( [ eq : h02 ] ) .the dent of and the bump of become larger than those of panel ( a ) of fig .[ fig : rejregion10 ] near the vertex , confirming what we observed in . as seen in figs .[ fig : rejprob10 ] and [ fig : rejprob20 ] , the coverage error of , labelled as `` one sided '' there , is smaller than that of bp . in panel ( c ) of fig .[ fig : rejregion10 ] , is also computed by regarding the cone as of ( [ eq : h03 ] ) , and then one of and is selected as by comparing the aic values at each .this , labelled as `` two sided freq '' in figs .[ fig : rejprob10 ] and [ fig : rejprob20 ] , improves greatly on the one - sided -value .the coverage error is almost zero except for small s , verifying what we attempted in this paper .the corresponding bayesian posterior probability , labelled as `` two sided bayes , '' performs similarly .note that the coverage error was further reduced near the vertex by setting simply without the model selection ( the result is not shown here ) ; however , the shapes of and became rather weird then in the sense mentioned at the last paragraph of section [ sec : two ] . and .the cone - shaped region is rotated so that one of the edges is placed along the x - axis .solid curves are drawn for ( a ) the bootstrap probability with , and for ( b ) the frequentist -value for `` one - sided '' test . in panel ( c ) , is switched to the frequentist -value for `` two - sided '' test when appropriate . the dotted curve in panel ( c )is for the bayesian posterior probability.,scaledwidth=70.0% ] .,scaledwidth=75.0% ] .,scaledwidth=75.0% ]in this paper , we have discussed frequentist and bayesian measures of confidence for the three regions case , and have proposed a new computation method using the multiscale bootstrap technique . 
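the coverage-error curves discussed above are monte-carlo estimates of the rejection probability at boundary points, which can be reproduced generically as below. the p-value function here is only a placeholder (the exact p-value for a flat boundary); the bootstrap-based measures of the paper would be plugged in instead, and an unbiased test should return a frequency close to the nominal 5%.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def p_value(y):
    return 1.0 - norm.cdf(y[0])                 # placeholder: exact p-value for a flat boundary

def rejection_probability(mu, alpha=0.05, n_rep=10000):
    d = len(mu)
    rejections = 0
    for _ in range(n_rep):
        y = np.asarray(mu) + rng.standard_normal(d)
        if p_value(y) <= alpha:
            rejections += 1
    return rejections / n_rep

print(rejection_probability(mu=[0.0, 0.0]))     # close to 0.05 for an unbiased test
```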
in this method , aic played an important role for choosing appropriate parametric models of the scaling - law of bootstrap probability .simulation study showed that the proposed frequentist measure performs better for controlling the coverage error than the previously proposed multiscale bootstrap designed only for the two regions case .a generalization of the confidence measures gives a frequentist interpretation of the bayesian posterior probability as follows .let us consider the situation of theorem [ thm : twosided ] .if we strongly believe that , we could use the one - sided -value , instead of the two sided .similarly , we might use if we believe that . by making the choice`` adaptively , '' someone may want to use , although it is not justified in terms of coverage error . by connecting and linearly using an index for the number of `` sides , '' we get is easily verified that and , indicating that the bayesian posterior probability defined in section [ sec : bayes ] can be interpreted , interestingly , as a frequentist -value of `` zero - sided '' test of .although we have no further consideration , this kind of argument might lead to yet another compromise between frequentist and bayesian .our formulation is rather restrictive .we have considered only the three regions case by introducing the surface in addition to the surface of the two regions case . also these two surfaces are assumed to be nearly parallel to each other .it is worth to elaborate on generalizations of this formulation in future work , but too much of complication may result in unstable computation for estimating the scaling - law of bootstrap probability .aic will be useful again in such a situation .first we consider rejection regions of testing for a specified by modifying the two rejection regions of ( [ eq : h0s3 ] ) .since and are nearly flat , the modified regions should be expressed as and using nearly flat functions and .the constant is the same one as defined in ( [ eq : defc ] ) .write , for brevity sake .we evaluate the rejection probability for .let for a moment , and put . by applying the argument of ( [ eq : bph02 ] ) to but ( [ eq : yboot ] )is replaced by ( [ eq : ynorm ] ) , we get .the same argument applied to gives . rearranging these two formula with the identity for an unbiased test, we get an equation . by exchanging the roles of and ,the equation becomes for with .these two equations are expressed as for solving this equation with respect to and , first apply the inverse matrix of the matrix from the left in ( [ eq : prftwo - e ] ) , and then apply the inverse operator of so that next we obtain an expression of -value corresponding to the rejection regions . is defined as the value of for which either of and holds .note that , , and depend on .let us assume and thus for a moment .write , for brevity sake . recalling ( [ eq : defc ] ) , , where in ( [ eq : prftwo - r ] ) can be expressed as therefore , . by applying ( [ eq : phinvh02 ] ) to and , respectively , we get and , and thus . by exchanging the roles of and , we have for . by taking the minimum of these two expressions of , we finally obtain ( [ eq : ph03 ] ) .this -value satisfies ( [ eq : prftwo - a ] ) with error , and thus ( [ eq : eh03 ] ) holds .the argument is very similar to ( [ eq : bph02 ] ) in the proof of theorem [ thm : onesided ] . given , the joint distribution of and is .therefore , , where and are defined in ( [ eq : e1e2 ] ) .taking the expectation with respect to , we have . 
for proving ( [ eq : gh1 ] ) , considering the taylor series around , we obtain with for completing the proof .next we show ( [ eq : gh0 ] ) . the conditional probability given is , where taking the expectation with respect to , we have .we only have to consider the taylor series with , for completing the proof . by considering a higher - order term of the taylor series in ( [ eq : bph02 ] ) , we obtain , proving ( [ eq : ach1 ] ) as well as ( [ eq : ach2 ] ) . on the other hand , ( [ eq : achjoint ] ) is shown by considering higher - order terms of the taylor series in ( [ eq : expjointbp ] ) as the proof completes by rearranging the above formula with contour lines in fig . [ fig : rejregion10 ] are drawn by computing -values at all grid points ( ) of step size 0.05 in the rectangle area ; this huge computation was made possible by parallel processing using up to 700 cpus .the computation takes a few minutes per each grid point per cpu .our algorithm is implemented as an experimental version of the scaleboot package of , which will be included soon in the release version available from cran .the rejection probabilities in figs .[ fig : rejprob10 ] and [ fig : rejprob20 ] are computed by generating according to ( [ eq : ynorm ] ) for 10000 times , and then counting how many times or is observed .this computation is done for each with the distance from the vertex , i.e. , in the coordinates of fig .[ fig : rejregion10 ] . for computing and , the two - step multiscale bootstrap described in section [ sec : twostep ] was performed with the sets of scales , , specified there .the parametric bootstrap , instead of the resampling , was used for the simulation .the number of bootstrap samples has increased to for making the contour lines smoother , while it was in the other results . for , we have considered the singular model of defined as for cones , and performed the model fitting method described in section [ sec : highjointbp ] . from the taylor series of this around , we get , for computing the higher order correction term . we have also considered submodels by restricting some of to specified values , and the minimum aic model is chosen at each .the frequentist -value is computed by ( [ eq : pk ] ) with and . for , we have considered the same singular model for the two surfaces by assuming they are curved in the opposite directions with the same magnitude of curvature .more specifically , the two functions in ( [ eq : fh0 ] ) are defined as and .the parameters are estimated by the model fitting method described in section [ sec : twostep ] .submodels are also considered and model selection is performed using aic .the frequentist -value is computed by ( [ eq : ph03 ] ) , and the bayesian posterior probability is computed by ( [ eq : pp03 ] ) . the rejection probabilities of other two commonly used measures are shown only for reference purposes ; see for the details .the rejection probability of the multiple comparisons , denoted mc here , is always below 5% in panel ( a ) , and the coverage error becomes zero at the vertex . on the other hand ,the rejection probability of the -test is always below 5% in panel ( b ) , and the coverage error reduces to zero as . | a new computation method of frequentist -values and bayesian posterior probabilities based on the bootstrap probability is discussed for the multivariate normal model with unknown expectation parameter vector . the null hypothesis is represented as an arbitrary - shaped region . 
we introduce new parametric models for the scaling - law of bootstrap probability so that the multiscale bootstrap method , which was designed for one - sided test , can also computes confidence measures of two - sided test , extending applicability to a wider class of hypotheses . parameter estimation is improved by the two - step multiscale bootstrap and also by including higher - order terms . model selection is important not only as a motivating application of our method , but also as an essential ingredient in the method . a compromise between frequentist and bayesian is attempted by showing that the bayesian posterior probability with an noninformative prior is interpreted as a frequentist -value of `` zero - sided '' test . |
let be a simple undirected graph and a subset of .consider the following repetitive polling game . at round 0 the vertices of colored white and the other vertices are colored black . at each round , each vertex is colored according to the following rule . if at round the vertex has more than half of its neighbors colored , then at round the vertex will be colored .if at round the vertex has exactly half of its neighbors colored white and half of its neighbors colored black , then we say there is a tie . in this case is colored at round by the same color it had at round .( peleg considered other models for dealing with ties .we will refer to these models in section [ remarks ] .additional models and further study of this game may be found at , , , and . )if there exists a finite so that at round all vertices in are white , then we say that is a _ dynamic monopoly _ , abbreviated _dynamo_. in this paper we prove [ main ] for every natural number there exists a graph with more than vertices and with a dynamic monopoly of 18 vertices .we shall use the following notation : if then denotes the set of neighbors of .we call the _ degree _ of . for every define as a function from to , so that if is white at round and if is black at this round .we also define , , and ( the small black circles are the vertices . ) ] [ jj ] let be the graph in figure 1 .let and let and .we construct a graph by duplicating times the vertices in .that is , where \times d\ ] ] and \ } \nonumber \\ \cup \ { ( ( i , u),(i , v ) ) : ( u , v ) \in j , \ : u , v \in d , i \in [ n ] \ } \nonumber \end{aligned}\ ] ] ( here , as usual , ] '' etc .the following table describes the evolution of .the symbol 1 stands for white and 0 stands for black .note that the table does _ not _ depend on .( this property is peculiar to the graph . in general graphs duplication of vertices may change the pattern of evolution of the graph ) . the table shows that at round 20 the entire system is white and therefore is a dynamo .the reader may go through the table by himself , but in order to facilitate the understanding of what happens in the table let us add some explanations as to the mechanism of `` conquest '' used in this graph .we say that round _ dominates _ round if .we shall make use of the following obvious fact : [ monotony ] if round dominates round ( ) then round dominates round . by applying this observation times, we find that if round dominates round then round dominates round ( ) . by looking at the tableone can see that in the graph round 2 dominates round 0 and thus we have [ blinking ] round dominates round in for every we say that a vertex _ blinks _ at round if for every .we say that a vertex is _ conquered _ at round if for every .examining rounds to in the table and using corollary [ blinking ] one can see that and are conquered at round 0 , and in addition and are conquered at round 2 .furthermore , every vertex in blinks either at round 1 or at round 2 .finally , we have [ tieconquer ] if at round a vertex in has at least half of its neighbors conquered then is conquered at round . _proof : _ every vertex in blinks either at round 1 or at round 2 , and hence is white either at round or at round . from this round on ,at least half of the neighbors of are white , so will stay white . now the vertices will be conquered in the following order : , , . eventually , the entire graph is colored white . 
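the polling dynamics and the dynamo check used throughout the proof are easy to simulate; the sketch below implements the majority rule with the keep-your-colour tie rule and tests whether a given white set conquers the whole graph within a fixed number of rounds. the small example graph is illustrative and is not the 18-vertex construction of figure 1.

```python
def polling_round(adj, white):
    new_white = set()
    for v, nbrs in adj.items():
        w = sum(1 for u in nbrs if u in white)
        b = len(nbrs) - w
        if w > b:
            new_white.add(v)
        elif w == b and v in white:          # tie: keep the previous colour
            new_white.add(v)
    return new_white

def is_dynamo(adj, white0, max_rounds=100):
    white = set(white0)
    for _ in range(max_rounds):
        if len(white) == len(adj):
            return True
        white = polling_round(adj, white)
    return len(white) == len(adj)

# a path on three vertices with its two left vertices white turns all white in one round
adj = {0: [1], 1: [0, 2], 2: [1]}
print(is_dynamo(adj, {0, 1}))
```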
is a graph with vertices and is a dynamo of size 18 , proving theorem [ main ] .the result of section [ proofmain ] gives rise to the following questions : does there exist an infinite graph with a finite dynamo ? the answer is _no_. this follows from the following theorem : if is finite then is finite for all . moreover , every vertex in has a finite degree ._ proof : _ the proof is by induction on . for the theorem is true because every vertex with an infinite degree becomes black at round 1 . for , if and has an infinite degree then by the induction hypotheses and .hence and .if has a finite degree then has a neighbor in . by the induction hypothesesonly finitely many vertices have such a neighbor , and thus is finite . the next question deals with other models considered by peleg : [ 2 g ] do we still have a dynamo of size o(1 ) if we change the rules of dealing with ties ? ( e.g.if a vertex becomes black whenever there is a tie . )the answer here is _yes_. if is a graph , introduce a new vertex for every and consider the graph where and if is a dynamo of according to the model in theorem [ main ] , then it is easy to prove that is a dynamo of .but all vertices of have odd degrees , and thus ties are not possible and is a dynamo of according to _ any _ rule of dealing with ties .therefore , for every the graph has a dynamo of size 36 .let be a real number . consider the following model , which will henceforth be called _ the -model_. at every round , for every vertex with neighbors colored black and neighbors colored white , if then is colored white at the next round , otherwise it is black . for the sake of simplicitywe will assume that is irrational and that there are no isolated vertices , so that is impossble .the most interesting question regarding this model is whether there exist graphs with o(1 ) dynamo like in theorem [ main ] .this question is as yet open .we only have some partial results , which can be summarized as follows : 1 .if is big enough then the size of a dynamo is .2 . if is small enough then there exist graphs in which the size of a dynamo is .if there exist graphs with o(1 ) dynamo then the number of rounds needed until the entire system becomes white is . for every ,let be the set of edges with one vertex in and the other not in .call .note that is the set of vertices which are white at both round 0 and round 1 .every is connected to at most vertices in and at most vertices outside of .therefore we have let be fixed . by definition .let , and let .more than of the neighbors of are white at round and more than of the neighbors of are white at round . thus more than of the neighbors of belong to .we therefore have which implies . by induction for allif we begin with a dynamo then for some finite we have and denote the sum of the degrees of the vertices in .recall that every is white at both round 0 and round 1 , and thus and .therefore , .again , let be fixed , let be as in the proof of theorem [ rho3 ] and let . more than of the neighbors of are white at round and more than of the neighbors of are white at round . thus more than the neighbors of belong to .therefore , we have let be as defined in the answer to question [ 2 g ] .construct by eliminating from and connecting to and ( but _ not _ to ) .note that in the vertex is connected only to and to .construct as in the construction of , where the duplicated vertices are all black vertices except for and .( note that the graphs are constructed separately , namely , the sets of vertices of and are disjoint for . 
)now connect the graphs in the following way .first , eliminate the copies of from all graphs except for .note that in there are copies of ( when ) . divide them into 32 disjoint sets , of size each .now connect the vertices in to the copy of in , connect to the copy of , and connect each one of to a respective white vertex in ( see in figure 3 ) . .the vertices under the numeral 1 are the 32 copies of in . under the numeral 2are the 32 unduplicated vertices in ( , and the initiallly white vertices ) . under the numeral 3are the 64 copies of in , under the numeral 4 are the 32 unduplicated vertices in , under the numeral 5 are the 128 copies of in , and so on . ] 1 .all vertices of the obtained graph blink either at round 1 or at round 2 . 2 .all vertices of are eventually conquered .( the evolution of this conquest is similar to the one in theorem [ main ] . )3 . if all copies of in are conquered at a certain round , then all vertices of are eventually conquered .( again , the evolution is similar to the one in theorem [ main ] .note that we need the bound in order to have and conquered . )bermond and d. peleg , the power of small coalitions in graphs , _ proc .2nd colloc .on structural information and communication complexity _ , olympia , greece , june 1995 , carleton univ .press , 173 - 184 . | the paper deals with a polling game on a graph . initially , each vertex is colored white or black . at each round , each vertex is colored by the color shared by the majority of vertices in its neighborhood , at the previous round . ( all recolorings are done simultaneously ) . we say that a set of vertices is a _ dynamic monopoly _ or _ dynamo _ if starting the game with the vertices of colored white , the entire system is white after a finite number of rounds . peleg asked how small a dynamic monopoly may be as a function of the number of vertices . we show that the answer is o(1 ) . |
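for completeness, the rho-model mentioned above (a vertex turns white at the next round iff its number of white neighbours exceeds rho times its number of black neighbours, with rho irrational so that no ties occur) is a one-line variant of the round function sketched earlier; the choice rho = sqrt(2) and the tiny graph below are illustrative only.

```python
import math

def rho_polling_round(adj, white, rho=math.sqrt(2)):
    new_white = set()
    for v, nbrs in adj.items():
        w = sum(1 for u in nbrs if u in white)
        b = len(nbrs) - w
        if w > rho * b:                      # strict threshold, no ties for irrational rho
            new_white.add(v)
    return new_white

adj = {0: [1], 1: [0, 2], 2: [1]}
print(rho_polling_round(adj, {0, 1}))        # -> {0, 2}: vertex 1 would need w > sqrt(2)*b
```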
in multiple input and multiple output ( mimo ) systems , channel adaptive techniques ( e.g. , water - filling , interference alignment , beamforming , etc . ) can enhance the spectral efficiency or the capacity of the system .however , these channel adaptive techniques require accurate channel conditions , often referred to channel state information ( csi ) . oftentimes , in a frequency - division duplex ( fdd ) setting , csi is estimated at the receiver and conveyed to the transmitter via a feedback channel . in recent years, csi feedback problems have been intensively studied , due to its potential benefits to the mimo systems .it is significant to explore how to reduce the feedback load , due to the uplink feedback channel limitation . in ,four feedback rate reduction approaches were reviewed , where the lossy compression using the properties of the fading process was considered best .when the wireless channel experiences temporal - correlated fading , modeled as a finite - state markov chain , the amount of csi feedback bits can be reduced by ignoring the states occurring with small probabilities .the feedback rate in frequency - selective fading channels was studied in , by exploiting the frequency correlation . in summary , all the above works mainly focus on feedback rate compression considering either temporal correlation or spectral correlation .however , doubly selective fading channels are more frequently encountered in wireless communications as the desired data rate and mobility grow simultaneously . to the best knowledge of the authors , the scheme of making full use of the two - dimensional correlationsis not yet well studied . using both of the orthogonal dimensional correlations in a cooperated way, the feedback overhead can be further reduced in the doubly selective fading channels .thus , in this paper , we derive the minimal feedback rate using both the temporal and spectral correlations .the main contributions of the present paper can be briefly summarized as:1 ) we discuss the minimal feedback rate without differential feedback .2 ) we propose a differential feedback scheme by exploiting the temporal and spectral correlations , and 3 ) we derive the minimal differential feedback rate expression over mimo doubly selective fading channel . the rest of the paper is organized as follows : in section ii , we describe the differential feedback model as well as the statistics of the doubly selective fading channel . in section iii ,we propose a differential feedback scheme by exploiting the two - dimensional correlations and derive the minimal feedback rate . in sectioniv , we provide some simulation results showing the performance of the proposed scheme .in this paper , we assume that the down - link channel is a mobile wireless channel which is always correlated in time and frequency domains , while the up - link channel is a limited feedback channel . since the channel corresponding to each antenna is independent and with the same statistics , we can describe the separation property of the channel frequency response at time for an arbitrary transmit and receive antenna pair where denotes expectation function , the superscript denotes complex conjugate . is the power of the channel frequency response . and denotes the temporal and spectral correlation functions , respectively . 
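the separable correlation model above can be evaluated numerically as in the sketch below. the temporal part is the jakes result r_t[dm] = J0(2*pi*f_d*dm*T_s) stated in the text; the closed form of the spectral part is left to the cited reference, so the exponential-power-delay-profile magnitude 1/sqrt(1 + (2*pi*dn*df*tau_rms)^2) is assumed here purely for illustration, as are all the numbers.

```python
import numpy as np
from scipy.special import j0

def temporal_corr(dm, f_d, T_s):
    return j0(2.0 * np.pi * f_d * dm * T_s)

def spectral_corr_mag(dn, df, tau_rms):
    return 1.0 / np.sqrt(1.0 + (2.0 * np.pi * dn * df * tau_rms) ** 2)

f_d, T_s = 30.0, 1e-3        # doppler shift [hz] and symbol period [s] (illustrative)
df, tau = 15e3, 1e-6         # subchannel spacing [hz] and rms delay spread [s]
dm = dn = np.arange(5)
print(temporal_corr(dm, f_d, T_s))
print(spectral_corr_mag(dn, df, tau))
```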
assuming that the channel frequency response stays constant within the symbol period and the subchannel spacing , the correlation function for different periods and subchannels is written as = \sigma_h^2 { r_t}\left [ \delta m \right]{r_f}\left[\deltan \right],\ ] ] where = { \kern 1pt } { r_t}\left ( { \delta m{t_s } } \right) ] .furthermore , if we just consider the time domain , the correlated channel can be modeled as a time - domain first - order autoregressive process ( ar1) where denotes the channel coefficient of the symbol interval and the subchannel , is a complex white noise variable , which is independent of , with variance .the parameter is the time autocorrelation coefficient , which is given by the zero - order bessel function of first kind = { j_0}\left ( { 2\pi { f_d}t_s } \right) ] , where is the root mean square delay spread . the system model with differential feedbackis illustrated in fig .[ fig : sm ] . by using differential feedback scheme, the receiver just feeds back the differential csi .we suppose that there are and antennas at the transmitter and receiver , respectively . the received signal vector at the symbol interval and the subchannel is given by in the above expression , denotes the received vector at the symbol interval and the subchannel . , a channel fading matrix , is the frequency response of the channel .the entries are assumed independent and identically distributed ( i.i.d . ) , obeying a complex gaussian distribution with zero - mean and variance .different antennas have the same characteristic in temporal and spectral correlations , and , respectively . besides , there is no spatial correlation between different antennas . denotes the transmitter signal vector and is assumed to have unit variance . is a additive white gaussian noise ( awgn ) vector with zero - mean and variance .both and are independent for different s and s . through csi quantization , the feedback channel output is written as where denotes the channel quantization matrix , and is the independent additive quantization distortion matrix whose entries are zero - mean and with variance , where represents the channel quantization distortion constraint .the differential feedback is under consideration as shown in fig .[ fig : sm ] .we can use the previous csi to forecast the present csi at the transmitter where and are the coefficients of the channel predictor which will be calculated by using the minimum mean square error ( mmse ) principle in the next section .meanwhile , the receiver calculates the differential csi , given the previous ones .the differential csi can be formulated as where represents the differential csi which obviously is the prediction error , and is the differential function .then through limited feedback channel , should be quantized and fed back .finally , the csi reconstructed by combining the differential one and the channel prediction is utilized by the channel adaptive techniques . in this paper , we adopt the water - filling precoder , however , the analysis and conclusions given in this paper are also valid for other adaptive techniques .the channel quantization matrix is decomposed as using singular value decomposition ( svd ) at the transmitter . and are unitary matrixes , and is a non - negative diagonal matrix composed of eigenvalues of . 
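the water-filling step at the transmitter can be sketched directly from the svd of the quantized channel: power is poured over the eigenmodes lambda_i = s_i^2 under a total-power constraint and the closed-loop rate sum_i log2(1 + p_i*lambda_i/sigma_n^2) is accumulated. sizes, noise level and the random channel draw below are illustrative, and the quantization-distortion penalty discussed in the text is not included.

```python
import numpy as np

def water_filling(gains, total_power):
    """gains: lambda_i / sigma_n^2 for each eigenmode; returns the optimal powers."""
    gains = np.sort(np.asarray(gains, dtype=float))[::-1]
    for k in range(len(gains), 0, -1):           # try using the k strongest modes
        mu = (total_power + np.sum(1.0 / gains[:k])) / k
        p = mu - 1.0 / gains[:k]
        if p[-1] >= 0:                           # weakest active mode still gets power
            powers = np.zeros(len(gains))
            powers[:k] = p
            return powers, gains
    return np.zeros(len(gains)), gains

rng = np.random.default_rng(2)
H_q = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
sigma_n2 = 0.1
lam = np.linalg.svd(H_q, compute_uv=False) ** 2
p, g = water_filling(lam / sigma_n2, total_power=1.0)
capacity = np.sum(np.log2(1.0 + p * g))
print(p, capacity)
```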
with the water - filling precoder , the closed - loop capacity can be obtained as ,\ ] ] where , , and ] and =\alpha_t ] , ( [ eq : mfr_orth_prin_direct1 ] ) can be simplified by from ( [ eq : mfr_orth_prin_substitue ] ) , are given by combing ( [ eq : mfr_a1_a2 ] ) and ( [ eq : mfr_1d_hmn ] ) , the one - dimensional mse of the channel estimator is finally , the channel estimator is given by and combining ( [ eq : mfr_hmn_rewrite ] ) and ( [ eq : mfr_chan_est_final ] ) , is given by then , through the feedback channel , the error of the channel predictor can be fed back from the transmitter to the receiver .similarly , from ( [ eq : mfr_info_entrpy ] ) , the feedback load is positive related with . because , , the feedback load can be much smaller than , the non - differential one , especially when the channel is highly correlated .for example , given , , then . from ( [ eq : mfr_hmn_fianl_expression ] ) ,taking quantization impact into consideration , the minimal differential feedback rate over doubly selective fading channels can be calculated by the rate distortion theory of continuous - amplitude sources in a similar way . where the channel predictor coefficients are determined by and .the average power of is .the detailed derivation is given in appendix a. the above expression gives the minimal differential feedback rate simultaneously utilizing the temporal and spectral correlations . from ( [ eq : mfr_minfb_tf ] ) , the minimal differential feedback rate is a function of and the channel quantization distortion , and much smaller than that of the non - differential one ( [ eq : mfr_no_finalrate ] ) .in this section , we first provide the relationship between the mse of the predictor and the two - dimensional correlations in fig .[ fig : mse ] . the minimal differential feedback rate over mimo doubly selective fading channelsis given in fig .[ fig : ft_mini_fb ] .then , a longitudinal section of fig . [ fig : ft_mini_fb ] is presented , where we assume the temporal correlation and spectral correlation is equal .finally , we verify our theoretical results by a practical differential feedback system with water - filling precoder and lloyd s quantization algorithm . for simplicity and without loss of generality, we consider , and . fig .[ fig : mse ] presents the mse between the predicted value and the true value . as the temporal or spectral correlation increases ,the mse decreases .furthermore , when either or comes to one , the mse tends to zero . and.,width=316 ] fig .[ fig : ft_mini_fb ] plots the relationship between the minimal differential feedback rate and the two - dimensional correlations with the channel quantization distortion .it is very similar to the mse shown in fig .[ fig : mse ] , because it presents the minimal bits required to quantize the differential csi . and .,width=316 ] additionally ,because and could be any value , we provide one of the longitudinal section of fig .[ fig : ft_mini_fb ] where the temporal correlation is equal to the spectral correlation in fig .[ fig : sect_mini_fb ] . for comparison ,the differential feedback compression only using one - dimensional correlation and the non - differential feedback scheme are also included in fig . [ fig : sect_mini_fb ] .it is observed from fig .[ fig : sect_mini_fb ] that the scheme using both temporal and spectral correlations is always better than the scheme using only one - dimensional correlation . 
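the dependence of the feedback load on the two correlations can be illustrated with the following sketch: each entry is predicted from its temporal and spectral neighbours with mmse coefficients obtained from the orthogonality principle, and the resulting prediction-error variance is inserted into the per-entry gaussian rate-distortion bound log2(sigma_e^2 / D). quantization noise on the previously fed-back csi is neglected here, so the numbers are only indicative of the trend shown in the figures.

```python
import numpy as np

def predictor_and_mse(alpha_t, alpha_f, sigma_h2=1.0):
    # correlation matrix of the two predictor inputs and their correlation with h[m,n]
    R = sigma_h2 * np.array([[1.0, alpha_t * alpha_f],
                             [alpha_t * alpha_f, 1.0]])
    r = sigma_h2 * np.array([alpha_t, alpha_f])
    a = np.linalg.solve(R, r)                    # mmse coefficients (a1, a2)
    mse = sigma_h2 - r @ a                       # prediction-error variance
    return a, mse

def diff_feedback_rate(alpha_t, alpha_f, distortion, n_t=2, n_r=2):
    _, mse = predictor_and_mse(alpha_t, alpha_f)
    return n_t * n_r * max(0.0, np.log2(mse / distortion))

print(predictor_and_mse(0.9, 0.9))
print(diff_feedback_rate(0.9, 0.9, distortion=0.01))   # bits per feedback interval
```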
as the correlations increase ,the two - dimensional differential feedback compression exhibits a significant improvement compared to one - dimensional one .this performance advantage even reaches up to with . and.,width=316 ] in this subsection , we consider the temporal correlation , with carrier frequency ghz , the normalized doppler shift hz and spectral correlation , with , which is a reasonable assumption .we design a differential feedback system using lloyd s quantization algorithm to verify our theoretical results .we use as a differential function , where , in the two - dimensional differential feedback compression and , in the one - dimensional one .the feedback steps can be summarized as follows .firstly , based on lloyd s quantization algorithm , the channel codebook can be generated according to the statistics of the corresponding differential feedback load at both transmitter and receiver .secondly , the receiver calculates the current differential csi .thirdly , the differential csi is quantized to the optimal coodbook value according to the euclidean distance .finally , the transmitter reconstructs the channel quantization matrix by . in fig .[ fig : cap_t_f ] , we give the simulation results of the ergodic capacity employing lloyd s algorithm .the theoretical capacity results are also provided in fig .[ fig : cap_t_f ] .we can see from fig.[fig : cap_t_f ] that the performance of the two - dimensional one are always better than the one - dimensional one , which verifies our theoretical analysis . andsnr .,width=316 ] as shown in fig.[fig : cap_t_f ] , with the increase of feedback rate , the ergodic capacities increase rapidly when is small , and then slow down in the large region , because when is large enough , the quantization errors tend to zero . also , the capacities of lloyd s quantization are lower than the theoretical ones .the reasons are as follows .the lloyd s algorithm is optimal only in the sense of minimizing a variable s quantization error , but not in data sequence compression while the channel coefficient is correlated in both temporal and spectral domain .however , the imperfection reduces as increases , because the quantization errors of both lloyd s algorithm and theoretical results tend to zero with sufficient feedback bits .in this paper , we have designed a differential feedback scheme making full use of both the temporal and spectral correlation and compared the performance with the scheme without differential feedback .we have derived the minimal differential feedback rate for our proposed scheme .the feedback rate to preserve the given channel quantization distortion is significantly small compared to non - differential one , as the channel is highly correlated in both temporal and spectral domain .finally , we provide simulations to verify our analysis .the minimal differential feedback rate over mimo doubly selective fading channel can also be derived by the rate distortion theory . given and at the transmitter , the differential feedback rate can be represented as \hspace{-1.2 mm } \le\hspace{-1.2 mm } d } \hspace{-0.6mm}\right\}\hspace{-0.8mm}.\ ] ] since the entries are i.i.d .complex gaussian variables , ( [ eq : apdixb_r_inf ] ) can be written as \hspace{-1.2 mm } \le\hspace{-1.2 mm } d } \hspace{-0.6mm}\right\}\hspace{-0.8mm}.\ ] ] the one - dimensional channel quantization equality can be written as similarly , ( [ eq : mfr_hmn_fianl_expression ] ) yields where , .the conditional mutual information can be written as first , we calculate . 
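the four feedback steps listed above can be condensed into the following simplified sketch, reduced to the time dimension only (prediction from the previous interval with the single coefficient alpha_t): a per-entry complex codebook is trained offline with naive lloyd iterations on samples of the differential csi, and at run time the prediction error is quantized to the nearest codeword and added back to the prediction shared by both ends. all sizes and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def lloyd_codebook(samples, n_words=16, n_iter=50):
    """naive lloyd iterations (nearest-neighbour partition / centroid update) for complex scalars."""
    code = samples[rng.permutation(len(samples))[:n_words]].copy()
    for _ in range(n_iter):
        idx = np.argmin(np.abs(samples[:, None] - code[None, :]), axis=1)
        for k in range(n_words):
            if np.any(idx == k):
                code[k] = samples[idx == k].mean()
    return code

def quantize(x, code):
    return code[np.argmin(np.abs(x[..., None] - code), axis=-1)]

# ar(1) channel in time for a 2x2 link, alpha_t close to 1
a, n_r, n_t, T = 0.95, 2, 2, 200
noise = lambda: (rng.standard_normal((n_r, n_t)) + 1j * rng.standard_normal((n_r, n_t))) / np.sqrt(2)
H = np.zeros((T, n_r, n_t), dtype=complex)
H[0] = noise()
for m in range(1, T):
    H[m] = a * H[m - 1] + np.sqrt(1 - a ** 2) * noise()

# step 1: train the codebook on the statistics of the differential term
code = lloyd_codebook((H[1:] - a * H[:-1]).ravel())

# steps 2-4: run the differential feedback loop
H_q = np.empty_like(H)
H_q[0] = H[0]                                   # assume an accurate initial estimate
for m in range(1, T):
    pred = a * H_q[m - 1]                       # prediction shared by transmitter and receiver
    err_q = quantize(H[m] - pred, code)         # receiver quantizes the differential csi
    H_q[m] = pred + err_q                       # transmitter reconstructs the csi
print("mean squared reconstruction error:", np.mean(np.abs(H - H_q) ** 2))
```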
substituting ( [ eq : apdixb_1d_quan ] ) into ( [ eq : apdixb_1d_hmn ] ), it yields that substituting ( [ eq : apdixb_1d_hmnquan ] ) into ( [ eq : apdixb_1d_i ] ) , we obtain considering inequality ( [ eq : apdixb_1d_i_cal1 ] ) can be written as since , and are complex gaussian variables , and the information entropy of a gaussian variables with variance is , we calculate the variance of now we give the derivation of the correlation function of two noise terms . from ( [ eq : apdixb_1d_quan ] ), the quantization error can be decomposed into two parts where is a gaussian variable with zero - mean and variance , independent with .y. zhang , r. yu , m. nekovee , y. liu , s. xie , and s. gjessing , `` cognitive machine - to - machine communications : visions and potentials for the smart grid '' , _ ieee network magazine _ ,vol.26 , no.3 , pp.613 , may / jun . 2012 .w. h. chin and c. yuen , design of differential quantization for low bitrate channel state information feedback in mimo - ofdm systems , " in _ ieee vtc - spring 2008_. y. li , l. j. cimini , jr . , and n. r. sollenberger , `` robust channel estimation for ofdm systems with rapid dispersive fading channels , '' _ ieee trans .46 , no . 7 , pp. 902915 , jul .1998 .z. shi , s. hong , j. chen , k. chen and y. sun , `` particle filter based synchronization of chaotic colpitts circuits combating awgn channel distortion '' , _ circuits , systems and signal processing ( springer ) _ ,vol.27(6 ) : 833845 , dec . 2008 .l. zhang , l. song , m. ma , and b. jiao , `` on the minimum differential feedback rate for time - correlated mimo rayleigh block - fading channels , '' _ ieee trans .2 , pp . 411420 , feb . 2012 . | channel state information ( csi ) provided by limited feedback channel can be utilized to increase the system throughput . however , in multiple input multiple output ( mimo ) systems , the signaling overhead realizing this csi feedback can be quite large , while the capacity of the uplink feedback channel is typically limited . hence , it is crucial to reduce the amount of feedback bits . prior work on limited feedback compression commonly adopted the block fading channel model where only temporal or spectral correlation in wireless channel is considered . in this paper , we propose a differential feedback scheme with full use of the temporal and spectral correlations to reduce the feedback load . then , the minimal differential feedback rate over mimo time - frequency ( or doubly ) selective fading channel is investigated . finally , the analysis is verified by simulation results . differential feedback , correlation , mimo |
in the following , we consider the optimization of a state - to - state transfer problem where the figure of merit is the overlap of the final state after the evolution with the target state here is the target state and the time evolution is given by the schrdinger equation and control function is determined by the optimization algorithm in order to maximize the figure of merit .the crab algorithm builds on the fact that in most scenarios the resources available to solve an optimal control problem such as time , energy and bandwidth are limited : in particular , as the set of practically accessible wave functions is usually limited , one can show that also the control bandwidth of the control field can be upper bounded .this bound has very important consequences as optimal control problems can be practically solved by exploring a small subset of the a priori infinite dimensional search space of functions .that is , one can expand the control field in a truncated basis and the optimization can then be performed on this subspace of _ small _ dimension , resulting in an optimal set of coefficients .the optimization can be performed by standard tools , e.g. by the nelder - mead simplex algorithm that does not rely on gradients . a standard choice for the basis functions are trigonometric functions , often multiplied by a shape function that fixes the pulse boundary conditions and a guess pulse . due to the restriction of the search basis to dimensions given by the crab expansion in eq ., the algorithm might converge to a non - optimal fixed point , i.e. the algorithm is trapped in a local minimum arising due to the constraint - a so - called false trap . to overcome this problem , and escape from these false traps , we show in the next section that one can start from the non - optimal fixed point a new crab optimization with a new random basis and new coefficients .this is done in an iterative way so that in the -th super - iteration one optimizes the coefficients of where are new randomly chosen basis functions , for example sine or cosine functions with random frequencies within some interval ] and a control field tuning the interaction .the random coefficients lift the symmetry of the system and make it controllable .we investigate the control resources needed to drive the system from a random initial state to a random target state within a fixed time interval ] : we choose for respectively . as a function of the number of basis functions in the crab expansion . as we increase the size of the function spacefalse traps are removed .the symbols are the mean values over 10 different random pairs of final and initial states and with 10 different starting points and frequencies each .the error bars show the standard deviation over the different final and initial states .the total time was set to and the allowed bandwidth to for qubits respectively .an optimization was counted as success when the residual error was smaller than .the black lines indicate the empirical `` '' rule for the required number of coefficients ., scaledwidth=47.0% ] fig .[ fig : nt - vs - freq ] reports the success probability for the standard crab optimization as a function of the number of coefficients of the truncated basis expansion in eq . : as can be clearly seen , for large enough no false traps are present , resulting in a success probability of one . 
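the structure of crab and of the dcrab super-iterations described above can be condensed into the toy sketch below (a single-qubit state transfer instead of the random spin chains used in the paper, so that it runs in seconds): the control is expanded on a few trigonometric functions with randomized frequencies, the coefficients are optimized gradient-free with nelder-mead, and each super-iteration redraws the basis on top of the best pulse found so far. the hamiltonian, times and tolerances are illustrative.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

rng = np.random.default_rng(4)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
psi0 = np.array([1, 0], dtype=complex)          # initial state
psiT = np.array([0, 1], dtype=complex)          # target state
T, n_steps = 3.0, 60
t = np.linspace(0, T, n_steps)

def evolve(pulse):
    dt = T / n_steps
    psi = psi0.copy()
    for f in pulse:                             # piecewise-constant propagation
        psi = expm(-1j * dt * (sz + f * sx)) @ psi
    return psi

def infidelity(coeffs, freqs, base_pulse):
    n = len(freqs)
    pulse = base_pulse + sum(coeffs[k] * np.sin(freqs[k] * t) +
                             coeffs[n + k] * np.cos(freqs[k] * t) for k in range(n))
    return 1.0 - np.abs(np.vdot(psiT, evolve(pulse))) ** 2

def dcrab(n_super=5, n_freqs=3):
    best_pulse = np.zeros(n_steps)              # trivial guess pulse
    for _ in range(n_super):                    # super-iterations with a new random basis
        freqs = 2 * np.pi / T * (np.arange(1, n_freqs + 1) + rng.uniform(-0.5, 0.5, n_freqs))
        res = minimize(infidelity, x0=np.zeros(2 * n_freqs),
                       args=(freqs, best_pulse), method="Nelder-Mead",
                       options={"maxiter": 2000, "fatol": 1e-10})
        n = n_freqs
        best_pulse = best_pulse + sum(res.x[k] * np.sin(freqs[k] * t) +
                                      res.x[n + k] * np.cos(freqs[k] * t) for k in range(n))
        if res.fun < 1e-4:
            break
    return best_pulse, res.fun

pulse, err = dcrab()
print("final infidelity:", err)
```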
here , by large enough we mean the empirical `` '' rule for unconstrained optimization , where the number of real coefficients equals the number of independent real entries in the state vector .the correspondent analysis for the dcrab approach is reported in fig .[ fig : traps - removed ] : the success probability is always one regardless of , false traps are avoided and the fidelity always exceeds the threshold . despite the striking difference in terms of success probability , an important benchmark for any optimization method is given by the computational effort required to arrive at the optimal solution .here we focus on the number of function evaluation needed to achieve the global optimum as this is practically the only difference between the two methods : fig .[ fig : acrab-4qbt ] shows the number of function evaluations required by the two methods to exceed the threshold as a function of the coefficients in the crab expansion ( in the case of dcrab the coefficients of a single super - iteration ) .all points consist of the average number of function evaluations of the successful runs divided by the respective success probability , that is the number of function evaluations that on average one has to do to solve the optimal control problem using one of the two methods .note that the minimal effort does not follow the `` '' rule of fig .[ fig : nt - vs - freq ] for guaranteed convergence . for crabthe computational effort heavily depends on , which can be problematic as the best choice of is not known in advance . for dcrab instead there is only a minor dependance on in the order of magnitude of the error bars . furthermore , fig .[ fig : acrab-4qbt ] shows that even with the best choice of crab can not beat the performance of dcrab . as a function of the number of basis functions in a single call of crab within the dcrab super - iterations . independently of no false traps are encountered and .the total time , allowed bandwidth and error threshold are as in fig .[ fig : nt - vs - freq ] . , scaledwidth=47.0% ] for dcrab ( red ) and original crab ( black ) as a function of the number of basis functions involved in a single call of the respective algorithm .the error bars show the logarithmic standard deviation .optimization was stopped when the error crossed the threshold .the total time was , the allowed bandwidth and the system size qubits .similar results are obtained for other choices of the parameters ., scaledwidth=47.0% ] finally , we study how the convergence properties change in the presence of additional constraints like limited fluence and bandwidth typically present in experimental setups that violate the hypothesis of the analysis of control landscapes presented up to now . indeed , in this scenario, false traps might be present which might change performance and convergence speed of the algorithms .we consider separately two different kinds of constraints : bandwidth - limited control and limited pulse height .in the first case , we observe that eq . still holds even if the new random frequency is chosen only within the limited bandwidth interval $ ] .we can then study the performance of the optimization as a function of , as done in the unconstrained case and compare the crab and dcrab approaches . as a function of the limited bandwidth for and in the system of qubits and total time with fixed random coefficients , in the hamiltonian ( eq . 
) .the grey diamonds show the infidelity for crab ( ) , while the green circles show it for dcrab ( ) .the black dashed line indicates the error threshold of .the error bars report the logarithmic standard deviation.,scaledwidth=47.0% ] the results are reported in fig .[ fig : bandwidth ] where we show the optimal infidelity reached from ten independent runs as a function of .the optimal control problem is to perform the state transfer from to given the hamiltonian of eq .for crab ( grey diamonds ) and dcrab ( green dots ) .one can see that for dcrab succeeds with probability one , as all instances reached the optimal fidelity .notice that this is less than half the bandwidth of the previous results reported in figs .[ fig : traps - removed ] and [ fig : acrab-4qbt ] .in addition , in a small intermediate regime around , some optimizations succeed while others fail indicating the presence of false traps ( note that the graph shows just the mean value of the infidelity and the standard deviation ) ; while for the final state can not be reached anymore , indicating that the bandwidth is too small to achieve the desired result . in the case of crab ,the three regimes are shifted toward larger frequencies .the lower bound observed in both cases is in agreement with an information theoretical argument given in : to achieve full control over the system the inequality has to be fulfilled , where is the dimension of the state space .this inequality basically says that the control has to contain enough information to distinguish the target state from the other reachable states and it yields , the value of the bandwidth where the infidelity in fig .[ fig : bandwidth ] starts to drop indicating that control over the system starts to be effective .we then perform a similar analysis for pulse height limited control with dcrab , where we study two scenarios to include such a constraint .we first introduce a smooth constraint as usually done , that is a penalty on the pulse height so that the control objective becomes as a second alternative , we limit the pulse height by a hard wall constraint , by using the update formula the results of these two procedures are reported in fig . [fig : pulseheight ] where we plot the infidelity as a function of the maximal pulse height : clearly the optimization works better with the hard boundaries than with the lagrange multiplier , as the fidelity threshold can be exceeded for about three times weaker pulses .this difference can be understood by the fact that the hard wall introduces pulses of higher bandwidth .compared to the unconstrained system we can decrease the maximal value of the pulse by a factor of fifteen , while keeping 100% success probability . for even smallercut - off the small error bars indicate that optimization failure is most probably more due to a loss of controlability than due to false traps . as a function of the limited pulse height for the transfer from to in the 2-qubit system with constrained dcrab .the black line indicates the error threshold of .the orange triangles are obtained with lagrange multipliers ( see eq . ) , while the brown circles are obtained with a cut - off at ( see eq . ) .the error bars indicate the standard deviation over 10 different starting points of the optimization .the orange triangles have also errorbars in since the lagrange multiplier is not a hard wall and instead we plot the maximum absolute value the optimal pulse takes in each realization . 
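the two ways of enforcing the pulse-height limit compared above can be written as wrappers around an arbitrary infidelity functional J, as sketched below. the exact penalty of the text is not reproduced, so the soft version uses a quadratic penalty on the excess amplitude as one plausible choice, while the hard-wall version simply clips the pulse to [-f_max, f_max] before evaluating J; the toy cost at the end only makes the snippet runnable.

```python
import numpy as np

def soft_constrained(J, pulse, f_max, lam=10.0):
    excess = np.maximum(np.abs(pulse) - f_max, 0.0)
    return J(pulse) + lam * np.sum(excess ** 2)      # lagrange-multiplier-style penalty

def hard_constrained(J, pulse, f_max):
    return J(np.clip(pulse, -f_max, f_max))          # hard-wall constraint on the pulse

# tiny runnable example with a toy cost standing in for the infidelity functional
J = lambda p: float(np.mean((p - 1.0) ** 2))
p = np.linspace(-3, 3, 7)
print(soft_constrained(J, p, f_max=2.0), hard_constrained(J, p, f_max=2.0))
```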
, scaledwidth=47.0% ]we have generalized the results presented in to the case of bandwidth - limited control pulses , under the condition that the bandwidth satisfy the theoretical bound introduced in .thanks to this theoretical result , we have modified the crab optimization algorithm to efficiently combine the advantages of gradient methods with those of truncated basis methods .we showed that it is possible to exploit both the guaranteed convergence to the global optimum that gradient algorithms exhibit in the frequent case in which the kinematic ( ) and the dynamical ( ) landscapes are equivalent and the numerical gradient - free truncated basis approach .moreover , we showed for two typical constraints , namely the limited fluence and the limited bandwidth constraints below the theoretical bound , that some of the convergence properties survive , if the constraints are carefully implemented in the optimization procedure .we expect that the presented results will allow to tackle in the near future both theoretically and experimentally even more complex many - body problems as done so far , as well as a broader variety of control objectives and constraints .10 f. schfer , i. herrera , s. cherukattil , c. lovecchio , f.s .cataliotti , f. caruso , and a. smerzi , nat .comm . 5 , 3194 ( 2014 ) . c. lovecchio , f. schfer , s. cherukattil , a. khan martuza , i. herrera , f.s .cataliotti , t. calarco , s. montangero , and f. caruso , arxiv:1405.6918 .i. bloch , nature 453 , 101622 ( 2008 ) .p. schau , m. cheneau , m. endres , t. fukuhara , s. hild , a. omran , t. pohl , c. gross , s. kuhr , and i. bloch , nature 491 , 97 - 91 ( 2012 ) .s. hoyer , f. caruso , s. montangero , m. sarovar , t. calarco , m.b .plenio , k.b .whaley , new j. phys .16 045007 ( 2014 ). f. caruso , s. montangero , t. calarco , s. f. huelga , and m. b. plenio , phys .a 85 , 042331 ( 2012 ) .y. o. dudin and a. kuzmich , science 336 , 887 ( 2012 ) .h. kbler , j. p. shaffer , t. baluktsian , r. lw , and t. pfau , nature photon . 4 , 112 ( 2010 ) .obrien , science 318 , 156770 ( 2007 ) .p. kok , w.j .munro , k. nemoto , t.c .ralph , j.p .dowling , and g.j .milburn , rev .79 , 135 ( 2007 ) .s. schmidt , j. koch .annalen der physik , 525 , 395 - 412 ( 2013 ) .houck , h.e .treci , and j. koch , nat .phys . 8 , 292 - 299 ( 2012 ) .m. hofheinz , h. wang , m. ansmann , r.c .bialczak , e. lucero , m. neeley , a. d. oconnell , d. sank , j. wenner , j.m .martinis , and a. n. cleland , nature 459 , 546 - 549 ( 2008 ) .su , c .- p .yang , s .- b .zheng , scientific reports 4 , 3898 ( 2014 ) .obrien , a. furusawa , j. vuckovi , nat .phot . 3 , 687 - 695 ( 2009 ) . g. kurizki , p. bertet , y. kubo , k. mlmer , d. petrosyan , p. rabl , and j. schmiedmayer , pnas 2015 112 ( 13 ) 3866 - 3873 ( 2015 ) .r. blatt and d.j .wineland , nature 453 , 100815 ( 2008 ) .d. nigg , m. mller , e. a. martinez , p. schindler , m. hennrich , t. monz , m. a. martin - delgado , and r. blatt , science 345 ( 69194 ) 302 - 305 ( 2014 ) . c. brif , r. chakrabarti , and h. rabitz , new journal of physics 12 , 075008 ( 2010 ) .a.i . konnov and v.f .automation and remote control , vol .60 , no . 10 ( 1999 ) .n. khaneja , t. reiss , c. kehlet , t. schulte - herbrggen , s. j. glaser , j. magn . reson .172 , 296 - 305 ( 2005 ) .d. dalessandro , introduction to quantum control and dynamics , taylor & francis ltd : hoboken , nj ( 1996 ) .herschel a. rabitz et al .science 303 , 1998 ( 2004 ) .t .- s . ho , h. rabitz , j. photochem .a 180 , 226 - 240 ( 2006 ) .p. 
de fouquieres , s. schirmer , infin .16 , 1350021 ( 2013 ) . k. w. moore , r. chakrabarti , g. riviello , h. rabitz , phys .a 83 , 012326 ( 2011 ) .r. b. wu , r. long , j. dominy , t. ho , and h. rabitz , phys .a 86 , 013405 ( 2012 ) .g. riviello , c. brif , r. long , r .- b .wu , k. moore tibbetts , t. s. ho , h. rabitz , phys .a 90 , 013404 ( 2014 ) .alexander n. pechen and david j. tannor , phys .106 , 120402 ( 2011 ) .h. rabitz , t .- s . ho , r. long , r. wu , and c. brif .108 , 198901 ( 2012 ) .a. n. pechen , d. j. tannor .phys . rev .108 , 198902 ( 2012 ) .a. n. pechen , n. ilin , phys .a 86 , 052117 ( 2012 ) .a. n. pechen , d. j. tannor , canadian journal of chemistry 92 , 157 - 159 ( 2014 ) .t. caneva , t. calarco , and s. montangero , new j. phys .14 , 093041 ( 2012 ) .p. watts , j. vala , m. m. mller , t. calarco , k. b. whaley , d. m. reich , m. h. goerz , c. p. koch .a 91 , 062306 ( 2015 ) .g. gualdi , and d. m. reich and c. p. koch , and f. motzoi and k. b. whaley , and j. vala , and m. m. mller , s. montangero and t. calarco .a 91 , 062307 ( 2015 ) .p. doria , t. calarco , and s. montangero , phys . rev106 , 190501 ( 2011 ) .t. caneva , t. calarco , and s. montangero , phys .a 84 , 022326 ( 2011 ) .t. caneva , a. silva , r. fazio , s. lloyd , t. calarco , s. montangero , phys .rev . a 89 , 042322 ( 2014 ) . m. m. mller , a. klle , r. lw , t. pfau , t. calarco , and s. montangero . phys .a 87 , 053412 ( 2013 ) .j. scheuer , x. kong , r. s. said , j. chen , a. kurz , l. marseglia , j. du , p. r. hemmer , s. montangero , t. calarco , b. naydenov , and f. jelezko , new j. phys .16 , 093022 ( 2014 ) .s. rosi , a. bernard , n. fabbri , l. fallani , c. fort , m. inguscio , t. calarco , and s. montangero , phys .a 88 , 021601(r ) ( 2013 ) .s. van frank , a. negretti , t. berrada , r. bcker , s. montangero , j .- f .schaff , t. schumm , t. calarco , and j. schmiedmayer .comm . 5 , 4009 ( 2014 ) . s. lloyd , s. montangero .113 , 010502 ( 2014 ) .moore , h. rabitz , j. chem .137 , 134113 ( 2012 ) .this is inspired but not to be confused with `` dressed crab '' , i.e. a dish where the crab meat is presented in the crab s cleaned shell to enjoy the meat without any effort .powell , the computer journal 7 , 155 ( 1964 ) .j. a. nelder and r. mead , computer journal 7 , 308 ( 1965 ) .g. riviello , k. moore tibbetts , c. brif , r. long , r .- b .wu , t .- s .ho , h. rabitz , arxiv:1502.00707 ( 2015 ) . m. hsieh , t.s . ho , and h. rabitz , chem .352 , 77 - 84 ( 2008 ) . to see this assume .then by the regularity of the point there is a pulse update so that the fidelity is improved by as , i.e. and thus . if instead we have , i.e. . c. altafini , j. math .phys , 43(5 ) , 2051 - 2062 ( 2002 ) .this work was performed on the computational resource bwunicluster funded by the ministry of science , research and arts and the universities of the state of baden - wrttemberg , germany , within the framework program bwhpc . | in quantum optimal control theory the success of an optimization algorithm is highly influenced by how the figure of merit to be optimized behaves as a function of the control field , i.e. by the control landscape . constraints on the control field introduce local minima in the landscape false traps which might prevent an efficient solution of the optimal control problem . rabitz et al . [ science 303 , 1998 ( 2004 ) ] showed that local minima occur only rarely for unconstrained optimization . 
here , we extend this result to the case of bandwidth - limited control pulses showing that in this case one can eliminate the false traps arising from the constraint . based on this theoretical understanding , we modify the chopped random basis ( crab ) optimal control algorithm and show that this development exploits the advantages of both ( unconstrained ) gradient algorithms and of truncated basis methods , allowing to always follow the gradient of the unconstrained landscape by bandwidth - limited control functions . we study the effects of additional constraints and show that for reasonable constraints the convergence properties are still maintained . finally , we numerically show that this approach saturates the theoretical bound on the minimal bandwidth of the control needed to optimally drive the system . the ability of achieving a desired transformation of a quantum system lies at the heart of the success of experiments in cold atoms , quantum optics , condensed matter , and in quantum technologies . together with the fast development of these fields and the increasing complexity of the experiments , to develop efficient protocols it is often necessary to automatize the optimization process if not the whole development of the experimental sequence . one possible way to perform such an optimization is by means of quantum optimal control , an approach that has proven to be very successful in solving this class of problems . the success of gradient methods to find global solutions in quantum control problems is largely due to the fact that local minima are very rare in the control landscapes of a large class of systems . however , to face the new challenges posed by the recent advancements in quantum science , gradient methods might be not the best option since to numerically calculate the gradient of the control objective might be quite inefficient . moreover in an increasing number of interesting applications , the control objective does not allow for an analytical calculation of the gradient . the crab optimal control algorithm operates with an expansion of the control field onto a truncated basis and a direct optimization of the coefficients of the expansion by means of gradient - free minimization . these characteristics allow for the solution of optimal control problems involving many - body quantum systems , as well as in the presence of long - range interactions , bandwidth - limited control according to experimental constraints , and highly nonlinear functionals where the gradient can not be calculated . however , the crab optimization does not necessarily fulfill the condition under which local minima in the optimization occur only rarely as it is by construction bandwidth - limited . in this paper we extend the results presented by rabitz and coworkers and show that bandwidth - limited optimal control can be made virtually free of local minima as in the case of unconstrained control . moreover , we numerically show that the global minima one always reaches correspond to an optimal solution if the bandwidth satisfy the theoretical bound given in reference . in particular , we present an extension of the crab algorithm the `` dressed crab '' ( dcrab ) that keeps the benefits of the original algorithm and comes with the additional property of guaranteed convergence to the global optimum in the cases where this is guaranteed also for gradient methods . 
we test the two versions of the algorithm by optimizing a state transfer for different instances of a random spin hamiltonian . being the dimension of the hilbert space the problem is defined on , for the standard crab algorithm false traps occur when the heuristic `` -rule '' for the number of required control coefficients is violated . on the contrary , using the dcrab presented hereafter , all false traps are removed regardless of the number of control coefficients , resulting also in a faster convergence to the global optimum . we present a theoretical explanation supporting these findings and showing that one can construct a set of random basis functions which follow the instantaneous gradient of the control landscape . finally , we examine the behavior of dcrab in the presence of additional constraints on the bandwidth and amplitude of the pulse , and show that the algorithm is well - behaving as false traps appear only in the presence of strong constraints . in the following , for simplicity we will focus on state to state transfer of pure states , however the theoretical arguments are valid in the general scenario of unitary gate generation and mixed - states optimal control . the structure of this paper is as follows : we first review the standard crab algorithm and introduce its extension , dcrab . in section ii we present the theoretical basis of dcrab , while in section iii we show that dcrab follows the instantaneous gradient of the landscape . finally , in section iv we apply both crab and dcrab to control problems with random spin hamiltonians and evaluate their performance with respect to success probability , computational effort and behavior under constraints . |
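The CRAB-style loop described in the entry above — expand the control field on a truncated, randomized basis and optimize the expansion coefficients with a gradient-free routine — can be sketched as follows. This is not the authors' implementation: the two-level Hamiltonian, the sine/cosine basis, the Nelder–Mead settings, and all numerical values are illustrative assumptions, and the dCRAB ingredients (basis re-dressing across superiterations, constraint handling) are omitted.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

# illustrative two-level control problem: steer |0> to |1> with H(t) = sz + u(t)*sx
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
psi0 = np.array([1, 0], dtype=complex)
target = np.array([0, 1], dtype=complex)

T, n_steps, n_modes = 5.0, 100, 4
dt = T / n_steps
t = (np.arange(n_steps) + 0.5) * dt
rng = np.random.default_rng(1)
# truncated Fourier basis with randomized frequencies (CRAB-like choice)
freqs = (np.arange(1, n_modes + 1) + rng.uniform(-0.5, 0.5, n_modes)) * np.pi / T

def pulse(coeffs):
    a, b = coeffs[:n_modes], coeffs[n_modes:]
    return sum(a[k] * np.sin(freqs[k] * t) + b[k] * np.cos(freqs[k] * t)
               for k in range(n_modes))

def infidelity(coeffs):
    u = pulse(coeffs)
    psi = psi0.copy()
    for uk in u:                                   # piecewise-constant propagation
        psi = expm(-1j * (sz + uk * sx) * dt) @ psi
    return 1.0 - abs(np.vdot(target, psi)) ** 2

res = minimize(infidelity, x0=rng.normal(scale=0.2, size=2 * n_modes),
               method="Nelder-Mead", options={"maxiter": 1000})
print("final infidelity:", res.fun)
```

A dressed-CRAB-style search would wrap this inner optimization in an outer loop that redraws the random frequencies while keeping the best pulse found so far; that outer loop is not shown here.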
recently , symmetric positive definite ( spd ) matrices of real numbers appear in many branches of computer vision .examples include region covariance matrices for pedestrian detection and texture categorization , joint covariance descriptor for action recognition , diffusion tensors for dt image segmentation and image set based covariance matrix for video face recognition . due to the effectiveness of measuring data variations ,such spd features have been shown to provide powerful representations for images and videos .however , such advantages of the spd matrices often accompany with the challenge of their non - euclidean data structure which underlies a specific riemannian manifold . applying the euclidean geometry directly to spd matricesoften results in poor performances and undesirable effects , such as the swelling of diffusion tensors in the case of spd matrices . to overcome the drawbacks of the euclidean representation , recent works have introduced riemannian metrics , e.g. , affine - invariant metric , log - euclidean metric , to encode the riemannian geometry of spd manifold properly . by applying these classical riemannian metrics ,a couple of works attempt to extend euclidean algorithms to work on manifolds of spd matrices for learning more discriminative spd matrices or their vector - forms . to this end ,several studies exploit effective methods on one spd manifold by either flattening it via tangent space approximation ( see fig.[fig1 ] ( a)(b ) ) or mapping it into a high dimensional reproducing kernel hilbert space ( rkhs ) ( see fig.[fig1 ] ( a)(c)(b ) ) .obviously , both of the two families of methods inevitably distort the geometrical structure of the original spd manifold due to the procedure of mapping the manifold into a flat euclidean space or a high dimensional rkhs .therefore , the two learning schemes would lead to sub - optimal solutions for the problem of discriminative spd matrix learning .( b ) is to firstly flatten the original manifold by tangent space approximation and then learn a map to a discriminative euclidean space .the second one ( a)(c)(b ) is to firstly embed with an implicit map into an rkhs and then learn a mapping to a more discriminative euclidean space .the last one ( a)(d ) aims to learn a map from the original spd manifold to a more discriminative spd manifold . here , and are the spd matrices , and are the tangent spaces . ] [ fig1 ] to more faithfully respect the original riemannian geometry, another kind of spd - based discriminant learning methods aims to pursue a column full - rank transformation matrix mapping the original spd manifold to a more discriminative spd manifold , as shown in fig.[fig1 ] ( a)(d ) .however , as directly learning the manifold - manifold transformation matrix is hard , the work alternatively decomposes it to the product of an orthonormal matrix with a matrix in gl , and requires the employed riemannian metrics to be affine invariant . 
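For concreteness, the affine-invariant and log-Euclidean geodesic distances just mentioned (together with the Stein divergence discussed later in this entry) can be evaluated with standard matrix functions. The sketch below uses scipy and made-up test matrices; it is a generic illustration of these metrics, not code from the paper.

```python
import numpy as np
from scipy.linalg import logm, sqrtm, inv

def dist_aim(X1, X2):
    """Affine-invariant geodesic distance: ||log(X1^{-1/2} X2 X1^{-1/2})||_F."""
    s = inv(sqrtm(X1))
    return np.linalg.norm(logm(s @ X2 @ s), "fro")

def dist_lem(X1, X2):
    """Log-Euclidean distance: ||log(X1) - log(X2)||_F."""
    return np.linalg.norm(logm(X1) - logm(X2), "fro")

def stein_div(X1, X2):
    """Stein (S-)divergence: log det((X1+X2)/2) - 0.5*log det(X1*X2)."""
    return (np.linalg.slogdet((X1 + X2) / 2)[1]
            - 0.5 * (np.linalg.slogdet(X1)[1] + np.linalg.slogdet(X2)[1]))

# toy SPD matrices (illustrative only)
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5)); X1 = A @ A.T + 5 * np.eye(5)
B = rng.normal(size=(5, 5)); X2 = B @ B.T + 5 * np.eye(5)
print(dist_aim(X1, X2), dist_lem(X1, X2), stein_div(X1, X2))
```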
by doing so ,optimizing the manifold - manifold transformations is equivalent to optimizing over orthonormal projections .although the additional requirement simplifies the optimization of the transformation , this has not only reduced the original solution space but also inevitably excluded all non - affine invariant riemannian metrics such as the well - known log - euclidean metric , which has proved to be much more efficient than affine - invariant metric .while the work exploited the log - euclidean metric under the same scheme , it actually attempts to learn a tangent map , which implicitly approximate the tangent space and hence introduces some distortions of the true geometry of spd manifolds . in this paper , also under the last scheme ( see fig.[fig1 ] ( a)(d ) ) , we propose a new geometry - aware spd similarity learning ( spdsl ) framework to open a broader problem domain of learning discriminative spd features by exploiting either affine invariant or non - affine invariant riemannian metrics on spd manifolds . to realize the spdsl framework , there are three main contributions in this work : * by exploiting the riemannian geometry of the manifold of fixed - rank positive semidefinite ( psd ) matrices , our spdsl framework provides a new solution to directly learn the manifold - manifold transformation matrix . as no additional constraintis required , the optimal transformation will be pursued in a favorable solution space , enabling a wide range of well - established riemannian metrics to work as well . * to fulfill the solution , a new supervised spd similarity learning technique is proposed to learn the transformation by regressing the similarities of selected spd pairs to the target similarities on the resulting spd manifold .* we derive an optimization approach which exploits the classical riemannian conjugate gradient ( rcg ) algorithm on the psd manifold to optimize the proposed objective function .let be a set of real , symmetric matrices of size and be a set of spd matrices .the mapping space is endowed with usual euclidean metric ( i.e. , inner product ) . as noted in , the set of spd matrices is an open convex subset of .thus , the tangent space to at any spd matrix in it can be identified with the set .a smoothly - varying family of inner products on each tangent space is known as riemannian metric , endowing which the space of spd matrices would yield a riemannian manifold . with such riemannian metric , the geodesic distance between two elements on the spd manifold is generally measured by . several riemannian metrics and divergences have been proposed to equip spd manifolds . for example, affine - invariant metric , stein divergence , jeffereys divergence are designed to be invariant to affine transformation .that is , for any ( i.e. , the group of real invertible matrices ) , the metric function has the property .in contrast , log - euclidean metric , cholesky distance and power - euclidean metric are not affine invariant . among these metrics , only affine - invariant metric and log - euclidean metric a true geodesic distance on the spd manifold .in addition , the stein divergence are also widely used due to its favorable properties and high performances in visual recognition tasks .therefore , this paper focuses on studying such three representative riemannian metrics .* definition 1 . 
* _ by defining the inner product in the tangent space at the spd point on the spd manifold as and the logarithmic maps as , the geodesic distance between two spd matrices on the spd manifold is induced by affine - invariant metric ( aim ) as _* definition 2 . * _ the approximated geodesic distance between two spd matrices on the spd manifold is defined by using stein divergence as _* definition 3 . * _ by defining the inner product in the space at the spd point on the spd manifold as , \text{d}\log(\bm{x}_1)[\bm{h}_2 ] \rangle ] denotes the directional derivative ) and the logarithmic maps as ] , and a one - to - one correspondence with the rank- psd matrix . by quotienting this equivalence relation out , the set of rank- psd matrices reduced to the quotient of the manifold by the orthogonal group , i.e. , . with the studied relationship between and ,the function is able to derive the function defined as . here , is defined in the total space and descends as a well - defined function in the quotient manifold .therefore , optimizing over the total space is reduced to optimizing on the psd manifold , which is well - studied in several works .note that , as each element on the psd manifold is simply parameterized by , optimizing on the manifold actually deals directly with .to more easily understand this point , one can take the well - known grassmann manifold as an analogy , where each element can be similarly represented by the equivalence class ] , where the -th entry being 1 and other entries being 0 indicates that belongs to the -th class of classes in total . as discriminant learning techniques developed in euclidean space, we assume that prior knowledge is known regarding the distances between pairs of spd points on the new spd manifold .let s take the similarity or dissimilarity between pairs of spd points into account : two spd points are similar if the similarity based on the geodesic distance between them on the new manifold is larger , while two spd points are dissimilar if their similarity is smaller . given a set of the similarity constraints , our goal is to learn the manifold - manifold transformation matrix that parameterizes the similarities of spd points on the target spd manifold . to this end, we exploit the supervised criterion of centered kernel target alignment to learn discriminative features on the spd manifold by regressing the similarities of selected sample pairs to the target similarities .formally , our supervised spd similarity learning ( spdsl ) approach is to maximize the following objective function : where and are frobenius inner product and norm respectively .the elements of matrix encodes the similarities of spd data while the elements of presents the ground - truth similarities of the involved spd points .the matrix is used to select the pairs of spd points when the corresponding elements are 1 .the matrix is employed for centering the data similarity matrix and the similarity matrix on labels . 
is the number of samples , is the identity matrix of size , is the vector of size with all entries being ones , ^t ] can be derived by : \\ & = \frac{\langle \bm{u}\bm{g}\circ d_w k(\bm{w})[\dot{\bm{w}}]\bm{u } , \bm{g}\circ(\bm{y}\bm{y}^{t } ) \rangle_{\mathcal{f } } \|\mathcal{l}\|_{\mathcal{f}}}{\|\mathcal{l}\|_{\mathcal{f}}^2}\\ - & \frac{\langle \mathcal{l } , \bm{g}\circ(\bm{y}\bm{y}^{t } ) \rangle_{\mathcal{f } } \langle \frac{\mathcal{l}}{\|\mathcal{l}\|_{\mathcal{f } } } , \bm{u}\bm{g}\circ d_w k(\bm{w})[\dot{\bm{w}}]\bm{u } \rangle_{\mathcal{f}}}{\|\mathcal{l}\|_{\mathcal{f}}^2 } \\ & = \langle d_w k(\bm{w})[\dot{\bm{w } } ] , \bm{u } \left ( \frac{\bm{g}\circ(\bm{y}\bm{y}^{t})}{\|\mathcal{l}\|_{\mathcal{f}}}- \frac{\mathcal{j}(\bm{w})\mathcal{l}}{\|\mathcal{l}\|_{\mathcal{f}}^2 } \right ) \bm{u } \rangle_{\mathcal{f } } , \label{eq12 } \end{aligned}\ ] ] where , indicates frobenius inner product , denotes frobenius norm .accordingly , the key issue in eqn.[eq12 ] is to estimate , where is formulated by eqn.[eq6 ] .when in eqn.[eq4 ] is the geodesic distance of aim defined in eqn.[eq1 ] , the euclidean gradient of can be derived as : where , . for other affine invariant metrics such as stein divergence , the corresponding euclidean gradient of withthe geodesic distance function being defined in eqn.[eq1.0 ] can be computed by : where , and hence be able to work in our new proposed framework . when endowing the spd manifold with the non - affine invariant metric lem , it not easy to calculate the euclidean gradient of due to the matrix logarithms in it .thus , we need to study the problem of the computation of the euclidean gradient for the lem case in the following .first , we decompose the derivative of lem w.r.t . into three derivatives with the trace form : * proposition 1 .* _ the derivatives of the three trace forms in eqn.[eq13 ] can be respectively computed by ( here , , ) _ :d_w(tr(^2(_i ) ) = 4_i ( _ i)[(_i ) ] .[ eq14 ] + d_w(tr(^2(_j ) ) = 4_j ( _ j)[(_j ) ] .[ eq15 ] + & d_w(tr((_i)(_j ) ) + & = 2_i ( _ i)[(_j ) ] + 2_j ( _ j)[(_i ) ] .[ eq16 ] _ proof .the three formulas for the gradients with the matrix logarithm correspond to the three ones with rotation matrices in ( section 5.3 ) , where a detailed proof is given . _ by using * proposition 1 . *( i.e. eqn.[eq14 ] , eqn.[eq15 ] , eqn.[eq16 ] ) and the sum rule of the directional derivatives , we derive with being the geodesic distance of lem in eqn.[eq4 ] as : \\ & + \bm{b}_j \text{d}\log(\bm{\hat{x}}_j)[\log(\bm{\hat{x}}_j)-\log(\bm{\hat{x}}_i ) ] ) \beta k_{ij}(\bm{w } ) .\label{eq17 } \end{aligned}\ ] ] to calculate the formula eqn.[eq17 ] , we then apply a function of block triangular matrix developed in to compute the form of ] is simply listed as : n = size(x , 1 ) ; z = zeros(n ) ; a = log([x , h ; z , x ] ) ; d = a(1:n , ( n+1):end ) , where ] . for lek, there are three implements based on polynomial , exponential and radial basis kernels , which are respectively denoted as lek- , lek- and lek- . for lek- and lek- , we densely sampled the from 1 to 50 . the parameters in lek- and the in the three lek versions were all tuned in the same way as rsr . 
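The block-triangular construction quoted just above for the directional derivative of the matrix logarithm can be transcribed into Python directly: the matrix logarithm of the block matrix [[X, H], [0, X]] carries Dlog(X)[H] in its upper-right block. The finite-difference check and the random SPD test matrix below are illustrative additions, not part of the original code.

```python
import numpy as np
from scipy.linalg import logm

def dlogm(X, H):
    """Directional derivative Dlog(X)[H] via the block-triangular trick."""
    n = X.shape[0]
    M = np.zeros((2 * n, 2 * n))
    M[:n, :n], M[n:, n:], M[:n, n:] = X, X, H
    return logm(M)[:n, n:]

# quick finite-difference sanity check on a random SPD matrix (illustrative)
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)); X = A @ A.T + 4 * np.eye(4)
H = rng.normal(size=(4, 4)); H = (H + H.T) / 2
eps = 1e-6
fd = (logm(X + eps * H) - logm(X - eps * H)) / (2 * eps)
print(np.max(np.abs(dlogm(X, H) - fd)))   # should be close to zero
```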
for leml, the parameter is tuned in the range of [ 0.1 , 1 , 10 ] , and is tuned from 0.1 to 0.5 .for spdml and our method spdsl , the maximum iteration number of the optimization algorithm is set to 50 , the parameters is fixed as the minimum number of samples in one class , the dimensionality of the lower - dimensional spd manifold and were tuned by cross - validation .the parameter in our method is set to , where is equal to the mean distance of all pairs of training data .[ fig3 ] [ cols="^,^,^,^,^,^,^",options="header " , ] [ tab4 ] we then employ the hdm05 database to handle with the problem of human action recognition from motion capture sequences . as shown in fig.[fig3 ] , this dataset contains 2,337 sequences of 130 motion classes , e.g. , _ `clap above head',`lie down floor',`rotate arms ' , ` throw basket ball ' _ , in 10 to 50 realizations executed by various actors .the 3d locations of 31 joints of the subjects are provided over time acquired at the speed of 120 frames per second . following the previous works , we represent an action of a joints skeleton observed over m frames by its joint covariance descriptor .this descriptor is an form of spd matrix of size , which is computed by the second order statistics of 93-dimensional vectors concatenating the 3d coordinates of the 31 joints in each frame . as the evaluation protocol on uiuc , on this dataset, we also conduct 10 times random evaluations , in which half of sequences ( around 1,100 sequences ) are randomly selected for training data , and the rest are used for testing . on the hdm05 database ,the work only used 14 motion classes for evaluation while we tested these methods for identifying 130 action classes .table.[tab4 ] summarizes the performances of the comparative algorithms on the uiuc dataset . in the evaluation ,the dimensions of resulting manifolds achieved by dimensionality reduction methods are all set as 30 .different from the last two evaluations , cdl and rsr performance worse than other competing methods .the proposed spdsl again improves the existing dimensionality reduction based methods leml and spdml with 1%-3% , and achieve state - of - the - art performance on the hdm05 database . since our method spdsl and the two methods spdml , leml adopt the same spd matrix learning scheme , we here mainly make two pieces of discussions between them .first , compared with the related manifold learning method spdml , our spdsl framework proposes a more general solution and a more favorable objective function .this point has been validated by the three evaluations . as can be seen from table [ tab2 ] , table [ tab3 ] and table [ tab4 ] ,there are two key conclusions observed from the three visual recognition tasks : \a ) as for the new solution , its main benefits lie in enlarging the search domain and opening up the possibility of using non - affine invariant metrics ( e.g. lem ) . while spdml * for affine invariant metrics aim and stein improves spdml mildly ( this may depend on the data ) , the gains of spdml*-lem over the aim and stein cases are relatively obvious , i.e. 1.65% , 2.15% , 6.21% on average , respectively for the three datasets .\b ) the new objective function ( for similarity regression ) is quite different from that ( for graph embedding ) used in [ 5 ] . while it s hard to theoretically prove the gains , we have empirically studied its priority . 
by comparing spdsl with spdml * , the improvements for the three datasets are 2.13% , 1.03% , 6.34% on average for the three used databases , respectively .second , in contrast to leml which focuses on metric learning , our spdsl learns discriminative similarities on spd manifolds . besides , while leml performs metric learning on the tangent space of spd manifolds , the proposed spdsl learns similarity directly on the spd manifolds .intuitively , our learning scheme would more faithfully respect the riemannian geometry of the data space , and thus could lead to more favorable spd features for classification tasks . from the above three evaluations , we can see some improvements of spdsl over leml .we have proposed a geometry - aware spd similarity learning ( spdsl ) framework for more robust visual classification tasks . under this framework , by exploiting the riemannian geometry of psd manifolds , we open the possibility of directly learning the manifold - manifold transformation matrix . to achieve the discriminant learning on the spd features , this work devises a new spdsl technique working on spd manifolds . with the objective of the proposed spdsl, we derive an optimization algorithm on psd manifolds to pursue the transformation matrix .extensive evaluations have studied both the effectiveness of efficiency of our spdsl on three challenging datasets . for future work , the study on the relationship between the selected riemannian metrics of psd manifolds and spd manifolds would be interesting for the problem of supervised spd similarity learning . besides , if neglecting the designed discriminant function on spd features , learning the transformation on spd features for object sets is equal to learning the projection on single object features .thus , this work can be extended to learn hierarchical representations on object feature by leveraging the current powerful deep learning techniques .this work has been carried out mainly at the institute of computing technology ( ict ) , chinese academy of sciences ( cas ) .it is partially supported by 973 program under contract no .2015cb351802 , natural science foundation of china under contracts nos .61390511 , 61173065 , 61222211 , and 61379083 .m. t. harandi , c. sanderson , r. hartley , and b. c. lovell , `` sparse coding and dictionary learning for symmetric positive definite matrices : a kernel approach , '' in _ proc . eurocomput . vision _ , 2012 .m. e. hussein , m. torki , m. a. gowayyed , and m. el - saban , `` human action recognition using a temporal hierarchy of covariance descriptors on 3d joint locations , '' in _ international joint conf . on artificial intelligence _, 2013 .v. arsigny , p. fillard , x. pennec , and n. ayache , `` geometric means in a novel vector space structure on symmetric positive - definite matrices , '' _ siam j. matrix analysis and applications _ , vol . 29 , no . 1 ,pp . 328347 , 2007 .m. faraki , m. t. harandi , and f. porikli , `` approximate infinite - dimensional region covariance descriptors for image classification , '' in _ international conference on acoustics , speech and signal processing _ , 2015 , pp .13641368 .m. t. harandi , r. hartley , b. lovell , and c. sanderson , `` sparse coding on symmetric positive definite manifolds using bregman divergences , '' _ ieee transactions on neural networks and learning systems _ , vol .27 , no . 6 , pp .12941306 , 2016 .z. huang , r. wang , s. shan , x. li , and x. 
chen ., `` log - euclidean metric learning on symmetric positive definite manifold with application to image set classification , '' in _ proc ._ , 2015 .s. bonnabel and r. sepulchre , `` riemannian metric and geometric mean for positive semidefinite matrices of fixed rank , '' _ siam journal on matrix analysis and applications _ ,31 , no . 3 , pp .10551070 , 2009 .s. yan , d. xu , b. zhang , h. zhang , q. yang , and s. lin , `` graph embedding and extensions : a general framework for dimensionality reduction , '' _ ieee trans . on pattern_ , vol . 29 , no . 1 ,pp . 4051 , 2007 .a. al - mohy and n. higham ,`` computing the frchet derivative of the matrix exponential , with an application to condition number estimation , '' _ siam journal on matrix analysis and applications _30 , no . 4 , pp .16391657 , 2009 . | symmetric positive definite ( spd ) matrices have been widely used for data representation in many visual recognition tasks . the success mainly attributes to learning discriminative spd matrices with encoding the riemannian geometry of the underlying spd manifold . in this paper , we propose a geometry - aware spd similarity learning ( spdsl ) framework to learn discriminative spd features by directly pursuing manifold - manifold transformation matrix of column full - rank . specifically , by exploiting the riemannian geometry of the manifold of fixed - rank positive semidefinite ( psd ) matrices , we present a new solution to reduce optimizing over the space of column full - rank transformation matrices to optimizing on the psd manifold which has a well - established riemannian structure . under this solution , we exploit a new supervised spd similarity learning technique to learn the transformation by regressing the similarities of selected spd data pairs to their ground - truth similarities on the target spd manifold . to optimize the proposed objective function , we further derive an algorithm on the psd manifold . evaluations on three visual classification tasks show the advantages of the proposed approach over the existing spd - based discriminant learning methods . discriminative spd matrices , riemannian geometry , spd manifold , geometry - aware spd similarity learning , psd manifold . |
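As a small supplement to the HDM05 experiments described in the entry above: the joint covariance descriptor is simply the sample covariance of the per-frame vectors of 3D joint coordinates. A minimal sketch follows; the ridge term added to keep the matrix strictly positive definite is an assumption of this sketch, not something stated in the paper.

```python
import numpy as np

def joint_covariance_descriptor(seq, reg=1e-6):
    """seq: array of shape (m_frames, 31, 3) with 3D joint locations per frame.
    Returns the 93x93 SPD covariance descriptor of the flattened frames."""
    frames = seq.reshape(seq.shape[0], -1)          # (m, 93)
    cov = np.cov(frames, rowvar=False)              # second-order statistics
    return cov + reg * np.eye(frames.shape[1])      # small ridge keeps it SPD

# illustrative random "motion" sequence of 120 frames
rng = np.random.default_rng(0)
X = joint_covariance_descriptor(rng.normal(size=(120, 31, 3)))
print(X.shape)   # (93, 93)
```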
the stochastic block model is the simplest statistical model for networks with a community ( or cluster ) structure . as such ,it has attracted considerable amount of work across statistics , machine learning , and theoretical computer science .a random graph from this model has its vertex set partitioned into groups , which are assigned distinct labels .the probability of edge being present depends on the group labels of vertices and . in the context of social network analysis, groups correspond to social communities . for other data - mining applications , they represent latent attributes of the nodes . in all of these cases ,we are interested in inferring the vertex labels from a single realization of the graph .in this paper we develop an information - theoretic viewpoint on the stochastic block model .namely , we develop an explicit ( ` single - letter ' ) expression for the per - vertex conditional entropy of the vertex labels given the graph .equivalently , we compute the asymptotic per - vertex mutual information between the graph and the vertex labels .our results hold asymptotically for large networks under suitable conditions on the model parameters .the asymptotic mutual information is of independent interest , but is also intimately related to estimation - theoretic quantities . for the sake of simplicity, we will focus on the symmetric two group model .namely , we assume the vertex set \equiv \{1,2,\dots , n\} ] . ) in words , is the minimum error incurred in estimating the relative sign of the labels of two given ( distinct ) vertices .equivalently , we can assume that vertex has label .then is the minimum mean square error incurred in estimating the label of any other vertex , say vertex .namely , by symmetry , we have ( see section [ sec : estimation ] ) ^ 2\big|x_1=+1\big\}\label{eq : alternativemmse1}\\ & = \min_{\hx_{2|1 } : \cg_n\to\reals } \e\big\{\big[x_2-\hx_{2|1}(\bg)\big]^2|x_1=+1\big\}\ , .\label{eq : alternativemmse2 } \ ] ] in particular ] .when possible , we will follow the convention of denoting random variables by upper - case letters ( e.g. ) , and their values by lower case letters ( e.g. ) .we use boldface for vectors and matrices , e.g. for a random vector and for a deterministic vector .the graph will be identified with its adjacency matrix .namely , with a slight abuse of notation , we will use both to denote a graph , e) ] the vertex set , and the edge set , i.e. a set of unordered pairs of vertices ) , and its adjacency matrix .this is a symmetric zero - one matrix with entries throughout we assume by convention .we write to mean that for a universal constant .we denote by a generic ( large ) constant that is independent of problem parameters , whose value can change from line to line .we say that an event holds _ with high probability _ if it holds with probability converging to one as .we denote the norm of a vector by and the frobenius norm of a matrix by .the ordinary scalar product of vectors is denoted as .unless stated otherwise , logarithms will be taken in the natural basis , and entropies measured in nats .the stochastic block model was first introduced within the social science literature in . around the same time , it was studied within theoretical computer science , under the name of ` planted partition model . 
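A minimal sampler for the symmetric two-group model defined above — uniform ±1 labels, within-group edge probability p_n, across-group probability q_n — might look as follows; the parameter values are illustrative only.

```python
import numpy as np

def sample_sbm(n, p_in, p_out, rng):
    """Symmetric two-group stochastic block model.
    Returns labels x in {-1,+1}^n and the symmetric adjacency matrix G."""
    x = rng.choice([-1, 1], size=n)
    same = np.equal.outer(x, x)                     # True where labels agree
    probs = np.where(same, p_in, p_out)
    upper = rng.random((n, n)) < probs
    G = np.triu(upper, k=1).astype(int)             # keep edges above the diagonal
    return x, G + G.T                               # symmetrize, zero diagonal

x, G = sample_sbm(n=500, p_in=0.08, p_out=0.02, rng=np.random.default_rng(0))
print(G.sum() // 2, "edges")
```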
'a large part of the literature has focused on the problem of _ exact recovery _ of the community ( cluster ) structure .a long series of papers , establishes sufficient conditions on the gap between and that guarantee exact recovery of the vertex labels with high probability .a sharp threshold for exact recovery was obtained in , showing that for , , , exact recovery is solvable ( and efficiently so ) if and only if .efficient algorithms for this problem were also developed in .for the sbm with arbitrarily many communities , necessary and sufficient conditions for exact recovery were recently obtained in .the resulting sharp threshold is efficiently achievable and is stated in terms of a ch - divergence .a parallel line of work studied the _ detection _ problem . in this case , the estimated community structure is only required to be asymptotically positively correlated with the ground truth . for this requirement , two independent groups proved that detection is solvable ( and so efficiently ) if and only if , when , .this settles a conjecture made in and improves on earlier work .results for detection with more than two communities were recently obtained in .a variant of community detection with a single hidden community in a sparse graph was studied in . in a sense , the present paper bridges detection and exact recovery , by characterizing the minimum estimation error when this is non - zero , but for smaller than for random guessing .an information - theoretic view of the sbm was first introduced in .there it was shown that in the regime of , , and ( i.e. , disassortative communities ) , the normalized mutual information admits a limit as .this result is obtained by showing that the condition entropy is sub - additive in , using an interpolation method for planted models . while the result of holds for arbitrary ( possibly small ) and extend to a broad family of planted models ,the existence of the limit in the assortative case is left open .further , sub - additivity methods do not provide any insight as to the limit value . for the partial recovery of the communities, it was shown in that the communities can be recovered up to a vanishing fraction of the nodes if and only if diverges .this is generalized in to the case of more than two communities . in these regimes, the normalized mutual information ( as studied in this paper ) tends to nats . for the constant degree regime , it was shown in that when is sufficiently large , the fraction of nodes that can be recovered is determined by the broadcasting problem on tree .namely , consider the reconstruction problem whereby a bit is broadcast on a galton - watson tree with poisson( ) offspring and with binary symmetric channels of bias on each branch .then the probability of recovering the bit correctly from the leaves at large depth gives the fraction of nodes that can be correctly labeled in the sbm . 
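The tree reconstruction problem invoked at the end of the paragraph above can be simulated directly: broadcast a root bit down a Poisson Galton–Watson tree through binary symmetric channels and recover it from the bits observed at the leaves by belief propagation, which is exact on a tree. The sketch below is illustrative only (the parameter values are made up, and log-likelihood ratios are clipped for numerical stability).

```python
import numpy as np

def root_llr(bit, depth, lam, eps, rng, clip=30.0):
    """Grow a Poisson(lam) subtree of given depth below a node carrying `bit`
    (each edge flips the bit with probability eps) and return the log-likelihood
    ratio of that node's bit given the bits observed at the leaves."""
    if depth == 0:
        return clip if bit == 1 else -clip          # the leaf itself is observed
    llr = 0.0
    for _ in range(rng.poisson(lam)):
        child = bit if rng.random() > eps else -bit
        l = np.clip(root_llr(child, depth - 1, lam, eps, rng, clip), -clip, clip)
        # evidence about the parent after passing through the BSC(eps) edge
        llr += np.log(((1 - eps) * np.exp(l) + eps) / (eps * np.exp(l) + (1 - eps)))
    return llr

rng = np.random.default_rng(0)
lam, eps, depth, trials = 3.0, 0.1, 7, 200          # lam*(1-2*eps)^2 > 1 here
correct = sum(root_llr(+1, depth, lam, eps, rng) > 0 for _ in range(trials))
print("fraction of correctly recovered root bits:", correct / trials)
```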
in terms of proof techniques , our arguments are closest to .we use the well - known lindeberg strategy to reduce computation of mutual information in the sbm to mutual information of the gaussian observation model .we then compute the latter mutual information by developing sharp algorithmic upper bounds , which are then shown to be asymptotically tight via an area theorem .the lindeberg strategy builds from while the area theorem argument also appeared in .we expect these techniques to be more broadly applicable to compute quantities like normalized mutual information or conditional entropy in a variety of models .let us finally mentioned that the result obtained in this paper are likely to extend to more general sbms , with multiple communities , to the censored block model studied in , the labeled block model , and other variants of block models . in particular , it would be interesting to understand which estimation - theoretic quantities appear for these models , and whether a general result stands behind the case of this paper .while this paper was in preparation , lesieur , krzakala and zdborov studied estimation of low - rank matrices observed through noisy memoryless channels .they conjectured that the resulting minimal estimation error is universal across a variety of channel models .our proof ( see section [ sec : strategy ] below ) establishes universality across two such models : the gaussian and the binary output channels .we expect that similar techniques can be useful to prove universality for other models as well .in this section we discuss how to evaluate the asymptotic formulae in theorem [ thm : main ] and theorem [ thm : mainestimation ] .we then discuss the consequences of our results for various estimation metrics . before passing to these topics, we will derive a simple upper bound on the per - vertex mutual information , which will be a useful comparison for our results .it is instructive to start with an elementary upper bound on .[ lemma : elementary ] assume , satisfy the assumptions of theorem [ thm : main ] ( in particular and ) .then we have where follows since are conditionally independent given and because only depends on through the product ( notice that there is no comma but product in . from our model , it is easy to check that the claim follows by substituting , and by taylor expansion for all large enough . ] . ) .the ` effective signal - to - noise ratio ' is given by the intersection of the curve , and the line .,title="fig : " ] ( -90,120) . the dashed lines are simple upper bounds : ( cf. lemma [ lemma : elementary ] ) and .right frame : asymptotic estimation error under different metrics ( see section [ sec : est ] ) . note the phase transition at in both frames.,title="fig : " ] .the dashed lines are simple upper bounds : ( cf .lemma [ lemma : elementary ] ) and .right frame : asymptotic estimation error under different metrics ( see section [ sec : est ] ) . 
note the phase transition at in both frames.,title="fig : " ] our asymptotic expression for the mutual information , cf .theorem [ thm : main ] , and for the estimation error , cf .theorem [ thm : mainestimation ] , depends on the solution of eq .( [ eq : mainequation ] ) which we copy here for the reader s convenience : here we defined the effective signal - to - noise ratio that enters theorem [ thm : main ] and theorem [ thm : mainestimation ] is the largest non - negative solution of eq .( [ eq : mainequation ] ) .this equation is illustrated in figure [ fig : formulaevaluation ] .it is immediate to show from the definition ( [ eq : gdef ] ) that is continuous on with , and .this in particular implies that is always a solution of eq .( [ eq : mainequation ] ) .further , since is monotone decreasing in the signal - to - noise ratio , is monotone increasing . as shown in the proof of remark [ rem : unique ] ( see appendix [ sec : proofunique ] ) , is also strictly concave on .this implies that eq .( [ eq : mainequation ] ) as at most one solution in , and a strictly positive solution only exists if .we summarize these remarks below , and refer to figure [ fig : informationestimation ] for an illustration .[ lemma : fixedpoint ] the effective snr , and the asymptotic expression for the per - vertex mutual information in theorem [ thm : main ] have the following properties : * for , we have and . * for , we have strictly with as . +further , strictly with as .all of the claims follow immediately form the previous remarks , and simple calculus , except the claim for .this is direct consequence of the variational characterization established below .we next give an alternative ( variational ) characterization of the asymptotic formula which is useful for proving bounds .under the assumptions and definitions of theorem [ thm : main ] , we have the function is differentiable on with as .hence , the is achieved at a point where the first derivative vanishes ( or , eventually , at ) . using the i - mmse relation , we get hence the minimizer is a solution of eq .( [ eq : mainequation ] ) .as shown above , for , the only solution is , which therefore yields as claimed .for , eq . ( [ eq : mainequation ] )admits the two solutions : and .however , by expanding eq .( [ eq : derpsi ] ) for small , we obtain and hence is a local maximum , which implies the claim for as well .we conclude by noting that eq .( [ eq : mainequation ] ) can be solved numerically rather efficiently .the simplest method consists is by iteration .namely , we initialize and then iterate . this approach was used for figure [ fig : informationestimation ] .theorem [ thm : mainestimation ] establishes that a phase transition takes place at for the matrix minimum mean square error defined in eq .( [ eq : mmsedef ] ) . throughout this section, we will omit the subscript to denote the limit ( for instance , we write ) .figure [ fig : informationestimation ] reports the asymptotic prediction for stated in theorem [ thm : mainestimation ] , and evaluated as discussed above .the error decreases rapidly to for . in this sectionwe discuss two other estimation metrics . in both caseswe define these metrics by optimizing a suitable risk over a class of estimators : it is understood that randomized estimators are admitted as well . 
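The simple iterative solution mentioned above can be sketched as follows. Since the displayed equations are lost in the extraction, the fixed-point form used here — gamma <- lambda * (1 - mmse(gamma)), with mmse(gamma) the scalar minimum mean square error of a ±1 signal observed in Gaussian noise at signal-to-noise ratio gamma — is an assumption about what the equation says, and the Monte Carlo evaluation of mmse is just one convenient choice.

```python
import numpy as np

def scalar_mmse(gamma, n_samples=200_000, rng=np.random.default_rng(0)):
    """MMSE of X ~ Unif{-1,+1} from Y = sqrt(gamma)*X + Z, Z ~ N(0,1).
    The posterior mean is tanh(sqrt(gamma)*Y); by symmetry we condition on X=+1."""
    y = np.sqrt(gamma) + rng.normal(size=n_samples)
    return 1.0 - np.mean(np.tanh(np.sqrt(gamma) * y) ** 2)

def effective_snr(lam, n_iter=100):
    """Fixed-point iteration gamma <- lam * (1 - mmse(gamma)) (assumed form)."""
    gamma = lam                       # any strictly positive initialization
    for _ in range(n_iter):
        gamma = lam * (1.0 - scalar_mmse(gamma))
    return gamma

for lam in (0.5, 1.5, 3.0):
    print(lam, effective_snr(lam))    # ~0 below lam = 1, strictly positive above
```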
*the first metric is the _ vector minimum mean square error _ : note the minimization over the sign : this is necessary because the vertex labels can be estimated only up to an overall flip .of course ] ( but now large overlap corresponds to good estimation ) . indeed by returning uniformly at random , we obtain .+ note that the main difference between overlap and vector minimum mean square error is that in the latter case we consider estimators taking arbitrary real values , while in the former we assume estimators taking binary values . in order to clarify the relation between various metrics ,we begin by proving the alternative characterization of the matrix minimum mean square error in eqs .( [ eq : alternativemmse1 ] ) , ( [ eq : alternativemmse2 ] ) . letting be defined as per eq . ( [ eq : mmsedef ] ) , we have ^ 2\big|x_1=+1\big\ } \label{eq : alternativemmse1_bis}\\ & = \min_{\hx_{2|1 } : \cg_n\to\reals } \e\big\{\big[x_2-\hx_{2|1}(\bg)\big]^2|x_1=+1\big\}\ , .\label{eq : alternativemmse2_bis } \ ] ] first note that eq .( [ eq : alternativemmse2_bis ] ) follows immediately from eq .( [ eq : alternativemmse1_bis ] ) since conditional expectation minimizes the mean square error ( the conditioning only changes the prior on ) .in order to prove eq .( [ eq : alternativemmse1_bis ] ) , we start from eq .( [ eq : mmse_2vert ] ) .since the prior distribution on is uniform , we have where in the second line we used the fact that , conditional to , is distributed as .continuing from eq .( [ eq : mmse_2vert ] ) , we get ^ 2\big\}\\ & = \frac{1}{2}\ , \e\big\{\big[x_1x_2-\e\{x_2|x_1=+1,\bg\}\big]^2\big|x_1 = + 1\big\}\nonumber\\ & \phantom{aaaa}+ \frac{1}{2}\ , \e\big\{\big[x_1x_2-\e\{x_2|x_1=+1,\bg\}\big]^2\big|x_1 = -1\big\}\\ & = \e\big\{\big[x_2-\e\{x_2|x_1=+1,\bg\}\big]^2\big|x_1 = + 1\big\}\ , , \ ]] which proves the claim .the next lemma clarifies the relationship between matrix and vector minimum mean square error .its proof is deferred to appendix [ sec : vectorvsmatrix ] .[ lemma : vectorvsmatrix ] with the above definitions , we have finally , a lemma that relates overlap and vector minimum mean square error , whose proof can be found in appendix [ sec : overlapvsvector ] .[ lemma : overlapvsvector ] with the above definitions , we have as an immediate corollary of these lemmas ( together with theorem [ thm : mainestimation ] and lemma [ lemma : fixedpoint ] ) , we obtain that is the critical point for other estimation metrics as well .[ coro : metrics ] the vector minimum mean square error and the overlap exhibit a _ phase transition at . namely , under the assumptions of theorem [ thm : main ] ( in particular , and ) , we have * if , then estimation can not be performed asymptotically better than without any information : * if , then estimation can be performed better than without any information , even in the limit : this section we describe the main elements used in the proof of theorem [ thm : main ] : * we describe a gaussian observation model which has asymptotically the same mutual information as the sbm introduced above . *we state an asymptotic characterization of the mutual information of this gaussian model . *we describe an approximate message passing ( amp ) estimation algorithm that plays a key role in the last characterization .we then use these technical results ( proved in later sections ) to prove theorem [ thm : main ] in section [ sec : prooftheoremmain ] .we recall that .define the gap .we will assume for the proofs that ( i.e. 
the assortative model ) but the results also hold for in an analogous fashion .the edges are conditionally independent given the vertex labels , with distribution : as a first step , we compare the sbm with an alternate gaussian observation model defined as follows .let be a gaussian random symmetric matrix generated with independent entries and , independent of .consider the noisy observations defined by note that this model matches the first two moments of the original model .more precisely , if we define the rescaled adjacency matrix , then and .our first proposition proves that the mutual information between the vertex labels and the observations agrees to leading order across the two models . [ prop : gausseq ] assume that , as , and .then there is a constant independent of such that the proof of this result is presented in section [ sec : gausseq ] .the next step consists in analyzing the gaussian model ( [ eq : gaussobs ] ) , which is of independent interest .it turns out to be convenient to embed this in a more general model whereby , in addition to the observations , we are also given observations of through a binary erasure channel with erasure probability , .we will denote by the output of this channel , where we set every time the symbol is erased .formally we have where are independent random variables , independent of , . in the special case , all of these observations are trivial , and we recover the original model .the reason for introducing the additional observations is the following .the graph has the same distribution conditional on or , hence it is impossible to recover the sign of . as we will see , the extra observations allow to break this trivial symmetry and we will recover the required results by continuity in as the extra information vanishes . indeed , our next result establishes a single letter characterization of in terms of a recalibrated _ scalar _ observation problem .namely , we define the following observation model for a rademacher random variable : here , , , are mutually independent .we denote by , the minimum mean squared error of estimating from , , conditional on .recall the definitions ( [ eq : infodef ] ) , ( [ eq : mmsedef ] ) of , , and the expressions ( [ eq : infoformula ] ) , ( [ eq : mmseformula ] ) .a simple calculation yields [ prop : singleletter ] for any , ] , since , by proposition [ prop : singleletter ] has a well - defined limit as , and is arbitrary , we have that : it is immediate to check that is continuous in , and as defined in theorem [ thm : main ] . 
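The approximate message passing (AMP) iteration referred to in the proof outline can be illustrated on the rank-one Gaussian observation model, although the exact normalization used in the paper is lost in the extraction. The scaling below (signal (lambda/n)*x*x^T, symmetric noise with entries of variance 1/n) is an assumption chosen so that the iterates stay O(1), and the plain tanh denoiser is a simplification: the paper's AMP calibrates its denoiser through state evolution and uses additional side information.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam, n_iter = 2000, 3.0, 50

# assumed normalization of the rank-one Gaussian observation model
x = rng.choice([-1.0, 1.0], size=n)
W = rng.normal(size=(n, n)) / np.sqrt(n)
W = (W + W.T) / np.sqrt(2.0)                  # symmetric noise, entries ~ N(0, 1/n)
Y = (lam / n) * np.outer(x, x) + W

# simplified AMP: tanh denoiser plus Onsager memory term
s = rng.normal(scale=0.1, size=n)             # uninformative random initialization
m_prev = np.zeros(n)
for _ in range(n_iter):
    m = np.tanh(s)
    b = np.mean(1.0 - m ** 2)                 # (1/n) * sum_i f'(s_i)
    s, m_prev = Y @ m - b * m_prev, m

overlap = abs(np.dot(np.tanh(s), x)) / n
print(f"empirical overlap with the planted vector: {overlap:.3f}")
```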
furthermore , as , the unique positive solution of eq .( [ eq : epsfixedpoint ] ) converges to , the largest non - negative solution to of eq .( [ eq : mainequation ] ) , which we copy here for the readers convenience : this follows from the smoothness and concavity of the function ( see lemma [ rem : unique ] ) .it follows that and therefore this proves theorem [ thm : singlelettergauss ] .theorem [ thm : main ] follows by applying proposition [ prop : gausseq ] .given a collection of random variables defined on the same probability space as , and a non - negative real number , we define the following hamiltonian and log - partition function associated with it : [ lem : gaussentro ] we have the identity : by definition : since the two distributions and are absolutely continuous with respect to each other , we can write the above simply in terms of the ratio of ( lebesgue ) densities , and we obtain : we modify the final term as follows : substituting this in eq .we have as required .[ lem : sbmentro ] define the ( random ) hamiltonian by : then we have that : this follows directly from the definition of mutual information : as in lemma [ lem : gaussentro ] we can write this in terms of densities as : substituting this in the mutual information formula eq .yields the lemma .define the random variables as follows : the following lemma shows that , to compute it suffices to compute the log - partition function with respect to the approximating hamiltonian .assume that , as , and .then , we have [ lem : sbmapproxrel ] we concentrate on the log - partition function for the hamiltonian . first , using the fact that when : now when , for small enough , we have by taylor expansion the following approximation for ] , } ] , the following properties hold for the function : 1 .it is continuous , monotone increasing and concave in .it satisfies the following limit behaviors & = 1\ , .\end{aligned}\ ] ] as a consequence we have the following for all ] ( in particular ) .we will use to denote elements of . for for each are given a one - parameter family of discrete noisy channels indexed by ( with a non - empty interval ) , with input alphabet and finite output alphabet .concretely , for any , we have a transition probability which is differentiable in .we shall omit the subscript since it will be clear from the context .we then consider a random vector in , and a set of observations in that are conditionally independent given .further is the noisy observation of through the channel . in formulae ,the joint probability density function of and is this obviously include the two - groups stochastic block model as a special case , if we take to be the uniform distribution over , and output alphabet . in that case is just the adjacency matrix of the graph . in the following we write for the set of observations excluded , and for .[ lemma : differentiation ] with the above notation , we have : \big\}\ , .\label{eq : bigderivative } \ ] ] fix . by linearity of differentiation ,it is sufficient to prove the claim when only depends on .writing by chain rule in two alternative ways we get where in the last identity we used the conditional independence of from , , given . differentiating with respect to , and using the fact that is independent of , we get the first term . 
singling out the dependence of on we get in the second line we used the fact that the distribution of is independent of , and the normalization condition .we follow the same steps for the second term ( [ eq : derivativefirst ] ) : \log \big[\sum_{x'_e}p_{x_e , y_e|\by_{-e}}(x'_e , y_e|\by_{-e})\big]\big\}\\ & \phantom{aa}=-\frac{\d\phantom{a}}{\d\theta}\sum_{y_e } \e\big\{\big[\sum_{x_e}p_e(y_e|x_e)p_{x_e|\by_{-e}}(x_e|\by_{-e})\big]\log \big[\sum_{x'_e}p_e(y_e|x'_e)p_{x_e|\by_{-e}}(x'_e|\by_{-e})\big]\big\}\\ & \phantom{aa}=-\sum_{x_e , y_e}\frac{\d p_e(y_e|x_e)}{\d\theta } \e\big\{p_{x_e|\by_{-e}}(x_e|\by_{-e})\log \big[\sum_{x'_e}p_e(y_e|x'_e)p_{x_e|\by_{-e}}(x'_e|\by_{-e})\big]\big\}\ , .\label{eq : derivative3 } \ ] ] taking the difference of eq .( [ eq : derivative2 ] ) and eq .( [ eq : derivative3 ] ) we obtain the desired formula .we next apply the general differentiation lemma [ lemma : differentiation ] to the stochastic block model .as mentioned above , this fits the framework in the previous section , by setting be the adjacency matrix of the graph , and taking to be the uniform distribution over . for the sake of convenience, we will encode this as .in other words and ( respectively ) encodes the fact that edge is present ( respectively , absent ) . we then have the following channel model for all : we parametrize these probability kernels by a common parameter by letting we will be eventually interested in setting to make contact with the setting of theorem [ thm : mainestimation ] . [ lemma : differentiationsbm ]let be the mutual information of the two - groups stochastic block models with parameters and given by eq .( [ eq : pnqnparam ] ) . then there exists a numerical constant such that the following happens . for any there exists such that , if then for all ] , and , we obtain the following bounds by taylor expansion -b_0-\frac{\delta_n}{\op_n(1-\op_n)}\,\hx_e(\by_{-e})\right|\le c\ , \frac{\theta}{\op_n(1-\op_n)n}\ , , \\\left|\log\big[\frac{p_e(+|+)p_e(-|-)}{p_e(+|-)p_e(-|+)}\big]- \frac{2\delta_n}{\op_n(1-\op_n)}\right|\le c\ , \frac{\theta}{\op_n(1-\op_n)n}\ , , \ ] ] where and will denote a numerical constant that will change from line to line in the following .such bounds hold for all ] . from eq .( [ eq : hxe ] ) we therefore get ( recalling ) substituting this in eq .( [ eq : dhfinal ] ) , we get finally we rewrite the sum over explicitly as sum over and recall that to get since is equivalent to ( up to a change of variables ) and , with is independent of , this is equivalent to our claim ( recall the definition of , eq .( [ eq : mmse_2vert ] ) ) . from lemma [ lemma : differentiationsbm ] and theorem [ thm : main ] , we obtain , for any , from lemma [ lem : fixedpt ] and [ lem : psiintegral ] .d . and a.m. were partially supported by nsf grants ccf-1319979 and dms-1106627 and the afosr grant fa9550 - 13 - 1 - 0036 .part of this work was done while the authors were visiting simons institute for the theory of computing , uc berkeley .let us begin with the upper bound on . by using in eq .( [ eq : vmsedef ] ) , we get where the equality on the second line follows because is distributed as .the last inequality yields the desired upper bound . in order to prove the lower bound on assume , for the sake of simplicity , that the infimum in the definition ( [ eq : vmsedef ] ) is achieved at a certain estimator .if this is not the case , the argument below can be easily adapted by letting be an estimator that achieves error within of the infimum . 
under this assumption , we have ,from ( [ eq : vmsedef ] ) , where the last identity follows since the minimum over is achieved at .consider next the matrix minimum mean square error .let } ] with independently across $ ] .( formally , with a probability space , but we prefer to avoid unnecessary technicalities . )we then have , by central limit theorem with the uniform in .this yields the desired lower bound since , by dominated convergence , prove the claim for ; the other claim follows from an identical argument .since , we have by triangle inequality , that . applying bernstein inequality to the sum of random variables bounded by 1 : setting for large enough yields the required result .let us start from point , .since , it is sufficient to prove this claim for where , for the rest of the proof , we keep .we start by noting that , for all , this identity can be proved using the fact that .indeed this yields where the first and last equalities follow by symmetry . differentiating with respect to ( which can be justified by dominated convergence ) : now applying stein s lemma ( or gaussian integration by parts ) : using the trigonometric identity , the shorthand and identity above : now , let , whereby we have note now that satisfies is even with , is continuously differentiable and and are bounded . consider the function , where .we have the identities : hence , to prove that is concave on , it suffices to show that , are non - positive for . by properties and above we can differentiate with respect to and interchange differentiation and expectation . computing the derivative with respect to yields where the last line follows from the fact that is odd and is even in . consequently since and for , the integrand is negative and we obtain the desired result .for any random variable we have since ( given there are exactly possible choices for ) , this implies the claim ( [ eq : ixx ] ) follows by applying the last inequality once to and once to and taking the difference . for the second claim, we prove that where as , whence the claim follows since .we claim that we can construct an estimator and a function with , such that , defining then we have to prove this claim , it is sufficient to consider where is the principal eigenvector of . then implies that , for , almost surely , hence the above claim holds , for instance , with .then expanding with the chain rule ( whereby ) , we get : since is a function of , .furthermore since is binary .hence : when , differs from in at most positions , whence .when , we trivially have . consequently : the second claim then follows by dividing with and letting on the right hand side .by definition , we have : define a related sequence as follows : } f'_t(s_i^t + \mu_t x_i , x(\eps)_i)\ , , \\\bs^0 & = \bx^0 + \mu_0\bx\ , . \ ] ] here is defined via the state evolution recursion : we call a function is pseudo - lipschitz if , for all where is a constant . in the rest of the proof , we will use to denote a constant that may depend on and but not on , and can change from line to line .we are now ready to prove lemma [ lem : stateevollem ] . since the iteration for is in the form of , we have for any pseudo - lipschitz function : letting , this implies that , almost surely : it then suffices to show that , for any pseudo - lipschitz function , almost surely : = 0\ , .\ ] ] we instead prove the following claims that include the above . 
for any fixed , almost surely : & = 0,\label{eq : induc0}\\ \lim_{n\to \infty}\frac{1}{n}\,\|\bdelta^t\|_2 ^ 2 & = 0 , \label{eq : induc1}\\ \limsup_{n\to\infty } \frac{1}{n}\,\|\bs^t + \mu_t \bx\|_2 ^ 2 & < \infty\ , , \label{eq : induc2 } \ ] ] where we let .we can prove this claim by induction on .the base case of is trivial for all three claims : and is satisfied by our initial condition , .now , assuming the claim holds for we prove the claim for . by the pseudo - lipschitz property and triangle inequality ,we have , for some : consequently : }\right\rvert } & \le \frac{l}{n } \sum_{i=1}^n\left ( { \left\lvert{\delta^t_i}\right\rvert } + { \left\lvert{s^t_i + \mu_t x_i}\right\rvert } { \left\lvert{\delta^t_i}\right\rvert } + { \left\lvert{\delta^t_i}\right\rvert}^2\right)\\ & \le \frac{l}{n } \big(\|\bdelta^t\|_2 ^ 2 + \sqrt{n}\|\bdelta^t\|_2 + \|\bdelta^t\|_2\|\bs^t+\mu_t\bx\|_2\big)\ , . \ ] ] hence the induction claim eq . at from claims eq .and eq . at , with the standard inequality : using the fact that are lipschitz : by the induction hypothesis , ( specifically at , wherein it is immediate to check that is pseudo - lipschitz by the boundedness of ) : thus the first term in eq . vanishes .for the second term to vanish , using the induction hypothesis for , it suffices that almost surely : this follows from standard eigenvalue bounds for wigner random matrices . for the third term in eq . to vanish , we have by that : hence it suffices that a.s . , for which we expand their definitions to get : .\end{aligned}\ ] ] by assumption , is lipschitz and we can apply the induction hypothesis with to obtain that the limit vanishes .indeed , by a similar argument is bounded asymptotically in , and so is .along with the induction hypothesis for this implies that the fourth term in eq .asymptotically vanishes .this establishes the induction claim eq ..e. abbe , a. s. bandeira , a. bracher , and a. singer , _ decoding binary node labels from censored edge measurements : phase transition and efficient recovery _ , ieee transactions on network science and engineering * 1 * ( 2014 ) , no . 1 .e. abbe , a.s .bandeira , a. bracher , and a. singer , _ linear inverse problems on erds - rnyi graphs : information - theoretic limits and efficient recovery _, information theory ( isit ) , 2014 ieee international symposium on , june 2014 , pp .12511255 .mireille capitaine , catherine donati - martin , and delphine fral , _ the largest eigenvalues of finite rank deformation of large wigner matrices : convergence and nonuniversality of the fluctuations _ , the annals of probability * 37 * ( 2009 ) , no . 1 , 147 . cyril masson , andrea montanari , thomas j richardson , and rdiger urbanke , _ the generalized area theorem and some of its consequences _ , information theory , ieee transactions on * 55 * ( 2009 ) , no . 11 , 47934821 .andrea montanari and david tse , _ analysis of belief propagation for non - linear problems : the example of cdma ( or : how to prove tanaka s formula ) _ , information theory workshop , 2006 .itw06 punta del este .ieee , ieee , 2006 , pp . | we develop an information - theoretic view of the stochastic block model , a popular statistical model for the large - scale structure of complex networks . a graph from such a model is generated by first assigning vertex labels at random from a finite alphabet , and then connecting vertices with edge probabilities depending on the labels of the endpoints . 
in the case of the symmetric two-group model, we establish an explicit `single-letter' characterization of the per-vertex mutual information between the vertex labels and the graph. the explicit expression of the mutual information is intimately related to estimation-theoretic quantities, and in particular reveals a phase transition at the critical point for community detection. below the critical point, the per-vertex mutual information is asymptotically the same as if edges were independent; correspondingly, no algorithm can estimate the partition better than random guessing. conversely, above the threshold, the per-vertex mutual information is strictly smaller than the independent-edges upper bound, and in this regime there exists a procedure that estimates the vertex labels better than random guessing. |
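As a concrete illustration of the detection phase transition summarized in this abstract, the short simulation below generates a symmetric two-group graph and estimates the hidden partition with a spectral estimator of the kind invoked above (the principal eigenvector); the graph size, edge probabilities, and the choice of centering are illustrative assumptions rather than quantities taken from the analysis. The reported overlap is typically near zero when the two edge probabilities are close and noticeably larger when they are well separated.

```python
import numpy as np

rng = np.random.default_rng(0)

def sbm_overlap(n=1500, p_in=0.035, p_out=0.015):
    """Draw a symmetric two-group stochastic block model and estimate the
    labels from the principal eigenvector of the centered adjacency matrix."""
    x = rng.choice([-1, 1], size=n)                      # hidden vertex labels
    prob = np.where(np.equal.outer(x, x), p_in, p_out)   # edge probabilities
    a = (rng.random((n, n)) < prob).astype(float)
    a = np.triu(a, 1)
    a = a + a.T                                          # undirected, no self-loops
    centered = a - (p_in + p_out) / 2.0                  # remove the common mean
    eigvals, eigvecs = np.linalg.eigh(centered)
    v1 = eigvecs[:, np.argmax(eigvals)]                  # principal eigenvector
    return abs(np.mean(np.sign(v1) * x))                 # overlap with the truth

print("weak separation  :", sbm_overlap(p_in=0.021, p_out=0.019))
print("strong separation:", sbm_overlap(p_in=0.035, p_out=0.015))
```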
optimization for multiple conflicting objectives results in more than one optimal solutions ( known as pareto - optimal solutions ) .although one of these solutions is to be chosen at the end , the recent trend in evolutionary and classical multi - objective optimization studies have focused on approximating the set of pareto - optimal solutions . however , to assess the quality of pareto approximation set , special measures are needed .hypervolume indicator is a commonly accepted quality measure for comparing approximation set generated by multi - objective optimizers .the indicator measures the hypervolume of the dominated portion of the objective space by pareto approximation set and has received more and more attention in recent years .there have been some studies that discuss the issue of fast hypervolume calculation .these algorithms partition the covered space into many cuboid - shaped regions , within which the approach considering the dominated hypervolume as a special case of klee s measure problem is regarded as the current best one .this approach adopts orthogonal partition tree which requires storage and streaming variant .conceptual simplification of the implementation are concerned and thus the algorithm achieves an upper bound of for the hypervolume calculation . ignoring the running time of sorting the points according to the -th dimension, , the running time of this approach is exponential of the dimension of space .this paper develops novel heuristics for the calculation of hypervolume indicator .special technologies are applied and the novel approach yields upper bound of runtime and consumes storage .the paper is organized as follows . in the next section, the hypervolume indicator is defined , and some background on its calculation is provided . then , an algorithm is proposed which uses the so - called vertex - splitting technology to reduce the hypervolume .the complexities of the proposed algorithm are analyzed in section [ sec : complexity ] .the last section concludes this paper with an open problem .without loss of generality , for multi - objective optimization problems , if the objective functions are considered with to be minimized , not one optimal solution but a set of good compromise solutions are obtained since that the objectives are commonly conflicting .the compromise solutions are commonly called pareto approximation solutions and the set of them is called the pareto approximation set . for a pareto approximation set produced in a run of a multi - objective optimizer , where , all the solutions are non - comparable following the well - known concept of pareto dominance . specially , we say that dominates at the -th dimension if .the unary hypervolume indicator of a set consists of the measure of the region which is simultaneously dominated by and bounded above by a reference point such that . in the context of hypervolume indicator , we call the solutions in as the dominative points . as illustrated in fig .[ fig:2d - set ] , the shading region consists of an orthogonal polytope , and may be seen as the union of three axis - aligned hyper - rectangles with one common vertex , i.e. 
, the reference point .another example in three dimensional space is shown in fig .[ fig:3d - set ] , where five dominative points , , and the reference point are considered .the volume is the union of the volumes of all the cuboids each of which is bounded by a vertex , where the common regions are counted only once .if a point is dominated by another point , the cuboid bounded by is completely covered by the cuboid bounded by . and thus only the non - dominated points contribute to the hypervolume .in other works , e.g. the work of beume and rudolph , the hyper - cuboid in -dimensional space are partitioned into child hyper - cuboids along the -th dimension and then all these child hypervolumes are gathered together by the inclusion - exclusion principle . in this paper , we step in another way .the hyper - cuboid is partitioned into child hyper - cuboids at some splitting reference points and then all the child hypervolumes are gathered directly .more detailed , given a point , each of other points in must dominated at some dimensions for the non - comparable relation .if the parts over are handled , the problem of calculating the hypervolume bounded by and the reference point is figured out .the additional part partitioned out at the -th dimension is also a -dimensional hyper - cuboid whose vertices are ones beyond at such dimension .their projections on the hyperplane orthogonal to dimension are all dominated by , and thus are free from consideration .it should be noted that the reference point of child hyper - cuboid is altered to , namely the -th coordinate is replaced by .the other child hyper - cuboids are handled in the similar way . in these processes ,the given point is called the splitting reference point . obviously, the hyper - cuboids with more dominative points require more run time to calculate the hypervolumes . to reduce the whole run time for calculating all these child hyper - cuboids ,the splitting reference point should be carefully selected .the strategy adopted in this paper is described as follows .\(1 ) let and choose a point with the least dimensions on which the point dominated by other points .\(2 ) if some points tie , update as and then within these points , choose a point with the least dimensions on which the point dominated by other points .\(3 ) repeat the similar process until only single point is left or . andif and several points are left , the first found point is selected . by the above principle , as an example , not or other points but is chosen as the first splitting reference point for the case shown in fig .[ fig:3d - set ] .two child cuboids each bounded by one points and another child cuboid bounded by two points are generated by splitting along .this is the optimal strategy in such case .the algorithm to calculate the hypervolume is shown in algorithm [ algo : calchypervolume ] .some major parameters are as follows . ** int[n][d ] order * the orders of all the dominative points at each dimension are represented by a two - dimensional array of integer .* * int split * the index of the point at which the hyper - cuboid is cut to generate multiple child hyper - cuboids is called . ** int[n ] splitcount * the numbers of present in the -th row of the array are saved in , where .* * int[n ] coveredcount * the numbers of present in the current checked row of the array are save in , where .moreover , some conventions are explained as follows . 
*the subscript of begins with while the index of array begins with .thus is same as [j-1] ] means setting each element of as , while \leftarrow b [ ] ] ; sort ;[j-1 ] \leftarrow ] ; ] ; * break * ; ; ; ; ; \leftarrow y_{split , j}$ ] ; ; ; ; ; in fact , when the hyper - cuboid is cut into two child hyper - cuboids , there may be some points dominated by the splitting reference point in the bigger cuboid , and thus such points could be removed from the points set . in the proposed algorithm , it does not matter whether those points are removed or not .before discussing the time - space complexity of the proposed algorithm , some properties are presented firstly .[ lemma : delta ] let be the number of points dominating at the -th dimension .then \(1 ) for and each , .\(2 ) for and each , .\(3 ) for and each , .\(4 ) .\(5 ) for and each , .it is clear that ( 2 ) ( 4 ) ( 5 ) .the follows show ( 1 ) , ( 2 ) and ( 3 ) .\(1 ) ( by contradiction . )assume to the contrary there is some , .if this is the case , there are at least one where such that each dominates for all .it follows that dominates , which contradicts our assumption that all the elements in are non - comparable .\(2 ) given , sort all where and label each a sequence number which ranges from 0 to . thus .there are two cases to consider .firstly , if all are different each other , then .it follows that . secondly , if there are same elements within , without loss of generality , suppose and .then , it follows that .this completes the proof .\(3 ) ( by contradiction . ) for any , is excluded by ( 1 ) of this lemma .thus for some is considered .if this is the case , we obtain , contradicting ( 2 ) of this lemma , which implies , namely .[ lemma : omega ] let be the amount of in all where , namely .then \(1 ) for any and ; \(2 ) for any ; \(3 ) for any . by the definition of ,it is clear that all statements follows lemma [ lemma : delta ] .[ lemma : runtime ] let be the runtime of algorithm [ algo : calchypervolume ] to compute a hypervolume with dominative points in a -dimensional space .then \(1 ) where ; \(2 ) where ; \(3 ) is minimal when and for any and ; \(4 ) is maximal when for any and for any and each .\(1 ) and ( 2 ) are clear .\(3 ) by the process of algorithm [ algo : calchypervolume ] , given some , by ( 1 ) of lemma [ lemma : delta ] , .it is clear that for a given , it is necessary that to minimize .in addition , all the must share alike , i.e. for any and .if this is not the truth , suppose .thus by ( 1 ) of this lemma , let and . and can be modified in the similar way until .this completes the proof .\(4 ) by ( 5 ) of lemma [ lemma : delta ] , .it is clear that for a given , it is necessary that to maximize .hence eqn .( [ eqn : fnd1 ] ) is written as follows , suppose is the splitting reference point chosen by algorithm [ algo : calchypervolume ] , , or else contradicting . to maximize in eqn .( [ eqn : fnd2 ] ) , let . similarly , we get , , , and so on .it is exactly .this completes the proof .first of all , it is clear that . by ( 3 ) of lemma [ lemma : runtime ] , the algorithm performs best when each shares alike for the chosen . if , for any .thus which implies . if , for any . in the rough ,we get it can be obtained from eqn .( [ eqn : lowerruntime ] ) that even when is relaxed to .fredman and weide have shown that klee s measure problem has a lower bound of for arbitrary . 
just as beume and rudolph have mentioned , although it is unknown what the lower bound for calculating the hypervolume is , it is definitely not harder than solving kmp because it is a special case of kmp . therefore , there is a gap between the lower bound of the proposed algorithm and the actual lower bound of calculating the hypervolume . in the average cases ,suppose that for the given splitting reference point , .meanwhile , each shares alike , i.e. .thus , which implies the runtime of the proposed algorithm is at the given cases . by ( 2 ) of lemma [ lemma : runtime ] , for any . and by ( 4 ) of lemma [ lemma : runtime ] , at the worst cases , we have which implies that the proposed algorithm for computing the hypervolume bounded by points and a reference point in -dimensional space has a runtime of at the worst cases .let be the used storage by algorithm [ algo : calchypervolume ] . in the proposed algorithm ,every child hypervolume is calculated one by one . since the storage can be reused after the former computation has been completed , is only related to the maximum usage of all the computations of child hypervolumes .hence , thus the upper bound of space is as follows . where .it is easy to obtain an space upper bound for the proposed algorithm . combining the above analyses together, we obtain the time - space complexity of the proposed algorithm .the hypervolume of a hyper - cuboid bounded by non - comparable points and a reference point in -dimensional space can be computed in time using storage .a fast algorithm to calculate the hypervolume indicator of pareto approximation set is proposed . in the novel algorithm ,the hyper - cuboid bounded by non - comparable points and the reference point is partitioned into many child hyper - cuboids along the carefully chosen splitting reference point at each dimension .the proposed approach is very different to the technique used in other works where the whole -dimensional volume is calculated by computing the -dimensional volume along the dimension .such difference results in very different time bounds , namely for our work and for the best previous result .neither kind of technique can exceed the other completely and each has his strong point .additionally , the amount of storage used by our algorithm is only even no special technique is developed to reduce the space complexity .as the context has mentioned , it is very important to choose appropriate splitting reference point for our algorithm .well selected point can reduce number of points in separated parts and thus cut down the whole runtime .we do not know whether the strategy adopted in this paper is optimal or near optimal .further investigations should be worked on .zitzler , e. , thiele , l. , laumanns , m. , fonseca , c.m . ,da fonseca , v.g .: performance assessment of multiobjective optimizers : an analysis and review .ieee transactions on evolutionary computation * 7*(2 ) ( 2003 ) 117132 zitzler , e. , thiele , l. : multiobjective optimization using evolutionary algorithms a comparative study . in eiben , a.e . , ed . :parallel problem solving from nature v , amsterdam , springer - verlag ( 1998 ) 292301 zitzler , e. , brockhoff , d. , thiele , l. : the hypervolume indicator revisited : on the design of pareto - compliant indicators via weighted integration . in : proceedings of the 4th international conference on evolutionary multi - criterion optimization ( emo 2007 ) .volume 4403 ., springer - verlag ( 2007 ) 862876 while , l. , bradstreet , l. , barone , l. 
, hingston , p. : heuristics for optimising the calculation of hypervolume for multi - objective optimisation problems . in : the 2005 ieee congress on evolutionary computation. volume 3 .( 2005 ) 22252232 beume , n. , rudolph , g. : faster s - metric calculation by considering dominated hypervolume as klee s measure problem . in kovalerchuk ,b. , ed . : proceedings of the second iasted conference on computational intelligence , anaheim , acta press ( 2006 ) 231236 | hypervolume indicator is a commonly accepted quality measure for comparing pareto approximation set generated by multi - objective optimizers . the best known algorithm to calculate it for points in -dimensional space has a run time of with special data structures . this paper presents a recursive , vertex - splitting algorithm for calculating the hypervolume indicator of a set of non - comparable points in dimensions . it splits out multiple child hyper - cuboids which can not be dominated by a splitting reference point . in special , the splitting reference point is carefully chosen to minimize the number of points in the child hyper - cuboids . the complexity analysis shows that the proposed algorithm achieves time and space complexity in the worst case . |
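As a point of reference for the quantity studied in the preceding article, the sketch below is a deliberately simple implementation of the hypervolume indicator itself under the minimization convention: it recursively slices the dominated region along the last objective rather than using the vertex-splitting scheme of algorithm [ algo : calchypervolume ], so it should be read only as a slow correctness check against which a faster implementation can be compared. The example front and reference point are invented.

```python
def hypervolume(points, ref):
    """Hypervolume (minimization) of the region dominated by `points` and
    bounded above by the reference point `ref`, by recursive slicing along
    the last objective. Exponential in the dimension; for checking only."""
    pts = [tuple(p) for p in points if all(pi < ri for pi, ri in zip(p, ref))]
    if not pts:
        return 0.0
    if len(ref) == 1:
        return ref[0] - min(p[0] for p in pts)
    levels = sorted({p[-1] for p in pts})
    bounds = levels + [ref[-1]]
    volume = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        slab = [p[:-1] for p in pts if p[-1] <= lo]      # points covering this slab
        volume += (hi - lo) * hypervolume(slab, ref[:-1])
    return volume

# invented 3-objective front and reference point
front = [(2.0, 4.0, 6.0), (3.0, 3.0, 5.0), (5.0, 2.0, 4.0), (4.0, 5.0, 2.0), (6.0, 1.0, 7.0)]
print(hypervolume(front, (8.0, 8.0, 8.0)))
```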
the standard quickest change detection problem is set to detect some unknown time point at which certain signal probability distribution changes over a sequence of observations .recently , with the development of wireless sensor networks , multiple sensors can be deployed to execute the quickest change detection , and the sensors can send quantized or unquantized observations or certain local decisions to a control center , who then makes a final decision . most of the existing work is based on an assumption that the statistical properties of observations at all sensors change simultaneously . however , in certain scenarios , this assumption may not hold well .for instance , when multiple sensors are used to detect the occurrence of the chemical leakage , the sensors that are closer to the leakage source usually observe the change earlier than those far away from the source . in such cases , two interesting problems arise : one is to detect the change as soon as possible ; the other is to identify which sensor is the closest to the source , such that we could have a first - order inference over the leakage source location .as far as we know , currently there are few work studying the case of change occurring non - simultaneously . in the related work , the authors in proposed a scheme that each sensor makes a local decision with the computing burden at the local sensors , where they did not consider the identification problem . in , the authors modeled the change propagation process as a markov process to derive the optimal stopping rule and assumed that the change pattern always first reaches a predetermined sensor , such that the identification problem is ignored . in ,the identification problem for the special case of two sensors was studied , where the sufficient statistic is proven as a markov process and a joint optimal stopping rule and terminal decision rule are proposed . in this paper, we study the joint change point detection and identification problem over a linear array of sensors , where the change first occurs near an unknown sensor , then propagates to sensors further away .we assume that all sensors send their observations to a control center . with the sequential observation signals , the control center first operates a stopping rule to decide when to alarm that the change has occurred; then the control center deploys a terminal decision rule to determine which sensor that the change pattern reaches first . in our setup , three performance metrics are of interest : i ) detection delay , which is the time interval between the moment that the change occurs and the moment that an alarm is raised ; ii ) false alarm probability , which is the probability that an alarm is raised before the actual change occurs ; and iii ) false identification probability , which is the probability that the control center does not correctly identify the sensor that the change pattern first reaches .we apply the markov optimal stopping time theory to design the optimal decision rules to minimize a weighted sum of the above three metrics .furthermore , we derive a scheme with a much simpler structure and certain performance guarantee . the rest of this paper is organized as follows . in section [ sec :model ] , we introduce the system model . in section [ sec : solution ] , we derive the optimal decision rules . in section [ sec : approx ] , we propose a scheme approximate to the optimal decision rules with a much lower complexity . 
in section[ sec : num ] , we present some numerical results , with conclusions in section [ sec : con ] .we consider a scenario with sensors constructing a linear array to monitor the environment , as shown in fig .[ array ] . at an unknown time point ,a change occurs at an unknown location and propagates , where we use change point time to denote the time that the change pattern reaches sensor .we further use to denote the index of the sensor that the change pattern first reaches .we focus on the bayesian setup and use to denote the prior probability of , simply with . conditioned on the event that the change pattern first reaches sensor , is assumed to bear a geometric distribution with parameter , i.e. , =\rho(1-\rho)^k , k\geq0,\ ] ] where denotes the discretized time and takes integer values .we consider the practical factors in the environment , such as the wind or the blockers , which will affect the propagation speed of the change .for instance , see in fig .[ array ] , if the direction of the wind is from the left to the right side in the monitored scenario , then the propagation of the air pollution will be much faster at the right side of sensor than that of the left side . and at the same side, the propagation follows the deterministic order shown as we further assume that after the change patten reaches the first sensor , for the right side of sensor , it will propagate from one sensor to another sensor following the geometric propagation models as =\rho_1(1-\rho_1)^{k_{2}},j > i , k_2\geq0,\ ] ] while for the left side of sensor , the propagation follows =\rho_2(1-\rho_2)^{k_{2}},j< i , k_2\geq0,\ ] ] where and are used to model possibly different propagation speed along each direction , e.g. , means the propagation speed is higher at the right side that that of the left side .taking above assumption , for , we define all possible events at time as follows : where denote the events that after the change pattern first reaching sensor , it propagates across the sensors sequentially at the right side of sensor .the number of events equals to the number of sensors at the right side plus 1 , i.e. , . the events that after the change pattern reaching sensor and , it propagates across the sensors sequentially at the right side of sensor , and the number of events is also , which is the same for the case that after the change pattern reaching sensor , , and so on .since there are sensors at the left side of sensor and the event of no change pattern reaches any sensor is , the total number of possible events is . at each time , we assume that the observations ] small .in addition , we also require the control center to identify which sensor the change pattern reaches first .we adopt to denote the -measurable terminal decision rule used by the control center to make the identification , and to denote the index of the sensor identified , i.e. , .a false identification occurs if , such that we also want to keep $ ] small .we use to denote the sequence of terminal decision rules . summarizing above ,our goal is to design a stopping time and a terminal decision rule that minimize the aggregated risk function defined as +c_1\mathbb{e}\{(\tau-\gamma)^{+}\}+c_2p[\hat{b}_1\neq b_1],\ ] ] where and are appropriate constants that balance the three costs ., scaledwidth=50.0% ]with the posterior probabilities defined above , we denote .we first have the following theorem regarding the optimal terminal decision rule . 
for any stopping time ,the optimal terminal decision rule is and we have =\mathbb{e}\left\{1-\max\left\{p_\tau^{1}, ... ,p_\tau^{n}\right\}\right\}.\end{aligned}\ ] ] the proof follows from proposition 4.1 of .theorem 1 implies that the optimal terminal decision rule is simply to choose the sensor that has the largest posterior probability .a similar situation also arises in the multiple hypothesis testing problem considered in . using above optimal terminal decision rule, we can further express the optimization objective in as a function of the posterior probabilities defined in ( [ def_pi ] ) and ( [ def_p ] ) , as shown below .[ lem : cost ] for any stopping time , can be written as based on the bayesian s rule , we have = p[t_{0,k}\left| { { { \cal f}_k } } \right . ] & = \sum\limits_{i = 1}^n { p[t_{0,k}\left| { { { \cal f}_k } } \right.,b = i ] } p_k^i \nonumber\\ & = \sum\limits_{i = 1}^n { { \pi _ { 0,k\left| i \right . } } } p_k^i.\end{aligned}\ ] ] further , according to proposition 5.1 in , +c_1\mathbb{e}\{(\tau-\gamma)^{+}\}\nonumber\\ & = \mathbb{e}\left\{p[\tau<\gamma|\mathcal{f}_\tau]+c_1\sum \limits_{k=0}\limits^{\tau-1}p[\gamma\leq k|\mathcal{f}_k]\right\}\nonumber\\ & = \mathbb{e } \left\ { \sum\limits_{i = 1}^n { { \pi _ { 0,\tau \left| i \right . } } } p_\tau ^i+ { c_1}\sum\limits_{k = 0}^{\tau - 1 } \left(1 - \sum\limits_{i = 1}^n { { \pi _ { 0,k\left| i \right . } } } p_k^i \right)\right\}.\end{aligned}\ ] ] by combining ( [ decisionrule1 ] ) , we complete the proof .furthermore , we have the following lemma regarding .there is a time - invariant function such that .we have \nonumber\\ & = \frac{{f({{\bf{z}}_k}\left| { { { \cal f}_{k - 1}},t_{j , k } } \right.,{s } = i)p[t_{j , k}\left| { { { \cal f}_{k - 1}},s = i } \right.]}}{{\sum\limits_{j = 0}^{i(n -i+ 1 ) } { f({{\bf{z}}_k}\left| { { { \cal f}_{k - 1}},t_{j , k } } \right.,{s } = i)p[t_{j , k}\left| { { { \cal f}_{k - 1}},s = i } \right . ] } } } , \end{aligned}\ ] ] in which for another element in , we have \\ & = \frac{{f({{\bf{z}}_k}\left| { { { \cal f}_{k - 1}},{s } = i } \right.)p[{s } = i\left| { { { \cal f}_{k - 1 } } } \right.]}}{{\sum\limits_{n = 0}^{m-1 } { f({{\bf{z}}_k}\left| { { { \cal f}_{k - 1}},{s } = n } \right.)p[{s } = n\left| { { { \cal f}_{k - 1 } } } \right . ] } } } , \end{split}\ ] ] where ,\end{aligned}\ ] ] which can be calculated by using ( [ direcltywrite ] ) and ( [ transition ] ) .hence , can also be computed by and .this lemma implies that the posterior probabilities can be recursively computed from and .combined with lemma [ lem : cost ] , we know that is a sufficient statistic for the problem of minimizing ( [ formulation ] ) . thus , the problem at the hand is a markov stopping time problem .therefore , we could borrow results from the optimal stopping time theory to design the optimal decision rules for our problem .we first consider a finite time horizon case , in which one has to make a decision before a deadline , i.e. 
, .it is easy to check that the cost - to - go functions are where applying the optimal stopping time theory , we have the following theorem for the optimal decision rules .[ thm2 ] the optimal stopping time is obtained as with the optimal terminal decision rule is given in ( [ decisonrule ] ) .in the infinite time horizon case when , we have defined as since we have , , and the fact that all strategies allowed with deadline are also allowed with deadline .since the observations are memoryless and conditionally iid , is the same for all ; we then use to denote .thus , is derived as in which the interchange of and is allowed due to the dominated convergence theorem .therefore , when the deadline is infinite , the optimal stopping rule becomes with the optimal terminal decision rule is given in ( [ decisonrule ] ) .when is large , the optimal stopping rule does not have a simple structure , which makes the implementation highly costly . in this section ,we propose a much simpler rule which approximates to the optimal stopping rule .[ lemma3 ] the sequence is a supermartingale , i.e. , the proof follows from page 477 of , by using fatou s lemma .we can use lemma [ lemma3 ] to derive the following approximation of the optimal stopping rule . in the asymptotic case of the rare change occurring with , one approximation of the optimal stopping rule has the following simple structure where .and we use the optimal terminal decision rule specified in ( [ decisonrule ] ) .for the second part of ( [ a(t-1 ) ] ) , according to lemma [ lemma3 ] , (\mathbf{z}_t|\mathcal{f}_{t-1})d\mathbf{z}_t \leq c_2(1-p_{t-1}^{i_{t-1}}).\ ] ] plugging the above two results ( [ firstpart ] ) and ( [ p2 ] ) into ( [ a(t-1 ) ] ) , we have in the sequel , we assume that equals to the right side of ( [ inequ ] ) . according to ( [ cost - to - go ] ) , we have if , if , we define the following transformation as then further we have and then , can be rewritten as we define and as then straightly we see that , , and \mathbb{i}\left(\left\{\sum\limits_{j = 1}^{m-1 } { v_{t-1,j } } \leq \frac{1}{c_1}\right\}\right).\ ] ] for the next steps , we follow the proof of theorem 2 of , which is skipped here . andadditionally we use lemma [ lemma3 ] .finally , it can be derived that and the test structure reduces to stopping when therefore , we have the structure of the stopping rule as stated in theorem 3 .regarding to theorem 3 , we have several notes as follows .\1 ) from lemma [ lemma3 ] and theorem [ thm2 ] , we see that is a lower bound of the optimal stopping time , i.e. 
, in the case of .the supermartingale property shown in lemma [ lemma3 ] plays an important role in deriving .the tightness of this lower bound is related to the relationship between and .the simulation results in section [ sec : num ] show that and are quite close , which indicates that would be close to .\2 ) from ( [ eq1_lm1 ] ) and ( [ v_kl ] ) , we have the testing statistic as }{\rho p[\gamma > k\left| { { { \cal f}_k } } \right.]}.\end{aligned}\ ] ] this structure conforms to the well - known shiryaev s procedure , which is the optimal stopping rule for single sensor with iid observations and bayesian setting .given that it is hard to efficiently compute the solution structure in ( [ stoprule ] ) , we compute the approximate optimal stopping rule in ( [ appro_stoppingrule ] ) and simulate its performance .we assign 5 nodes constructing a linear sensor array and assume that and .the change point time is generated according to the geometric distribution with , and , respectively . according to ( [ eq1_lm1 ] ) , the false alarm probability with is =\mathbb{e}\left\{{\sum\limits_{i = 1}^n{{\pi _ { n+1,\tau_{app}\left| i \right . } } } p_{\tau_{app}}^i}\right\}\leq\frac{c_1}{c_1+\rho}=\alpha.\ ] ] thus we have , where is the maximum allowance for the false alarm probability , which could determine the required select value . in fig .[ pfavsadd ] , we illustrate the relationships among the false alarm probability , the false identification probability , and the averaged detection delay .we see that as the averaged detection delay increases , the false alarm probability decreases .when the averaged detection delay becomes large , the false identification probability does not decrease much and a probability floor appears , which is due to the fact that only the samples between the time when the change pattern reaches the first sensor and the time when it reaches the second sensor can be used to effectively distinguish the sensor that the change pattern first reaches . since this part of the samples is limited , which will not increase with the detection delay , a false identification probability floor exists . in fig .[ p_figure ] , we draw the posterior probability over time , where we assume that the change pattern first reaches node 3 , and then propagates to node 4 . we see that as time goes , gradually becomes larger than the others , which indicates that node 3 should be identified . in fig .[ bound_figure ] , we show the relation between and in ( [ p2 ] ) .since ( [ p2 ] ) is the key in deriving the our simplified rule , the fact that these two curves are close suggests that the performance of our low - complexity rule might be close to that of the optimal stopping rule in ( [ stoprule ] ) and ( [ stoprule_infi ] ) . vs. have studied the quickest change point detection problem and the closest - node identification problem over a sensor array .we have proposed an optimal decision scheme combing the stopping rule and the identification rule to alarm the change happening and to determine the sensor closest to the change source .since the structure the obtained optimal scheme is complex and impractical to implement , we have further proposed a scheme with a much simpler structure . | in this paper , we consider the problem of quickest change point detection and identification over a linear array of sensors , where the change pattern could first reach any of these sensors , and then propagate to the other sensors . 
our goal is not only to detect the presence of such a change as quickly as possible, but also to identify which sensor the change pattern reaches first. we jointly design two decision rules: a stopping rule, which determines when we should stop sampling and declare that a change has occurred, and a terminal decision rule, which decides which sensor the change pattern reaches first, with the objective of striking a balance among the detection delay, the false alarm probability, and the false identification probability. we show that this problem can be converted into a markov optimal stopping time problem, from which standard technical tools can be borrowed. furthermore, to avoid the high implementation complexity of the optimal rules, we develop a scheme with a much simpler structure and a certain performance guarantee. |
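As a complement to the preceding article, the sketch below simulates the single-sensor Shiryaev recursion to which the approximate stopping statistic of section [ sec : approx ] was related; it does not implement the multi-sensor identification rule. The geometric prior parameter, the Gaussian pre- and post-change densities, and the posterior threshold are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def shiryaev_single_sensor(rho=0.01, mu=1.0, threshold=0.95, t_max=5000):
    """Bayesian quickest detection for one sensor: pre-change N(0,1),
    post-change N(mu,1), geometric prior on the change point, and a stop
    as soon as the posterior probability of a change exceeds `threshold`."""
    gamma = rng.geometric(rho) - 1           # change point, P[gamma=k] = rho(1-rho)^k
    p = 0.0                                  # posterior P[change has occurred | data]
    for k in range(t_max):
        z = rng.normal(mu if k >= gamma else 0.0, 1.0)
        lr = np.exp(mu * z - 0.5 * mu ** 2)  # likelihood ratio f1(z)/f0(z)
        num = (p + (1.0 - p) * rho) * lr
        p = num / (num + (1.0 - p) * (1.0 - rho))
        if p >= threshold:
            return gamma, k                  # raise the alarm
    return gamma, t_max

gamma, tau = shiryaev_single_sensor()
print(f"change at {gamma}, alarm at {tau}, detection delay {max(tau - gamma, 0)}")
```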
we represent word - meaning and meaning - meaning relations uncovered by translation dictionaries between each language in the unbiased sample and major modern european languages by constructing a network structure . two meanings ( represented by a set of english words )are linked if they are translated from one to another and then back , and the link is weighted by the number of paths of the translation , or the number of words that represent both meanings ( see methods for detail ) .figure [ fig : schematic ] illustrates the construction in the case of two languages , lakhota ( primarily spoken in north and south dakota ) and coast tsimshian ( mostly spoken in northwestern british columbia and southeastern alaska ) .translation of sun in lakhota results _ w _ and _ . while the later picks up no other meaning , _ w _ is a polysemy that possesses additional meanings of moon and month , hence they are linked to sun .such polysemy is also observed in coast tsimshian where _ gyemk _ , translated from sun , covers additional meanings including , thus additionally linking to , heat .each language has its own way of partitioning meanings by words , captured in a semantic network of the language .it is conceivable , however , that a group of languages bear structural resemblance perhaps because the speakers share historical or environmental features .a link between sun and moon , for example , reoccurs in both languages , but does not appear in many other languages .sun is instead linked to divinity and time in japanese , and to thirst and day / daytime in !the question then is the degree to which the observed polysemy patterns are general or sensitive to the environment inhabited by the speech community , phylogenetic history of the languages , and intrinsic linguistic factors such as literary tradition .we test such question by grouping the individual networks in a number of ways according to properties of their corresponding languages .we first analyze the networks of the entire languages , and then of sub - groups . in fig .[ fig : connectance_graph ] , we present the network of the entire languages exhibiting the broad topological structure of polysemies observed in our data .it reveals three almost - disconnected clusters , groups of concepts that are indeed more prone to polysemy within , that are associated with a natural semantic interpretation .the semantically most uniform cluster , colored in blue , includes concepts related to water . a second , smaller cluster , colored in yellow , associates solid natural substances ( centered around stone / rock ) with their topographic manifestation ( mountain ) .the third cluster , in red , is more loosely connected , bridging a terrestrial cluster and a celestial cluster , including less tangible substances such as wind , sky , and fire , and salient time intervals such as day and year . 
in keeping with many traditional oppositions between earth and sky / heaven , or darkness , and light , the celestial , and terrestrial components form two sub - clusters connected most strongly through cloud , which shares properties of both .the result reveals a coherent set of relationships among concepts that possibly reflects human cognitive conceptualization of these semantic domains .we test whether these relationships are universal rather than particular to properties of linguistic groups such as physical environment that human societies inhabit .we first categorized languages by nonlinguistic variables such as geography , topography , climate , and the existence or nonexistence of a literary tradition ( table [ tab : groups ] in appendix ) and constructed a network for each group . a spectral algorithmthen clusters swadesh entries into a hierarchical structure or dendrogram for each language group . using standard metrics on trees , we find that the dendrograms of language groups are much closer to each other than to dendrograms of randomly permuted leaves : thus the hypothesis that languages of different subgroups share no semantic structure in common is rejected ( , see methods)sea / ocean and salt are , for example , more related than either is to sun in every group we tried . in addition , the distances between dendrograms of language groups are statistically indistinguishable from the distances between bootstrapped languages ( ) .figure 3 shows a summary of the statistical tests of 11 different groups .thus our data analyses provide consistent evidences that all languages share semantic structure , the way concepts are clustered in fig .2 , with no significant influence from environmental or cultural factors .another structural feature apparent in fig .[ fig : connectance_graph ] is the heterogeneity of the node degrees and link weights .the numbers of polysemies involving individual meanings are uneven , possibly toward a heavy - tailed distribution ( fig .[ fig : word_degree_rank ] ) .this indicates concepts not only form clusters within which they are densely connected , but also exhibit different levels of being polysemous .for example , earth / soil has more than hundreds of polysemes while salt has only a few .having shown that some aspects of the semantic network are universal , we next ask whether the observed heterogeneous degrees of polysemy , possibly a manifestation of varying densities of near conceptual neighbors , arise as artifacts of language family structure in our sample , or if they are inherent to the concepts themselves .simply put , is it an intrinsic property of the concept , earth / soil , to be extensively polysemous , or is it a few languages that happened to call the same concept in so many different ways .suppose an underlying `` universal space '' relative to which each language randomly draws a subset of polysemies for each concept . the number of polysemies should then be linearly proportional to both the tendency of the concept to be polysemous for being close to many other concepts , and the tendency of the language to distinguish word senses in basic vocabulary .in our network representation , a proxy for the former is the weighted degree of node , and a proxy for the latter is the total weight of links in language .then the number of polysemies is expected ( see methods ) : this simple model indeed captures the gross features of the data very well ( fig .[ fig : productmodel_matrix ] in the appendix ) . 
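A small numerical sketch of this product prediction is given below. The counts are toy numbers standing in for the 22-sense-by-81-language data, and the marginal estimates follow the construction described in Methods: the expected count in a concept-language cell is the total number of polysemies times the product of the concept's share and the language's share.

```python
import numpy as np

rng = np.random.default_rng(2)

# toy polysemy-count matrix: rows = concepts, columns = languages (illustrative only)
counts = rng.poisson(lam=np.outer([4.0, 2.0, 1.0, 0.5], [3.0, 1.5, 1.0, 0.5, 0.2]))

total = counts.sum()
w = counts.sum(axis=1) / total        # fraction of all polysemies per concept
s = counts.sum(axis=0) / total        # fraction of all polysemies per language
expected = total * np.outer(w, s)     # product (independence) model prediction

# KL divergence between the empirical and the model distributions over cells
p_emp, p_mod = counts / total, np.outer(w, s)
mask = counts > 0
d_kl = float(np.sum(p_emp[mask] * np.log(p_emp[mask] / p_mod[mask])))
print(expected.round(2))
print("KL divergence:", round(d_kl, 4))
```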
nevertheless , the kullback - leibler divergence between the prediction and the empirical data identifies deviations beyond the sampling errors in three concepts moon , sun and ashes that display nonlinear increase in the number of polysemies ( ) with the tendency of the language distinguish word senses as fig .[ fig : saturating_words ] in the appendix shows . accommodating saturation parameters ( table [ tab : fitvalue ] in the appendix ) enables the random sampling model to reproduce the empirical data in good agreement keeping the two parameters independent , hence retain the universality over language groups .the similarity relations between word meanings through common polysemies exhibit a universal structure , manifested as intrinsic closeness between concepts , that transcends cultural or environmental factors .polysemy arises when two or more concepts are fundamental enough to receive distinct vocabulary terms in some languages , yet similar enough to share a common term in others .the highly variable degree of these polysemies indicates such salient concepts are not homogeneously distributed in the _ conceptual _ space , and the intrinsic parameter that describes the overall propensity of a word to participate in polysemies can then be interpreted as a measure of the local density around such concept .our model suggests that given the overall semantic ambiguity observed in the languages , such local density determines the degree of polysemies .universal structures in lexical semantics would greatly aid another subject of broad interest , namely reconstruction of human phylogeny using linguistic data .much progress has been made in reconstructing the phylogenies of word forms from known cognates in various languages , thanks to the ability to measure phonetic similarity and our knowledge of the processes of sound change .however , the relationship between semantic similarity and semantic shift is still poorly understood .the standard view in historical linguistics is that any meaning can change to any other meaning , and that no constraint is imposed on what meanings can be compared to detect cognates .it is , however , generally accepted among historical linguists that language change is gradual , and that words in transition from having one meaning to being extended to another meaning should be polysemous .if this is true , then the weights on different links reflect the probabilities that words in transition over these links will be captured in `` snapshots '' by language translation at any time .such semantic shifts can be modeled as diffusion in the conceptual space , or along a universal polysemy network where our constructed networks can serve an important input to methods of inferring cognates . 
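For completeness, the dictionary-based construction described at the beginning of this section can be made concrete with a toy fragment modeled on the Lakhota and Coast Tsimshian examples from the introduction; the spellings and the single-sense words are placeholders rather than actual dictionary entries.

```python
from collections import Counter
from itertools import combinations

# each native word maps to the set of English senses obtained by back-translation
# (toy fragments only, loosely following the examples in the text)
lexicons = {
    "lakhota":         {"wi": {"sun", "moon", "month"}, "sun_word_2": {"sun"}},
    "coast_tsimshian": {"gyemk": {"sun", "heat"}, "sun_word_3": {"sun"}},
}

def polysemy_network(lexicons):
    """Two senses are linked whenever some word covers both; the link weight
    counts how many words, across all languages, represent both senses."""
    links = Counter()
    for words in lexicons.values():
        for senses in words.values():
            for a, b in combinations(sorted(senses), 2):
                links[(a, b)] += 1
    return links

for (a, b), weight in sorted(polysemy_network(lexicons).items()):
    print(f"{a} -- {b}  (weight {weight})")
```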
[ figure caption ( fig : word_degree_rank ) : entries from the initial swadesh list are distinguished with capital letters. ( a ) in-strengths of concepts: sum of weighted links to a node. ( b ) out-strengths of swadesh entries: sum of weighted links from a swadesh entry. ( c ) degree of the concepts: sum of unweighted links to a node. ( d ) degree of swadesh entries: sum of unweighted links to a node. a node strength in this context indicates the total number of polysemies associated with the concept in the 81 languages, while a node degree means the number of other concepts associated with the node, regardless of the number of synonymous polysemies associated with it. heaven, for example, has the largest number of polysemies, but most of them are with sun, so that its degree is only three. ]

the absence of significant cladistic correlation with the patterns of polysemy suggests the possibility of extending the constructed conceptual space by utilizing digitally archived dictionaries of the major languages of the world, with some confidence that their expression of these features is not strongly biased by correlations due to language family structure. large-corpus samples could be used to construct the semantic space in as yet unexplored domains using automated means.

high-quality bilingual dictionaries between the object language and the semantic metalanguage for cross-linguistic comparison are used to identify polysemies. the 81 object languages were selected from a phylogenetically and geographically stratified sample of low-level language families or _ genera _, listed in tab. [ tab : languages ] in the appendix. translations into the object language of each of the 22 word senses from the swadesh basic vocabulary list were first obtained ( see appendix-[subsec : meanings ] ); all translations ( that is, all synonyms ) were retained. polysemies were identified by looking up the metalanguage translations ( back-translation ) of each object-language term. the selected swadesh word senses and the selected languages are listed in the appendix. we use modern european languages as a semantic metalanguage, _ i.e., _ bilingual dictionaries between such languages and the other languages in our sample. this could be problematic if these languages themselves display polysemies; for example, english _ day _ expresses both daytime and the 24-hour period. in many cases, however, the lexicographer is aware of these issues and annotates the translation of the object-language word accordingly. in the lexical domain chosen for our study, standard lexicographic practice was sufficient to overcome this problem.

a hierarchical spectral algorithm clusters the swadesh word senses. each sense is assigned a position based on the components of the eigenvectors of the weighted adjacency matrix. each eigenvector is weighted by the square of its eigenvalue, and the senses are clustered by a greedy agglomerative algorithm that merges the pair of clusters having the smallest squared euclidean distance between their centers of mass, through which a binary tree or _ dendrogram _ is constructed. we construct a dendrogram for each subgroup of languages according to nonlinguistic variables such as geography, topography, climate, and the presence or absence of a literary tradition ( table [ tab : groups ] in the appendix ). the structural distance between the dendrograms of each pair of language subgroups is measured by two standard tree metrics.
the triplet distance is the fraction of the distinct triplets of senses that are assigned a different topology in the two trees : that is , those for which the trees disagree as to which pair of senses are more closely related to each other than they are to the third .the robinson - foulds distance is the number of `` cuts '' on which the two trees disagree , where a cut is a separation of the leaves into two sets resulting from removing an edge of the tree . for each pair of subgroups ,we perform two types of bootstrap experiments .first , we compare the distance between their dendrograms to the distribution of distances we would see under a hypothesis that the two subgroups have no shared lexical structure .were this null hypothesis true , the distribution of distances would be unchanged under the random permutation of the senses at the leaves of each tree ( for simplicity , the topology of the dendrograms are kept fixed . ) comparing the observed distance against the resulting distribution gives a -value , called in figure [ fig : bootstrap ] .these -values are small enough to decisively reject the null hypothesis . indeed , for most pairs of groups the robinson - foulds distance is smaller than that observed in any of the 1000 bootstrap trials ( ) marked as in the table .this gives overwhelming evidence that the semantic network has universal aspects that apply across language subgroups : for instance , in every group we tried , sea / ocean , and salt are more related than either is to sun . in the second bootstrap experiment ,the null hypothesis is that the nonlinguistic variables have no effect on the semantic network , and that the differences between language groups simply result from random sampling : for instance , the similarity between the americas and eurasia is what one would expect from any disjoint subgroups of the 81 languages of given sizes 29 and 20 respectively . to test this null hypothesis, we generate random pairs of disjoint language subgroups with the same sizes as the groups in question , and measure the distribution of their distances .the -values , called in figure [ fig : bootstrap ] , are not small enough to reject this null hypothesis .thus , at least given the current data set , there is no statistical distinction between random sampling and empirical data further supporting our thesis that it is , at least in part , universal .the model treats all concepts as independent members of an unbiased sample that the aggregate summary statistics of the empirical data reflects the underlying structure .the simplest model perhaps then assumes no interaction between concept and languages : the number of polysemies of concept in language , that is , is linearly proportional to both the tendency of the concept to be polysemous and the tendency of the language to distinguish word senses ; and these tendencies are estimated from the marginal distribution of the observed data as the fraction of polysemy associated with the concept , , and the fraction of polysemy in the language , , respectively .the model can , therefore , be expressed as , , a product of the two . 
to test the model ,we compare the kullback - leibler ( kl ) divergence of ensembles of the model with the observation .ensembles are generated by the multinominal distribution according to the probability .the kl divergence is an appropriate measure for testing typicality of this random process because it is the leading exponential approximation ( by stirling s formula ) to the log of the multinomial distribution produced by poisson sampling ( see appendix [ sec : model ] ) .the kl divergence of ensembles is where is the number of polysemies that the model generates divided by , and the kl divergence of the empirical observation is .note that is and it is a different value from an expected value of the model , .the -value is the cumulative probability of to the right of .hy acknowledges support from cabdyn complexity centre , and the support of research grants from the national science foundation ( no .sma-1312294 ) .wc and ls acknowledge support from the university of new mexico resource allocation committee .tb , jw , es , cm , and hy acknowledge the santa fe institute , and the evolution of human languages program .authors thank ilia peiros , george starostin , and petter holme for helpful comments .w.c . and t.b. conceived of the project and participated in all methodological decisions .l.s . and w.c .collected the data , h.y . , j.w ., e.s . , and t.b .did the modeling and statistical analysis .i.m . and w.c .provided the cross - linguistic knowledge ., e.s . , and c.m .did the network analysis .the manuscript was written mainly by h.y ., c.m . , and t.b . , and all authors agreed on the final version .99 whorf bl , _ language , thought and reality : selected writing ._ ( mit press , cambridge , 1956 ) .fodor ja , _ the language of thought . _( harvard univ . , new york , 1975 ) .wierzbicka , a. , _ semantics : primes and universals . _( oxford university press .1996 ) lucy ja , _ grammatical categories and cognition : a case study of the linguistic relativity hypothesis . _( cambridge university press , 1992 ) .levinson sc , _ space in language and cognition : explorations in cognitive diversity ._ ( cambridge university press , 2003 ) choi s , bowerman m ( 1991 ) learning to express motion events in english and korean : the influence of language - specific lexicalization patterns ._ cognition _ * 41 * , 83 - 121 .majid a , boster js , bowerman m ( 2008 ) _ cognition _ * 109 * , 235 - 250 .croft w ( 2010 ) relativity , linguistic variation and language universals ._ cognitextes _ * 4 * , 303 .evans n , levinson sc ( 2009 ) the myth of language universals : language diversity and its importance for cognitive science .brain sci . _* 21 * 429 - 492 .comrie b _ language universals and linguistic typology , 2nd ed ._ ( university of chicago press . , 1989 ) .croft w , _ typology and universals , 2nd ed . _ ( cambridge university press .2003 ) .henrich j , heine sj , norenzayan a ( 2010 ) the weirdest people in the world ? _ behav .brain sci . _ * 33 * 1 - 75 ( 2010 ) .shopen t ( ed . ) , _ language typology and syntactic description _ , 2nd ed .( 3 volumes ) ( cambridge university press , cambridge , 2007 ) .croft w , cruse da , _ cognitive linguistics . _( cambridge university press .2004 ) .koptjevskaja - tamm m , vanhove m ( 2012 ) new directions in lexical typology ._ linguistics _ * 50 * , 3 .brown ch ( 1976 ) general principles of human anatomical partonomy and speculations on the growth of partonomic nomenclature .ethnol . 
_ * 3 * , 400 - 424 ( 1976 ) .witkowski sr , brown ch ( 1978 ) lexical universals , _ ann . rev . of anthropol . _ * 7 * 427 - 51 .brown ch ( 1983 ) where do cardinal direction terms come from ?_ anthropological linguistics _ * 25 * , 121 - 161 .viberg ( 1983 ) the verbs of perception : a typological study ._ linguistics _ * 21 * , 123 - 162 .evans n , multiple semiotic systems , hyperpolysemy , and the reconstruction of semantic change in australian languages . in _diachrony within synchrony : language history and cognition _( peter lang .frankfurt , 1992 ) .derrig s ( 1978 ) metaphor in the color lexicon .chicago linguistic society , the parasession on the lexicon _85 - 96 .swadesh m ( 1952 ) lexico - statistical dating of prehistoric ethnic contacts ._ p. am . philos .soc . _ * 96 * , 452 - 463 .vygotsky l , thought and language .( mit press , cambridge , ma , 2002 ) .critchlow de , pearl dk , qian cl ( 1996 ) the triples distance for rooted bifurcating phylogenetic trees .biol . _ * 45 * , 323334 .dobson aj , comparing the shapes of trees , _combinatorial mathematics iii _ , ( springer - verlag , new york 1975 ) .robinson df , foulds lr ( 1981 ) comparison of phylogenetic trees .biosci . _ * 53 * , 131147 .dunn m , _ et al . _( 2011 ) evolved structure of language shows lineage - specific trends in word - order universals ._ nature _ * 473 * , 79 - 82 .bouckaert r , _ et al . _( 2012 ) mapping the origins and expansion of the indo - european language family ._ science _ * 337 * , 957 .fox a , _ linguistic reconstruction : an introduction to theory and method . _( oxford university press .1995 ) .hock hh _ principles of historical linguistics ._ ( mouton de gruyter , berlin , 1986 ) .nichols j , the comparative method as heuristic . in _the comparative method reviewed : regularity and irregularity in language change _( oxford university press , 1996 ) .dryer ms ( 1989 ) large linguistic areas and language sampling ._ studies in language _ * 13 * , 257 - 292 .cover tm , and thomas ja , elements of information theory , ( wiley , new york , 1991 ) .brown ch , a theory of lexical change ( with examples from folk biology , human anatomical partonomy and other domains ) ._ anthropol . linguist ._ * 21 * , 257 - 276 ( 1979 ) .brown ch & witkowski sr , figurative language in a universalist perspective ._ * 8 * 596 - 615 ( 1981 ) .our translations use only lexical concepts as opposed to grammatical inflections or function words . for the purpose of universality and stability of meanings across cultures ,we chose entries from the swadesh 200-word list of basic vocabulary . 
among these , we have selected categories that are likely to have single - word representation for meanings , and for which the referents are material entities or natural settings rather than social or conceptual abstractions .we have selected 22 words in domains concerning natural and geographic features , so that the web of polysemy will produce a connected graph whose structure we can analyze , rather than having an excess of disconnected singletons .we have omitted body parts which by the same criteria would provide a similarly appropriate connected domain because these have been considered previously .the final set of 22 words are as follows : * celestial phenomena and related time units : + star , sun , moon , year , day / daytime , night * landscape features : + sky , cloud(s ) , sea / ocean , lake , river , mountain * natural substances : + stone / rock , earth / soil , sand , ash(es ) , salt , smoke , dust , fire , water , wind the languages included in our study are listed in tab .[ tab : languages ] .notes : oceania includes southeast asia ; the papuan languages do not form a single phylogenetic group in the view of most historical linguists ; other families in the table vary in their degree of acceptance by historical linguists . the classification at the genus level , which is of greater importance to our analysis , is generally agreed upon . to accommodate such characteristic , we revise the model eq .( [ seq : product ] ) to the following function : where degree numbers for each swadesh is proportional to and language size , but is bounded by , the number of proximal concepts .the corresponding model probability for each language then becomes as all we recover the product model , with and . a first - level approximation to fit parameters and given by minimizing the weighted mean - square error the function ( [ eq : error_sat ] ) assigns equal penalty to squared error within each language bin , proportional to the variance expected from poisson sampling .the fit values obtained for and do not depend sensitively on the size of bins except for the swadesh entry moon in the case where all 81 single - language bins are used .moon has so few polysemies , but the moon / month polysemy is so likely to be found , that the language itelman , with only one link , has this polysemy .this point leads to instabilities in fitting in single - language bins . for bins of size 39the instability is removed .representative fit parameters across this range are shown in table [ tab : fitvalue ] .examples of the saturation model for two words , plotted against the 9-language binned degree data in fig .[ fig : saturating_words ] , show the range of behaviors spanned by swadesh entries . ) and the saturation model ( [ eq : sat2 ] ) .parameters and have been adjusted ( as explained in the text ) to match the word- and language - marginals . 
from 10,000 random samples , ( green ) histogram for the product model ; ( blue ) histogram for the saturation model ; ( red dots ) data .the product model rejects the 9-language joint binned configuration at the at level ( dark shading ) , while the saturation model is typical of the same configuration at ( light shading ) .[ fig : sample_kl_hist_data_9_joint ] ] the least - squares fits to and do not directly yield a probability model consisent with the marginals for language size that , in our data , are fixed parameters rather than sample variables to be explained .they closely approximate the marginal n ( deviations link for every ) but lead to mild violations .we corrected for this by altering the saturation model to suppose that , rather than word properties interacting with the exact value , they interact with a ( word - independent but language - dependent ) multiplier , so that the model for in each language becomes becomes in terms of the least - squares coefficients and of table [ tab : fitvalue ] .the values of are solved with newton s method to produce , and we checked that they preserve within small fractions of a link .the resulting adjustment parameters are plotted versus for individual languages in fig .[ fig : varphis_vs_nls ] .although they were computed individually for each , they form a smooth function of , possibly suggesting a refinement of the product model , but also perhaps reflecting systematic interaction of small - language degree distributions with the error function ( [ eq : error_sat ] ) . versus for individual languages in the probability model used in text , with parameters and shown in table [ tab : fitvalue ] .although values were individually solved with newton s method to ensure that the probability model matched the whole - language link values , the resulting correction factors are a smooth function of .[ fig : varphis_vs_nls ] ] is now marginally plausible for the joint configuration of 27 three - language bins in the data , at the level ( light shading ) . for reference, this fine - grained joint configuration rejects the null model of independent sampling from the product model at ( dark shading in the extreme tail ) .4000 samples were used to generate this test distribution .the blue histogram is for the saturation model , the green histogram for the product model , and the red dots are generated data .[ fig : sample_kl_hist_data_27_joint ] ] with the resulting joint distribution , tests of the joint degree counts in our dataset for consistency with multinomial sampling in 9 nine - language bins are shown in fig . 
[fig : sample_kl_hist_data_9_joint ] , and results of tests using 27 three - language bins are shown in fig .[ fig : sample_kl_hist_data_27_joint ] .binning nine languages clearly averages over enough language - specific variation to make the data strongly typical of a random sample ( ) , while the product model ( which also preserves marginals ) is excluded at the level .the marginal acceptance of the data even for the joint configuration of three - language bins ( ) suggests that language size is an excellent explanatory variable and that residual language variations cancel to good approximation even in small aggregations .the preceding subsection showed intermediate scales of aggregation of our language data are sufficiently random that they can be used to refine probability models for mean degree as a function of parameters in the globally - aggregated graph .the saturation model , with data - consistent marginals and multinomial sampling , is weakly plausible by bins of as few as three languages .down to this scale , we have therefore not been able to show a requirement for deviations from the independent sampling of links entailed by the use of the aggregate graph as a summary statistic . however , we were unable to find a further refinement of the mean distribution that would reproduce the properties of single language samples . in this sectionwe characterize the nature of their deviation from independent samples of the saturation model , show that it may be reproduced by models of non - independent ( clumpy ) link sampling , and suggest that these reflect excess synonymous polysemy .+ * power tests and uneven distribution of single - language -values * to evaluate the contribution of individual languages versus language aggregates to the acceptance or rejection of random - sampling models , we computed -values for individual languages or language bins , using the kl - divergence ( [ eq : d_kl_l ] ) .a plot of the single - language -values for both the null ( product ) model and the saturation model is shown in fig . [fig : one_lang_p_vals ] .histograms for both single languages ( from the values in fig .[ fig : one_lang_p_vals ] ) and aggregate samples formed by binning consecutive groups of three languages are shown in fig .[ fig : p_dists_marg_sat ] . for samples from a random model, -values would be uniformly distributed in the unit interval , and histogram counts would have a multinomial distribution with single - bin fluctuations depending on the total sample size and bin width .therefore , fig .[ fig : p_dists_marg_sat ] provides a power test of our summary statistics .the variance of the multinomial may be estimated from the large--value body where the distribution is roughly uniform , and the excess of counts in the small--value tail , more than one standard deviation above the mean , provides an estimate of the number of languages that can be confidently said to violate the random - sampling model . 
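The consistency tests described here amount to a Monte Carlo p-value: draw many multinomial samples with the same total number of links from the model probabilities, compute a divergence for each, and report the fraction of samples at least as divergent as the data. A minimal sketch follows; since the exact divergence of eq. ([eq : d_kl_l]) is not reproduced in this text, the standard empirical Kullback-Leibler divergence is used as a stand-in, and the model weights and counts in the usage lines are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_divergence(counts, p_model):
    """KL divergence of the empirical degree distribution from the model."""
    n = counts.sum()
    q = counts / n
    mask = q > 0
    return np.sum(q[mask] * np.log(q[mask] / p_model[mask]))

def mc_p_value(counts, p_model, n_samples=10_000):
    """Fraction of multinomial samples (same total link number) whose
    divergence from the model is at least that of the data."""
    n = int(counts.sum())
    d_data = kl_divergence(counts, p_model)
    d_samp = np.array([
        kl_divergence(rng.multinomial(n, p_model).astype(float), p_model)
        for _ in range(n_samples)
    ])
    return np.mean(d_samp >= d_data)

# toy usage: 22 concepts, hypothetical model weights and observed counts
p_model = np.ones(22) / 22
counts = rng.multinomial(60, p_model).astype(float)
print(mc_p_value(counts, p_model, n_samples=2000))
```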
from the upper panel of fig .[ fig : p_dists_marg_sat ] , with a total sample of 81 languages , we can estimate a number of excess languages at the lowest -values of 0.05 and 0.1 , with an additional 23 languages rejected by the product model in the range -value .comparable plots in fig .[ fig : p_dists_marg_sat ] ( lower panel ) for the 27 three - language aggregate distributions are marginally consistent with random sampling for the saturation model , as expected from fig .[ fig : sample_kl_hist_data_27_joint ] above .we will show in the next section that a more systematic trend in language fluctuations with size provides evidence that the cause for these rejections is excess variance due to repeated attachment of links to a subset of nodes .by kl divergence , relative to 4000 random samples per language , plotted versus language rank in order of increasing . product model ( green )shows equal or lower -values for almost all languages than the saturation model ( blue ) .three languages basque , haida , and yorb had value consistently across samples in both models , and are removed from subsequent regression estimates .a trend toward decreasing is seen with increase in .[ fig : one_lang_p_vals ] ] -values from the 81 languages plotted in fig .[ fig : one_lang_p_vals ] . the saturation model ( blue )produces a fraction languages in the lowest -values above the roughly - uniform background for the rest of the interval ( shaded area with dashed boundary ) .a further excess of 23 languages with -values in the range ] for the product model ( green ) reflects the part of the mismatch corrected through mean values in the saturation model .( lower panel ) corresponding histogram of -values for 27 three - language aggregate degree distributions .saturation model ( blue ) is now marginally consistent with a uniform distribution , while the product model ( green ) still shows slight excess of low- bins .coarse histogram bins have been used in both panels to compensate for small sample numbers in the lower panel , while producing comparable histograms .[ fig : p_dists_marg_sat ] , title="fig : " ] + if we define the size - weighted relative variance of a language analogously to the error term in eq .( [ eq : error_sat ] ) , as fig .[ fig : one_lang_pval_relvar_corr ] shows that has high rank correlation with and a roughly linear regression over most of the range . )that the leading quadratic term in the kl - divergence differs from in that it presumes poisson fluctuation with variance at the level of each _ word _ , rather than uniform variance across all words in a language .the relative variance is thus a less specific error measure . ]two languages ( itelmen and hindi ) , which appear as large outliers relative to the product model , are within the main dispersion in the saturation model , showing that their discrepency is corrected in the mean link number .we may therefore understand a large fraction of the improbability of languages as resulting from excess fluctuations of their degree numbers relative to the expectation from poisson sampling .plotted versus relative variance from eq .( [ eq : rel_var ] ) for the 78 languages with non - zero -values from fig .[ fig : one_lang_p_vals ] . ( blue ) saturation model ; ( green ) product model . 
two languages ( circled ) which appear as outliers with anomalously small relative variance in the product model itelman and hindi disappear into the central tendency with the saturation model .( lower panel : ) an equivalent plot for 26 three - language bins .notably , the apparent separation of individual large- langauges into two groups has vanished under binning , and a unimodal and smooth dependence of on is seen .[ fig : one_lang_pval_relvar_corr ] , title="fig : " ] + plotted versus relative variance from eq .( [ eq : rel_var ] ) for the 78 languages with non - zero -values from fig .[ fig : one_lang_p_vals ] . ( blue ) saturation model ; ( green ) product model .two languages ( circled ) which appear as outliers with anomalously small relative variance in the product model itelman and hindi disappear into the central tendency with the saturation model .( lower panel : ) an equivalent plot for 26 three - language bins .notably , the apparent separation of individual large- langauges into two groups has vanished under binning , and a unimodal and smooth dependence of on is seen .[ fig : one_lang_pval_relvar_corr ] , title="fig : " ] fig .[ fig : variance_test_sat_p_value_excludes ] then shows the relative variance from the saturation model , plotted versus total average link number for both individual languages and three - language bins .the binned languages show no significant regression of relative variance away from the value unity for poisson sampling , whereas single languages show a systematic trend toward larger variance in larger languages , a pattern that we will show is consistent with `` clumpy '' sampling of a subset of nodes .the disappearance of this clumping in binned distributions shows that the clumps are uncorrelated among languages at similar . for 78 languages excluding basque , haida , and yorb .least - squares regression are shown for three - language bins ( green ) and individual languages ( blue ) , with regression coefficients inset .three - language bins are consistent with poisson sampling at all , whereas single languages show systematic increase of relative variance with increasing .[ fig : variance_test_sat_p_value_excludes ] ] we may retain the mean degree distributions , while introducing a systematic trend of relative variance with , by modifying our sampling model away from strict poisson sampling to introduce `` clumps '' of links . to remain within the use of minimal models , we modify the sampling procedure by a single parameter which is independent of word , language - size , or particular language .we introduce the sampling model as a function of two parameters , and show that one function of these is constrained by the regression of excess variance .( the other may take any interior value , so we have an equivalence class of models . ) in each language , select a number of swadesh entries randomly . let the swadesh indices be denoted .we will take some fraction of the total links in that language , and assign them only to the swadeshes whose indices are in this privileged set .introduce a parameter that will determine that fraction .we require correlated link assignments be consistent with the mean determined by our model fit , since binning of data has shown no systematic effect on mean parameters .therefore , for the random choice , introduce the normalized density on the privileged links and otherwise . 
denote the aggregated weight of the links in the priviledged set by then introduce a modified probability distribution based on the randomly selected links , in the form multinomial sampling of links from the distribution will produce a size - dependent variance of the kind we see in the data .the expectated degrees given any particular set will not agree with the means in the aggregate graph , but the ensemble mean over random samples of languages will equal , and binned groups of languages will converge toward it according to the central - limit theorem .the proof that the relative variance increases linearly in comes from the expansion of the expectation of eq .( [ eq : rel_var ] ) for random samples , denoted } ^2 \right > \nonumber \\ & = & \left < \frac{1}{n^l } \sum_s { \left ( { \hat{n}}_s^l - n^l { \tilde{p}}_{s \mid l } \right ) } ^2 \right > + \nonumber\\ & & \qquad n^l \left < \sum_s { \left ( { \tilde{p}}_{s \mid l } - p_{s \mid l}^{\mbox{\scriptsize model } } \right ) } ^2 \right > .\label{eq : rel_var_clumpy}\end{aligned}\ ] ] the first expectation over is constant ( of order unity ) for poisson samples , and the second expectation ( over the sets that generate ) does not depend on except in the prefactor .cross - terms vanish because link samples are not correlated with samples of .both terms in the third line of eq .( [ eq : rel_var_clumpy ] ) scale under binning as .the first term is invariant due to poisson sampling , while in the second term , the central - limit theorem reduction of the variance in samples over cancels growth in the prefactor due to aggregation . because the linear term in eq .( [ eq : rel_var_clumpy ] ) does not systematically change under binning , we interpret the vanishing of the regression for three - language bins in fig .[ fig : variance_test_sat_p_value_excludes ] as a consequence of fitting of the mean value to binned data as sample estimators . ) , fitting a saturation model to binned sample configurations using the same algorithms as we applied to our data , and then performing regressions equivalent to those in fig .[ fig : variance_test_sat_p_value_excludes ] . in about of cases the fitted model showed regression coefficients consistent with zero for three - language bins .the typical behavior when such models were fit to random sample data was that the three - bin regression coefficient decreased from the single - language regression by .] therefore , we require to choose parameters and so that regression coefficients in the data are typical in the model of clumpy sampling , while regressions including zero have non - vanishing weight in models of three - bin aggregations . fig .[ fig : multinom_samp_hists_78_26_words ] compares the range of regression coefficients obtained for random samples of languages with the values in our data , from either the original saturation model , or the clumpy model randomly re - sampled for each language in the joint configuration .parameters used were ( , ) . ranging from 317 . was chosen as an intermediate value , consistent with the typical numbers of nodes appearing in our samples by inspection . 
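A minimal sketch of the clumpy sampling model just described is given below: for each language a small set of privileged Swadesh entries is drawn at random, a fraction of the link probability mass is concentrated on that set, and the degree counts are then drawn multinomially. Because the displayed expression for the modified distribution is not reproduced in this text, the concrete reweighting below and the parameter values `W` and `f` are illustrative assumptions; only the qualitative construction (random privileged subsets, a single concentration parameter, ensemble mean preserved under aggregation) follows the description.

```python
import numpy as np

rng = np.random.default_rng(1)

def clumpy_sample(p_model, n_links, W=7, f=0.5):
    """Draw degree counts for one language.

    p_model : model probabilities over Swadesh entries (sums to 1)
    n_links : total number of polysemy links in the language
    W       : number of randomly chosen 'privileged' entries
    f       : fraction of link weight concentrated on the privileged set
    The reweighting is an assumed concrete form: a fraction f of the mass is
    redistributed onto the privileged entries in proportion to their model
    weights; the rest follows p_model unchanged.
    """
    S = len(p_model)
    priv = rng.choice(S, size=W, replace=False)
    density = np.zeros(S)
    density[priv] = p_model[priv] / p_model[priv].sum()
    p_tilde = (1.0 - f) * p_model + f * density
    return rng.multinomial(n_links, p_tilde)

# for this uniform toy model the ensemble average over many independent
# languages reproduces the model mean, while single languages show variance
# well above the Poisson value, as in the discussion above
p_model = np.ones(22) / 22
samples = np.array([clumpy_sample(p_model, 100) for _ in range(1000)])
print(samples.mean(axis=0)[:5])   # close to 100/22 per word
print(samples.var(axis=0)[:5])    # noticeably larger than the Poisson value 100/22
```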
] with these parameters , of links were assigned in excess to of words , with the remaining of links assigned according to the mean distribution .either generated by poisson sampling from the saturation model fitted to the data ( blue ) , or drawn from clumped probabilities defined in eq .( [ eq : p_tilde_def ] ) , with the set of privileged words independently drawn for each language ( green ) .solid lines refer to joint configurations of 78 individual languages with the values in fig .[ fig : variance_test_sat_p_value_excludes ] .dashed lines are 26 non - overlapping three - language bins .[ fig : multinom_samp_hists_78_26_words ] ] the important features of the graph are : 1 ) binning does not change the mean regression coefficient , verifying that eq .( [ eq : rel_var_clumpy ] ) scales homogeneously as .however , the variance for binned data increases due to reduced number of sample points ; 2 ) the observed regression slope 0.012 seen in the data is far out of the support of multinomial sampling from , whereas with these parameters , it becomes typical under while still leaving significant probability for the three - language binned regression around zero ( even without ex - post fitting ) . | how universal is human conceptual structure ? the way concepts are organized in the human brain may reflect distinct features of cultural , historical , and environmental background in addition to properties universal to human cognition . semantics , or meaning expressed through language , provides direct access to the underlying conceptual structure , but meaning is notoriously difficult to measure , let alone parameterize . here we provide an empirical measure of semantic proximity between concepts using cross - linguistic dictionaries . across languages carefully selected from a phylogenetically and geographically stratified sample of genera , translations of words reveal cases where a particular language uses a single polysemous word to express concepts represented by distinct words in another . we use the frequency of polysemies linking two concepts as a measure of their semantic proximity , and represent the pattern of such linkages by a weighted network . this network is highly uneven and fragmented : certain concepts are far more prone to polysemy than others , and there emerge naturally interpretable clusters loosely connected to each other . statistical analysis shows such structural properties are consistent across different language groups , largely independent of geography , environment , and literacy . it is therefore possible to conclude the conceptual structure connecting basic vocabulary studied is primarily due to universal features of human cognition and language use . the space of concepts expressible in any language is vast . this space is covered by individual words representing semantically tight neighborhoods of salient concepts . there has been much debate about whether semantic similarity of concepts is shared across languages . on the one hand , all human beings belong to a single species characterized by , among other things , a shared set of cognitive abilities . on the other hand , the 6000 or so extant human languages spoken by different societies in different environments across the globe are extremely diverse and may reflect accidents of history as well as adaptations to local environments . 
most psychological experiments about this question have been conducted on members of `` weird '' ( western , educated , industrial , rich , democratic ) societies , yet there is reason to question whether the results of such research are valid across all types of societies . thus , the question of the degree to which conceptual structures expressed in language are due to universal properties of human cognition , the particulars of cultural history , or the environment inhabited by a society , remains unresolved . the search for an answer to this question has been hampered by a major methodological difficulty . linguistic meaning is an abstract construct that needs to be inferred indirectly from observations , and hence is extremely difficult to measure ; this is even more apparent in the field of lexical semantics . meaning thus contrasts both with phonetics , in which instrumental measurement of physical properties of articulation and acoustics is relatively straightforward , and with grammatical structure , for which there is general agreement on a number of basic units of analysis . much lexical semantic analysis relies on linguists introspection , and the multifaceted dimensions of meaning currently lack a formal characterization . to address our primary question , it is necessary to develop an empirical method to characterize the space of lexical meanings . we arrive at such a measure by noting that translations uncover the alternate ways that languages partition meanings into words . many words have more than one meaning , or sense , to the extent that word senses can be individuated . words gain meanings when their use is extended by speakers to similar meanings ; words lose meanings when another word is extended to the first word s meaning , and the first word is replaced in that meaning . to the extent that words in transition across similar , or possibly contiguous , meanings account for the polysemy ( multiple meanings of a single word form ) revealed in cross - language translations , the frequency of polysemies found across an unbiased sample of languages can provide a measure of semantic similarity among word meanings . the unbiased sample of languages is carefully chosen in a phylogenetically and geographically stratified way , according to the methods of typology and universals research . this large , diverse sample of languages allows us to avoid the pitfalls of research based solely on `` weird '' societies and to separate contributions to the empirically attested patterns in the linguistic data , arising from universal language cognition versus those from artifacts of the speaker - groups history or way of life . there have been several cross - linguistic surveys of lexical polysemy , and its potential for understanding semantic shift , in the domains such as body parts , cardinal directions , perception verbs , concepts associated with fire , and color metaphors . we add a new dimension to the existing body of research by providing a comprehensive mathematical method using a systematically stratified global sample of languages to measure degrees of similarity . our cross - linguistic study takes the swadesh lists as basic concepts as most languages have words for them . among those concepts , we chose 22 meanings associated with two domains : celestial objects ( e.g. ` sun , moon , star ` ) and landscape objects ( e.g. ` fire , water , mountain , dust ` ) . for each word expressing one of these meanings , we examined what other concepts were also expressed by the word . 
since the semantic structures of these two domains are very likely to be influenced by the physical environment that human societies inhabit , any claim of universality of lexical semantics needs to be demonstrated here . |
consider an inextensible string of unit tension placed along the axis with its left endpoint at , and let be the total mass of the string segment from to .the small vertical oscillations of the string obey the equation where is the square of the oscillation frequency .the natural frequencies of the string depend on how it is tied at the ends ; suppose where is the supremum of the points of increase of , and is some `` tying constant '' .now , let , and denote by and the two particular solutions of equation ( [ stringequation ] ) satisfying the _ characteristic function _ of the string is then defined by or , equivalently , where is , up to an unimportant factor , the unique non - negative solution of the string equation that is decreasing and satisfies .the definition of may be extended to by analytic continuation ; is the analog of the weyl titchmarsh function in the sturm liouville theory .the present paper is devoted to the inverse spectral problem for this vibrating string equation , namely the problem of recovering from .we call this _ krein s inverse problem_. we describe and illustrate a novel algorithm that is applicable to a class of characteristic functions associated with a certain continued fraction .the remainder of this introduction provides a summary of the approach and discusses its relationship with other works . in a series of papers published in the soviet union during the 1950 s, m. g. krein made a detailed study of the existence of spectral expansions associated with the vibrating string equation .for some useful accounts of this work in the english language , see ; in particular , contains a rough outline of the historical development of krein s ideas and their overlap with the work of feller aimed at a unified analytical treatment of some discrete and continuous stochastic processes .denote by the set of functions \rightarrow [ 0,\infty] x \rightarrow l- ] + or 2 . and ^{1+\varepsilon } = 0 x \rightarrow l- ] such that the problem considered by borcea is to recover , normalised by , from . now, suppose that then is the characteristic function of a string such that it is an easy consequence of theorem [ stringtheorem ] that this string belongs to .knowing , can in principle be obtained via equation ( [ impedance ] ) .given the continued fraction coefficients , one can ascertain whether is in the determinate class by using stieltjes criterion ( [ stieltjesseries ] ) . to give a more concrete example , consider the case then the series diverges , and so is in the determinate class .[ impedanceexample ]we proceed to compute the string corresponding to the finite case of the continued fraction expansion on the right - hand side of equation ( [ katscontinuedfraction ] ) . in what follows, it will always be assumed that we shall use the compact notation : = \frac{s_j}{-z } + \frac{1}{\displaystyle s_{j+1}+ \frac{1}{\displaystyle \frac{s_{j+2}}{-z}+ \cdots + \frac{1}{\displaystyle s_{j+2k-1}+ \frac{1}{\displaystyle \frac{s_{j+2k}}{-z}}}}}\ ] ] and : = \frac{s_j}{-z } + \frac{1}{\displaystyle s_{j+1}+ \frac{1}{\displaystyle \frac{s_{j+2}}{-z}+ \cdots + \frac{1}{\displaystyle s_{j+2k-1}+ \frac{1}{\displaystyle \frac{s_{j+2k}}{-z}+\frac{1}{\displaystyle s_{j+2k+1}}}}}}\ ] ] and an analogous notation for the corresponding strings .these characteristic functions are in the determinate subclass of because their images under the map are in the determinate subclass of . 
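For later reference, the truncated continued fractions written above can be evaluated numerically by working outward from the innermost term; the alternating pattern (even-offset coefficients entering as s_k/(-z), odd-offset coefficients entering directly) is read off from the displayed expressions. The sketch below is only a direct transcription of that pattern, not the inversion algorithm itself, and the coefficients in the usage line are arbitrary.

```python
def w_truncated(s, z):
    """Evaluate the truncated continued fraction
       s[0]/(-z) + 1/( s[1] + 1/( s[2]/(-z) + 1/( s[3] + ... )))
    for a finite list of positive coefficients s = [s_0, ..., s_n] and a
    (real or complex) argument z, following the pattern displayed above."""
    value = None
    for k in range(len(s) - 1, -1, -1):
        term = s[k] / (-z) if k % 2 == 0 else s[k]
        value = term if value is None else term + 1.0 / value
    return value

# toy usage: five positive coefficients evaluated at a negative real z,
# where the truncated characteristic function is real and positive
print(w_truncated([1.0, 2.0, 0.5, 1.5, 3.0], z=-1.0))
```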
for every , = \frac{1}{s_k}\ ] ] and , for every , ( x ) = \frac{m^{\ast}[s_{j+1},\,\ldots,\,s_{k}](t)}{1 + s_j m^{\ast}[s_{j+1},\,\ldots,\,s_{k}](t ) } \label{basicrecurrencerelation}\ ] ] where (\tau ) \right \}^2\,d \tau\ , .\label{xintermsoft}\ ] ] [ recurrencetheorem ] consider the case where is _odd_. then = \frac{s_j}{-z } + \frac{1}{\displaystyle s_{j+1}+\frac{1}{\displaystyle \frac{s_{j+2}}{-z}+ \cdots+ \frac{1}{\displaystyle \frac{s_{k-1}}{-z}+\frac{1}{\displaystyle s_k}}}}\,.\ ] ] an easy calculation shows that = w[s_{j+1},\ , \ldots,\,s_k]\,.\ ] ] by taking the dual on both sides , we obtain = w^\ast [ s_{j+1},\ , \ldots,\,s_k]\,.\ ] ] it then follows from proposition [ zeromassproposition ] that ( x ) = \frac{m[s_j,\,\ldots,\,s_k](t)}{1 - s_j m[s_j,\,\ldots,\,s_k](t)}\,,\ ] ] where (\tau ) \right \}^2 d \tau\,.\ ] ] when we `` turn this around '' and express ] and in terms of , we obtain the desired result . the case where iseven is analogous .for every and every , ] is a discrete string . by equation ( [ xintermsoft ] ), is then an increasing continuous piecewise linear function of , and the fact that ] via the sequence \rightarrow m [ s_{n-1},\,s_n ] \rightarrow \cdots \rightarrow m[s_0,\,\ldots,\,s_{n}]\ , .\label{recurrencerelation}\ ] ] each term in that sequence is a right - continuous piecewise constant function with finitely many jumps , so we need only keep track of the location , say , of these jumps , and of the value , say , of the function there .we always include and , whether or not .we have (x ) = \frac{1}{s_n } \;\;\mbox{for every }\,.\ ] ] so we set also (x ) = \cases { 0 & \mbox{if } \\\frac{1}{s_{n-1 } } & \mbox{if } } \ ] ]so we set then , for , we have the recurrence relations ^ 2 \left [ y_j^{(2k-1)}-y_{j-1}^{(2k-1)}\right ] , \ ; 1 \le j \le k\ , .\label{firstgeneralformulaforx}\ ] ] ^ 2 \left [ y_{j}^{(2k)}-y_{j-1}^{(2k)}\right ] , \ ; 1 \le j \le k\ , .\label{secondgeneralformulaforx}\ ] ] the computation terminates when the superscript reaches the value .in this section , we describe in broad terms the probabilistic version of krein s inverse problem expounded by knight in , and illustrate our inversion algorithm by means of an example taken from .the reader unfamiliar with diffusion processes will find a useful summary of the theory in . consider a one - dimensional diffusion process started at the origin , with values in some interval .suppose that the origin is instantaneously reflecting and that is in natural scale .such a process is completely determined by its infinitesimal generator , i.e. by the differential operator acting on a suitable set of functions . here , is the speed measure , and the domain of consists of twice differentiable functions satisfying the condition ; an additional condition may be imposed on at the right boundary , depending on the behaviour of the diffusion there .the process is called the _ local time _ ( at the origin ) ; it measures the time spent by the diffusion in the vicinity of the origin , up to time .its right - continuous inverse is called the _ inverse local time _ ( at the origin ) ; it is a positive , non - decreasing process that jumps at the times when begins an excursion away from the origin .the height of the jump is the _ length _ of the excursion , i.e. the time that elapses before returns to its starting point .it may be shown that the inverse local time is in fact a lvy process . 
from the theory of such processes ,one deduces the existence of a _lvy exponent _ defined implicitly by \;\;\mbox{for }\ , .\label{levyexponent}\ ] ] furthermore , the lvy exponent is necessarily of the form for some numbers and some _ lvy measure _ , i.e. a measure such that knight shows that , if is a so - called `` gap diffusion '' , then the function appearing in equation ( [ infinitesimalgenerator ] ) may be identified uniquely with a string , still denoted .the lvy exponent is related to the characteristic function of the string via furthermore krein s inverse problem may thus be rephrased as : `` find the gap diffusion , given the lvy exponent of the inverse local time '' . to give an example ,let the lvy exponent be given by equation ( [ levykhintchineformula ] ) with and donati martin and yor showed that the corresponding diffusion is , up to a homeomorphism , a bessel process with drift .the corresponding string may be expressed in terms of the modified bessel functions .set where by using the well - known continued fraction expansion for the binomial ( see for instance , p. 343 ) , it is easy to see that is expressible in the form ( [ katscontinuedfraction ] ) with where denotes pochhammer s symbol and we use the convention for .stieltjes criterion ( [ stieltjesseries ] ) implies that this continued fraction converges .hence belongs to the determinate subclass of . in what follows ,we find approximations of the string by truncating the continued fraction after terms , and computing the corresponding discrete string by the algorithm of the previous section .take then and the diffusion is a brownian motion with drift in natural scale .the dots in figure [ brownianwithdriftfigure ] are the points such that corresponding to the discrete string $ ] constructed by the algorithm of [ inversesection ] .the superimposed continuous curve is a plot of .( 0,0 ) ( 220,0) ( 75,120) [ brownianmotionwithdriftexample ] by taking and letting , we obtain the string with this characteristic function was found explicitly by donati - martin and yor : let be the -valued diffusion process , reflected at the origin , with infinitesimal generator \frac{d}{dy}\,.\ ] ] the scale function of is the diffusion is in natural scale and its generator is given by equation ( [ infinitesimalgenerator ] ) , where is the string corresponding to ( [ logarithmicfunction ] ) .figure [ besselwithdriftfigure ] shows a plot of the string for small , together with an approximation obtained by our algorithm .( 0,0 ) ( 220,0) ( 75,120) [ alphaiszeroexample ]we end with some comments on the numerical issues arising from the algorithm : by design , the approximation is _ exact _ in the limit .more generally , our computations suggest that there is also some numerical evidence that greater accuracy may be obtained by averaging at the jumps , namely where as one would expect , in order to compute the discrete string from the truncated continued fraction , it is necessary to have good approximations of the coefficients. it will seldom be the case that exact formulae are available . instead, one will need to compute these coefficients by using some numerical algorithm ; see for instance and the comments therein . 
finally , the inversion method we have described generates only piecewise constant approximations of , and so the recovery of in cases where the string is absolutely continuous is not entirely straightforward .it would be of interest to adapt to our case the ingenious recovery technique that borcea devised in the context of the sturm liouville problem in impedance form .it is a pleasure to thank alain comtet for his encouragement and for discussions of the material , and the laboratoire de physique thorique et modles statistiques , universit paris - sud , for its hospitality while some of this work was carried out .thanks also to the anonymous referees , whose thoughtful comments and suggestions helped improve an earlier version of the paper .dym h and kravitsky n. 1978 on recovering the mass distribution of a string from its spectral function , in _ topics in functional analysis ( essays dedicated to m. g. krein on the occasion of his 70th birthday ) _ , 45 - 90 , _ adv . math .stud . _ * 3 * , academic press , new york . | the spectral data of a vibrating string are encoded in its so - called characteristic function . we consider the problem of recovering the distribution of mass along the string from its characteristic function . it is well - known that stieltjes continued fraction leads to the solution of this inverse problem in the particular case where the distribution of mass is purely discrete . we show how to adapt stieltjes method to solve the inverse problem for a related class of strings . an application to the excursion theory of diffusion processes is presented . |
it is quite plausible that the structure of space - time in the vicinity of planck scale is described by a fuzzy quantum space - time " . as shown by doplicher _ , this quantum structure of space - time , where the space - time coordinates are operator - valued and satisfy a non - commuting coordinate algebra , can be one of the most plausible ways to prevent the gravitational collapse arising from the attempt to localise a space - time event within a planck length scale . since making a guess regarding the non - commutative structure of the coordinate algebrais difficult , one tries to postulate a simple structure of this form and study the geometry of the resulting non - commutative spaces .the simplest examples are the 2d moyal plane ( ) : = i \theta , \label{mp}\ ] ] and fuzzy : = i \lambda \epsilon_{ijk } \hat{x}_k .\label{fs}\ ] ] fuzzy sphere corresponds to a 2d subspace of with radius quantized as . because of the inherent uncertainty relations satisfied by these coordinates the usual concepts like points , lines etc loose their meaning in these kinds of spaces .it thus becomes essential to use the mathematical formalism of non - commutative geometry(ncg ) as developed by connes to study the geometry of such spaces .+ we would like to mention in this context that connes himself , along with his other collaborators , have invested a lot of effort in formulating a completely new mathematical framework to describe the standard model of particle physics , by invoking the so - called almost commutative spaces " - built out of the usual commutative ( euclideanised ) 4d curved space - time .this is expected to describe physics upto gut scale ( ) . however , as we mentioned in the beginning , one perhaps has to take quantum space - times , i.e. spaces where the coordinates become operators satisfying a non - commutative algebra and for which ( [ mp ] ) and ( [ fs ] ) provide prototype examples , more seriously in the higher energy scales - like in the vicinity of planck scales .( see for example for a review ) .it is only recently that the above mentioned mathematical framework of ncg , _ a la _ connes , has been used to compute the spectral distances on quantized spaces " like the moyal plane , fuzzy sphere etc . ,- . on the other hand ,an algorithm was devised in to compute this distance using the hilbert - schmidt operatorial formulation of quantum mechanics , .this hilbert - schmidt operatorial formulation has the advantage that it bypasses the use of any star product , like moyal or voros , and is therefore free from any ambiguities that can arise from there .furthermore , it has an additional advantage that the above - mentioned algorithm / formula is adaptable to this hilbert - schmidt operatorial formulation and infinitesimally it has essentially the same structure as that of the induced metric from the hilbert space inner product , obtained by provost and vallee ( see also ) , when expressed in terms of the density matrix and yields for the moyal plane the correct distance in the harmonic oscillator basis " and the flat metric in the coherent state , upto an overall numerical factor .however , the corresponding finite distance can not simply be obtained just by integrating " along the geodesics , as the very concept of geodesics in the conventional sense ( i.e. like on a commutative differentiable manifold ) may not exist at all . 
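For orientation, the fuzzy sphere algebra quoted above is realized concretely by rescaled angular momentum matrices in a finite spin representation, x_i = λ J_i, with the radius fixed by the su(2) Casimir, which is the origin of the radius quantization mentioned earlier. The short check below uses the standard spin matrices; the numerical values of λ and of the spin label are illustrative and not taken from the text.

```python
import numpy as np

def spin_matrices(j):
    """Angular momentum matrices (J_x, J_y, J_z) in the (2j+1)-dimensional spin-j rep."""
    m = np.arange(j, -j - 1, -1)              # magnetic quantum numbers j, j-1, ..., -j
    dim = len(m)
    jz = np.diag(m).astype(complex)
    jp = np.zeros((dim, dim), dtype=complex)  # raising operator J_+
    for k in range(1, dim):
        jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    jm = jp.conj().T
    return (jp + jm) / 2, (jp - jm) / (2 * 1j), jz

lam, n = 0.7, 0.5                             # illustrative deformation parameter and spin
x1, x2, x3 = (lam * J for J in spin_matrices(n))

# commutation relation [x_i, x_j] = i lam eps_{ijk} x_k, e.g. [x_1, x_2] = i lam x_3
print(np.allclose(x1 @ x2 - x2 @ x1, 1j * lam * x3))                 # True

# quantized radius: x.x = lam^2 n(n+1) times the identity
print(np.allclose(x1 @ x1 + x2 @ x2 + x3 @ x3,
                  lam**2 * n * (n + 1) * np.eye(int(2 * n + 1))))    # True
```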
this motivates us to undertake the task of extending our algorithm of so that one is able to compute finite distances as well .this new formula is shown to involve the ` transverse ' component ( ) , in addition to the longitudinal " component ( ) where is the difference between normal states represented by density matrices , as in .this in turn shows that the formula of actually corresponds only to the lower bound of the distance and not the actual distance .indeed , on the way to our derivation of the generalised formula in section [ sec5 ] we point out a flaw , commented on in through a counter example , in the analysis in where the same expression was shown to correspond to the upper bound as well .it should , however , be mentioned that the counter example in does not satisfy the boundedness condition imposed in .this error was not serious in examples studied previously to compute infinitesimal distances as these distances coincided with the exact distance for discrete states and differed from the exact distance by a numerical factor for the coherent states for both and .however , as one can easily see , a straightforward calculation to compute _ finite _ distances , using the same formula does not yield any sensible result indicating a non - trivial role for the transverse component . in the generic casethere can be many choices of to a given ( in fact an infinite number of them in ) .consequently the computation of the infimum involving occurring in the revised formula turns out to be quite non - trivial .one therefore has to try with different forms of and improve the estimate of the distance as best as one can . + on the other hand , we can emulate and adapt the approach of to our hilbert - schmidt operatorial formulation to obtain an upper bound to the distance and then look for an optimal element in the algebra saturating this upper bound . in caseat least one such optimal element can be identified , this upper bound itself can obviously be recognised as the true distance .otherwise one has to be content with the above - mentioned best possible estimate only .in fact this paper deals with an interplay of both the approaches , as they seem to complement each other in some sense .this brings out some stark differences between and . particularly for corresponding to any finite -representation of one can not define a geodesic in the conventional sense ; it reduces to commutative only in the limit .the distance turns out to be much smaller than the geodesic distance of , the latter coinciding with the above - mentioned upper bound .further in the case of maximal non - commutativity i.e. for , even the finite distance is shown to coincide exactly with the lower bound with playing no role and any pair of pure states is shown to be interpolated by a one - parameter family of mixed states , lying in the interior of .the distance between any mixed state to the nearest pure state can be taken as a measure of the ` mixedness ' .+ the whole analysis , particularly the `` ball condition '' is carried out in the eigen - spinor basis furnished by the respective dirac operators .this is clearly a natural choice of basis , as the ball condition involves the dirac operator .this definitely simplifies the computations considerably and even allows us to study the geometry of for , apart from reproducing many of the existing results in the literature , albeit with the help of _mathematica_. however the corresponding analysis for remains quite intractable , even with _mathematica_. 
+ the paper is organised as follows . in section [ sec2 ], we provide a brief review of the hilbert - schmidt operatorial formulation of non - commutative quantum mechanics on the 2d - moyal plane and fuzzy sphere and the associated spectral triples , introduced in - required to study the geometrical aspects of them .we also introduce the corresponding dirac operators and their eigen - spinors for both non - commutative spaces .to begin , we revisit the derivation of the formula given in in section [ sec5 ] and derive the corrected form .we then provide a computation of ( finite ) distances on the moyal plane in the coherent and the harmonic oscillator " basis in sections [ sec3 ] and [ sec4 ] , respectively .we then proceed onto the case of the fuzzy sphere in quite the same way adopted as in the moyal plane in section [ sec6 ] .we note some fundamental differences from the case of the moyal plane and adopt a different algorithm using the dirac eigen - spinors and study the and representations of the fuzzy sphere algebra .we finally conclude in section [ sec_con ] .the hilbert - schmidt operatorial formulation of non - commutative quantum mechanics ( ncqm ) on the 2d moyal plane , described by the non - commutative heisenberg algebra i.e. the co - ordinate algebra augmented by the following commutation relations involving linear momenta operators satisfying ( in units of ) = i \delta_{ij } ~~;~~ [ \hat{p}_i , \hat{p}_j ] = 0 \label{ha}\ ] ] begins by introducing an auxiliary hilbert space furnishing a representation of just the coordinate algebra ( [ mp ] ) . in this particular situation , since the algebra ( [ mp ] ) is isomorphic to the algebra = i ] and the `` ground state '' satisfy .this , however , can not furnish a representation of the linear momentum operators . as shown in ,we need to introduce the space comprised of hilbert - schmidt operators acting on . loosely speaking ,these are essentially the trace - class bounded set of operators and forms a hilbert space on its own referred to as the quantum hilbert space .physical states ( denoted by round kets , rather than angular kets ( [ hc ] ) ) , having generic forms as is defined as where the subscript in indicates that the trace has to be computed over .+ we reserve to denote the hermitian conjugation on ( [ hc ] ) , while denotes the hermitian conjugation on .note that has a natural tensor product structure as ( being the dual of ) , enabling one to express the elements of in the form or their linear spans .one can refer to ( respectively ) as the left ( respectively right ) hand sector . 
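A minimal numerical realization of this setup truncates the classical configuration space to a finite Fock basis and represents elements of the quantum Hilbert space as matrices, with the inner product given by the trace over the classical space. The sketch below also checks that a truncated coherent state is an eigenstate of the annihilation operator; the truncation dimension, the value of z, and the convention b|n> = sqrt(n)|n-1> are illustrative choices consistent with the description above.

```python
import numpy as np
from scipy.special import gammaln

N = 30                                        # truncation of the Fock space H_c
b = np.diag(np.sqrt(np.arange(1.0, N)), 1)    # annihilation operator, b|n> = sqrt(n)|n-1>

def coherent(z):
    """Normalized coherent state |z> in the truncated Fock basis (z != 0)."""
    n = np.arange(N)
    return np.exp(-abs(z) ** 2 / 2 + n * np.log(z + 0j) - gammaln(n + 1) / 2)

def hs_inner(psi, phi):
    """Inner product on the quantum Hilbert space: (psi|phi) = Tr_c(psi^dagger phi)."""
    return np.trace(psi.conj().T @ phi)

z = 0.8 + 0.3j
print(np.allclose(b @ coherent(z), z * coherent(z), atol=1e-8))   # |z> is an eigenstate of b

rho_z = np.outer(coherent(z), coherent(z).conj())   # |z><z| viewed as a vector in H_q
rho_0 = np.zeros((N, N), dtype=complex)
rho_0[0, 0] = 1.0                                   # |0><0|
print(hs_inner(rho_z, rho_z).real)                  # ~ 1 : pure and trace-class
print(hs_inner(rho_0, rho_z).real)                  # |<0|z>|^2 = exp(-|z|^2) ~ 0.48
```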
+a unitary representation of the non - commutative heisenberg algebra ( [ mp ] ) and ( [ ha ] ) is obtained by the following actions : ) \label{urep}.\ ] ] note that we are using capital letters ( without hats ) to distinguish them from the operators acting on .apart from the harmonic oscillator " basis , introduced in , satisfying , one can also introduce normalized coherent states in terms of a dimensionless complex number as where is a unitary operator furnishing a projective representation of the translation group .these states provide an over - complete basis in .the corresponding non - orthogonal projection operator is an operator acting on and is an eigenstate of ( the representation of in ) : .this represents a quantum state with maximal localization , where the position measure must now be interpreted in the context of a weak measurement ( positive operator - valued measurement , povm ) rather than a strong measurement ( projective valued measurement , pvm ) .as was shown in this quantum hilbert space has a built - in structure of an algebra in the sense that under the multiplication map the usual operator product of any arbitrary pair of elements yields another element of : in the coherent state i.e. is obtained by composing the respective representations of individual states i.e. and by using the voros star product : furthermore , it was shown in that also provides an over - complete basis on - the counterpart of , provided that they are composed using the above mentioned voros star product : correspondingly , one can introduce the unnormalized projection operators which are positive ( i.e. ) but unnormalized ( ) and non - orthogonal .they , however , form a complete basis and therefore provide a povm ( positive operator valued measure ) that one can use to provide a consistent probability interpretation by assigning the probability density of finding the outcome of a position measurement to be if the system is in a state described by the density matrix .in particular , if is a pure state density matrix , then which clearly goes into the corresponding commutative result in the limit .+ we briefly mention in this context that , just as the basis has a natural association with the voros star product , it was shown in that one can like - wise construct an appropriate basis , which is naturally associated with the moyal star product ( in the cartesian basis ) .however , this basis is somewhat unphysical in the sense that it is the common eigenstate of mathematically constructed unphysical commuting position - like observables , obtained by taking the average of left and right actions of as satisfying = 0 ] , then the states , are such that , , was devised to compute connes spectral distance .the present analysis will also assume the above conditions .the first two conditions are actually quite mild and just imply that a generic state can be represented by a density matrix .furthermore , if the state is pure , then the corresponding density matrix too will be pure . more specifically , to illustrate through an example , let us consider the case of the moyal plane - parametrized by the complex number .it has a one - to - one correspondence with the coherent state or more precisely the density matrix .like - wise , the harmonic oscillator " state is associated with the density matrix . or are density matrices from the perspective of and belong to . 
they should not be confused with real quantum density matrices , which should be constructed by taking outer products of states as .the fact that allows us to treat them as vectors , facilitating the analysis of the present paper .this is precisely the advantage of this hilbert - schmidt operatorial formulation . ]note that both and can be regarded as pure states in the -algebraic framework and can indeed be identified with an involutive algebra , which is a dense sub - algebra of a -algebra , where the hermitian conjugation ( ) plays the role of involution operator .since both and are elements of , the connes distance between a pair of states , now represented by density matrices and , can be recast in terms of the inner product as on the other hand the third condition implies a certain irreducibility condition , as explained in , . in this context , it is worthwhile to recall another important role of the third condition in the present analysis .if this condition were to be violated then we can find an such that .however , since the ball condition places no constraint on , it is clear that no upper bound exists for this distance function and the resulting distance diverges .when this condition holds , the spectral distance is always finite .this point was illustrated through the example of the model in .this example , which was first considered in , is described by the following spectral triple : note that the dirac operator has been written here in diagonal form with , being its eigenvalues .it was shown there that the space of pure states corresponds to . now with , the dirac operator will become proportional to the identity operator , so that \|_{\text{op } } = 0 ~\forall~a \in m_2(\mathds{c}) ] yields the lower bound as \|_{\text{op } } } .\label{3.1.9}\ ] ] the same lower bound can be obtained alternatively in a more rigorous manner by noting that the trace - norm of any element within the ball is bounded above . to see this ,consider an element s.t .\|_{\text{op } } \le 1 ] yielding \|_{\text{op}}}.\ ] ] therefore is bounded above by \|_{\text{op}}. \label{3.1.10}\ ] ] we can now decompose in a longitudinal " ( ) and transverse " ( ) component as , where and , is taken to be orthogonal to i.e. and corresponds to a unit vector in the plane formed by and .it is clear from , that we can choose to be an acute angle i.e ensuring that both , are positive . note that here we want otherwise in would collapse to zero .we can then re - write \|_{\text{op}} ] \|_{\text{op } } , \label{3.1.11.1}\ ] ] but can not be identified , as was done in , with it _ in general_. thus the rhs of was erroneously identified as an upper bound in .recall in this context that by definition of the infimum , it should rather correspond to the highest lower bound say , satisfying + \sin\theta [ \mathcal{d } , \pi(\widehat{\delta\rho}_{\perp } ) ] \|_{\text{op } } ~\forall~~ \delta\rho_{\perp } \in w , ~\theta \in [ 0 , \pi / 2).\ ] ] since the determination of a general formula to obtain is difficult , we have to be content with just writing \|_{\text{op } } = \inf_{\substack{\delta\rho}_{\perp } \in w \\ \theta\in [ 0 , \pi / 2 ) } \| \cos\theta [ \mathcal{d } , \pi(\widehat{\delta\rho } ) ] + \sin\theta [ \mathcal{d } , \pi(\widehat{\delta\rho}_{\perp } ) ] \|_{\text{op } } \label{3.1.12}\ ] ] and find case by case .returning to our derivation , we begin by rewriting the infinitesimal connes distance function , as where is given by from it is clear that the connes distance function can only depend on , i.e. 
it is also elementary to see that for any unitary transformation this also implies , from , that in general we note from and that and depend on the direction of , in the sense that even if , .however , if implies , this dependence disappears using , is a constant as and both have norm one and .this is the case for the coherent state basis in the moyal plane , where equality of the trace norms implies that and differ by a rotation of the form .this explains why the connes distance on the moyal plane is proportional to the trace norm , which is simply the euclidean distance , infinitesimally and for finite distances .we corroborate this result in the next section through a more explicit calculation . more generally , it is not difficult to verify that implies if and are the difference of two orthogonal pure states , in which case is again a constant .this , and more general scenarios under which this holds will be explored elsewhere .however , this clearly can not hold in general , but when this is the case , it readily follows that , as determined from ( [ 3.1.13 ] ) indeed corresponds to a numerical constant and upto this constant the metric that can be read off from the infinitesimal distance ( [ newb1 ] ) , is indeed given by the provost - vallee form i.e. in the coherent state basis in particular . in the case of the moyal plane , this readily yields a flat metric , as in , so that the straight lines are expected to play the role of geodesics .indeed this fact was used implicitly in the parametrization below . on the other hand , for the case of the fuzzy sphere ,although the metric is that of commutative sphere - upto an overall numerical factor , it turns out that the finite distance is quite different from the geodesic ( great circle ) .indeed , for the representation of i.e. for the case of maximal non - commutativity , the distance turns out to be half of the chordal distance and there does not exist any geodesic in the conventional sense , preventing one to integrate the infinitesimal distance to compute finite distance and the commutative result is obtained only in the limit . as we shall see in the sequel essentially similar results are obtained by applying our formula .we begin with the computation of the finite distance between coherent states on the moyal plane in the next section .the purpose of this section is to determine the spectral distance , _ a la _ connes , between an arbitrary pair of finitely separated coherent states , or more precisely between and .although a formal algorithm ( [ rev_formula ] ) and ( [ n_finite ] ) was devised for this purpose in the preceding section , this is not very user - friendly , as the identification of the right , for which the infimum is reached in ( [ n_finite ] ) is an extremely difficult job . on the other hand , the lower bound ( [ 3.1.9 ] ) can be easily computed as was done in ( at most up to a numerical constant ) .the strategy we therefore adopt is to emulate to obtain the corresponding upper bound and then look for an optimal element for which the saturation condition holds .if we can identify at least one ( note that this may not be unique ! ) then we can identify the upper bound to be the true distance .it may also happen in some situations that both upper and lower bounds coincide . in this case ,their common value can be identified as the distance . otherwise , one has to play with different choices of in ( [ n_finite ] ) to find the best possible estimate , as the upper bound can not be identified as the true distance. 
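For small matrix examples, the defining supremum can also be estimated by brute force: parametrize hermitian elements of the algebra, penalize violations of the ball condition, and maximize the difference of expectation values. The sketch below does this for the standard two-point spectral triple (diagonal algebra, off-diagonal Dirac operator with parameter L), which is not one of the examples treated in this text but has the well-known answer 1/L; the penalized Nelder-Mead search and the rescaling back onto the ball are numerical devices of our own, not the analytic method developed above.

```python
import numpy as np
from scipy.optimize import minimize

def spectral_distance(D, rho1, rho2, basis, n_restarts=20, seed=0):
    """Estimate d(rho1, rho2) = sup{ |Tr(rho1 a) - Tr(rho2 a)| : ||[D, a]||_op <= 1 }
    over real combinations a = sum_k c_k E_k of hermitian basis elements of the
    algebra.  Brute-force penalized search; a numerical check, not an exact method."""
    rng = np.random.default_rng(seed)

    def build(c):
        return sum(ck * Ek for ck, Ek in zip(c, basis))

    def neg_objective(c):
        a = build(c)
        norm = np.linalg.norm(D @ a - a @ D, 2)          # operator norm of [D, a]
        val = abs(np.trace((rho1 - rho2) @ a).real)
        return -val + 1e4 * max(norm - 1.0, 0.0) ** 2    # soft ball condition

    best = 0.0
    for _ in range(n_restarts):
        res = minimize(neg_objective, rng.normal(size=len(basis)), method="Nelder-Mead")
        a = build(res.x)
        norm = np.linalg.norm(D @ a - a @ D, 2)
        if norm > 1.0:
            a = a / norm          # rescale onto the ball (objective is linear in a)
        best = max(best, abs(np.trace((rho1 - rho2) @ a).real))
    return best

# standard two-point space (purely illustrative): algebra of diagonal 2x2
# matrices, off-diagonal Dirac operator; the known distance is 1/L
L = 2.0
D = np.array([[0.0, L], [L, 0.0]], dtype=complex)
basis = [np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)]
rho1, rho2 = basis[0], basis[1]
print(spectral_distance(D, rho1, rho2))   # ~ 0.5
```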
we shall encounter a variety of such situations in the rest of the paper , which will help us to study and contrast various non - commutative spaces through the examples of and .the present section deals with .we begin by considering the action of the state , ( associated to the fuzzy point in the moyal plane , in the spirit of gelfand and naimark ) , on a generic algebra element as , note that we have made use of here .this means that the algebra element gets translated by the adjoint action of thereby furnishing a proper representation of the translation group . without loss of generalitywe therefore only have to compute the distance between the pair of states and ( taken to correspond to the origin " ) as can easily be seen by invoking the transformational property of the dirac operator under translation , ( as explained in appendix a ) and can be written , by using , as intuitively , is the maximum change in the expectation values of the and the translated algebra element in the same state .this is somewhat reminiscent of the transition from the schrdinger to heisenberg picture , where the operators are subjected to the unitary evolution in time through an adjoint action of the unitary operator , while the states are held frozen in time .+ to proceed further , let us introduce a one - parameter family of density matrices with ] to be proportional to the identity operator , as happens here .this yields , where .+ one can check at this stage that although \|_{op } = 1,\ ] ] it fails to be a trace - class operator : consequently , but can be thought of as belonging to the multiplier algebra where and .a resolution of this problem was provided in .we briefly describe their approach here , as we propose an alternative approach in the next sub - section . + hereone weakens the strong requirement that and rather looks for a sequence , and also , by inserting a suitable operator - valued `` gaussian '' factor to ensure convergence of the trace - norm and thereby rendering it a trace - class operator : then the following proposition was proved in ( proposition 3.5 ) : [ [ proposition ] ] proposition : + + + + + + + + + + + + let be a fixed translation and .define , where . then there exists a s.t . ( lipschitz ball ) for any .+ using this proposition , any generic element of the sequence can be written in terms of above as now , it can be easily shown that therefore , it is clear , on the other hand , from the translational symmetry of the dirac operator ( see appendix a ) that it is quite adequate to look for the optimal element at the level of the infinitesimal distance itself .the anticipated advantage is that when projected to a finite dimensional sub - space will be automatically trace - class .besides , it will help us to put the computation presented in in the context of the present analysis and to zoom in on the source of the mismatch between the result of and , by a factor of .we therefore turn our attention towards the computation of the connes distance between infinitesimally separated coherent states and .clearly , this can just be read off from to yield , by invoking translational symmetry ( see appendix a.1 ) , and the apparent optimal element for the infinitesimal case is easily obtained by projecting ( [ 1.15.1 ] ) into the 2d subspace to get where upto . 
note that the projection was employed to construct , as lives in the above - mentioned 2d subspace and consequently only the projected component of can contribute to the distance , given in terms of the inner product .we therefore observe that in this case we have to deal with only finite dimensional subspaces , the optimal element is trace - class , by default and we need not invoke any sequence here .however , in this case it does not belong to the ball any more : \|_{op } = \sqrt{3 } > 1 \implies a_s \notin b ] .we find the matrix to be living on a higher -dimensional subspace , having a block - diagonal form : ^\dagger [ b , p_{n+1}a_sp_{n+1 } ] = \frac{\theta}{2 } \left ( \begin{array}{c|c } \mathds{1}_{(n-1 ) \times ( n-1 ) } & 0_{(n-1 ) \times 3}\\ \hline 0_{3 \times ( n-1 ) } & a\ \end{array } \right),\ ] ] but with being a non - diagonal block matrix : the corresponding operator norm thus turns out to be a linearly divergent -dependent function . for this is just as mentioned above .thus , although ( [ a1 ] ) and ( [ 1.16 ] ) are proportional , the in - appropriate projector ( [ 1.16 ] ) generates the undesirable factor , otherwise the lower bound itself would have yielded the desired result ( [ eqa7 ] ) . +we now present an alternative to this gaussian " sequence approach of ( [ eqa6])-([1.15.2 ] ) by constructing a sequence of projected in , rather than projected in as in , using a projector which is appropriate for the eigen - spinor basis of the dirac operator .this will allow us to evade the problem associated with the violation of the ball condition projector for only .+ we begin with the diagonal representation of , i.e. . in particular for , we have where the columns and rows are labelled by of ( [ dir_bas_m ] ) , respectively . note that it has vanishing entries in the remaining rows / columns , indexed by , with .+ proceeding with the same proposed optimal element , we now project it on the representation space spanned by the eigen - spinors . to begin with , we first project it on the same above subspace spanned by . on computation this yields consequently , \equiv \begin{pmatrix } 0 & -\frac{1}{\sqrt{2 } } & \frac{1}{\sqrt{2 } } & 0 & 0\\ \frac{1}{\sqrt{2 } } & 0 & 0 & -\frac{1}{2 } & \frac{1}{2}\\ -\frac{1}{\sqrt{2 } } & 0 & 0 & -\frac{1}{2 } & \frac{1}{2}\\ 0 & \frac{1}{2 } & \frac{1}{2 } & 0 & 0\\ 0 & -\frac{1}{2 } & -\frac{1}{2 } & 0 & 0\ \end{pmatrix}\ ] ] and finally , in contrast to , we have ^\dagger [ \mathcal{d } , \mathds{p}_2 \pi(a_s ) \mathds{p}_2 ] \equiv \left ( \begin{tabular}{c|c } & \\ \hline & \ \end{tabular } \right),\ ] ] where is a square matrix having eigenvalues andthis is in contrast to the matrix .further , refers to a rectangular null matrix with rows and columns .thus clearly we have in this case , \|_{op } = 1\ ] ] and , where denotes the inner product between a pair of elements given by and is the counter part of .again , the subscript indicates that the trace has to computed over .further note that a factor of has been inserted in ( [ innpro_moy ] ) in anticipation to relate the inner products and , in case both and in are representations of such that and . in that case, they can be related as , .of course here one can easily see that any s.t . and one can not simply relate with any inner products of . 
indeed , if it were to exist , we could have identified this ` ' , using ( [ mix_ball ] ) and ( [ innpro_moy ] ) , to be the optimal element itself , which by definition has to belong to , or at best to the multiplier algebra .in fact , this will be a persistent feature with any finite -dimensional projection with one can note at this stage , however , that one can keep on increasing the rank of the projection operator indefinitely without affecting ( [ mix_ball ] ) and ( [ innpro_moy ] ) in the sense that the counter part of these equations still has the same form \|_{op } = 1\ ] ] and are independent of if .+ these equations again follow from the fact that ^\dagger [ \mathcal{d } , \mathds{p}_n \pi(a_s ) \mathds{p}_n ] = \left ( \begin{array}{c|c } \mathds{1}_{(2n-1 ) \times ( 2n-1 ) } & o_{(2n-1 ) \times 2}\\ \hline o_{2 \times ( 2n-1 ) } & b\ \end{array } \right)\ ] ] with again appearing as the lower block and has `` support '' only on the first block .+ finally , since in the limit , by , we have .one can thus interpret \|_{op } = 1\ ] ] and thus , instead of inserting a gaussian factor , as in , we have a sequence of trace - class operators living in ( note that can be regarded as hilbert - schmidt operators acting on ) and each of them satisfy the ball condition .this is accomplished by projecting in the finite dimensional subspaces spanned by dirac eigen - spinors rather than projecting just to by in , where the ball condition gets violated and the operator norm diverges linearly .this latter projector could be associated naturally to a different orthonormal and complete basis for as and has the block - diagonal form .note that the eigen - spinor basis is easily obtained from by first leaving out separately and then pairing and as the projector ( [ p - n ] ) is then clearly the natural choice for the ball condition due to its natural association with the dirac operator .furthermore , note that we have to make use of the entire ( [ 1.15.1 ] ) as the optimal element .we would also like to point out in this context that ( [ pi_d_rho ] ) and ( [ proj - a_s ] ) are not proportional anymore , unlike their counterparts ( [ 1.16 ] ) .however , here too we can easily split or for that matter itself in the limit , into the longitudinal and transverse components , but now in and not in .this requires a slight generalization of the analysis presented in section [ sec5 ] .this , however , is not very useful in this context and we do not pursue it here anymore .+ it is finally clear from the above analysis that the upper bound is saturated in the infinitesimal case through the sequence in the limit , allowing one to identify invoking translational symmetry with as in and the transformational property of the dirac operator , it is clear that \right\rbrace\ ] ] and one concludes that for finitely separated states , one can write reproducing the result and identify the straight line joining to to be geodesic of the moyal plane enabling one to integrate the infinitesimal distance ( [ inf - dis ] ) along this geodesic to compute finite distance .in fact , one can easily see at this stage that the distance can be written as the sum of distances and as , where is an arbitrary intermediate pure state from the one - parameter family of pure states , introduced in , so that the respective triangle inequality becomes an equality . as we shall subsequently seethis feature will not persist for other generic non - commutative spaces and we will demonstrate this through the example of the fuzzy sphere later . 
before that we , however , complete our study of the moyal plane by computing the distance between the discrete harmonic oscillator " states in the next section .for the discrete case , we consider a pair of states , which are separated by an infinitesimal " distance . by thiswe mean the nearest states , which are eigenstates of . to compute the distance between the states and we take a similar approach i.e. start with and re - express this as the difference in the expectation value of the transformed algebra element and that of itself in the same state as + ab)b^\dagger - ( n+1)a| n \rangle|.\\\end{aligned}\ ] ] on simplification this yields | n+1 \rangle|\\ & = \sup_{a \in b } \frac{1}{\sqrt{n+1 } } |\langle n+1 |[b^{\dagger},a]| n \rangle| .\end{split}\ ] ] we can now invoke bessel s inequality ( written in terms of the matrix elements of an operator in some orthonormal bases ) , to write ( using ) | n+1 \rangle|\\ & = \sup_{a \in b } \frac{1}{\sqrt{n+1 } } |\langle n+1 | [ b^{\dagger},a]| n \rangle|\\ & \leq \frac{1}{\sqrt{n+1 } } \|[b , a]\|_{op } = \frac{1}{\sqrt{n+1 } } \|[b^\dagger , a]\|_{op}.\ \end{split}\ ] ] this finally yields for the finite case , to compute the distance between and with the difference between the two integer labels and being .we start by writing , as shown in the infinitesimal case eq , | n+i \rangle.\ ] ] therefore , proceeding as in the infinitesimal case , | n+i \rangle \big| \label{2.2.1}\\ & \leq \sup_{a \in b } \sum_{i=1}^{k } \frac{1}{\sqrt{n+i } } \big|\langle n + ( i-1 ) | [ b , a]| n+i \rangle\big| \label{2.2.2}\\ & \leq \sqrt{\frac{\theta}{2}}\sum_{i=1}^{k } \frac{1}{\sqrt{n+i } } , \label{2.2.3 } \end{aligned}\ ] ] by using eq and . + to find an optimal element for which the above inequality is saturated , we demand | n+i \rangle\big| = \sqrt{\frac{\theta}{2 } } \sum_{i=1}^{k } \frac{1}{\sqrt{n+i}}.\ ] ] equivalently , now if we let it implies .constructing such an we get , where .this gives from the above equation it is also seen that for the harmonic oscillator " basis reproducing the result of . + finally note that is no longer proportional to unlike ( [ 2.1.4.1 ] ) .consequently , the distance computed here will exceed the lower bound and contributes non - trivially in ( [ n_finite ] ) .we approach the problem of the fuzzy sphere in quite the same way as the moyal plane .there are , however , some fundamental differences between these cases and we will comment on these as we proceed . to begin with , we shall first try to find the distance in the discrete basis and later we will look at the continuous coherent state basis . 
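before turning to the fuzzy sphere, the "harmonic oscillator" distance obtained above admits a quick numerical cross-check. the sketch assumes b|m> = sqrt(m)|m-1> and that the ball condition amounts to ||[b, a]||_op <= sqrt(theta/2); with a diagonal trial element whose increments are sqrt(theta/2)/sqrt(m) it saturates the bound and reproduces the sum in ( [ 2.2.3 ] ). conventions and names are illustrative assumptions.

```python
# Hedged check of the "harmonic oscillator" state distances on the Moyal plane.
# Assumed: b|m> = sqrt(m)|m-1>; ball condition ||[b, a]||_op <= sqrt(theta/2);
# diagonal trial element with increments a_m - a_{m-1} = sqrt(theta/2)/sqrt(m).
import numpy as np

theta, N = 1.0, 60
b = np.diag(np.sqrt(np.arange(1, N)), k=1)

# a_0 = 0, a_m = sqrt(theta/2) * sum_{i=1}^{m} 1/sqrt(i)
diag = np.sqrt(theta / 2.0) * np.concatenate(([0.0], np.cumsum(1.0 / np.sqrt(np.arange(1, N)))))
a = np.diag(diag)

comm = b @ a - a @ b
print(np.linalg.norm(comm, 2))            # -> sqrt(theta/2): the ball condition is saturated

n, k = 3, 4                               # arbitrary pair |n>, |n+k> with n + k < N
d_candidate = diag[n + k] - diag[n]       # value attained by this trial element
d_formula = np.sqrt(theta / 2.0) * np.sum(1.0 / np.sqrt(n + np.arange(1, k + 1)))
print(d_candidate, d_formula)             # the two agree, matching the sum in (2.2.3)
```

since the trial element attains the upper bound ( [ 2.2.3 ] ) while remaining inside the ball, the sum is indeed the distance, consistent with the result quoted above.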
as will be seen in the subsequent discussion , it is convenient to adopt different techniques for these two cases , namely we need to use the dirac operator eigen - spinors in the latter case .we begin with a particular fuzzy sphere , corresponding to a particular .the discrete set of basis are indexed by as or just in an abbreviated from , where the index ` ' is suppressed .we shall rather use a subscript ` ' to denote the distance function between a pair of states and .we first compute the distance between the states and ( this being the `` infinitesimal separation '' as far as discrete basis is concerned ) .similar to the moyal case , we start with then , | n_{3}+1 \rangle|.\\\end{aligned}\ ] ] again invoking bessel s inequality , we can write \|_{op}}{\sqrt{n(n+1)-n_{3}(n_{3}+1 ) } } = \frac{\|[j_{+},a]\|_{op}}{\sqrt{n(n+1)-n_{3}(n_{3}+1 ) } } \leq \frac{r_n}{\sqrt{n(n+1)-n_{3}(n_{3}+1 ) } } , \label{4.46}\ ] ] where we have made use of the inequality , as proved in appendix [ ap_fuz ] , where for , we have \|_{op } = \|[j_{+},a ] \|_{op } \leq r_n ; \ \r_n = \lambda \sqrt{n(n+1)}. \label{4.47}\ ] ] like in the case of the moyal plane , we look for an optimal element i.e. an algebra element saturating the upper bound in , so that it can be identified as the distance .however , as mentioned before , the optimal element may not be unique . here ,we provide two such optimal elements which saturate the above inequality .we try with the form corresponding to the lower bound ( [ 3.1.9 ] ) : ||_{op } } ~~\mathrm{where}~~ d\rho = |n_3 + 1\rangle\langle n_3 + 1| - |n_3\rangle\langle n_3|.\ ] ] the operator norm \|_{op} a -a^\dagger ] being a real parameter and .we then have for pair of states and , using the cauchy - schwartz inequality we get }{|\alpha}|\right)\big| & = |\omega_{zt}([j_{+},a ] ) - \omega_{zt}([j_{-},a])|\\ & \le \sqrt{2 } \sqrt { |\omega_{zt}([j_{+ } , a ] ) |^{2 } + |\omega_{zt}([j_{- } , a]^ ) |^{2}}\\ & \le \sqrt{2 } \sqrt { ||[j_{+ } , a]||_{op}^{2 } + ||[j_{- } , a]||_{op}^{2}}\\ & \le 2 r_n . \end{split}\ ] ] thus we get from and , therefore the upper bound of the connes distance on the fuzzy sphere is actually the geodesic distance on the commutative sphere the stark difference with the moyal plane is that we can not find any algebra element saturating this inequality , not even through a sequence or through projections .this limit is actually the distance on a commutative sphere and is reached only for the commutative limit , as we have shown earlier .this implies that for any finite representation , the distance between two points on the fuzzy sphere is less than the geodesic distance for a commutative sphere ( see , for example ) and the distance does not follow the conventional ` geodesic ' path as we know it .indeed , we will show that for , the distance actually corresponds to half of the chordal distance between a pair of points on the surface of the sphere or more precisely between the associated pure states and interpolated by a one parameter family of mixed states . only in the limit slowly deforms to become the great circle path on the surface . 
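to make the chordal-distance statement concrete, the following hedged sketch evaluates the trace norm of the difference of two spin-1/2 coherent-state projectors and compares it with the chord between the corresponding points on a unit sphere; the overall factor relating this trace norm to the connes distance ( r_{1/2}/2 according to the text ) is taken as given here rather than re-derived.

```python
# Hedged illustration of the n = 1/2 result quoted above: for spin-1/2 coherent states
# the trace norm of the difference of the density matrices equals 2*sin(theta/2), i.e.
# the chordal separation on a unit sphere, so a distance proportional to this trace norm
# follows the chord rather than the great circle.  The proportionality constant linking
# it to the Connes distance is an assumption of this sketch, not an independent result.
import numpy as np

def coherent_state(theta, phi=0.0):
    """Spin-1/2 coherent state obtained by rotating the north-pole state |+z>."""
    return np.array([np.cos(theta / 2.0),
                     np.exp(1j * phi) * np.sin(theta / 2.0)])

def trace_norm(m):
    return np.sum(np.linalg.svd(m, compute_uv=False))

theta = 0.8                                      # polar angle of the second point
rho_n = np.outer(coherent_state(0.0), coherent_state(0.0).conj())
rho_t = np.outer(coherent_state(theta), coherent_state(theta).conj())

print(trace_norm(rho_t - rho_n))                 # -> 2*sin(theta/2)
print(2.0 * np.sin(theta / 2.0))                 # chord between the two points (unit radius)
# half-chord on a sphere of radius r: r*sin(theta/2); the great-circle arc r*theta is longer
```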
for the calculation of distances for coherent states or more precisely the operator - norm occurring in the ball condition, we now make use of the eigen - spinors of the dirac operator .we will first sketch the outline of the algorithm .one can calculate in a straightforward manner ( at least in principle ) the commutator ( a a^\dagger)_{(2n+2 ) \times ( 2n+2)} 0_{(2n+2 ) \times 2n} 0_{2n \times ( 2n+2)} ( a^\dagger a)_{2n \times 2n} ] as \equiv \frac{1}{r_n } \left ( \begin{tabular}{c|c } 0 & \\ \hline & 0\ \end{tabular } \right),\ ] ] where here rows / columns are labeled from up to down / left to right respectively by and .subsequently , we get ^\dagger [ \mathcal{d } , \pi(d\rho ) ] = \frac{1}{r_n^2 } \left ( \begin{tabular}{c|c } & 0\\ \hline 0 & \ \end{tabular } \right)\ ] ] with since ^\dagger [ \mathcal{d } , \pi(d\rho)]\|_\text{op } = \frac{1}{r_n^2 } \|a^\dagger a\|_\text{op} ] in terms of the coefficients . for any generic pair of states and we have , by making use of the general framework sketched in section , the matrix element , | m^\prime \rangle\rangle_- = \frac{2}{r_{1/2 } } ~_+\langle\langle m | \pi(a ) | m^\prime \rangle\rangle_-,\ ] ] where takes values and . on explicit computation ,one finds the following matrix elements for the commutator : so that the final form of the commutator matrix ( [ 6.3.2 ] ) in this case can be written as & \equiv \frac{1}{r_{1/2 } } \left ( \begin{tabular}{c|c } & \\ \hline & 0\ \end{tabular } \right ) \end{split } , ~\text{where}~\mathrm{a}_{3 \times 1 } = 2~ _ + \langle\langle m | [ \mathcal{d } , \pi(a ) ] | m^\prime \rangle\rangle_-= \begin{pmatrix } \sqrt{2 } ( a_1 + i a_2)\\ 2a_3\\ - \sqrt{2 } ( a_1 - i a_2 ) \end{pmatrix}.\ ] ] finally , ^\dagger [ \mathcal{d } , \pi(a ) ] = \frac{1}{r^2_{1/2 } } \left ( \begin{tabular}{c|c } & 0 \\ \hline 0 & \ \end{tabular } \right).\ ] ] since ^\dagger [ \mathcal{d } , \pi(a ) ] \|_\text{op} ] .we therefore consider the algebra element to be traceless throughout this section .it is quite tempting to start directly by identifying the algebra element as a linear combination of the gell - mann matrices .this algebra element with some extra restrictions provide us with a simple expression of the distance using , which we then corroborate with a more rigorous calculation using .the role of turns out to be very important in ( [ rev_formula ] ) and ( [ n_finite ] ) for the fuzzy sphere and we employ the most general form of possible to improve the estimate of the spectral distance as best as we can from the lower bound i.e. case in .the determination of an exact value , even with the help of , remains a daunting task . 
to begin with the case we first proceed in the same way as in section [ section2 ] to obtain , using , |n_3'\rangle\rangle_- = \frac{3}{r_1}~_+\langle\langle n_3|\pi(a)|n_3'\rangle\rangle_-~~\text{and}~~_-\langle\langle n_3'|[\mathcal{d},\pi(a)]|n_3\rangle\rangle_+ = -\frac{3}{r_1}~_-\langle\langle n_3'|\pi(a)|n_3\rangle\rangle_+,\ ] ] where the ranges of are respectively given by and so that with the diagonal representation the commutator 0_{4\times 4} a_{4\times 2} -a^{\dagger}_{2\times 4} 0_{2\times 2} ]we readily obtain : ^\dagger [ \mathcal{d } , \pi(d\rho ) ] = \frac{1}{r_1 ^ 2 } \left ( \begin{tabular}{c|c } & \\ \hline & \\end{tabular } \right).\ ] ] by exploiting the properties of the operator norm one has the freedom to choose between the two block - diagonal square matrices as ^{\dagger}[\mathcal{d},\pi(a)]\rvert_{op } = \lvert[\mathcal{d},\pi(a)]\rvert_{op}^2 ] is computed by varying the entries in the algebra elements , within the admissible ranges and obtaining the global minimum of .this gives \rvert_{\text{op}}=\frac{1}{r_1}\text{min}\big(\sqrt{e_+}\big)= \frac{1}{r_1}\sqrt{\text{min}(e_+)}.\ ] ] the eigenvalue will always have a concave - up " structure in the parametric space as it can be written as the sum of square terms only ( [ mat1 ] ) and ( [ eig_gen ] ) .there can be points in the parametric space where and are equal , namely points where becomes , but since can not become less than , determining the minimum of will alone suffice in calculating the infimum of the operator norm as is clear from .so we work with alone and use _ mathematica _ to get the desired result .+ the pure states corresponding to points on the fuzzy sphere can be obtained by the action of the su(2 ) group element on the pure state corresponding to the north pole ( ) of ( with ) i.e. . note that we have taken for convenience the azimuthal angle .this can be done without loss of generality .+ correspondingly , like the case , all entries are real here ; indeed by writing ( s are the gell - mann matrices ) the coefficients of , and vanishes .this however only provides us with a lower bound of the distance in connes formula and the actual distance is reached by some optimal algebra element ( ) of the form for which the infimum is reached ( say in ) .this should be contrasted with the optimal element , for which the supremum is reached in . in any case , let us first try to have an improved estimate of the upper bound of the distance .this will be followed by the computation involving .the upper bound for the spectral distance , obtained previously in corresponded to that of a commutative sphere , but that lies much above the realistic distance for any fuzzy sphere associated to the -representation of , as discussed in section .it is therefore quite imperative that we try to have a more realistic estimate of this where this upper bound will be lowered considerably . at this stage, we can recall the simple example of -atom , where the energy gap between the ground state and first excited state is the largest one and the corresponding gaps in the successive energy levels go on decreasing and virtually become continuous for very large ( ) .one can therefore expect a similar situation here too .indeed , a preliminary look into the distance between north and south poles already support this in the sense that .one therefore expects the distance function to be essentially of the same form as that of , except to be scaled up by a -factor and a miniscule deformation in the functional form . 
for large - values of ,the corresponding ratios , and the functional deformations are expected to be pronounced .however , the exact determination of this form is extremely difficult and we will have to be content with a somewhat heuristic analysis in this subsection and a more careful analysis , using , in the next subsection . to that end, we start here with the most general form of an algebra element , as a linear combination of all the gell - mann matrices ( ) and look for an optimal element from giving , with some additional restrictions which are to be discussed later .we write in analogy with for . againthe rows / columns are labeled from top to bottom / left to right by .now we calculate tr using the matrix and the above algebra element to get this clearly demonstrates the expected independence of imaginary components viz . and .we therefore set to begin with .this simplifies the matrix elements of , using , as like - wise the above expression simplifies as \bigg| .\label{guess}\end{aligned}\ ] ] our aim is to obtain a simple form of the ball " condition and eventually of connes spectral distance so that we might obtain an improved estimate for the spectral distance over the lower bound , obtained by making use of , which is more realistic than .we shall see shortly that with few more additional restrictions , apart from the previous ones ( like vanishing of and imposed already ) it is possible to simplify the analysis to a great extent , which in turn yields a distance estimate which has the same mathematical structure as that of the exact distance for the case upto an overall factor . to that end , we impose the following new constraints : as a simple observation of suggests that it simplifies even further to the following form : \big| = 2\sqrt{2}\sqrt{x_1 ^ 2 + 2x_3 ^ 2}\big|\sin\big(\frac{\theta}{2}\big ) \cos\big(\zeta-\frac{\theta}{2}\big)\big|,\ ] ] where and .moreover , putting the above constraints in the equation - , we get with this , the eigenvalue and the corresponding ball condition can be obtained as \rvert_{\text{op } } = \frac{1}{r_1}2\sqrt{x_1 ^ 2 + 2x_3 ^ 2}\le 1.\ ] ] using this ball condition in , we get hence , a suggestive form of the spectral distance between a pair of pure states and for the representation can be easily obtained by identifying the optimal value of the last free parameter to be given by .this yields the corresponding form of the optimal algebra element is obtained after a straightforward computation to get when , the distance is exactly the same between the two pure coherent states and and the above distance gives which exactly matches with the one computed in using the discrete formula .note that we have made use of all the restrictions and , imposed at various stages .finally , we would like to mention that this simple form was obtained by imposing the above ad - hoc constraints resulting in .consequently , one can not a priori expect this to reflect the realistic distance either . at best, this can be expected to be closer to the realistic one , compared to .the only merit in is that it has essentially the same structure as that of for the case . nevertheless , as we shall show below , the computation involving , using - matches with to a great degree of accuracy .note that we are denoting the analytical distances as to distinguish them from other distances to be calculated in the next section . 
also note that since the analytical distance has the same form as , it corresponds to -times the half of the chordal distance .furthermore , it satisfies the pythagoras equality just like the case .before we conclude this subsection , we would like to point out that we could have perhaps reversed our derivation by simply requiring the matrix to be diagonal : .but in that case for a particular choice of algebra element , satisfying the aforementioned conditions viz .now given the structures of and they will have shapes which are concave upwards , when the hyper - surfaces are plotted against the set of independent parameters occurring in , where represents the subregion in the parameter space , defined by these conditions .now it may happen that , in which case one of them , say , exceeds the other : . then clearly \rvert_{\text{op}}=\frac{1}{r_1}\sqrt{\text{min}_{a\in\mathcal{r}}(m_{11})}. \label{n-1}\ ] ] otherwise , the hyper - surfaces given by and will definitely intersect and will reduce to \rvert_{\text{op}}=\frac{1}{r_1}\sqrt{\text{min}_{a\in\bar{\mathcal{r}}}(m_{11})}=\frac{1}{r_1}\sqrt{\text{min}_{a\in\bar{\mathcal{r}}}(m_{22 } ) } , \label{n-2}\ ] ] where represents the subregion where .in fact , this is a scenario which is more likely in this context , as suggested by our analysis of infinitesimal distance presented in the next subsection ( see also fig .2 ) . we therefore also set . with the additional condition like , we can easily see that one gets , apart from , another set of solutions like , this , however , yields the following ball condition the counterpart of , and in contrast to , can not in anyway be related to its counterpart here , given by we therefore reject from our consideration , as it will not serve our purpose . in this sectionwe employ the modified distance formula by constructing the most general form of for both finite as well as infinitesimal distances .we show that in both of the cases the distance calculated using this more general ( numerical ) method matches with the corresponding result given by to a very high degree of accuracy suggesting that should be the almost correct distance for arbitrary .[ [ infinitesimal - distance-1 ] ] infinitesimal distance + + + + + + + + + + + + + + + + + + + + + + in this case takes a simpler form by expanding ( [ mat2 ] ) and keeping only the leading order terms in : the most general structure of the transverse part here is obtained by taking all possible linear combinations of the generic states i.e. where because of the hermiticity of .clearly , the complex parameters are exact analogues of suitable combinations of s in .the orthogonality condition ( [ new1 ] ) here requires the coefficient to be purely imaginary .moreover we can demand that the matrix representation of should be traceless as discussed in section [ sec1 ] .we thus impose . to better understand the significance of each term we write ( [ new2 ] ) in matrix form as follows : with this our optimal algebra element becomes : where we have absorbed inside the coeffiecients of . 
with independent parameters , it is extremely difficult to vary all the parameters simultaneously to compute the infimum analytically .however , as far as infinitesimal distances are concerned , it may be quite adequate to take each parameter to be non - vanishing one at a time .thus , by keeping one of these diagonal / complex conjugate pairs like and to be non - zero one at a time and computing the eigenvalues of the matrix ( [ mat1 ] ) using ( [ new1 ] ) , ( [ new4 ] ) and ( [ new3 ] ) , it is found that only the real part of contributes non - trivially to the infimum of the operator norm \rvert_\text{op} ] .note in this context that also , since , as follows from , the infinitesimal spectral distance is given by we now corroborate the same result by varying all the parameters simultaneously . of course we shall have to employ _ mathematica _ now . to that end ,first note that ( [ mat1 ] ) and ( [ eig_gen ] ) are now given as : where and .interestingly enough we get the same infimum i.e. by finding the minimum of the eigenvalue as discussed in section [ sec5 ] with the parameter eigenvalue where p and q are given by ( [ p1 ] ) and ( [ q1 ] ) .we therefore recover the distance which also matches with for .[ [ finite - distance-1 ] ] finite distance + + + + + + + + + + + + + + + for any finite angle , the matrix ( [ mat2 ] ) can be directly used to compute the square of the trace norm .moreover , this can be used as the algebra element to compute the eigenvalues of and then the operator norm \rvert_{op} ] on , on using , gives \phi = \sqrt{\frac{2}{\theta}}\begin{pmatrix } 0 & [ i \hat{b}^\dagger , a]\\ [ -i \hat{b},a ] \end{pmatrix}\phi.\ ] ] now regarding as a test function , we can identify the dirac operator as this is further simplified by considering the transformation and , which just corresponds to a rotation by an angle in space . with this transformationthe dirac operator takes the following hermitian form : which precisely has the same form as , used throughout the paper . the most important pointto note is that this very structure of the dirac operator allows us to make it also act directly from the left on . in this context, we would like to mention that this dirac operator responds to a rotation in the plane by an arbitrary angle as since =[\hat{b},\hat{b}^\dagger]=1 ] occurring in the ball condition ( [ condis ] ) . finally , note that the dirac operator here is hermitian and differs from that of by a factor ( ) , which however , is quite inconsequential as it does not affect the operator norm \|_{op} ] and ] for any hermitian element . 
then for a hermitian algebra element , \|^{2}_\text{op } & = \|[\mathcal{d},\pi(a)]^\dagger[\mathcal{d},\pi(a)]\|_\text{op}\\ & = \frac{1}{r^{2}}\left\|\left[\begin{array}{cc } [ j_{+},a]^{\dagger}[j_{+},a ] + [ j_{3},a]^{\dagger}[j_{3},a ] & [ [ j_{-},a],[j_{3},a]]\\ - [ [ j_{+},a],[j_{3},a ] ] & [ j_{-},a]^{\dagger}[j_{-},a ] + [ j_{3},a]^{\dagger}[j_{3},a ] \end{array}\right ] \right\|\\ & \ge \frac{1}{r^{2}}\sup_{{\stackrel{\psi \in \mathcal{h}}{\|\psi\| = 1 } } } \langle\psi_{1}| [ j_{+},a]^{\dagger}[j_{+},a ] + [ j_{3},a]^{\dagger}[j_{3},a ] |\psi_{1}\rangle \ \ \bigg(\psi = \begin{pmatrix } 0 \end{pmatrix } \bigg)\\ & \ge \frac{1}{r^{2 } } \sup_{{\stackrel{|\psi_{1}\rangle \in \mathcal{h}_{c } } { | \psi_{1}\rangle\langle\psi_{1}|= 1}}}\langle\psi_{1}| [ j_{+},a]^{\dagger}[j_{+},a]|\psi_{1}\rangle+ \frac{1}{r^{2}}\sup_{{\stackrel{|\psi_{1}\rangle \in \mathcal{h}_{c}}{| \psi_{1}\rangle\langle\psi_{1}| = 1 } } } \langle\psi_{1}| [ j_{3},a]^{\dagger}[j_{3},a ] |\psi_{1}\rangle\\ & \ge \|[j_{+},a]\|^{2}_\text{op } + \|[j_{3},a]\|^{2}_\text{op}. \end{split } \label{a.4}\ ] ] therefore we get \|_\text{op } \le \|[\mathcal{d},\pi(a)]\|_\text{op } \ \ \ \frac{1}{r } \|[j_{3},a]\|_\text{op }\le \|[\mathcal{d},\pi(a)]\|_\text{op}. \label{a.5}\ ] ] for ^{\dagger } = - [ j_{-},a] ] it thus follows that \|_\text{op } \le r ~~;~~ \|[j_{-},a]\|_\text{op } \le r. \label{a.7}\ ] ] galluccio s , lizzi f and vitale p 2008 _ phys .d _ * 78 * 085007 ; + balachandran a p and martone m 2009 _ mod .a _ * 24 * 1721 ; + balachandran a p , ibort a , marmo g and martone m 2010 _ phys .* 81 * 085017 . | we revise and extend the algorithm provided in to compute the finite connes distance between normal states . the original formula in contains an error and actually only provides a lower bound . the correct expression , which we provide here , involves the computation of the infimum of an expression which involves the transverse " component of the algebra element in addition to the longitudinal " component of . this renders the formula less user - friendly , as the determination of the exact transverse component for which the infimum is reached remains a non - trivial task , but under rather generic conditions it turns out that the connes distance is proportional to the trace norm of the difference in the density matrices , leading to considerable simplification . in addition , we can determine an upper bound of the distance by emulating and adapting the approach of in our hilbert - schmidt operatorial formulation . we then look for an optimal element for which the upper bound is reached . we are able to find one for the moyal plane through the limit of a sequence obtained by finite dimensional projections of the representative of an element belonging to a multiplier algebra , onto the subspaces of the total hilbert space , occurring in the spectral triple and spanned by the eigen - spinors of the respective dirac operator . this is in contrast with the fuzzy sphere , where the upper bound , which is given by the geodesic of a commutative sphere is never reached for any finite -representation of . indeed , for the case of maximal non - commutativity ( ) , the finite distance is shown to coincide exactly with the above mentioned lower bound , with the transverse component playing no role . this , however starts changing from onwards and we try to improve the estimate of the finite distance and provide an almost exact result , using our new and modified algorithm . |
in treatment planning ( tp ) of radiotherapy , a patient body is commonly modeled to be h of variable effective density , which corresponds to electron density for high - energy photons or stopping - power ratio for charged particles .the effective density is normally converted from computed - tomography ( ct ) number for attenuation of kilovoltage ( kv ) x ray .the standard approach to construct such conversion functions involves experimental modeling of the x ray and stoichiometric analysis of body tissues. the effective density is , though practically successful, an approximate concept for radiations undergoing complex interactions . with the advancement of computer technology , monte carlo ( mc ) simulation of radiotherapy is becoming feasible , where a radiation is handled as a collection of particles individually interacting with matter of known composition according to the basic laws of physics . for the modeling of body tissues , schneider , bortfeld , and schlegel ( sbs ) applied the stoichiometric calibration to construct functions to convert ct number to mass density and elemental weights. their conversion functions , though commonly used for research, are not applicable to the other ct systems .calibration and maintenance of the complex one - to - many relations may prevent mc simulation from applying to tp practice . for patient dose calculation , tp systems commonly use a selectable function to convert ct number to effective density of interest .recently , a practical two - step approach was proposed for tp with proton and ion beams, where ct number is only converted to electron density that is automatically converted to interaction - specific effective densities using invariant relations . in the present study, we extend the two - step approach to promote mc - based tp practice .we used the standard body tissue data in icrp publication 110, which is the latest compilation of the kind . in the publicationare mass density and elemental weights for 53 standard tissues that fully comprise 141 organs of reference male and female , for which we derived electron density and mass fraction per person ( 50% male/50% female ) .ignoring tiny mass of air , below 0.90 g/ was only a lung at 0.384 g/ occupying 1.4% of body mass , which we chose for elemental composition of air - containing tissues . in the 0.901.00 g/ regionwere an adipose tissue at 0.95 g/ ( 33.9% ) and medullary cavities including bone marrow all at 0.98 g/ ( 0.4% ) . in the 1.001.07 g/ regionwere a muscle at 1.05 g/ ( 34.5% ) , spongiosa tissues ( 0.3% ) , and many other general organs ( 11.7% ) . in the 1.071.101 g/ region were a skin ( 4.9% ) , a cartilage ( 0.6% ) , and spongiosa tissues ( 0.7% ) , which are miscellaneously epithelium , connective , and fatty bone tissues . in the 1.1011.25 g/ region were only spongiosa tissues ( 5.7% ) .above 1.25 g/ were a mineral bone at 1.92 g/ ( 5.7% ) and a tooth at 2.75 g/ ( 0.1% ) . among the material properties , we adopted mass density as the independent variable and examined its correlation to the dependent variables : electron density and elemental densities of six major elements = \{h , c , n , o , p , ca}. 
to define the regional representative tissues , we took mass - weighted mean for the set of densities over tissue in region by for the standard tissues and the regional representative tissues , we calculated the residual weight and the mean residual atomic number by adding weights and averaging atomic number over residual element , with which the residual mass could be approximately included in mc simulation .compared to muscle / organ tissues , adipose / marrow tissues have high concentration of fat .teeth have high concentration of minerals , connective tissues have high concentration of collagen , and bones are in between them .the concentration varies among and within individual tissues and are generally correlated with density .anatomically , adipose tissues are neighboring to muscles and organs , muscles are connected to bones via connective tissues , and teeth are connected to jaw bones . at these interfaces ,tissue mixing may occur in an image of finite spatial resolution .therefore , we modeled an arbitrary tissue of mass density to be mixture of representative tissues 1 and 2 that comprise a density segment of .the other densities are interpolated from those of the representative tissues by in a mass - weighting manner. in other words , we assigned the representative tissues to the polyline points for conversion from mass density to the other densities . for fatty tissues lighter than the representative adipose / marrow tissue , we extended the soft - tissue region down to 0.90 g/ , which is the mass density of human fat at 37. similarly , we extended the region for air - containing tissues up to 0.80 g/ , leaving a transition segment of 0.800.90 g/ to avoid discontinuity of the material properties and to cope with body fat that escape over the segment boundary as viewed in ct image .we also extended the hard - tissue region up to 3.00 g/ for teeth heavier than their representative . to define the extended boundaries, we applied eq .[ eq:3 ] for extrapolation although , for the fat , the n , p , and ca weights were forced to zero and the c weight was adjusted to sum up to 100% to comply with general composition of fatty acid . for the polyline tissues , we compiled mass fraction , mass and electron densities , elemental and residual weights , and mean residual atomic number .hnemohr _ et al ._ formulated a bi - line relation between mass and electron densities for the body tissues compiled in refs .. from the same dataset , sbs selected their representative tissues for mixing of air / lung , adipose tissue / adrenal gland , small intestine / connective tissue , and marrow / cortical bone, from which we derived their relations between mass density and elemental densities as broken - line functions .we compared our results with those preceding formulations .) and female ( ) tissues plotted with the polyline function ( solid lines ) , the bi - line function ( dotted lines ) and `` adipose 3 '' ( ) at 0.93 g/ by hnemohr_ et al . 
_( 2014) , and embedded subplots for the box - shaped areas.,width=8 ] ) and female ( ) tissues plotted with the polyline functions ( solid lines ) and the broken - line functions ( dotted lines ) according to schneider , bortfeld , and schlegel ( 2000).,width=8 ] ) and female ( ) tissues with thickness varied with residual mass and ( b ) mass - weighted histogram of mean residual atomic number.,width=8 ] rlddddddddddd & tissue type & & & & & & & & & & & + 1 & air & & 0.001 & 0.001 & 0.00 & 0.01 & 75.52 & 23.17 & 0.00 & 0.00 & 1.30 & 18.0 + 2 & lung & 1.4 & 0.384 & 0.380 & 10.3 & 10.7 & 3.2 & 74.6 & 0.2 & 0.0 & 1.0 & 15.9 + 3 & air - containing & & 0.80 & 0.793 & 10.3 & 10.7 & 3.2 & 74.6 & 0.2 & 0.0 & 1.0 & 15.9 + 4 & fat & & 0.90 & 0.907 & 12.09 & 84.39 & 0.00 & 3.52 & 0.00 & 0.00 & 0.00 & + 5 & adipose / marrow & 34.3 & 0.950 & 0.952 & 11.40 & 58.92 & 0.74 & 28.64 & 0.00 & 0.00 & 0.30 & 14.7 + 6 & muscle / organ & 46.5 & 1.049 & 1.040 & 10.25 & 14.58 & 3.20 & 70.87 & 0.21 & 0.02 & 0.87 & 16.8 + 7 & miscellaneous & 6.3 & 1.090 & 1.077 & 9.94 & 20.90 & 3.84 & 63.73 & 0.45 & 0.27 & 0.87 & 15.5 + 8 & spongiosa & 5.7 & 1.137 & 1.116 & 9.30 & 39.15 & 2.22 & 41.71 & 2.36 & 4.60 & 0.66 & 14.9 + 9 & mineral bone & 5.7 & 1.92 & 1.784 & 3.6 & 15.9 & 4.2 & 44.8 & 9.4 & 21.3 & 0.8 & 13.1 + 10 & tooth & 0.1 & 2.75 & 2.518 & 2.2 & 9.5 & 2.9 & 42.1 & 13.7 & 28.9 & 0.7 & 12.0 + 11 & extra tooth & & 3.00 & 2.739 & 1.93 & 8.27 & 2.65 & 41.58 & 14.53 & 30.37 & 0.67 & 11.8 + [ tab:1 ] figure [ fig:1 ] shows the correlation between mass and electron densities , where is the electron density of water .the two fitting functions , the polyline and the bi - line by hnemohr __ , were generally consistent with the standard tissues except around the fat at 0.90 g/ and around the tooth at 2.75 g/ , for which the bi - line function gave 0.892 ( ) and 2.486 ( ) , respectively .the discontinuity in the bi - line at their `` adipose 3 '' was settled by the polyline with better fitting .figure [ fig:2 ] shows the correlation between mass density and elemental densities .the polyline functions and the broken - line functions according to sbs were both generally consistent with the standard tissues except for the c , n , and o densities in the 1.01.1 g/ region .as shown in fig .[ fig:3 ] , the highest and lowest mean residual atomic numbers were 20.6 at 1.04 g/ for the thyroid and 12.0 at 2.75 g/ for the tooth . the global mean of 15.95 approximately corresponded to element s. table [ tab:1 ] shows the resultant material properties of the polyline tissues .a concise yet complete dataset of the polyline tissues may be suited to stoichiometric analysis for ct - number calibration. in the cases where electron density is available , the most likely mass and elemental densities can be determined by eq .[ eq:3 ] with variables replaced to .the resultant mass density and composition including residual weight for element s will constitute a volumetric patient model for mc simulation .the general agreement between the formulations indicates the consistency between the compilations of body - tissue data .the mass - weighting approach of this study took advantage of the publication that focused on the computational phantoms of reference adult male and female rather than on variations in age , physical status , or individual. 
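as an illustration of the two-step conversion, the sketch below interpolates mass density from relative electron density along the polyline of table [ tab:1 ]. it assumes that plain piecewise-linear interpolation between neighboring representative tissues is an acceptable stand-in for the mass-weighted mixing of eq. [ eq:3 ]; the numbers are the mass and electron densities quoted in the table, and the same interpolation applies to the elemental-weight columns.

```python
# Hedged sketch of the second conversion step: relative electron density -> mass density
# along the polyline of table 1.  Assumption: simple piecewise-linear interpolation
# between tabulated representative tissues approximates the mass-weighted mixing of
# eq. (3); values are the mass and electron densities quoted in table 1.
import numpy as np

# polyline points: (mass density [g/cm^3], electron density relative to water)
polyline = np.array([
    (0.001, 0.001),   # air
    (0.384, 0.380),   # lung
    (0.80,  0.793),   # air-containing
    (0.90,  0.907),   # fat
    (0.950, 0.952),   # adipose/marrow
    (1.049, 1.040),   # muscle/organ
    (1.090, 1.077),   # miscellaneous
    (1.137, 1.116),   # spongiosa
    (1.92,  1.784),   # mineral bone
    (2.75,  2.518),   # tooth
    (3.00,  2.739),   # extra tooth
])
rho_m, rho_e = polyline[:, 0], polyline[:, 1]

def mass_density_from_electron_density(pe):
    """Most likely mass density for a given relative electron density (monotonic polyline)."""
    return np.interp(pe, rho_e, rho_m)

def electron_density_from_mass_density(pm):
    return np.interp(pm, rho_m, rho_e)

# example: a voxel whose calibrated relative electron density is 1.05
print(mass_density_from_electron_density(1.05))  # ~1.06 g/cm^3, between muscle/organ and miscellaneous
```

the same one-dimensional interpolation, applied column by column to the elemental weights of table [ tab:1 ], yields the composition needed for the mc patient model.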
the poor fitting in c , n , and o densities was due to undifferentiated inclusion of spongiosa tissues occupying 1% of body mass in the 10.1.1 g/ region .they could potentially be resolved by anatomical identification or independent quantitative imaging , in which case either an extended spongiosa / mineral - bone segment or a separate marrow / spongiosa segment should be applied to them .recently , megavoltage ( mv ) ct and dual - energy ( de ) ct have been investigated for direct electron - density imaging. hnemohr __ formulated the composition of body tissues as a two - variable function of electron density and effective atomic number from dect with improved accuracy. the dect with kv and mv x rays would be ideal. nevertheless , use of mvct or dect will potentially mitigate metal artifact and beam hardening , which limits the accuracy of kvct. in conclusion , we have formulated the invariant polyline relations between mass and electron densities and from mass to elemental densities for body tissues .the formulation enables mc simulation in tp practice without additional burden with ct - number calibration .http://dx.doi.org/10.1088/0031-9155/41/1/009[u .schneider ] , e. pedroni , and a. lomax , `` the calibration of ct hounsfield units for radiotherapy treatment planning , '' phys . med . biol . * 41 * , 111124 ( 1996 ) . http://dx.doi.org/10.1088/0031-9155/48/8/307[n .kanematsu ] , n. matsufuji , r. kohno , s. minohara , and t. kanai , `` a ct calibration method based on the polybinary tissue model for radiotherapy treatment planning , '' phys .biol . * 48 * , 10531064 ( 2003 ) . http://dx.doi.org/10.1118/1.4870980[n .kanematsu ] , y. koba , r. ogata , and t. himukai , `` influence of nuclear interactions in polyethylene range compensators for carbon - ion radiotherapy , '' med . phys . * 41 * , 071704 - 18 ( 2014 ) .http://dx.doi.org/10.1088/0031-9155/60/1/421[t .inaniwa ] , n. kanematsu , y. hara , and t. furukawa , `` nuclear - interaction correction of integrated depth dose in carbon - ion radiotherapy treatment planning , '' phys .* 60 * , 421435 ( 2015 ) .http://dx.doi.org/10.1088/0031-9155/45/2/314[w .schneider ] , t. bortfeld , and w. schlegel , `` correlation between ct numbers and tissue parameters needed for monte carlo simulation of clinical dose distributions , '' phys .* 45 * , 459478 ( 2000 ) .http://dx.doi.org/10.1088/0031-9155/53/17/023[h .paganetti ] , h. jiang , k. parodi , r. slopsema , and m. engelsman , `` clinical implementation of full monte carlo dose calculation in proton beam therapy , '' phys .biol . * 53 * , 48254853 ( 2008).x http://dx.doi.org/10.1088/0031-9155/58/8/2471[a .mairani ] , t. t. bhlen , a. schiavi , t. tessonnier , s. molinelli , s. brons , g. battistoni , k. parodi , and v. patera , `` a monte carlo - based treatment planning tool for proton therapy , '' phys .biol . * 58 * , 24712490 ( 2013 ) . http://dx.doi.org/10.1118/1.3679339[n .kanematsu ] , t. inaniwa , and y. koba , `` relationship between electron density and effective densities of body tissues for stopping , scattering , and nuclear interactions of proton and ion beams , '' med . phys . * 39 * , 10161020 ( 2012 ) . http://dx.doi.org/10.1088/0031-9155/59/22/7081[p .farace ] , `` experimental verification of ion stopping power prediction from dual energy ct data in tissue surrogates , '' physbiol . 
* 59 * , 70817084 ( 2014 ) .http://www.icrp.org/publication.asp?id=icrp%20publication%20110[international commission on radiological protection ] , _ icrp publication 110 : adult reference computational phantoms , _ ann .icrp 39(2 ) , ( icrp , ottawa , 2009 ) .http://dx.doi.org/10.1088/0031-9155/60/11/4243[d .r. warren ] , m. partridge , m. a. hill , and k. peach , `` improved calibration of mass stopping power in low density issue for a proton pencil beam algorithm , '' phys .biol . * 60 * , 42434261 ( 2015 ) . http://dx.doi.org/10.1259/0007-1285-60-717-907[d .r. white ] , h. q. woodard , and s. m. hammond , `` average soft - tissue and bone models for use in radiation dosimetry , '' br . j. radiol .* 60 * , 907913 ( 1987 ) .http://dx.doi.org/10.1118/1.4875976[n .hnemohr ] , h. paganetti , s. greilich , o. jkel , and j. seco , `` tissue decomposition from dual energy ct data for mc based dose calculation in particle therapy , '' med . phys . *41 * , 061714 - 114 ( 2014 ) .http://dx.doi.org/10.1088/0031-9155/45/4/404[k .j. ruchala ] , g. h. olivera , e. a. schloesser , r. hinderer , and t. r. mackie , `` calibration of a tomotherapeutic mvct system , '' phys .* 45 * , n27n36 ( 2000 ) .http://dx.doi.org/10.1016/j.radonc.2011.08.029[g .landry ] , b. reniers , p. v. granton , b. van rooijen , l. beaulieu , j. e. wildberger , and f. verhaegen , `` extracting atomic numbers and electron densities from a dual source dual energy ct scanner : experiments and a simulation model , '' radiother .oncol . * 100 * , 375379 ( 2011 ) . http://dx.doi.org/10.1088/0031-9155/53/9/015[m .bazalova ] , j .- f .carrier , l. beaulieu , and f. verhaegen , `` dual - energy ct - based material extraction for tissue segmentation in monte carlo dose calculations , '' phys .biol . * 53 * , 24392456 ( 2008 ) . http://dx.doi.org/10.1118/1.4901551[m. wu ] , a. keil , d. constantin , j. star - lack , l. zhu , and r. fahrig , `` metal artifact correction for x - ray computed tomography using kv and selective mv imaging , '' med .41 * , 121910 - 115 ( 2014 ) . | * purpose : * for monte carlo simulation of radiotherapy , x - ray ct number of every system needs to be calibrated and converted to mass density and elemental composition . this study aims to formulate material properties of body tissues for practical two - step conversion from ct number . * methods : * we used the latest compilation on body tissues that constitute reference adult male and female . we formulated the relations among mass , electron , and elemental densities into polylines to connect representative tissues , for which we took mass - weighted mean for the tissues in limited density regions . we compared the polyline functions of mass density with a bi - line for electron density and broken lines for elemental densities , which were derived from preceding studies . * results : * there was generally high correlation between mass density and the other densities except of c , n , and o for light spongiosa tissues occupying 1% of body mass . the polylines fitted to the dominant tissues and were generally consistent with the bi - line and the broken lines . * conclusions : * we have formulated the invariant relations between mass and electron densities and from mass to elemental densities for body tissues . the formulation enables monte carlo simulation in treatment planning practice without additional burden with ct - number calibration . |
the experimental setup consists of two erbium doped fiber ring lasers ( edfrls ) coupled with two passive coupling lines .each edfrl has approximately of erbium - doped fiber , the active medium , and approximately of passive single ( transverse ) mode fiber , making the total length of each cavity approximately .the lengths of the cavities are matched within of each other .though the doping density and the lengths of the active media are the same in both lasers , defects and imperfections in the fiber make the lasers nonidentical .the erbium ions in the active medium are pumped with identical semiconductor lasers at a pump power of . the lasing threshold for both lasersis approximately .this results in approximately of light circulating within each ring .an optical isolator is inserted within each ring cavity to ensure unidirectional propagation within the rings .each laser contains 4 fiber - optic evanescent field couplers placed at similar locations in both rings .these consist of two 70/30 couplers which input and output light between the lasers , one 90/10 coupler for monitoring , and a 95/5 coupler as an extra port for applications that are beyond the scope of this paper .the locations of the couplers are shown in figure [ fig : expsetup ] .the ports of the couplers not in use are angle cleaved to ensure that there are no back reflections and were monitored to ensure that light was propagating in the correct direction within the cavities .the lasers are connected via two injection lines , which consist of passive single mode optical fiber , one splitter , and a variable attenuator . in this configuration, we have the ability to monitor and control the injection amplitude between the lasers , through the splitter and variable attenuator respectively , and to observe it on an oscilloscope .the coupling strength is defined to be the ratio of the power of light in the injection line to the power of light in the source ring .( note that both lasers have been adjusted to have the same power . )the lowest coupling strength that can be resolved in our system is 0.0001 .the injection lines are approximately long , corresponding to a travel time between the two lasers of approximately , and again they are matched within . in the experiments presented here , the two coupling strengths are always the same ( symmetric coupling ) , though the electric field from each laser undergoes different phase and polarization changes due to fiber imperfections along their separate paths .the electric field intensity of each laser is monitored using a bandwidth photodetector and a digital oscilloscope .the optical spectrum of each laser is also monitored , and the spectra of the uncoupled lasers are matched to within .we use the model of with the modifications introduced in for coupled fiber ring lasers . 
only a single polarization mode in each laser is considered here .the model is characterized by the total population inversion ( averaged over the length of the fiber amplifier ) and the electric field in each laser .two delay times occur in the model : , the cavity round - trip time , and , the delay in the coupling between lasers .the equations for the model dynamics are as follows : e^{\text{fdb}}_j(t ) \nonumber \\ & & + \xi_j(t ) \label{eequ } \\\frac{dw_j}{dt } & = & q -1-w_j(t ) \nonumber \\ & & -\left|e^{\text{fdb}}_j(t)\right|^2 \left\ { \exp\left[2\gamma w_j(t)\right]-1 \right\ } , \label{wequ}\end{aligned}\ ] ] where is the complex envelope of the electric field in laser , measured at a given reference point inside the cavity . is a feedback term that includes optical feedback within laser and optical coupling with the other laser .time is dimensionless , measured in units of the decay time of the atomic transition , .the active medium is characterized by the dimensionless detuning between the transition and lasing frequencies and by the dimensionless gain , where is the material gain , the active fiber length , and the population inversion at transparency .the ring cavity is characterized by its return coefficient , which represents the fraction of light remaining in the cavity after one round - trip , and the average phase change due to propagation of light with wavelength along the passive fiber of length and index of refraction .energy input is given by the pump parameter .the electric field is perturbed by a complex gaussian noise source with standard deviation .coupling between the lasers is characterized by the coupling strength , which was varied from 0 to 0.014 .the same is used in both lasers ( symmetric mutual coupling ) .values of the parameters in the model , similar to those in the experimental system , are given in table [ tab : parameters ] ..[tab : parameters]parameters used in the coupled fiber laser model . [ cols="^,^,^,<",options="header " , ] eqns .[ eequ]-[wequ ] consist of a delay differential equation for coupled to a map for .we integrated eqn .[ wequ ] numerically using heun s method .the time step for integration was , where .this step size corresponds to dividing the ring cavity into spatial elements . because of the feedback term in eqn .[ eequ ] , one can think of eqn .[ eequ ] as mapping the electric field on the time interval ] in the absence of coupling ( ) .equivalently , because the light is traveling around the cavity , eqn .[ eequ ] maps the electric field at all points in the ring at time to the electric field at all points in the ring at time . 
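the structure of this scheme, a once-per-round-trip map for the field buffer combined with a heun ( predictor-corrector ) update of the averaged inversion, is sketched below. the inversion equation ( eq. [ wequ ] ) is implemented as written, but the round-trip gain/phase factor of eq. [ eequ ], the omitted noise term, the time normalization, and all parameter values are placeholders and should be read as assumptions rather than as the model used here.

```python
# Hedged structural sketch: the field is advanced one round trip at a time as a map
# acting on a delay buffer, while each averaged inversion w is integrated with Heun's
# predictor-corrector method.  `ring_map`, the parameter values, and the choice of
# coupling delay equal to one round trip are assumptions made for illustration only.
import numpy as np

q, gamma_gain, R = 1.3, 2.0, 0.95   # pump, gain, return coefficient (illustrative values)
kappa = 0.005                       # coupling strength
n_steps = 4096                      # integration steps per cavity round trip
dt = 1.0 / n_steps                  # time in round-trip units in this sketch (assumption)

def dw_dt(w, E_fdb):
    # eq. (2): dw/dt = q - 1 - w - |E_fdb|^2 * (exp(2*gamma*w) - 1)
    return q - 1.0 - w - np.abs(E_fdb) ** 2 * (np.exp(2.0 * gamma_gain * w) - 1.0)

def ring_map(E_fdb, w):
    # placeholder for the round-trip gain/phase factor of eq. (1); phase and noise omitted
    return R * np.exp(gamma_gain * w) * E_fdb

rng = np.random.default_rng(0)
E1 = 1e-3 * (rng.standard_normal(n_steps) + 1j * rng.standard_normal(n_steps))
E2 = 1e-3 * (rng.standard_normal(n_steps) + 1j * rng.standard_normal(n_steps))
w1 = w2 = 0.0

for _ in range(50):                              # advance 50 round trips
    # feedback field: own field one round trip ago plus the delayed injection from the
    # other laser (coupling delay set equal to one round trip here -- an assumption)
    E1_fdb, E2_fdb = E1 + kappa * E2, E2 + kappa * E1
    w1_trace, w2_trace = np.empty(n_steps), np.empty(n_steps)
    for i in range(n_steps):
        # Heun (predictor-corrector) step for each averaged inversion
        k1 = dw_dt(w1, E1_fdb[i])
        w1 = w1 + 0.5 * dt * (k1 + dw_dt(w1 + dt * k1, E1_fdb[i]))
        k2 = dw_dt(w2, E2_fdb[i])
        w2 = w2 + 0.5 * dt * (k2 + dw_dt(w2 + dt * k2, E2_fdb[i]))
        w1_trace[i], w2_trace[i] = w1, w2
    # the field equation acts as a map from one round trip to the next
    E1, E2 = ring_map(E1_fdb, w1_trace), ring_map(E2_fdb, w2_trace)
```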
we can thus construct spatiotemporal plots for or the intensity by unwrapping into segments of length .the experimental measurements were of the intensity of light from each laser after passing through a 125 mhz bandwidth photodetector .to correspond with the experiment , we computed intensities from model and applied a low pass filter with mhz , multiplying the fourier transform by the transfer function \right\ } ^{-1}.\ ] ] it should be noted that the time step for integration is an order of magnitude smaller than the smallest timescale allowed by the filtering function .we expect that the results are insensitive to the integration time step as long as it is sufficiently small .test runs with a finer mesh , time steps per round - trip , yielded similar behavior to that described here .all subsequent analysis of the model properties is based on the filtered intensity .in this section we discuss the types of dynamics observed in simulations and the experimental system .after transients have died out , each laser settles into a pattern that is approximately repeated every round - trip . the pattern may shift or change over time intervals of tens or hundreds of round - trips . due tothe presence of noise in the model , even the uncoupled lasers ( ) are not fully periodic for these parameter values .typical spatiotemporal plots and time traces for the model are shown in figure [ fig : st_sim ] .spatiotemporal plots are constructed by displaying each round - trip as a row in the diagram , colored according to the laser intensity , with subsequent round - trips forming subsequent rows .the system displays approximately steady behavior for small coupling ( ) ( fig .[ fig : st_sim]a ) .as the coupling increases ( ) , it then enters a regime where pulsing occurs , in which the intensity alternates between pulsing and dropping to a low value while the inversion builds up ( fig .[ fig : st_sim]b ) .although the spatiotemporal plot may appear more periodic during the pulses , the intensity is actually highly irregular .when the coupling increases further ( ) , the system leaves the pulsing regime and displays complicated behavior . traveling wave solutionsare commonly observed ( e.g. , fig .[ fig : st_sim]c ) . as the coupling increases , the traveling waves become less prominent ( e.g. , fig . [ fig : st_sim]d ) .time traces of the intensity are also shown ( figure [ fig : st_sim]e - h ) for two adjacent round - trips .approximate repetition of the intensity pattern from one round - trip to the next can be seen .spatiotemporal plots for the experiment are shown in figure [ fig : st_exp ] .because the round - trip time is not an integer multiple of the 1 ns sampling time , spacetime plots were constructed by the following procedure .the experimental intensity time trace was expanded to 10 times as many time points by linear interpolation .subsequent round - trips were aligned by shifting the relative position of the rows to maximize correlation between the rows .spatiotemporal plots for the experiment are normalized so that the intensities range from 0 to 1 .the experimental system displays similar behavior to the model . 
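the round-trip alignment used for the experimental space-time plots can be sketched as follows: the trace is up-sampled ten-fold by linear interpolation, cut into rows of one round trip, and each row is circularly shifted to maximize its correlation with the previous row before normalizing the intensities to [ 0 , 1 ]. the use of a circular shift and all names are assumptions of this sketch.

```python
# Hedged sketch of the experimental space-time plot construction described above.
import numpy as np

def spacetime(intensity, samples_per_rt, upsample=10):
    """Rows = round trips, aligned by maximizing correlation with the previous row."""
    t = np.arange(len(intensity))
    fine = np.interp(np.arange(0, len(intensity) - 1, 1.0 / upsample), t, intensity)
    n_per_row = int(round(samples_per_rt * upsample))
    n_rows = len(fine) // n_per_row
    rows = fine[:n_rows * n_per_row].reshape(n_rows, n_per_row)

    aligned = [rows[0]]
    for row in rows[1:]:
        prev = aligned[-1]
        # circular cross-correlation with the previous row for every shift, via FFT
        corr = np.fft.ifft(np.fft.fft(prev) * np.conj(np.fft.fft(row))).real
        aligned.append(np.roll(row, int(np.argmax(corr))))
    out = np.array(aligned)
    return (out - out.min()) / (out.max() - out.min())   # normalize intensities to [0, 1]
```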
when , approximately steady behavior is observed ( figure [ fig : st_exp]a ) .when very small coupling is turned on , higher frequency structures emerge in the spatiotemporal pattern ( figure [ fig : st_exp]b ) .complicated spatiotemporal patterns are seen at stronger coupling ( figure [ fig : st_exp]c ) .pulsing was not observed in this experimental study .however , pulsing in other fiber ring laser systems has been commonly reported in the literature ( e.g. , ) . as in the model ,the experimental time traces show the approximate repetition of the intensity pattern in subsequent round - trips ( figure [ fig : st_exp]d - f ) .we next consider synchronization between the two lasers .because the lasers are mutually delay - coupled , with a delay time , we expect the coupling to cause lag synchronization in the lasers so that or . in figure[ fig : synctimetraces ] , we display time traces of the two lasers with an offset of . for both the experiment and the model , synchronization can be seen with a time shift in _ either _ direction .we studied the synchronization quantitatively using two metrics , the _ cross - correlation _ for amplitude synchronization and the _ mean phase coherence _ for phase synchronization .let denote the correlation computed for laser 1 leading laser 2 and the correlation for laser 2 leading laser 1 .thus for time series of length time points , \left [ i_b(t_j ) -\left< i_b \right > \right ] } , \ ] ] where denotes averaging and are standard deviations in intensity for lasers and .we define and similarly .the phase of a time series is defined as the hilbert phase , computed from the hilbert transform of the intensity .the mean phase coherence for two time series of length is then defined as follows : where is the phase difference between the points of the time series when laser leads laser . ranges between 0 ( no synchronization ) and 1 ( complete phase synchronization ) .the synchronization metrics were calculated for each round - trip , with each laser leading .we divided the time series into round - trips and shifted each short time series of length by the delay in either direction .( hanging ends were omitted , so that only was used in the calculation . ) and were then computed. when analyzing the experimental data , we compensated for the discrete sampling by using linear interpolation to expand the data to 10 times as many time points and then calculating and for a 2.5 ns range of coupling delays ( centered around the estimated ) .the maximal , , , and values for each round - trip ( maximized over possible coupling delays ) were recorded .sample results for phase synchrony are shown in figure [ fig : sync_vs_t ] .the synchronization fluctuates over time , and there are changes in which laser leading gives the maximum synchronization .the amplitude synchronization behaves similarly . 
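for completeness, a hedged implementation of the two per-round-trip metrics, the lagged cross-correlation and the mean phase coherence built from hilbert phases, is given below; variable names are illustrative and scipy is assumed to be available for the hilbert transform.

```python
# Hedged sketch of the per-round-trip synchronization metrics defined above: C12 (laser 1
# leading by the coupling delay), C21, and the corresponding mean phase coherences.
import numpy as np
from scipy.signal import hilbert

def cross_corr(x, y):
    x = x - x.mean(); y = y - y.mean()
    return np.sum(x * y) / (len(x) * x.std() * y.std())

def mean_phase_coherence(x, y):
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))     # 0 = no sync, 1 = complete phase sync

def per_round_trip_sync(I1, I2, rt_len, delay):
    """C12, C21, sigma12, sigma21 for each round trip, shifting by the coupling delay."""
    out = []
    n_rt = (len(I1) - delay) // rt_len
    for k in range(1, n_rt):                      # omit hanging ends near the edges
        s = k * rt_len
        a, b = I1[s - delay:s - delay + rt_len], I2[s:s + rt_len]   # laser 1 leads laser 2
        c, d = I2[s - delay:s - delay + rt_len], I1[s:s + rt_len]   # laser 2 leads laser 1
        out.append((cross_corr(a, b), cross_corr(c, d),
                    mean_phase_coherence(a, b), mean_phase_coherence(c, d)))
    return np.array(out)
```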
to determine the mean amplitude synchronization for a given coupling value , we take the maximum of and for each round - trip and then average over all round - trips .we similarly define the mean phase synchronization over all round - trips .mean synchronizations were computed for each value of the coupling constant .experimental data were averaged over 5 - 10 separate runs of 4500 round - trips each .simulations were averaged over 6 runs of round - trips each ( with runs separated by at least 0.1 s of simulated time to insure independence ) .results are shown in figure [ fig : sync_vs_kappa ] .error bars are the standard deviations in and over all round - trips , so they are a measure of fluctuations in synchrony . in both the model and experiments , amplitude and phase synchronization increase as the coupling increases . however , the shapes of the synchronization vs. coupling curves are different .the experimental system is fairly well synchronized even at the smallest nonzero coupling , while the model synchronizes more gradually .the model also exhibits more fluctuations in synchrony , as exemplified by the larger standard deviations . for both model and experiment , the amplitude synchronization ( cross - correlation )is typically larger than the phase synchronization .we next consider which laser leads the other during synchronization , and switches in the leader / follower . in figure[ fig : sync_vs_t ] , the two curves are the synchrony computed with laser 1 leading ( ) and with laser 2 leading ( ) . examining the figure qualitatively , we see that at certain times the synchrony is clearly greater with one laser leading than with the other ( e.g. , round - trips 1300 - 1900 ) , and at other times it is more difficult to determine which laser is leading ( e.g. , round - trips 2500 - 2900 ) because the difference in synchrony with time shifts in either direction is small compared to the fluctuations in the synchrony .the latter portions of figure [ fig : sync_vs_t ] are enlarged in figure [ fig : sync_vs_t_zoom ] .we develop an algorithm to compute the leader and follower . for each round - trip, we compute the synchronization with either laser leading .let be the difference between the correlations for each round - trip within a run , and let be the standard deviation of the .when for some cutoff factor , we say that the synchronization is substantially greater with one laser leading than with the other , and a clear leader and follower can be identified .leader and follower for phase synchronization are defined similarly .figure [ fig : cutoff ] shows the fraction of round - trips for which a leader and follower can be identified for a range of cutoff factors .similar curves are obtained for the model and experiment for most values of .we select to use as the cutoff factor in this study . for most values of , for both the model and experimental data, the choice leads to approximately 30% of round - trips having an identified leader and follower for amplitude and phase synchronization .after leader and follower have been computed , we next search for switches . the following algorithm is used to identify changes in leader and follower .let be the set of round - trip numbers for which . whenever the sign of is opposite the sign of , a switch is identified at round - trip . 
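the leader/follower bookkeeping described above reduces to a few array operations once the per-round-trip correlations are available. the following sketch takes the per-round-trip difference (correlation with laser 1 leading minus laser 2 leading), declares a clear leader when the difference exceeds one standard deviation of the run, and records a switch whenever the sign flips between consecutive clear round-trips. the synthetic difference series is an illustration, not data from the model or experiment.

```python
import numpy as np

def leaders_and_switches(delta, cutoff_factor=1.0):
    """delta[r] = C12 - C21 for round-trip r.  A clear leader exists when
    |delta| > cutoff_factor * std(delta); a switch is recorded whenever
    the sign of delta flips between consecutive clear round-trips."""
    sigma = delta.std()
    clear = np.flatnonzero(np.abs(delta) > cutoff_factor * sigma)
    signs = np.sign(delta[clear])
    switches = clear[1:][signs[1:] != signs[:-1]]
    return clear, switches

# synthetic illustration: leadership that reverses twice during the run
rng = np.random.default_rng(1)
trend = np.concatenate([np.full(500, 0.3), np.full(500, -0.3), np.full(500, 0.3)])
delta = trend + 0.2 * rng.standard_normal(trend.size)
clear, switches = leaders_and_switches(delta, cutoff_factor=1.0)
print(f"{clear.size} of {delta.size} round-trips with a clear leader; "
      f"switches at round-trips {switches}")
```

with the one-standard-deviation cutoff, only a fraction of the round-trips receive a clear leader label, consistent with the roughly 30% quoted above.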
to obtain statistics over several runs ,let be the total number of round - trips and total number of switches in amplitude synchronization observed .then the average number of round - trips between switches is .we assume square root error in the number of observed switches , leading to an error of in the number of round - trips between switches .switches in phase synchronization are located by the same procedure .an example of switching is given in figure [ fig : sync_vs_t ] and figure [ fig : sync_vs_t_zoom ] : locations of switches identified by our algorithm are marked by arrows .note that switches are defined only when the leader and follower change ; for example , no switch is identified around round - trip 2000 when and come together with no defined leader and then separate with the same leader and follower as before .switches were studied in the model and experiment for a range of coupling strengths , using the same data sets as in section [ sec : synchronization ] .results are shown in figure [ fig : switches ] .we find that the time between switches in phase synchronization decreases as the coupling increases for both the experiment and model .results for the amplitude synchronization are less clear . for the model ,switching in amplitude synchronization follows the same trend as for phase synchronization .however , the experimental system exhibits fewer switches in amplitude synchronization at larger coupling strengths .we compared experimental measurements with model predictions for a pair of mutually coupled fiber ring lasers as the coupling strength was varied .the coupling has similar effect on the dynamics in both model and experiment .quantitative measures of amplitude and phase synchronization were made , the latter by using the hilbert phase .approximately equal synchronization was found for forward and reverse time shifts equal to the coupling delay .we notice that the lasers spend considerable periods of time in synchronized states with no discernable leader or follower , since the synchronization values are very similar for time shifts in either direction .this is in contrast to the mutually coupled system with delay but no optical feedback described in , where leader and follower could be identified when the cross - correlation was computed for a short time series .we defined an algorithm to locate subtle switches of leader / follower between the two lasers .our statistical measures allow a leader and follower to be identified when the synchronization is sufficiently larger in one direction of the time shift . although the trend of increasing synchronization with increasing coupling was consistent between model and experiment , the shape of the synchronization vs. coupling curves differed , with the experimental system synchronizing more quickly .in addition , the fluctuations in synchronization were generally larger in the model .the model we have used includes only one polarization mode in each laser .future extensions to the model should include both polarization modes .this may explain the discrepancy between the experimental observations and computational prediction in the shapes of figure [ fig : sync_vs_kappa ] . 
in ,the authors note that for long time series , forward and reverse time shifts yielded similar cross - correlations , although leader and follower could be identified from cross - correlations of short time series .our cross - correlations were computed over the natural time scale of a round - trip , and similar results are obtained for shorter time series with length equal to the delay time ( data not shown ) .our observation that the system often does not have a clear leader and follower seems robust for reasonable choices of window size for averaging , although further study of appropriate window size may be conducted .the method for choosing the cutoff to identify clear regions of leader and follower is based on an internal definition for a given time series .the choice of the cutoff factor to be one standard deviation is somewhat arbitrary , but preliminary results indicate that the trends of the switching frequency vs. coupling are preserved for different cutoff factors .the algorithm for identifying switches may in the future be applied to other systems in which the difference in synchronization for forward and reverse time shifts is more substantial .we typically observe a higher frequency of switching at higher coupling values .there are very similar trends in the frequency of switches of leader / follower for the phase synchronization in experiment and theory .this effect might be expected since the lasers mutually influence each other more effectively with stronger coupling and thus can respond to each other more quickly .however , we can not explain the differences in corresponding plots for the amplitude synchronization .further study of this phenomenon is needed . in this study , we considered variations in the coupling strengths only .the noise strength for the numerical computations was chosen somewhat arbitrarily .the main guiding principle was to have a reasonable correspondence between the uncoupled lasers in theory and experiment .future studies will consider changes in the noise strength .another parameter to be varied is the detuning between lasers , because its value is not known exactly for the experimental system .the inclusion of a second polarization mode will introduce another set of parameters whose variation may also be studied .we are indebted to jordi garcia ojalvo for developing the original version of the model and the simulation code , and we acknowledge david deshazer s assistance in generating spatiotemporal plots of experimental data .this research was supported by the office of naval research .lbs is currently a national research council post doctoral fellow .ear was supported in part by a national science foundation fellowship . | a pair of coupled erbium doped fiber ring lasers is used to explore the dynamics of coupled spatiotemporal systems . the lasers are mutually coupled with a coupling delay less than the cavity round - trip time . we study synchronization between the two lasers in the experiment and in a delay differential equation model of the system . because the lasers are internally perturbed by spontaneous emission , we include a noise source in the model to obtain stochastic realizations of the deterministic equations . both amplitude synchronization and phase synchronization are considered . we use the hilbert transform to define the phase variable and compute phase synchronization . we find that synchronization increases with coupling strength in the experiment and the model . 
when the time series from two lasers are time - shifted in either direction by the delay time , approximately equal synchronization is frequently observed , so that a clear leader and follower can not be identified . we define an algorithm to determine which laser leads the other when the synchronization is sufficiently different with one direction of time shift , and statistics of switches in leader and follower are studied . the frequency of switching between leader and follower increases with coupling strength , as might be expected since the lasers mutually influence each other more effectively with stronger coupling . * the main goal of this paper is to explore the synchronization of mutually delay - coupled spatiotemporal systems and develop techniques to identify the evolving phase relationships between them . we illustrate our ideas in the specific case of mutually coupled fiber ring lasers , which exhibit a remarkable ability to adapt their dynamics so that much of the time there is no clear leader or follower , even though signals propagate from one system to the other with an appreciable time delay . the lasers are described by a system of stochastic delay differential equations , so as to include the effect of spontaneous emission in the erbium doped fiber amplifiers that constitute the active media . we use statistical measures to study phase and amplitude synchrony in model and experimental delay - coupled systems from their time series . * one fascinating area of dynamics in nature is the study of how systems respond to each other due to their interactions . when systems behave dynamically , interactions between them may cause the systems to operate in a similar , or coherent , manner . if the measured signals are similar enough over time , the correlated motion is termed synchronized . there now exist several excellent reviews on synchronization dynamics in the literature , such as refs . , and they cover a wide range of applications from many fields of science . in general , synchronization between interacting systems may be quantified by examining and comparing the output time series from each dynamical system . although the dynamics may be considered for the general coupling between systems , in this paper we restrict the number of dynamical systems to . then we may consider only two coupling schemes ; either mutual ( bidirectional ) , or unidirectional . suppose denote the vector output time series measured from each dynamical system . then several types of synchronization may be classified depending on the type of coherence measure . the systems are in complete synchronization if . complete synchronization occurs in coupled phase oscillators as well as in coupled chaotic oscillators . in this case , amplitudes and phases are identical . if the amplitudes are uncorrelated but the phases are locked , or entrained , between the two signals , then the systems are said to be in phase synchrony . one other type of synchronization deals solely with the unidirectional coupling between two oscillators of drive and response type , and is termed generalized synchronization . in generalized synchronization , there exists a functional relationship between the drive and response , where there might exist a function such that in a more general setting , this may also be thought of as a generalized entrainment in dynamics , whereby one system is entrained functionally to another . 
many examples of entrained systems occur in singularly perturbed problems , and specifically in systems with highly different time scales . if there is a parameter mismatch or noise in the dynamical systems , complete synchronization may not be possible , and other measures of synchronization are needed . one possible example of phase synchronization occurs when the amplitudes are correlated but locked in phase at a value other than zero . such a system exhibits lag synchronization when for some we have that is , the two outputs of the coupled dynamical systems appear shifted in time with respect to one another . for mutually coupled chaotic systems such as rossler attractors that are mismatched in frequency , lag synchronization is one of the routes to complete synchrony as coupling is increased . it should be noted that lag synchronization may occur without the presence of delay in the coupling terms . on the other hand , if delay is introduced into the coupling terms to model finite time signal propagation , then synchronized behavior may still occur . when there is a clear time lag between the delay - coupled dynamics , the systems are said to exhibit achronal synchronization . achronal synchronization exhibits a clear leading time series which is followed by a lagging time series . heil _ et al . _ showed the existence of achronal synchronization in a delay - coupled semiconductor laser experiment , as well as in a single mode model of the delay - coupled lasers in which stochastic effects modeling spontaneous emission are included . the time series shows a clear leader with delay equal to the coupling delay time . other groups have also considered leader - follower synchronization in single - mode semiconductor equations . one of the non - intuitive facets of interacting systems which synchronize with delay is that of anticipation in systems with short time delay . first observed in unidirectionally coupled systems , in contrast to lag synchronization , anticipatory synchronization occurs when a response in a system s state is replicated not simultaneously but anticipated by the response system . an example of anticipation in synchronization is found in coupled semiconductor lasers . here , the author followed ref . in the design of a unidirectional coupling arrangement , in which two single mode semiconductor lasers with delayed optical feedback and delayed injection coupling were modeled . cross - correlation statistics between the two intensities showed clear maxima at delay times consisting of the difference between the feedback and the coupling delay . anticipatory responses in the presence of stochastic drives , equally applied to transmitter and receiver , have been observed in models of excitable media as well . given that both lag and anticipatory dynamics may be observed in delay - coupled systems which are deterministic , it is natural to ask whether the systems may exhibit coexisting features . in single mode mutually coupled lasers , this is indeed the case . in fact , in the absence of noise ( the authors include a noise term in their model , but turn it off for the simulations ) , switching between leading and following state is observed . in a similar model with spontaneous emission included as a noise source , theory and experiment have exhibited achronal synchronization , with switching between leader and follower . the authors conjecture that for semiconductor lasers , changes in leader - follower roles may occur during sharp dropouts of the laser intensity . 
in the above discussion , it is clear that the role of signal propagation time is important in the dynamics of leader and follower in coupled systems . when noise is sufficiently large , it is not easy to detect changes in leader and follower statistically . this is especially the case when the systems are stochastic and close to a synchronized state . in ref . , the authors compute statistics on the phase differences and show that they might correspond to a random walk . in this paper , we develop methods to extract statistical behavior of the switches in leader and follower in mutually coupled fiber lasers . the chaotic dynamics of fiber ring lasers have been studied in the past . an experiment on the coupling between polarization modes was set up and modeled using delay differential equations in . other experiments on synchronization with fiber lasers have been reported in , and noise - induced generalized synchronization in fiber ring lasers has been reported in . modeling the ring laser yields a system of equations which consists of coupled difference and differential delay equations . to obtain better agreement with experiment , it was found that inclusion of spontaneous emission effects was necessary in the modeling , which resulted in a stochastic difference - differential system of equations , and it is this approach we follow here . for mutual coupling of two ring lasers with delay , we analyze both experimental results and a delay differential equation model of the system . synchronization with delay occurs and increases with coupling strength . generally , approximately equal synchronization is observed between the two lasers when time - shifted in either direction by the delay time , so that a clear leader and follower can not be identified . we define an algorithm to determine which laser leads the other when the synchronization is sufficiently different , and statistics of switches in leader and follower are studied . we introduce the experimental setup and corresponding model in sec . [ sec : experiment]-[sec : model ] . the dynamics of the system are outlined in sec . [ sec : dynamics ] . synchronization of the coupled lasers is discussed in sec . [ sec : synchronization ] and switching of leader and follower in sec . [ sec : switching ] . |
in this paper we consider the following statistical problem : upon observing a high - dimensional vector , one is interested in detecting the presence of a sparse , possibly structured , correlated subset of components of the vector .such problems emerge naturally in numerous scenarios . the setting is closely related to gaussian signal detection in gaussian white noise , on which there is an extensive literature surveyed in . in image processing ,textures are modeled via markov random fields , so that detecting a textured object hidden in gaussian white noise amounts to finding an area in the image where the pixel values are correlated .similar situations arise in remote sensing based on a variety of hardware .a related task is the detection of space time correlations in multivariate time series , with potential applications to finance .we investigate the possibilities and limitations in problems of detecting correlations in a gaussian framework .we may formulate this as a general hypothesis testing problem as follows .an -dimensional gaussian vector is observed . under the null hypothesis ,the vector is standard normal , that is , with zero mean vector and identity covariance matrix . to describe the alternative hypothesis ,let be a class of subsets of , each of size , indexing the possible `` contaminated '' components .one wishes to test whether there exists an such that where is a given parameter .equivalently , if denotes the vector of observations , then where denotes the identity matrix and we write for the probability under ( i.e. , the standard normal measure in ) and , for each , for the measure of .the goal of this paper is to understand for what values of the parameters reliable testing is possible .this , of course , depends crucially on the size and structure of the subset class .we consider the following two prototypical classes : * _ -intervals ._ in this example , we consider the class of all intervals of size of the form modulo aesthetic reasons .( we call such an interval a _ -interval_. ) this class is the flagship of _ parametric _ classes , typical of the class of objects of interest in signal processing . * _ -sets . _ in this example , we consider the class of all sets of size , that is , of the form where the indices are all distinct in .( we call such a set a _ -set_. ) this class is the flagship of _ nonparametric _ classes , and may arise in multiple comparison situations .our theory , however , applies more generally to other classes , such as : * _ -hypercubes ._ in this example , the variables are indexed by the -dimensional lattice , that is , , so that the sample size is , and we consider the class of all hyper - rectangles of the form interval modulo fixed size .this class is the simplest model for objects to be detected in images ( mostly in applications ) . *_ perfect matchings ._ suppose is a perfect square with .the components of the observed vector correspond to edges of the complete bipartite graph on vertices and each set in corresponds to the edges of a perfect matching .thus , . in this example has a nontrivial combinatorial structure . *_ spanning trees ._ in another example , and the components of correspond to the edges of a complete graph on vertices and every element of is a spanning tree of . as usual , a _ test _ is a binary - valued function . 
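to make the testing problem concrete, the following sketch draws one observation under the null and one under the alternative for the two flagship classes, k-intervals and k-sets. the contaminated block is drawn from the equicorrelated covariance described above; the dimensions and the value of the correlation are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_null(n):
    """One observation under the null: X ~ N(0, I_n)."""
    return rng.standard_normal(n)

def sample_alternative(n, S, rho):
    """One observation under the alternative: the components indexed by S
    have unit variance and pairwise correlation rho; the rest are i.i.d. N(0,1)."""
    k = len(S)
    cov = (1.0 - rho) * np.eye(k) + rho * np.ones((k, k))
    x = rng.standard_normal(n)
    x[np.asarray(S)] = rng.multivariate_normal(np.zeros(k), cov)
    return x

def random_k_interval(n, k):
    """A k-interval {i, i+1, ..., i+k-1}, taken modulo n."""
    return (rng.integers(n) + np.arange(k)) % n

def random_k_set(n, k):
    """A uniformly random k-subset of {0, ..., n-1}."""
    return rng.choice(n, size=k, replace=False)

n, k, rho = 512, 16, 0.5
x_null = sample_null(n)
x_interval = sample_alternative(n, random_k_interval(n, k), rho)
x_set = sample_alternative(n, random_k_set(n, k), rho)
print(x_null.std(), x_interval.std(), x_set.std())
```

note that the marginal variances are the same under both hypotheses; only the dependence inside the hidden set distinguishes them, which is what makes the problem delicate.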
if , then the test accepts the null hypothesis ; otherwise is rejected by .we measure the performance of a test based on its _ worst - case risk _ over the class of interest , formally defined by we will derive upper and lower bounds on the _ minimax risk _ a standard way of obtaining lower bounds for the minimax risk is by putting a prior on the class and obtaining a lower bound on the corresponding _ bayesian risk _ , which never exceeds the worst - case risk .because this is true for any prior , the idea is to find one that is hardest ( often called _ least favorable _ ) .most classes we consider here are invariant under some group action : -intervals are invariant under translation and -sets are invariant under permutation .invariance considerations ( , section 8.4 ) lead us to considering the uniform prior on , giving rise to the following _ average risk _ : where and is the cardinality of .the advantage of considering the average risk over the worst - case risk is that we know an optimal test for the former , which , by the neyman pearson fundamental lemma , is the likelihood ratio test , denoted . introducing for all , the likelihood ratio between and be written as and the optimal test becomes note that .the ( average ) risk of the optimal test is called the _ bayes risk _ and it satisfies note that , with the only exception of the case of spanning trees , in all examples mentioned above , the minimax and bayes risks coincide , that is , .this is again due to invariance ( , section 8.4 ) .( the class of spanning trees is not sufficiently symmetric for this equality to hold .however , as we will see below , even in this case , and are of the same order of magnitude . )we focus on the case when is large and formulate some of the results in an asymptotic language with though in all cases explicit nonasymptotic inequalities are available . of course, such asymptotic statements only make sense if we define a sequence of integers and classes .this dependency in will be left implicit . in this asymptotic setting , we say that _ reliable _ detection is possible ( resp . , impossible ) if ( resp . , ) as . in this paperwe assume that , under the alternative hypothesis , the correlation between any two variables in the `` contaminated '' set is the same .while this model has a natural interpretation ( see lemma [ lemrepresent ] below ) , it is clearly a restrictive assumption .this simplification is in understanding the fundamental limits of detection ( i.e. , in obtaining lower bounds on the risk ) . at the same time, the tests we exhibit also match these lower bounds under more general correlation structures , such as that said , dealing with more general correlation structures remains an interesting and important challenge , relevant in the detection of textured objects in textured background , for example .the vast majority of the literature on detection is concerned with the detection of a signal in additive ( often gaussian ) noise , which would correspond here to an alternative where for , where is the ( per - coordinate ) signal amplitude .we call this the _ detection - of - means _ setting .the literature on this problem is quite comprehensive .indeed , the detection of -intervals and -hypercubes is treated extensively in a number of papers ; see , for example , . a more general framework that includesthe detection of perfect matchings and spanning trees is investigated in , and the detection of -sets is studied in . 
in the literature on detection of parametric objects , the phrase `` correlation detection '' usually refers to the method of _ matched filters _ , which consists of correlating the observed signal with signals of interest .this is not the problem we are interested in here . while the problem of _ detection - of - correlations _ considered here is mathematically more challenging than the detection - of - means setting ,there is a close relationship between the two . the connection is established by the representation theorem of stated here for the case gaussian random variables . [ lemrepresent ]let be standard normal with for .then there are i.i.d .standard normal random variables , denoted , such that for all .thus , given , the problem becomes that of detecting a subset of variables with nonzero mean ( equal to ) and with a variance equal to ( instead of 1 ) .this simple observation will be very useful to us later on .when is random , the setting is similar to that of detecting a gaussian process ( here equal to for , and equal to 0 otherwise ) in additive gaussian noise .however , the typical setting assumes that the gaussian process affects all parts of the signal . in our setting, the signal ( the subset of correlated variables ) will be sparse .since we only have one instance of the signal , the problem can not be considered from the perspective of either multivariate statistics or multivariate time series .if indeed we had multiple copies of , we could draw inspiration from the literature on the estimation of sparse correlation matrices , from the literature on multivariate time series , or on other approaches ; but this is not the case as we only observe .closer in spirit to our goal of detecting correlations in a single vector of observation is the paper of , which aims at testing whether a gaussian random field is i.i.d . or has some markov dependency structure .their setting models communication networks and is not directly related to ours .it transpires , therefore , that in the detection - of - correlations setting plays a role analogous to in the detection - of - means setting .while this is true to a certain extent , the picture is quite a bit more subtle .the detection - of - means problem for parametric classes such as -intervals is well understood . in such cases , needs to be of order at least for reliable detection of -intervals to be possible .this remains true in the detection - of - correlations setting , and the _ generalized likelihood ratio test _ ( _ glrt _ ) is near - optimal , just as in the detection - of - means problem ; see , for example , .our inspiration for considering -sets comes from the line of research on the detection of sparse gaussian mixtures .very precise results are known on that make detection possible and optimal tests have been developed , such as the `` higher criticism '' .in fact , the recent paper deals with heteroscedastic instances of the detection - of - means problem where the variance of the anomalous variables may be different from 1 .for example , it is known that , when [ resp . , , needs to be of order at least [ resp ., for reliable detection of -sets to be possible , and the test based on ( resp . , ) is near - optimal .though more precise results are available when , these can not be translated immediately to our case via the representation theorem of lemma [ lemrepresent ] . 
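the representation lemma is easy to check numerically, and it makes the link to the detection-of-means problem explicit: conditionally on the common factor, the contaminated coordinates are independent gaussians with mean proportional to that factor and variance 1 - rho. a minimal verification, with arbitrary k and rho:

```python
import numpy as np

rng = np.random.default_rng(1)

k, rho, n_samples = 8, 0.4, 200_000
u = rng.standard_normal((n_samples, 1))               # common factor U
z = rng.standard_normal((n_samples, k))               # independent N(0,1)
x = np.sqrt(rho) * u + np.sqrt(1.0 - rho) * z         # the representation of the lemma

emp = np.corrcoef(x, rowvar=False)
print("diagonal      ~ 1  :", np.round(np.diag(emp)[:3], 3))
print("off-diagonal  ~ rho:", np.round(emp[0, 1:4], 3))
# conditionally on U = u, each X_i is N(sqrt(rho)*u, 1 - rho): given U, the task
# becomes a detection-of-means problem with shrunken variance, as used below.
```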
as a bonus ,we show that the glrt is clearly suboptimal in some regimes see theorem [ thmglrt - bad ] .note that in the detection - of - means problem it is not known whether the glrt has any power .this paper contains a collection of positive and negative results about the detection - of - correlation problem described above . in section [ seclower ]we derive lower bounds for the bayes risk .the usual route of bounding the variance of the likelihood ratio , that is very successful in the detection - of - means problem , leads essentially nowhere in our case .instead , we develop a new approach based on lemma [ lemrepresent ] .we establish a general lower bound for the bayes risk in terms of the moment generating function of the size of the overlap of two randomly chosen elements of the class .this quantity also plays a crucial role in the detection - of - means setting and we are able to use inequalities worked out in the literature in various examples . in section [ secupper ]we study the performance of some simple and natural tests such as the squared - sum test based on , the generalized likelihood ratio test ( glrt ) and a goodness - of - fit ( gof ) test , as well as some variants .we show that , in the case of parametric classes such as -intervals and -hypercubes , the glrt is essentially optimal .the squared - sum test is shown to be essentially optimal in the case of -sets when is large , while the glrt is clearly suboptimal in this regime .this is an interesting example where the glrt fails miserably .when is small , detection is only possible when is very close to .we show that a simple gof test is near - optimal in this case .the analysis of tests such as the squared - sum test and the glrt involves handling quadratic forms in .this is technically more challenging than the analogous problem for the detection - of - means setting in which only linear functions of appear ( which are normal random variables ) .in this section we investigate lower bounds on the risk , which are sometimes called information bounds .first we consider the special case when contains only one element as this example will serve as a benchmark for other examples .then we consider the standard method based on bounding the variance of the likelihood ratio under the null hypothesis , and show that it leads nowhere .we then develop a new bound based on lemma [ lemrepresent ] that has powerful implications , leading to fairly sharp bounds in a number of examples . as a warm - up , and to gain insight into the problem ,consider first the simplest case where contains just one set , say . 
in this case , the alternative hypothesis is simple and the likelihood ratio ( neyman pearson ) test may be expressed by this follows by the fact that which is easy to check by straightforward calculation .the next simple lemma helps understand the behavior of the bayes risk .[ lemqf ] under , is distributed as and under the alternative , it has the same distribution as where and denote independent random variables with degrees of freedom and , respectively .if denotes a standard normal vector , then under , the quadratic form is distributed as , and under the alternative , it has the distribution of , since is distributed as .now , observe that for any symmetric matrix with eigenvalues , the quadratic form has distribution this follows simply by diagonalizing and using the rotational invariance of the standard normal distribution .the lemma follows from this simple representation and the fact that has eigenvalue with multiplicity , with multiplicity , and the eigenvalue with multiplicity .now it is straightforward to analyze the bayes risk . in particular, we immediately have the following : [ prpsimple ] if is a singleton , if and only if .similarly , if and only if .suppose .it suffices to show that there exists a threshold such that and .we use lemma [ lemqf ] and the fact that , by chebyshev s inequality , for any sequence , and the fact that we choose and define . then under the null , and under the alternative , setting , we then conclude with the fact that , for large enough , . if is bounded , the densities of the test statistic under both hypotheses have a significant overlap and the risk can not converge to .the proof of the second statement is similar .clearly , the role of is immaterial in this specific example as the optimal test ignores all components whose indices are not in . when the class contains more than one element , the likelihood ratio with uniform prior on is given by ( [ l ] ) . a common approach for deriving a lower bound on the bayes risk is via an upper bound on the _ variance _ of under the nullindeed , by the cauchy schwarz inequality , - 1}}{2}.\ ] ] therefore , an upper bound on -1 = { \operatorname{var}}_0(l(x)) ] unless .the implications are rather insubstantial .it only shows that , when with fixed , the bayes risk does not tend to zero .as we shall see , this lower bound is grossly suboptimal , except in the case where is a singleton ( as in section [ secsimple ] ) or does not grow in size with .a refinement of this method consists in bounding the first and second _ truncated _ moments of , again under the null hypothesis .for example , this is the approach used in in the detection - of - means setting for the case of -sets to obtain sharp bounds .unfortunately , in our case this method only provides a useful bound when the class is not too large ( i.e. , has size polynomial in ) while it does not seem to lead anywhere in the case of -sets .the computations are quite involved and we do not provide details here , as we were able to obtain a more powerful general bound that applies to both -intervals and -sets .this is presented in the next section . in this sectionwe derive a general lower bound for the bayes risk . as in the detection - of -means problem , the relevant measure of complexity is in terms of the moment generating function of the size of the overlap of two randomly chosen elements of . 
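for the singleton class the behaviour stated in the proposition can be seen directly by monte carlo. the sketch below uses the squared-sum statistic (the sum over the known set, squared, divided by k), which is chi-square with one degree of freedom under the null and (1 + (k-1) rho) times such a variable under the alternative; with a threshold placed between the two scales, the estimated risk decreases as the product k*rho grows. this particular statistic and threshold are illustrative choices, not the exact quadratic form and threshold analysed in the proof.

```python
import numpy as np

rng = np.random.default_rng(2)

def risk_squared_sum(k, rho, threshold, n_trials=10_000):
    """Monte-Carlo risk (type I + type II error) of the test that rejects
    when (sum_{i in S} X_i)^2 / k exceeds `threshold`, for a single known
    contaminated set S of size k with pairwise correlation rho."""
    z0 = rng.standard_normal((n_trials, k))                        # null block
    u = rng.standard_normal((n_trials, 1))                         # common factor
    z1 = np.sqrt(rho) * u + np.sqrt(1 - rho) * rng.standard_normal((n_trials, k))
    t0 = z0.sum(axis=1) ** 2 / k
    t1 = z1.sum(axis=1) ** 2 / k
    return np.mean(t0 > threshold) + np.mean(t1 <= threshold)

for k, rho in [(10, 0.1), (100, 0.1), (1000, 0.1)]:
    thr = np.sqrt(1 + (k - 1) * rho)      # between the null and alternative scales
    print(f"k*rho = {k * rho:6.0f}   estimated risk = {risk_squared_sum(k, rho, thr):.2f}")
```

the risk decays only slowly with k*rho for this crude choice of threshold, but the trend toward zero is visible, in line with the proposition.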
in the detection - of - means setting ,this is a consequence of bounding the variance of the likelihood ratio .we saw in section [ secmoment ] that this method is useless here .instead , we make a connection between the two problems using lemma [ lemrepresent ] .[ thmlower ] for any class and any , where and , with drawn independently , uniformly at random from . in particular , taking , where .the starting point of the proof is lemma [ lemrepresent ] , is as described in distribution . ] which enables us to represent the vector as where are independent standard normal random variables .we consider now the alternative , defined as the alternative given .let , , [ resp ., , , be the risk of a test , the likelihood ratio , and the optimal ( likelihood ratio ) test , for versus [ resp ., versus ] .for any , , by the optimality of for versus . therefore , conditioning on , [ is the expectation with respect to . ]using the fact that for all , we have } { { \mathbb{e}}}_0 } { { \mathbb{e}}}_0 \\ & \geq & { { \mathbb{p}}}\{|u|\le a\}\biggl(1 - \frac12 \max_{u \in[-a , a ] } \sqrt{{{\mathbb{e}}}_0l_u^2(x ) - 1}\biggr).\end{aligned}\ ] ] since we get it is easy to check that which implies which concludes the proof .we now apply theorem [ thmlower ] to a few examples .the theorem converts the problem into a purely combinatorial question and offers various estimates for the moment generating function of which we may use for our purposes .consider first the simplest case when contains disjoint sets of size .[ cordisjoint ] let be the class of all sets of size . if then the bayes risk satisfies , and if or if .clearly , the size of the overlap of two randomly chosen elements of equals zero with probability and with probability .thus , which is bounded by if .the first part then follows from the second part of theorem [ thmlower ] . for the second part ,we need to find such that .( note that in this case the upper bound above tends to zero . )first assume that . in that case , , so it suffices to take slowly enough that .next assume that . in this case , we have , and we simply choose slowly enough that .consider the class of all -intervals .the situation is similar to that of nonoverlapping sets .( in fact , since this class of -intervals contains ] into bins of length , denoted .let be the bin counts thus , we are computing a histogram .then consider the following gof test : where is some threshold .[ prpgof ] consider the class of all -sets in the case where and . in the gof testabove , choose such that . when in ( [ model - general ] ) , the resulting test with threshold has worst - case risk tending to zero .bernstein s inequality , applied to the binomial distribution , gives that .\ ] ] this and the union bound imply that , indeed , consider now an alternative of the form ( [ model - general ] ) , with denoting the anomalous set .let though the set is random , by lemma [ lemclose ] and the fact that , we have that define the event for some .note that , since the variance of is bounded by 1 , .define .on , using a simple taylor expansion , we have where denotes the standard normal density function and is taken sufficiently large . therefore ,when and hold , at least of the anomalous s fall in an interval of length at most .since such an interval is covered by at most bins , by the pigeonhole principle , there is a bin that contains anomalous s . by bernstein s inequality, the same bin will also contain at least nonanomalous s ( with high probability ) , so in total this bin will contain points . 
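the combinatorial quantity entering the lower bound, the overlap of two independent uniform draws from the class, is easy to tabulate numerically. the sketch below estimates its moment generating function for k-sets (where the overlap is hypergeometric) and for k-intervals taken modulo n; the argument of the exponential is left as a free parameter, since the exact expression appearing in the theorem is not reproduced here.

```python
import numpy as np
from scipy.stats import hypergeom

rng = np.random.default_rng(3)

def mgf_overlap_ksets(n, k, t, n_draws=200_000):
    """E[exp(t*Z)] for Z = |S ∩ S'|, S and S' independent uniform k-subsets
    of {1,...,n}; Z is hypergeometric(n, k, k)."""
    z = hypergeom(n, k, k).rvs(size=n_draws, random_state=rng)
    return np.exp(t * z).mean()

def mgf_overlap_kintervals(n, k, t, n_draws=200_000):
    """Same for two uniform k-intervals taken modulo n: the overlap is
    max(k - d, 0) with d the circular distance between the starting points."""
    starts = rng.integers(n, size=(n_draws, 2))
    d = np.abs(starts[:, 0] - starts[:, 1])
    d = np.minimum(d, n - d)
    return np.exp(t * np.maximum(k - d, 0)).mean()

n, k, t = 10_000, 100, 0.05
print("k-sets     :", round(mgf_overlap_ksets(n, k, t), 4))
print("k-intervals:", round(mgf_overlap_kintervals(n, k, t), 4))
```

the two classes give markedly different overlap statistics (large overlaps are far more likely for intervals than for unstructured k-sets), which is the structural information the lower bound converts into detection thresholds.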
by our choice of , , so it suffices to choose slowly enough that still .then , with high probability , there is a bin with more than points . ignoring logarithmic factors , we are now able to state the following : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the gof test is near - optimal for detecting -sets in the regime where and . _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ when , things are somewhat different .there , the gof test requires that , which is still close to optimal when , but far from optimal when is bounded ( e.g. , when , the exponent is 4 instead of 2 ) .indeed , when , needs to be chosen larger than , and bernstein s inequality is not accurate .instead , we use the simple bound note that bennett s inequality would also do . ( the analysis also requires some refinement showing that , with probability tending to 1 under the alternative , one cell contains at least points . ) note that in the remaining case , , the glrt is optimal up to a logarithmic factor , since it only requires that , as seen in section [ secglrt - nonparametric ] .we do not know whether a comparable performance can be achieved by a test that does not have access to ._ when is unknown ._ in essence , we are trying to detect an interval with a higher mean in a poisson count setting .as before , it is enough to look at dyadic intervals of all sizes , which can be done efficiently as explained earlier , following the multiscale ideas in .the proof is divided into three steps .the first step formalizes the fact that we want to prove that ( under ) , the contaminated set has no influence ( with high probability ) on the glrt statistic .the second step exhibits a useful high probability event .finally , in the third step we show that on this high probability event , the contaminated set has no influence on the glrt .it can easily be seen that for every of size , introduce the function defined by for . denoting , for and , the vector of components of belonging to by , we may write the glrt as note that by the symmetry of and the test , given , define the coupling as follows : for , and are independent for .note that . then ,no matter what the threshold is , we have in the following we show that , with probability tending to , we have which then implies that the glrt is asymptotically powerless . by lemma [ lemrepresent ] , there exist independent standard normal such that for all , using the fact that with high probability , with probability tending to 1 , we have ,\ ] ] where and is any sequence such that .fix to be determined later and define where . by the fact that are i.i.d .standard normal , , so that if .when is bounded away from , this is the case if . 
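the goodness-of-fit test described above amounts to scanning histogram bins for an excess count. the following sketch uses a bin width of a few times sqrt(1 - rho), since that is the spread of the contaminated values around their common centre, and a crude bernstein-style allowance on the excess over the expected null count in each bin. the constants, the bin width, the sample sizes and the correlation level are illustrative choices, not the ones derived in the proof.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

def gof_scan(x, bin_width, c=6.0):
    """Reject H0 if some bin's count exceeds its expected count under the
    standard normal by more than c*sqrt(expected) + c (a rough
    Bernstein-style deviation allowance)."""
    edges = np.arange(x.min(), x.max() + bin_width, bin_width)
    counts, _ = np.histogram(x, bins=edges)
    expected = x.size * np.diff(norm.cdf(edges))
    excess = counts - expected
    allowance = c * np.sqrt(np.maximum(expected, 1.0)) + c
    return bool(np.any(excess > allowance)), float(excess.max())

n, k, rho = 100_000, 1_000, 0.999            # sparse set, correlation close to 1
x_null = rng.standard_normal(n)
x_alt = rng.standard_normal(n)
u = rng.standard_normal()
idx = rng.choice(n, size=k, replace=False)
x_alt[idx] = np.sqrt(rho) * u + np.sqrt(1 - rho) * rng.standard_normal(k)

w = 3 * np.sqrt(1 - rho)                     # contaminated values spread ~ sqrt(1-rho)
print("null       :", gof_scan(x_null, w))
print("alternative:", gof_scan(x_alt, w))
```

the contaminated values pile up in one or two bins around their common centre, so an overfull bin betrays the alternative even though each individual value looks perfectly ordinary.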
in conclusion, we proved that the event has a probability that tends to if as long as is bounded away from 1 .we specify .note that , as required , exceeds and is bounded away from 1 .assume that we are on the event .first note that \\[-8pt ] & = & k ( k-1 ) \zeta^2 ( 1 - \rho\gamma^2 ) , \nonumber\end{aligned}\ ] ] and the same holds for .let be such that .we want to show that there exists such that .this entails that , since for we have .first remark that we can assume that since otherwise by ( [ eqminordered ] ) we can simply take . to simplify notation , we may assume that . by definition of and the fact that contains at least one index in , there exist such that and do not appear in .we want to show that by replacing by either or , in , one increases the value of .more precisely , we want to show that then by induction one can show the existence of the described above .note that , for and , consider the case where ( the case can be dealt with similarly ) . since , it suffices to show that , which follows from this concludes the proof .we thank omiros papaspiliopoulos for his illuminating remarks and the anonymous referees for challenging us to obtain stronger results in the sparse setting and for pointing out a mistake in proposition [ prpgof ] . | we consider the hypothesis testing problem of deciding whether an observed high - dimensional vector has independent normal components or , alternatively , if it has a small subset of correlated components . the correlated components may have a certain combinatorial structure known to the statistician . we establish upper and lower bounds for the worst - case ( minimax ) risk in terms of the size of the correlated subset , the level of correlation , and the structure of the class of possibly correlated sets . we show that some simple tests have near - optimal performance in many cases , while the generalized likelihood ratio test is suboptimal in some important cases . , . |
in order to study harmonic fields in two dimensions , very powerful techniques have been developed based on complex analysis . among these , we will use in the following conformal maps , which allow to relate the geometry we wish to study ( i.e. a semi - infinite domain limited by a rough interface ) , to a regular one as schematically illustrated in figure ( [ fig : method ] ) .0.8 > as usual , we will identify a point in the plane , with the complex number .we note the complex conjugate of . the two variables and can be treated as independent variables instead of and .a mapping from the complex plane onto itself is simply defined as a complex function of and which transforms one point of the complex plane into another point .for the mapping to be of physical interest it has to be bijective in a domain of interest , and thus inversible .the mapping is _ conformal _ if the function is _ holomorphic _ , i.e. it only depends on and not on .it can be shown that in this case , local angles are preserved in the transformation apart from singular points and hence the term `` conformal '' . moreover ,the real part and the imaginary part of any such holomorphic function are both harmonic , i.e. . the latter property results from the expression of the laplacian operator in terms of the variables and : two obvious choices are well suited for our geometry : i ) a semi - infinite plane , ii ) the unit circle , .these two domains can be related by the transformation , and thus they are basically equivalent . since the boundary we consider is periodic in the -direction , the mapping to the unit circle is well suited . however , in the following , we will rather use the mapping to the half - plane , since it corresponds directly to the `` reference '' problem where the roughness vanishes .the domain of interest , denoted by is limited by a rough interface which is a periodic function of of period . the conformal map , , is a function of which associate one point of the reference domain , or , to another point in . from now on in order to distinguish the initial and the image domain, we will note a point in the image plane , and keep the notation for the initial plane unless otherwise mentioned . before specifying the particular form of the boundary ,it is possible to guess an adequate form for these transformations .let us first consider the mapping from the half - plane to . as tends to , the mapping should approach the identity , , since the roughness of the boundary is not expected to play any significant role at a large distance ( compared to the period ) from the boundary .we introduce the function such that functions of the form with real , thus appears to be natural candidates for .they are indeed periodic functions of , and vanish exponentially as goes to when . moreover , in order to satisfy the same periodicity as we require that where is an integer .thus we propose as a writing of the transformation the following decomposition without loss of generality we will set for all the rest of this study .the rough boundary is to be identified with the image of the axis , so that obeys the parametric equations the corresponding transformation from the unit disk to the domain can be obtained from the above form ( [ map ] ) and the tranformation from the disk to the semi - infinite plane . the resulting transformation reads where has been used . 
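the single-mode case can be evaluated directly as a sketch of the construction: with one real coefficient a, the image of the real axis is the cycloid-like curve (x + a cos x, -a sin x), which is smooth for a < 1, develops cusps at a = 1 and self-intersects beyond. the few lines below evaluate the boundary image and the maximum of |omega'(z) - 1| on the axis for a few amplitudes, using the convention omega(z) = z + sum over k of omega_k exp(-ikz) adopted in the text.

```python
import numpy as np

def boundary(omega, x):
    """Image (X, Y) of the real axis under z -> z + sum_k omega_k e^{-ikz};
    omega[k-1] holds the coefficient of e^{-ikz}."""
    g = sum(wk * np.exp(-1j * (k + 1) * x) for k, wk in enumerate(omega))
    return x + g.real, g.imag

def max_slope(omega, x):
    """Maximum over the axis of |omega'(z) - 1|; the map is injective when
    this stays below 1 (the sufficient condition of the text)."""
    dg = sum(-1j * (k + 1) * wk * np.exp(-1j * (k + 1) * x)
             for k, wk in enumerate(omega))
    return np.max(np.abs(dg))

x = np.linspace(0.0, 2 * np.pi, 2001)
for a in (0.5, 1.0, 1.2):
    X, Y = boundary([a], x)
    print(f"a = {a}:  max|omega'-1| = {max_slope([a], x):.2f},  "
          f"min dX/dx = {np.gradient(X, x).min():+.2f}")   # negative => overhang / loop
```

for a = 1.2 the abscissa X(x) is no longer monotonic, which is the loop around the cusp point mentioned below for profiles exceeding the slope limit.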
evidently , the image of the unit circle provides the same parametric form as ( [ boundary ] ) .the form of the transformation being imposed , one needs to check that the transformation is bijective : a point should have a single image , and an image point a unique parent .this condition does impose some restriction on the transformation .it can be rephrased simply for the transformation ( [ map ] ) as for all . in principle , it is sufficient to impose this condition only in the strict interior of the domain .if on the boundary , a kink may appear at this point . in the following , we will assume that the interface is smooth at a small scale so that poles are forbidden on the boundary .as an example , if the transformation is simply then the condition ( [ inversible ] ) reduces to or . for this maximum value ,the image of the -axis is a cycloid , with cusp points .figure ( [ fi : cycloid ] ) illustrates this limit case . 0.8the above presented transformation is only useful for a particular application if the transformation can be computed , once the boundary is imposed .this section is devoted to this problem .the algorithm that we have developed generates the transformation very efficiently .different numerical techniques applied to computing the map from arbitrary closed domains to the unit disk can be found in ref. .our algorithm can be shown to be related to the jacobi method used in these studies .we define the rough boundary as a single valued real function such that is the equation of the boundary . in other words ,the boundary is given by the parametrised form . to comply with the framework we have chosen here, we use in the following a 2 -periodic . from the particular form of the transformation , we expand the real and imaginary parts on the boundary \\h(u)= & \mathop{\im m}\left[\sum_k \omega_k e^{-ikx}\right ] \end{array } \right.\ ] ] if were equal to , the second equation would be close to a fourier transform expression of the function . more precisely rewriting the last equation as \ ] ]we see that the coefficient can be computed from the fourier transform of .the difficulty is that is _ a priori _ unknown .however , we note that if the roughness is small enough , say of order , can be written as .therefore , identifying with is a zeroth order approximation . from the latter ,the coefficient can be computed by the fourier transform of .this provides a first order approximation of , from which an improved estimate of can be obtained , by taking the fourier transform of , i.e. a non - uniform sampling of the profile .iterating this scheme is the basis of our algorithm .we will omit for the time being the prerequisite on the amplitude of the roughness .we will return to this point by considering the stability of the algorithm .the intermediate quantities appearing at the iteration will be labelled with a superscript .we also formulate the algorithm directly in discrete terms suited for a numerical implementation . 
in the remainder of this article, all functions will be decomposed over a set of discrete values .the number of fourier modes will thus be limited to .we first introduce a series of sampling points with which is initially set to an arithmetic series .the sampling of by the gives the array the discrete fourier transform of this array is the complex valued array for .the latter is shortly written as \ ] ] where denotes the fourier transform , which will be chosen as the fast fourier transform ( fft ) algorithm , thus imposing that is an integer power of 2 .the intermediate mapping is computed from the as the latter form is obtained from the identification of eq.([eqboom]b ) and the definition of , taking care of the fact that one sum is over positive index , while the other extends over the interval $ ] .then , one computes the series this linear transformation is shortly noted as \ ] ] where is the above detailed transformation .the form of is dictated by eq.([eqboom]a ) for positive index , and from the fact that the inverse fourier transform of ( see below ) is real .the new sampling series is finally obtained from \label{eqalgend}\ ] ] the equations ( [ eqalgbeg]-[eqalgend ] ) define one step in the algorithm relating to .briefly we note this step .the searched function is clearly a fixed point of the transformation defined above in a discretized version .the uniqueness of the transformation results from that of the harmonic field in the domain with an equipotential condition on the boundary and a constant gradient perpendicular to the boundary at infinite distance from it . therefore, the only condition to consider is the stability of the fixed point .let us assume that we have an approximate solution of the transformation , from which we compute the series .all intermediate quantities computed from the exact solution are denoted by a superscript .following one complete iteration of the algorithm , we obtain the following expressions \\b = & b^*+{\cal g}[a - a^ * ] \\ ( \delta u)^{\prime}= & { \cal f}^{-1}[b - b^ * ] \end{array}\ ] ] where a taylor expansion of has been used to estimate the values of and where indices are omitted when unnecessary .the resulting difference after one cycle is thus \ ] ] let us introduce the norm parseval s theorem relates the above norm in real and fourier spaces according to in a similar fashion , the transformation does not affect the norm using the two previous results , we can estimate the norm of as therefore , if the absolute value of the slope of the objective profile satisfies , for all , then the fixed point is attractive for the transformation .it should however be noted that the number of modes should be large enough so that the perturbation should be small enough to legitimate the taylor expansion of used in the stability analysis . in practice, the convergence is very fast provided the sufficient condition ( [ eqcondcv ] ) is fulfilled .moreover it has to be noted that one step in the algorithm requires a rather limited amount of computing time of order ( i.e. as for a fft operation ) . considering that this computation gives the solution of an harmonic problem in a semi - infinite domain, this cost appears to be extremely low .when our algorithm is applied to a simple monochromatic sine ( or cosine ) profile , , it turns out that as soon as the condition ( [ eqcondcv ] ) is violated ( i.e. 
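the complete iteration can be written in a dozen lines of numpy. the sketch below follows the scheme described above: sample the objective profile at the current abscissas, take an fft to update the coefficients omega_k (only the exp(-ikx), k >= 1, components are kept, which assumes a zero-mean profile), and recompute the abscissas from the real part of the series. the bookkeeping of the intermediate arrays differs slightly from the notation of the text, but it implements the same fixed point, and each iteration costs o(n log n) as noted above.

```python
import numpy as np

def compute_map(h_obj, n_modes=256, n_iter=40):
    """Fixed-point iteration for the coefficients omega_k of
    z -> z + sum_{k>=1} omega_k e^{-ikz} mapping the real axis onto the
    2*pi-periodic profile y = h_obj(x).  Assumes a zero-mean profile with
    max |h_obj'| < 1, the convergence condition derived above."""
    N = n_modes
    u = 2.0 * np.pi * np.arange(N) / N           # uniform parameter points
    x = u.copy()                                 # zeroth-order abscissas
    omega = np.zeros(N, dtype=complex)
    for _ in range(n_iter):
        a = h_obj(x)                             # sample objective at current x_j
        hhat = np.fft.fft(a) / N                 # a_j = sum_m hhat_m e^{+i m u_j}
        omega[:] = 0.0
        omega[1:N // 2] = 2j * np.conj(hhat[1:N // 2])   # Im part matches a_j
        g = np.fft.fft(omega)                    # g_j = sum_k omega_k e^{-i k u_j}
        x = u + g.real                           # updated abscissas
    return omega, x, g.imag                      # coefficients, abscissas, ordinates

# sine profile with amplitude below the |slope| < 1 convergence threshold
amp = 0.6
omega, x, y = compute_map(lambda s: amp * np.sin(s))
err = np.sqrt(np.mean((y - amp * np.sin(x)) ** 2)) / amp
print(f"normalized rms error: {err:.1e}")
print("first |omega_k|:", np.round(np.abs(omega[1:6]), 4))
```

even though the objective profile contains a single fourier mode, several coefficients omega_k come out nonzero, which is the point made with the power spectra below.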
) the scheme is unstable , and a loop begins to appear around the origin where the slope exceeds 1 .thus the sufficient condition is also a necessary condition .the limit can simply be broken if one uses an under - relaxation scheme .the optimum determination of the under - relaxation parameter , or the use of other algorithms can be found in ref. for mapping arbitrary domains on the unit disk .the transposition of these algorithms to our problem can be worked out in details .other ways to break this limit is to decompose the transformation in two ( or more ) substeps .suppose one could map the real axis onto an intermediate profile using a first transformation and then the intermediate profile onto the objective one using a second transformation .the combination of the two transformations is then the searched mapping . by breaking the problem into two steps , it is possible that each step can be handled by the above presented algorithm , while the combination of the two giving a profile having a slope larger than unity .the difficulty here is to device a suited intermediate step .one could consider for example filtering the initial objective profile so that the filtered profile may fulfill the slope constraint .we did not investigate this extension any further .we present in the following calculations of conformal transformations associated with a simple sine interface .we will use a norm on the error similar to the one introduced in the previous subsection .the distance between the objective profile and the calculated one is defined as follows : ^ 2 dx\ ] ] it is convenient to make this distance dimensionless , normalising it by the amplitude of the profile , where the objective profile has the equation .it is worth noting that the problem is far from being as simple as it might appear on the surface . in real spaceone single fourier mode is sufficient to entirely characterise the interface .the transformation however requires much more modes .figure ( [ fi : pssine ] ) shows the power spectrum of the series , for different amplitudes . 0.7the convergence of the algorithm is shown at play on figure ( [ fi : cvsine ] ) , where the obtained profile obtained after the first few iterations are shown . in this particular example andthe number of modes is 32 .0.7 the importance of allowing for enough fourier modes is also illustrated by considering the minimum error obtained as a function of as shown in figure ( [ fi : ersine ] ) .( for this particular study we did not resort to a fft algorithm to handle any value of ) . for observe that about 20 modes are necessary to reach the single precision used in the computation .as the amplitude increases , the number of modes needed to reach a small enough error becomes larger and larger .in the description of rough surfaces and interfaces , some recent progress has been achieved by recognising some scaling invariance properties which has been observed in a number of real surfaces , and has been shown to result naturally in a number of growth models .recent reviews have covered this field . 
due to the different roles played by the directions normal and parallel to the surface , the scaling invariance when applicable involves different scale factors depending on orientation , a property called self - affinity .we consider here only two dimensional media so that the boundary is self - affine if it remains ( statistically ) invariant under the transformation for all values of .the exponent is called the `` hurst '' or roughness exponent .it is characteristic of the scaling invariance . from this property , we derive easily that where is a prefactor . it is noteworthy that the self - affinity property does not involve the scaling of any measure .however , studying the scaling of the length of the curve , two regimes are revealed . for large distances , larger than a scale ,the curvilinear length of the profile is simply proportional to the projected length along the axis , hence on can identify a trivial fractal dimension equal to 1 . on the other hand , for distances smaller than , the arc length scales in a non - trivial fashion with the projected length .this allows to define a fractal dimension equal to .the cross - over scale between these two regimes is such that the typical slope of the profile is 1 , i.e. using the notations of eq.([eqsa ] ) , once a roughness profile has been measured , a very convenient way to check the self - affinity is to compute the power spectral density ( psd ) of the profile . in the case of a self - affine profile of exponent , the psd is expected to have the following behaviour : it is important to stress that the approach developed in this article is not specific to self - affine boundaries .however , being given the practical importance of such boundaries , and the expected generality of scaling results , we will essentially focus on self - affine boundaries as practical applications of the concepts developed in the framework of harmonic field in the vicinity of rough boundaries . in view of the form of the transformation , and of the previous scaling , eq.([eqsaps ] ) , we introduce a particular set of transformation : let us choose where are random gaussian variable with 0 mean and unit variance for the real and imagnary part independently , we can write = 1 + a. \mathop{\im m } \left [ \sum_k \epsilon_k k^{1/2 - \zeta } e^{-ikx } \right]\ ] ] then for a given set of , we can define a maximum amplitude such that the mapping is bijective : }}}\ ] ] this method gives a short way to generate directly transforms which image the real axis to a self - affine interface shape .this approach is useful to study generic properties of self - affine boundaries .when the amplitude is small enough , , and thus the series is equal to the fourier transform of the profile .the transformation sends the real axis onto a periodic function whose power - spectrum is of the form eq.([eqsaps ] ) .when the amplitude increases , the first iteration of the algorithm turns out to be rather approximative . in order to show that the power spectrum of is not significantly altered by further steps , we show in figure ( [ fi : pssyntsa ] ) the power spectrum of as compared to the initial zeroth order approximation .the result have been obtained from an average over 100 profiles with 2048 modes each .we see that the synthetic generation of the transform does not modify the power spectrum of coefficients . 
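A hedged sketch of this synthetic construction: the mapping coefficients are drawn directly with the k^(-1/2-ζ) weighting described above, and the image of the real axis is then evaluated to obtain the corresponding self-affine boundary. The amplitude convention, the random-number generator and the helper name are assumptions of ours.

```python
import numpy as np

def synthetic_self_affine_map(n_modes=2048, zeta=0.8, amplitude=0.02, seed=0):
    """Directly generate the coefficients of a mapping
    omega(w) = w + sum_{k>0} c_k exp(i k w) whose image of the real axis is a
    (statistically) self-affine boundary: c_k = amplitude * eps_k * k^(-1/2-zeta)
    with independent complex Gaussian eps_k, so that the boundary power
    spectrum decays as ~ k^(-1-2*zeta).  Prefactor and random-number
    conventions are assumptions of this sketch."""
    rng = np.random.default_rng(seed)
    k = np.arange(1, n_modes // 2)
    eps = rng.normal(size=k.size) + 1j * rng.normal(size=k.size)
    ck = amplitude * eps * k ** (-0.5 - zeta)
    x = 2.0 * np.pi * np.arange(n_modes) / n_modes
    series = np.exp(1j * np.outer(x, k)) @ ck           # sum_{k>0} c_k e^{ikx} on the axis
    abscissa = x + np.real(series)                      # image of the real axis ...
    height = np.imag(series)                            # ... and the boundary it traces
    return ck, abscissa, height

ck, u, h = synthetic_self_affine_map()
psd = np.abs(ck) ** 2                                   # prescribed spectrum ~ k^(-1-2*zeta)
```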
Therefore, we can directly generate mappings which transform the real axis into a periodic boundary that is self-affine, with any prescribed roughness exponent, for distances smaller than the period. Such a construction may appear artificial in the sense that the rough boundary is not imposed but, on the contrary, results from the choice of the mapping. It is nevertheless useful, as will be shown later, because it allows one to study generic properties of harmonic fields close to self-affine boundaries. The alternative way consists in using the mapping-construction algorithm. We analysed in a previous section the convergence of the algorithm applied to the special case of a sine profile; we now consider the case of a self-affine boundary in a similar fashion. This interface has been calculated in real space with 64 modes, and we have used 256 modes in the conformal transformation. The standard deviation of the height distribution defines the amplitude of the profile, and a fixed roughness exponent is chosen for this example. From eq.([eqampmax]), we note that the maximum admissible amplitude decreases as the number of modes increases. This is natural since, as the lower cut-off of the scaling regime decreases, the self-affine function tends toward a continuous but non-differentiable curve; the distribution of local slopes is indeed expected to get wider and wider as the number of modes increases. It should be noted that, as the number of modes increases, the standard deviation of the height does not increase, and that these conclusions are drawn under the hypothesis that the longest wavelength remains fixed. Alternatively, if the smallest cut-off and the amplitude of the corresponding mode were kept constant while increasing the number of modes, then the maximum amplitude would remain constant. As in the previous example (the sine profile), figure ([fi:exsyntsa]) shows an example of the conformal map obtained close to the maximum standard deviation that the algorithm could handle without diverging. We can see on this figure that the major differences between the objective and calculated profiles occur in areas where the local slope is maximum; for roughness amplitudes greater than the convergence threshold, loops appear in these areas. The convergence speed, the sensitivity to the number of modes allowed in the determination of the map, and the evolution of the minimum error all behaved, for these self-affine profiles, in a qualitatively similar way as for the simple sine profile.
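The dependence of the admissible amplitude on the number of modes can be probed numerically. The sketch below estimates the largest amplitude for which the synthetic map remains one-to-one on the axis, using the sufficient criterion that the real part of ω' stays positive; this criterion, the sampling density and the function name are our assumptions.

```python
import numpy as np

def max_bijective_amplitude(zeta=0.8, n_modes=256, n_x=2048, seed=0):
    """Estimate of the largest amplitude A for which the synthetic self-affine
    mapping w -> w + A*sum_k eps_k k^(-1/2-zeta) exp(i k w) stays one-to-one
    on the real axis, using the (sufficient) criterion Re[omega'(x)] > 0.
    Intended only to reproduce qualitatively the decrease of the admissible
    amplitude with the number of modes discussed in the text."""
    rng = np.random.default_rng(seed)
    k = np.arange(1, n_modes)
    eps = rng.normal(size=k.size) + 1j * rng.normal(size=k.size)
    x = 2.0 * np.pi * np.arange(n_x) / n_x
    slope_term = np.imag(np.exp(1j * np.outer(x, k)) @ (eps * k ** (0.5 - zeta)))
    return 1.0 / np.max(slope_term)

for n in (64, 256, 1024):
    print(n, max_bijective_amplitude(n_modes=n))        # amplitude shrinks as n grows
```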
0.7in the following , we show that the knowledge of such a conformal transformation allows to solve immediatly harmonic problems .we essentially focus here on the case where the field is assumed to be uniform far from the boundary .this is a typical case as soon as the roughness is of small amplitude .this can be seen as an asymptotic expansion focusing here on the small scale details of the interface , whereas the matching with the far field can be done using a field whose variation is small on the scale of the roughness amplitude .we will now focus on two problems : perfectly conducting boundary so that the potential gradient is normal to the boundary , and perfectly insulating boundary where the potential gradient is parallel to the surface .since we know how to taylor mappings which image the real axis on a generic self - affine boundary , this gives us a key to consider the scaling features of harmonic fields in the vicinity of self - affine boundaries .harmonic fields are encountered very frequently in nature .linear transport involving scalar fields , where the flux is proportional to the field gradient plus a conservation law in the absence of sources , and in steady conditions , , imply the harmonic nature of the field , .heat diffusion obeying fourier s law , gives an harmonic temperature field in steady condition .mass diffusion with fick s law is a similar example with the concentration field .electric conduction with ohm s law , viscous flow in confined two dimensional hele - shaw cells , vorticity in stokes flow , ... constitutes a partial list of possible applications . in this part , for the sake of concreteness, we use the case of thermal conduction .we are interested in the temperature field in the region limited by the rough interface .let us first consider the case of a perfectly conducting interface , so that for each point of the boundary .we impose in the far field a homogeneous unit flux of heat .the problem to solve is the knowledge of allows then to define , image field of in the smooth domain : .as and in , the resolution of ( [ eqpblap ] ) in is thus equivalent to : the frontier being the real axis , we have immediately the solution in : and then the solution in is : \ ] ] figure ( [ fi : isotherm ] ) shows a set of isotherm curves close to a self - affine isotherm boundary .these lines become smoother and smoother when the distance to the electrode increases .the morphology of these isotherm lines have some interesting features .if denotes the distance to the boundary , one can observe from the form of the mapping that modes with a wavelength smaller than will be damped whereas longer wavelength modes will only be slightly decreased .therefore , the isotherm curves will be similar to the profile up to a low pass filtering .in the case of a self - affine boundary , the isotherms will preserve the self - affine character with the same exponent , but their lower cut - off will increase as the distance to the actual boundary , up to the distance of order of the largest wavelength . 0.7let us now study the temperature gradient .quite generally , we can write the gradient in the complex plane as from the expression of the temperature field , we have from the expression of the function , we see that at a large distance from the rough boundary , the term vanishes exponentially. therefore , one recovers the imposed condition for the temperature at infinity i.e. . 
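As an illustration, the following sketch evaluates the isotherms and the surface gradient once the mapping coefficients are known, assuming the mapping is written as ω(w) = w + Σ_{k>0} a_k e^{ikw} and the far-field normalisation T → Im(w); the argument layout and helper name are ours.

```python
import numpy as np

def isotherms_and_surface_gradient(ak, y_levels, n_x=1024):
    """For a mapping omega(w) = w + sum_{k>0} a_k exp(i k w) of the upper
    half-plane onto the region bounded by the rough profile, return the images
    of the lines Im(w) = const (the isotherms of the perfectly conducting
    problem with unit flux at infinity, since T = Im[omega^{-1}(z)]) and the
    modulus of the surface temperature gradient, |grad T| = 1/|omega'(w)| on
    Im(w) = 0.  Conventions follow our reading of the text."""
    ak = np.asarray(ak, dtype=complex)
    x = 2.0 * np.pi * np.arange(n_x) / n_x
    k = np.arange(1, len(ak) + 1)
    omega = lambda w: w + np.exp(1j * np.outer(w, k)) @ ak
    omega_prime = lambda w: 1.0 + np.exp(1j * np.outer(w, k)) @ (1j * k * ak)
    isotherms = [omega(x + 1j * y) for y in y_levels]        # curves of constant T
    grad_surface = 1.0 / np.abs(omega_prime(x + 0j))         # |grad T| on the boundary
    return isotherms, grad_surface

# nearly monochromatic example of small amplitude (a single a_1 coefficient)
curves, g = isotherms_and_surface_gradient([0.2j], y_levels=[0.0, 0.5, 1.0, 2.0])
```

As described above, the curves returned for increasing Im(w) are progressively low-pass-filtered versions of the boundary, and the surface gradient varies strongly with the local topography.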
on figure ( [ fi : hgrat ] ) we have presented on the same graph the profile of a rough electrode and the modulus of the temperature gradient .one may see quite easily that the field is very large ( small ) in the deepest ( highest ) areas .this field depends naturally both on the local topography and on its remote environment .the connection between the field and the local topography can be analysed through cross - correlations as will be done in a following section .0.7 the perfectly insulating boundary is the other archetypical problem whose expression is the solution to this problem can simply be obtained from the previous using duality properties of the harmonic field .the real part of the previous solution gives the answer to the problem .\ ] ] the temperature gradient is then simply we have seen previously that once we know the conformal mapping able to transplant the half complex plane onto the rough domain , we have immediatly the solution of the electrical potential near the rough electrode . if the roughness amplitude remains below the convergence threshold , we are now able to solve this problem for any kind of boundary . in practice , very often , one does not worry about the details of the rough interface .as we have seen most perturbations die away from the boundary exponentially fast .therefore , knowing the longest wavelength of the boundary gives the scale away from the boundary where the field becomes homogeneous .this means practically , that if one is interested only in the far field , one could replace the rough interface by a straight one so that the far field is unperturbed .the question we want to address in this section is the following : where should the `` equivalent '' straight interface be located so as to match the asymptotic far field ?a zeroth order guess is to place it at the geometrical average of the height distribution .this will be shown not be an exact answer , in the following , we call the distance between the equivalent position and the geometrical average . in order to illustrate the problem , let us imagine the following experiment .let us consider an electrolytic bath , where the electrical field is homogeneous between two opposite electrodes and at a distance from each other .the electrical resistance of the set - up is measured .then , as illustrated in figure ( [ fi : elect ] ) we place in the middle of the bath and parallel to the electrodes a rough plane of negligible thickness which is a good conductor , so that it can be considered as an equipotential .we measure again the electrical resistance of the set - up , which is now reduced to .what is the value of ?so as to answer this question , we part the system in two , and .each of these two problems corresponds to the situation described in the introduction of this section . extrapolating the field from electrode , we find an offset .similarly from we obtain a different offset , so that , ignoring the details of the perturbed field in the vicinity of , the rough electrode will appear as being equivalent to a plane electrode of thickness .this `` electrical thickness '' has nothing to do with the real thickness of the plane considered here to be zero .if the rough electrode has the shape of a sine function , of amplitude and wavelength we will argue below that . finally it is a simple matter to relate the resistance drop to this effective thickness through . 0.7 we now revert to the notation of the previous paragraph , and deal with the temperature instead of the voltage . 
for distances away from the rough boundary much greater than , ( our longest wavelength ) ,all exponential terms die out , and hence the far - field can be written where is the constant term in the function .the off - set position of the equivalent isotherm is thus let us first analyse the problem for a small amplitude sine boundary of amplitude and wavelength .the offset in the location of the equivalent straight boundary is to be normalised by to obtain a dimensionless quantity .the latter should be a function of the dimensionless ratio .taylor expansion of this function provides the perturbation expansion a simple argument allows to simplify the latter equation .suppose one would analyse the problem for the profile of amplitude .the latter is obtained from the former by a translation along the axis by an amount .thus should be unchanged .this imposes that odd terms in the expansion should vanish , hence thus the dominant correction is of order .it can be interpreted as the product of the amplitude and a typical slope .this result holds in the limit of a small amplitude and long wavelength .if the wavelength goes to zero , clearly the offset should converge to the amplitude , but the latter limit can not be obtained from the above taylor expansion in the small parameter . for a sine profile of small amplitude it is possible to carry out the computation of the coefficient .we briefly sketch here the solution .the potential is to be computed to second order in .we revert as in the previous sections to a wavelength .the solution reads \vspace{0.15 cm } \\ & & & \displaystyle + \mathop{\im m}\left [ i\frac{a^2}{2 } \right]+ { \cal o}(a^3 ) \end{array}\ ] ] the offset can be read from this equation as . reincorporating the dependence ,we arrive at or this last result is of course only valid for small values , being bounded by . on figures ( [ fi : eqboh - a ] ) and ( [ fi : eqboh - l ] ) , we can see comparisons between this perturbative calculation and the result directly obtained by conformal transformation . we observe an excellent agreement for small amplitudes ( resp .large wavelengths ) and then the perturbative calculation overestimates for larger values of the amplitude ( resp . smaller wavelength ) . in view of the upper bound on the offset and the above perturbation expansion , we propose the following form which fits the data accurately as can be seen on figs .( [ fi : eqboh - a ] ) and ( [ fi : eqboh - l ] ) , and which reproduces both limiting behaviors and . 0.7 0.7 the question we ask is now how does the result translate to a rough profile ? in particular for a self - affine profile , there is no characteristic length scales apart from the cut - offs .the product of the amplitude times the slope is a scale dependent factor .is it possible to reach quantitative conclusions for such profiles ? in order to estimate for a rough boundary , we use the formalism developed for introducing the algorithm .we expand the function as well as all other intermediate quantities in series of the profile amplitude . 
using the linearity of the transformations , and ,we arrive at =\displaystyle{\frac{\mathop{\re e}[a_0]}{2n } } = { \frac 1{2n}}\sum_jh_j^{*}\\ & \displaystyle = { \frac 1{2n}}\sum_jh^{\prime } ( u_j)\ { \cal f}^{(-1)}\circ { \cal g}\circ { \cal f}[h_j ] \end{array}\ ] ] up to third order terms in the amplitude .we now need an asymmetric version of parseval s theorem .let us compute the integral for two arbitrary arrays defined in real space for and fourier space for = & \displaystyle{\frac 1{2n}}\sum_j\sum_ku_jv_ke^{-ikj } \\= & \displaystyle{\frac 1{2n}}\sum_k\overline{{\cal f}[u]}_kv_k \end{array}\ ] ] the offset can now be expressed as }_k\ { \cal g}\circ { \cal f}[h(x)]_k\ k \\ = & \displaystyle-{\frac 1{4n^2}}\left ( \sum_{k>0}\tilde h_k\overline{\tilde h_k}\ k-\sum_{k<0}\overline{\tilde h_k}\tilde h_k\ k\right ) \\= & \displaystyle-{\frac 1{2n^2}}\sum_{k>0}|\tilde h(k)|^2\ k \end{array}\ ] ] where is the fourier transform of . figure ( [ fi : eqbosa ] ) gives the evolution of with the amplitude of self - affine profiles of roughness exponent , and 16 modes .again , we observe that the above given expression ( [ eqhexpress ] ) is accurate for small amplitude , but shows deviations for larger amplitudes . 0.7 it is interesting to consider the scaling of observed from the generic transformations where are postulated to be .the expectation value of the offset reads to dominant order in the amplitude where the extra factor of 2 comes from the expectation value of are independent gaussian variables of zero mean and unit variance . depending on the value of the roughness exponent cases are to be distinguished . for a _ `` persistent '' _ profile i.e. the sum in eq.([eqhsasca ] ) is dominated by the smallest , i.e. the longest wavelength , and thus the scaling of can be expressed as where is the riemann zeta function .we have quitted momentarily the convention that the largest wavelength is , hence and are respectively the smallest and largest cut - off lengths in the profile . in this case , , the amplitude , is such that the largest wavelength mode amounts to .let us introduce the standard deviation of the profile given by which leads ( using parseval s theorem ) to to the scaling eq.([eqsa ] ) .eq.([eqhsasca ] ) can then be expressed as the latter equation simply means that the rough profile behaves as a the simple monochromatic profile .this conclusion is however not always valid as is shown in the following case . for a _ `` anti - persistent '' _ profile i.e. the sum in eq.([eqhsasca ] ) is dominated by the largest , i.e. the shortest wavelength , in contrast to the previous persistent case . therefore , we can express the scaling of in an intrinsic fashion as in contrast to the persistent case , it appears that the offset is dependent on the lower cut - off scale of the profile .in fact if is kept fixed , the standard deviation grows as .therefore , one sees that the upper scale cut - off disappears , so that only depends on . in order to see this more clearly, we introduce another measure of the roughness which is sensitive to the small scale .let be the norm of the derivative of : which amounts to from the latter norm , the offset can be written as which is the counterpart of eq.([eqpers ] ) for the antipersistent case . as a conclusion , the scaling of the offset controlled by the shortest ( resp .longest ) scale cut - off of the self - affine regime for anti - persistent ( resp .persistent ) boundaries . 
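A small numerical check of this spectral expression is sketched below, in our sign and period conventions (uniform sampling over one period taken as 2π, continuous Fourier coefficients c_k). To second order in the amplitude it should reduce, for a monochromatic profile, to an offset of order the amplitude times the typical slope, which we take to be πa²/λ for a sine of amplitude a and wavelength λ.

```python
import numpy as np

def equivalent_plane_offset(h):
    """Small-amplitude (second-order) estimate of the offset between the
    'equivalent' straight boundary and the geometrical mean line, from the
    spectrum of the periodic profile h sampled on a uniform grid over one
    period of length 2*pi:  d = 2 * sum_{k>0} k * |c_k|^2, with c_k the
    complex Fourier coefficients of h.  Sign and period conventions are ours."""
    n = len(h)
    ck = np.fft.rfft(h - np.mean(h)) / n          # one-sided coefficients c_k, k >= 0
    k = np.arange(len(ck))
    return 2.0 * np.sum(k * np.abs(ck) ** 2)

# sanity check against the expected monochromatic result d ~ pi*a^2/lambda
a, lam = 0.1, 2.0 * np.pi
x = np.linspace(0.0, lam, 4096, endpoint=False)
print(equivalent_plane_offset(a * np.cos(2.0 * np.pi * x / lam)), np.pi * a ** 2 / lam)
```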
in a preceeding section, we have extracted the expression of the temperature gradient as a function of the transformation .we now use it to investigate the correlations between the topography and the temperature gradient .we study these correlations in the limit of a small amplitude .the first order perturbation in the temperature gradient can be extracted from eq.([eqgrad ] ) as where the amplitude of the profile is asssumed to be of order .we introduce the logarithm of the temperature gradient denoted which can be expressed as +{\cal o}(\epsilon^2)\ ] ] from now on we will omit the term , keeping in mind that we focus here only on the dominant term . in order to compute the correlation between the gradient of temperature and the height, we form the cross - product and average over ( or for convenience , since their difference is of order ) .the expectation value of the product is \mathop{\im m}[\omega(x)]\rangle = -2 \sum_k k\vert\omega_k\vert^2\ ] ] it is amazing that the same expression appeared when computing the offset of the equivalent straight boundary .we define now the correlation coefficient which can be identified as the slope of a linear regression between and .its value is since to first order in .hence we have this expression holds for any rough boundary of small amplitude . in the particular case of a self - affine boundary ,we assume as in the previous section that the transformation can be taken as the one generated artificially from its fourier decomposition .the latter expression can thus be written as from the latter expression , we have to distinguish between persistent and anti - persistent profiles depending on whether the series is convergent or divergent when the number of modes increases . in the case of a _ persistent _ self - affine boundary , as the number of modes increases to infinity , the value of converges toward an asymptotic limit given by a ratio of riemann zeta functions the divergence of the zeta function as its argument approaches 1 , leads to a divergence of as . in the more general case where is not set to , the above equation should be corrected to as a practical illustration of the latter property we have studied the correlations between and by averaging at fixed for 1000 profiles having the same characteristics : amplitude , roughness exponent and 64 fourier modes .figure ( [ fi : corrstat ] ) shows the evolution of versus . from the eq.([eqriemann ] ) we estimate .as shown on figure ( [ fi : corrstat ] ) , this value of provides an accurate fit to the data .the evolution of this coefficient as a function of is shown in figure ( [ fi : corrslop ] ) .0.7 0.7 the anti - persistent self - affine profile behaves differently from the previous case .the correlation between the surface temperature gradient and the height vanishes .mathematically , this result can be traced to the difference of behavior of the two series in eq.([eqalpha ] ) . however , as in the previous section concerning the location of the equivalent smooth interface , one can extract the asymptotic behavior of . this latter result shed some light on the physical meaning of the previously mentioned divergence . in our presentation, we have chosen to fix the largest wavelength ( set to ) and amplitude of this mode . increasingthe number of modes implies that the shortest wavelength decreases . 
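Assuming the regression slope is controlled by the ratio of the spectral sums Σ k^(-2ζ) and Σ k^(-1-2ζ) (our reading of the expression above; the overall prefactor and sign are not reproduced here), the persistent-case limit can be evaluated as a ratio of Riemann zeta functions:

```python
import numpy as np
from scipy.special import zeta

def correlation_slope_ratio(zeta_exp, n_modes=None):
    """Ratio of spectral sums controlling the regression slope between the
    log surface gradient and the local height for a boundary with power
    spectrum ~ k^(-1-2*zeta_exp).  For a persistent profile (zeta_exp > 1/2)
    and n_modes -> infinity this tends to zeta(2*zeta_exp)/zeta(1+2*zeta_exp);
    it diverges as zeta_exp approaches 1/2, as noted in the text.  The overall
    prefactor and sign depend on conventions we do not reproduce here."""
    if n_modes is None:                              # asymptotic, infinite-mode limit
        return zeta(2.0 * zeta_exp, 1) / zeta(1.0 + 2.0 * zeta_exp, 1)
    k = np.arange(1, n_modes + 1, dtype=float)
    return np.sum(k ** (-2.0 * zeta_exp)) / np.sum(k ** (-1.0 - 2.0 * zeta_exp))

print(correlation_slope_ratio(0.8), correlation_slope_ratio(0.8, n_modes=64))
```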
for roughness exponents in the range , this implies an algebraic increase of with , while is bounded .this divergence of is of no importance for the correlation only if the profile is persistent .otherwise , eq.([eqalpha_ap ] ) holds .the perturbation method used however assumes that both and should be small .the above analysis simply identifies which cut - off will dictate its behavior to the correlation .the antipersistent case is more suited to the case where is fixed together with its amplitude , while varies . in this case, the coefficient increases as as can be read from eq.([eqalpha_ap ] ) using the scaling ( being independent of ) . up to now , we have only considered harmonic problems with a uniform field at infinity .this kind of boundary condition is of particular interest for problems where the scale of variation of the field in the bulk of the solid is large compared to the scale of the roughness so that an asymptotic development can be performed where the matching is to be done on the far field as one focusses on the rough boundary .however , from the conformal mapping , one can address more complex types of boundary conditions . in order to illustrate this ,we develop here a particular class of solutions which can be used to solve any problem .we will consider green functions which give the field in the medium for localized flux injected in the medium from the surface .let us consider the following problem : a localised flux is injected at point , on the border of the unit circle .the remaining boundary is perfectly insulating .the same flux is withdrawn at the origin where . the harmonic field which fulfills such boundary conditions is \ ] ]this potential is the green function for the domain .considering the transformation maps the unit circle to the semi - plane . in the transformation ,the potential becomes \ ] ] which is the green function for a unit flux localised at every site for all integer . at infinity , the potential approaches . from this green functionit is simple to derive the one obtained for a translated array of sources . for sources at , we have \ ] ] from this latter expression , the green function for a localised and periodic source on the rough profile is obtained by combining and .the green function thus reads which gives the potential at point for a series of sources periodically spaced with the same period as the profile .we have introduced here conformal mapping technique which allows to address harmonic problems in semi - infinite domains limited by a rough interface .this mapping is accompanied by an efficient numerical technique which allows us to compute the mapping by a few iterations of a one dimensional fourier transform . 
moreover, this technique provides a natural basis for discussing analytically some practical applications .we then defined and studied the notion of an equivalent smooth boundary , whose position has been obtained exactly in the limit of a small amplitude .this question underlined the differences between persistent and anti - persistent boundaries , in terms of sensitivity to the lower or upper scale cut - off of the self - affine character of the boundary .we considered the question of correlations between the gradient of the harmonic field on the boundary and the height of the profile at the same point .the correlation has been explicitly computed and shown to converge to a precise limit for persistent boundaries .anti - persistent profiles lead to a correlation coefficient which is dependent on the self - affinity range .extensions of the above technique are numerous .we essentially focused here on static problems involving harmonic fields .however , the same mapping may also be used in connection with evolution problems such as diffusion or wave propagation ( localisation ) .thermal diffusion in the vicinity of a rough boundary has recently been shown to display an anomalous scaling behavior at early stages which could be addressed by such methods .impedence of rough electrodes is another potential field of extension which has been studied in recent years .it may also be one constitutive brick of a different mapping dealing with different geometries .an example of such extensions is the stress intensity factor ( i.e. the singular behavior of the stress field ) at a crack tip . in the framework of antiplaneelasticity one can compute the local stress intensity factor at the crak tip and relate it to the far - field singular behavior .this problem is currently being investigated .b. b. mandelbrot , d. e. passoja and a. j. paullay , nature * 308 * , 721 ( 1984 ) ; s. r. brown , geophys . res .* 13 * , 1430 ( 1986 ) ; r. h. dauskardt , f. haubensak and r. o. ritchie , acta metall . mater . * 38 * , 143 ( 1990 ) ; b. l. cox and j. s. y. wang , fractals * 1 * , 87 ( 1993 ) ; e. bouchaud , g. lapasset and j. planes , europhys . lett . * 13 * , 73 ( 1990 ) ; k. j. maloy , a. hansen , e. l. hinrichsen and s. roux , phys . rev. lett . * 68 * , 213 ( 1992 ) . | the aim of this study is to analyze the properties of harmonic fields in the vicinity of rough boundaries where either a constant potential or a zero flux is imposed , while a constant field is prescribed at an infinite distance from this boundary . we introduce a conformal mapping technique that is tailored to this problem in two dimensions . an efficient algorithm is introduced to compute the conformal map for arbitrarily chosen boundaries . harmonic fields can then simply be read from the conformal map . we discuss applications to equivalent smooth interfaces . we study the correlations between the topography and the field at the surface . finally we apply the conformal map to the computation of inhomogeneous harmonic fields such as the derivation of green function for localized flux on the surface of a rough boundary . defining and computing effective properties of heterogeneous media is a subject which has been studied for a long time , and for which a number of powerful techniques have been developped . in most cases however , the heterogeneities are considered to lie in the bulk of the material . another type of inhomogeneity is due to the random geometry of the surface on which boundary conditions are applied . 
this study focusses on this second type . we will thus consider _ homogeneous media _ which are limited by a rough surface or interface . our purpose here is to introduce a very efficient way of solving harmonic problems in two - dimensional systems for any geometry of the boundary . the occurence of rough interfaces in nature is more the general rule than the exception . apart from very specific cases such as mica where a careful cleavage can produce planar surfaces at the atomic scale , surfaces are rough . even glass with a very homogeneous composition , where the surface is obtained by a slow cooling of the material , so that surface tension can act effectively to smoothen all irregularities , does display roughness in the range 5 to 50 nanometers over a window of a few micrometers width . similarly , the so - called `` mirror '' fracture surface which is optically smooth exhibits specific topographic patterns when examined with an atomic force microscope . the key question is thus to identify the relevant range of scales at which roughness appears . from common observations , this question may not admit a simple clear cut answer . indeed in a variety of cases , the amplitude of the roughness appears to be strongly dependent on the size of the examined surface . a particular class of such scale dependent roughness , namely _ self - affine roughness_ , has recently motivated a lot of activity ( see references for recent reviews ) both because of its relevance in many different instances , and of their theoretical justification which has been obtained in statistical physics for a wide class of models ranging from growth models , molecular beam epitaxy , fracture surfaces , to immiscible fluid interfaces . although the present study is not specific to self - affine surfaces , we shall consider this particular class in order to apply our method . the interest of this choice being that _ i ) _ the description of the roughness is realistic for a number of applications , _ ii ) _ consequences can be expressed in quite general terms as a function of few parameters directly accessible experimentally , and finally _ iii ) _ the most commonly studied roughness models are `` monochromatic '' surfaces with a single asperity pattern repeated periodically , and hence the transposition to more complex geometries may reveal wrong ( examples of such cases will be explicited in the main body of this article ) . as previously mentioned , if most surfaces are rough , this roughness may be of small amplitude macroscopically , and thus one may feel that its role can be neglected in most cases . fortunately , this is generally true . taking into account precisely the surface roughness may be required in two distinct classes of problems : the first class ( i ) covers applications where the roughness can not be neglected at the scale at which the bulk field varies . for obvious reasons , there is no way to avoid the accurate description of the boundary . we may mention the following potential applications : in confined geometries , such as naturally encountered in surface force study , the roughness of the surface may affect the interpretation and thus the precision of the measurements since the distance between two facing surfaces is generally estimated from indirect measurements of transport in the gap between the surfaces. field which are rapidly varying in space will be sensitive to fine details of the boundary geometry . 
the most obvious example in this field is the reflection and scattering of a wave by a rough boundary. of particular importance is the case of surface waves , evanescent waves , rayleigh waves in elasticity , ... in a similar spirit , diffusion processes may display anomalous behaviors at short times where the diffusion length is smaller or comparable to the roughness. the second class of problems ( ii ) where roughness can not be neglected is when one has to focus on the boundary , either because only this part matters for extraneous reasons or because the system is sensitive to high fields which can be induced by the roughness itself . some examples of these two cases are listed below : surface phenomenon such as electrofiltration requires a proper solution of say a stokes flow field , in the immediate vicinity ( typically debye length scale ) of a rough boundary where an electric boundary layer is present and can be entrained by the fluid to give rise to an electric current in response to a fluid flow in a porous medium. the brittle fracture of glass is generally due to surface defects which induce locally high stresses , which reduce significantly the breaking limit of this material . in the absence of specific surface degradation the most important source of surface defect is the topography itself . some growth models have a local growth rate which depends on a harmonic field locally . the development of unstable modes which will finally induce a macroscopic roughening does require the proper analysis of the field at the surface. the relative independence of the bulk field on the small scale roughness of the boundary for a slowly varying field ( class ii problems ) can be used to explore the local field close to the boundary using an asymptotic analysis with a double scale technique . the large scale problem consists in solving the problem at hand replacing the rough boundary by a smooth equivalent one . the small scale problem deals with the details of the rough boundary and matches at `` infinity''with a homogeneous field . this local problem will be considered in full details in the following . these examples are obviously not exhaustive . inhomogeneous boundary conditions may arise for instance in contact problems where the roughness can not be neglected. one may also consider application outside the realm of physical applications , such as the use of harmonic problems and particularly conformal map for providing a simple means of meshing a domain limited by a rough boundary . in the present article we will essentially focus on harmonic problems . the latter arise in a variety of different domains in physics , such as electrostatics , thermal or concentration diffusion , flow in porous media , anti - plane elasticity to mention a few . another use of conformal mappings is the resolution of bi - harmonic problems near a rough interface ; both stress field in elasticity and velocity field in low reynolds number fluid mechanics can be derived from potentials that obey such bi - laplacian equations . we refer the reader to the companion paper which is completely devoted to this specific problem . this paper is devoted to the study of harmonic problems in 2d semi - infinite media limited by a rough boundary . to extend the definition of the profile of the boundary to infinity , we use periodic boundary conditions along the boundary . although very specific , this type of geometry will be very convenient as soon as no other boundary lies close to the first one . 
The distance threshold to consider in such a case is typically of the order of magnitude of the largest spatial wavelength of the profile, _i.e._ the spatial period in the geometry we have described. We use a conformal mapping technique. It consists in constructing a map from the domain of interest (in the complex plane) onto a regular half-plane. The conformality of the map preserves harmonicity through the transform. In the first part of this paper, we define the form of the conformal mapping suited to our geometry. Then, we address the question of constructing the mapping associated with any prescribed interface. We show that this problem can be solved with an iterative algorithm using fast Fourier transforms (FFT). This algorithm yields the conformal map in a few FFT iterations, so that the computation time scales as that of an FFT in the number of Fourier modes used to describe the interface. Note the remarkable efficiency of such a technique, considering that the map gives the solution of a Laplacian field in the entire two-dimensional problem. This problem is very close to the so-called `` theodorsen problem '' in a circular geometry. We also show that one can generate maps which naturally give rise to self-affine boundaries, a powerful technique for exploring generic properties of such problems. Specific applications of this technique to self-affine profiles are studied, which include _i)_ the question of defining an equivalent smooth (planar) interface, and finding its height compared to the geometrical average height of the interface, and _ii)_ the correlation between the height and the field, which is computed exactly in the limit of a small roughness amplitude. These two examples demonstrate the unexpected difference in behaviour between persistent and anti-persistent profiles. Finally, we give the expression of the Green function for a localized flux on a rough interface.
it is now widely accepted that acceleration in supernova - driven shock waves plays an important role in the production of the observed cosmic ray spectrum up to energies of ( heavens 1984a ; ko 1995a ) , and it is plausible that acceleration in shocks near stellar - size compact objects can produce most of the cosmic radiation observed at higher energies ( jones & ellison 1991 ) . in the generic shock acceleration model , cosmic rays scatter elastically with magnetic irregularities ( mhd waves ) that are frozen into the background ( thermal ) gas ( gleeson & axford 1967 ; skilling 1975 ) . in crossing the shock , these waves experience the same compression and deceleration as the background gas , if the speed of the waves with respect to the gas ( roughly the alfvn speed ) is negligible compared with the flow velocity ( achterberg 1987 ) .the convergence of the scattering centers in the shock creates a situation where the cosmic rays gain energy systematically each time they cross the shock .since the cosmic rays are able to diffuse spatially , they can cross the shock many times . in this process , an exponentially small fraction of the cosmic rays experience an exponentially large increase in their momentum due to repeated shock crossings .the characteristic spectrum resulting from this first - order fermi process is a power - law in momentum ( krymskii 1977 ; bell 1978a , b ; blandford & ostriker 1978 ) .it was recognized early on that if the cosmic rays in the downstream region carry away a significant fraction of the momentum flux supplied by the incident ( upstream ) gas , then the dynamical effect of the cosmic - ray pressure must be included in order to obtain an accurate description of the shock structure ( axford , leer , & skadron 1977 ) . in this scenario ,the coupled nonlinear problem of the gas dynamics and the energization of the cosmic rays must be treated in a self - consistent manner .a great deal of attention has been focused on the `` two - fluid '' model for diffusive shock acceleration as a possible description for the self - consistent cosmic - ray modified shock problem . in this steady - state theory ,first analyzed in detail by drury & vlk ( 1981 , hereafter dv ) , the cosmic rays and the background gas are modeled as interacting fluids with constant specific heat ratios and , respectively . the coupling between the cosmic rays and the gas is provided by mhd waves , which serve as scattering centers but are otherwise ignored .the cosmic rays are treated as massless particles , and second - order fermi acceleration due to the stochastic wave propagation is ignored . within the context of the two - fluid model ,dv were able to demonstrate the existence of multiple ( up to 3 ) distinct dynamical solutions for certain upstream boundary conditions .the solutions include flows that are smooth everywhere as well as flows that contain discontinuous , gas - mediated `` subshocks . ''subsequently , multiple solutions have also been obtained in modified two - fluid models that incorporate a source term representing the injection of seed cosmic rays ( ko , chan , & webb 1997 ; zank , webb , & donohue 1993 ) .only one solution can be realized in a given flow , but without incorporating additional physics one can not determine which solution it will be .the two - fluid model has been extended to incorporate a quantitative treatment of the mhd wave field by mckenzie & vlk ( 1982 ) and vlk , drury , & mckenzie ( 1984 ) using a `` three - fluid '' approach . 
during the intervening decades ,a great deal of effort has been expended on analyzing the structure and the stability of cosmic - ray modified shocks ( see jones & ellison 1991 and ko 1995b for reviews ) .much of this work has focused on the time - dependent behavior of the two - fluid model , which is known to be unstable to the development of acoustic waves ( drury & falle 1986 ; kang , jones , & ryu 1992 ) and magnetosonic waves ( zank , axford , & mckenzie 1990 ) .ryu , kang , & jones ( 1993 ) extended the analysis of acoustic modes to include a secondary , rayleigh - taylor instability . in most cases, it is found that the cosmic - ray pressure distribution is not substantially modified by the instabilities .the two - fluid theory of dv suffers from a `` closure problem '' in the sense that there is not enough information to compute the adiabatic indices and self - consistently , and therefore they must be treated as free parameters ( achterberg , blandford , & periwal 1984 ; duffy , drury , & vlk 1994 ) .this has motivated the subsequent development of more complex theories that utilize a diffusion - convection transport equation to solve for the cosmic - ray momentum distribution along with the flow structure self - consistently . in these models , `` seed '' cosmic rays are either advected into the shock region from far upstream , or injected into the gas within the shock itself .interestingly , kang & jones ( 1990 ) , achterberg , blandford , & periwal ( 1984 ) , and malkov ( 1997a ; 1997b ) found that diffusion - convection theories can also yield multiple dynamical solutions for certain values of the upstream parameters , in general agreement with the two - fluid model .frank , jones , & ryu ( 1995 ) have obtained numerical solutions to the time - dependent diffusion - convection equation for oblique cosmic - ray modified shocks that are in agreement with the predictions of the steady - state two - fluid model .these studies suggest that , despite its shortcomings , the two - fluid theory remains one of the most useful tools available for analyzing the acceleration of cosmic rays in shocks waves ( ko 1995b ) . in their approach to modeling the diffusive acceleration of cosmic rays, dv stated the upstream boundary conditions for the incident flow in terms of the total mach number , and the ratio of the cosmic - ray pressure to the total pressure , where , , , , , and denote the flow velocity of the background gas , the total sound speed , the gas density , and the total , cosmic - ray , and gas pressures , respectively .we will use the subscripts `` 0 '' and `` 1 '' to denote quantities associated with the far upstream and downstream regions , respectively .dv described the incident flow conditions by selecting values for and .once these parameters have been specified , the determination of the flow structure ( and in particular the number of possible solutions ) in the simplest form of the two - fluid model requires several stages of root finding . since the analysis is inherently numerical in nature , the results are usually stated only for specific upstream conditions .the characterization of the upstream conditions in terms of and employed by dv turns out to be an inconvenient choice from the point of view of finding exact critical relations describing the number of possible flow solutions for given upstream conditions . 
as an alternative approach , it is possible to work in terms of the individual gas and cosmic - ray mach numbers , defined respectively by where denote the gas and cosmic - ray sound speeds , respectively . according to equation ( 1.1 ) , , andtherefore the mach numbers and are related to the total mach number and the pressure ratio via or , equivalently , since these equations apply everywhere in the flow , the boundary conditions in the two - fluid model can evidently be expressed by selecting values for _ any two _ of the four upstream parameters . in their work ,dv described the upstream conditions using , whereas ko , chan , & webb ( 1997 ) and axford , leer , & mckenzie ( 1982 ) used .another alternative , which apparently has not been considered before , is to use the parameters .although these choices are all equivalent from a physical point of view , we demonstrate below that the set is the most advantageous mathematically because it allows us to derive _ exact _ constraint curves that clearly delineate the regions of various possible behavior in the parameter space of the two - fluid model .this approach exploits the formal symmetry between the cosmic - ray quantities and the gas quantities as they appear in the expressions describing the asymptotic states of the flow .the remainder of the paper is organized as follows . in 2we discuss the transport equation for the cosmic rays and derive the associated moment equation for the variation of the cosmic - ray energy density . in 3 we employ momentum and energy conservation to obtain an exact result for the critical upstream cosmic - ray mach number that determines whether smooth flow is possible for a given value of the upstream gas mach number . in 4 we derive exact critical conditions for the existence of multiple solutions containing a discontinuous , gas - mediated subshock .the resulting curves are plotted and analyzed for various values of the adiabatic indices and . in 5 we present specific examples of multiple - solution flows that verify the predictions made using our analytical critical conditions .we conclude in 6 with a general discussion of our results and their significance for the theory of diffusive cosmic - ray acceleration .the two - fluid model is developed by treating the cosmic rays as a fluid with energy density comparable to that of the background gas , but possessing negligible mass . in this sectionwe review the basic equations relevant for the two - fluid model . for integrity and clarity of presentation, we also include re - derivations of a few of the published results concerning the overall shock structure and the nature of the transonic flow . 
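For reference, the conversion between the two parametrisations follows directly from the definitions a_g² = γ_g P_g/ρ, a_c² = γ_c P_c/ρ and a² = a_g² + a_c²; the sketch below implements it (function names and default adiabatic indices are our choices).

```python
import numpy as np

def total_mach_and_pressure_ratio(m_g, m_c, gamma_g=5.0/3.0, gamma_c=4.0/3.0):
    """Convert the (gas, cosmic-ray) Mach numbers used here into the
    (total Mach number M, pressure ratio N = P_c/(P_g + P_c)) parameters of
    Drury & Voelk, using a^2 = a_g^2 + a_c^2 with a_g^2 = gamma_g*P_g/rho and
    a_c^2 = gamma_c*P_c/rho."""
    m_tot = (m_g ** -2 + m_c ** -2) ** -0.5             # 1/M^2 = 1/M_g^2 + 1/M_c^2
    n_ratio = 1.0 / (1.0 + (gamma_c * m_c ** 2) / (gamma_g * m_g ** 2))
    return m_tot, n_ratio

def gas_and_cr_mach(m_tot, n_ratio, gamma_g=5.0/3.0, gamma_c=4.0/3.0):
    """Inverse of the mapping above."""
    # a_g^2/a^2 = gamma_g*(1-N) / (gamma_g*(1-N) + gamma_c*N), and similarly for a_c
    denom = gamma_g * (1.0 - n_ratio) + gamma_c * n_ratio
    m_g = m_tot * np.sqrt(denom / (gamma_g * (1.0 - n_ratio)))
    m_c = m_tot * np.sqrt(denom / (gamma_c * n_ratio))
    return m_g, m_c

# round-trip check of the two parametrisations
print(gas_and_cr_mach(*total_mach_and_pressure_ratio(10.0, 3.0)))
```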
the diffusive acceleration of energetic cosmic rays due to the convergence of scattering centers in a one - dimensional , plane - parallel flow is described by the transport equation ( skilling 1971 ; 1975 ) where is the particle momentum , is the flow velocity of the background gas ( taken to be positive in the direction of increasing ) , is the spatial diffusion coefficient , and the operator expresses the comoving ( lagrangian ) time derivative in the frame of the gas .equation ( 2.1 ) describes the effects of fermi acceleration , bulk advection , and spatial diffusion on the direction - integrated ( isotropic ) cosmic - ray momentum distribution , which is normalized so that the total number density of the cosmic rays is given by note that equation ( 2.1 ) neglects the second - order fermi acceleration of the cosmic rays that occurs due to stochastic wave propagation , which is valid provided the alfvn speed is much less than the flow velocity , where is the magnetic field strength .furthermore , equation ( 2.1 ) does not include a particle collision term , and therefore it is not applicable to the background gas , which is assumed to have a thermal distribution .the momentum , mass , and energy conservation equations for the gas can be written in the comoving frame as respectively , where is the internal energy density of the gas . the expression for in equation ( 2.4 ) implies a purely adiabatic variation of , and therefore it neglects any heating or cooling of the gas due to wave generation or damping .this adiabatic equation must be replaced with the appropriate rankine - hugoniot jump conditions at a discontinuous , gas - mediated subshock , should one occur in the flow . in the case of a relativistic subshock , the momentum conservation equation for the gas must be modified to reflect the anisotropy of the pressure distribution ( e.g. , kirk & webb 1988 ) . the pressure and the energy density associated with the isotropic cosmic - ray momentum distribution are given by ( duffy , drury , & vlk 1994 ) where denote respectively the kinetic energy , the speed , and the lorentz factor of a cosmic ray with momentum and mass .although the lower bound of integration is formally taken to be , in practice the cosmic rays are highly relativistic particles , and therefore vanishes for .if the distribution has the power - law form , then we must have in order to avoid divergence in the integrals for and ( achterberg , blandford , & periwal 1984 ) , although this restriction can be lifted if cutoffs are imposed at high and/or low momentum ( kang & jones 1990 ) .we can obtain a conservation equation for the cosmic - ray energy density by operating on the transport equation ( 2.1 ) with , yielding where the mean diffusion coefficient is defined by ( duffy , drury , & vlk 1994 ) and the cosmic - ray adiabatic index is defined by ( malkov & vlk 1996 ) note that in deriving equation ( 2.7 ) , we have dropped an extra term that arises via integration by parts because it must vanish in order to obtain finite values for and . 
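A hedged numerical sketch of these moment integrals, for an isotropic distribution f(p) normalised so that n_c = 4π ∫ p² f dp; the natural units m = c = 1 and the power-law example are our choices.

```python
import numpy as np
from scipy.integrate import quad

def cr_pressure_energy_and_gamma(f, p_min, p_max, m=1.0, c=1.0):
    """Pressure, kinetic-energy density and effective adiabatic index of an
    isotropic cosmic-ray distribution f(p), from the moment integrals quoted
    in the text:
        P_c = (4*pi/3) * int p^3 v(p) f(p) dp,
        E_c = 4*pi    * int p^2 T(p) f(p) dp,    gamma_c = 1 + P_c / E_c,
    with v = p c^2 / E, kinetic energy T = E - m c^2, E = sqrt((pc)^2 + (mc^2)^2)."""
    energy = lambda p: np.sqrt((p * c) ** 2 + (m * c ** 2) ** 2)
    speed = lambda p: p * c ** 2 / energy(p)
    kinetic = lambda p: energy(p) - m * c ** 2
    p_c = 4.0 * np.pi / 3.0 * quad(lambda p: p ** 3 * speed(p) * f(p), p_min, p_max)[0]
    e_c = 4.0 * np.pi * quad(lambda p: p ** 2 * kinetic(p) * f(p), p_min, p_max)[0]
    return p_c, e_c, 1.0 + p_c / e_c

# power-law example: gamma_c should approach 4/3 for ultra-relativistic particles
f_pl = lambda p, q=4.5: p ** (-q)
print(cr_pressure_energy_and_gamma(f_pl, p_min=10.0, p_max=1e4))
```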
the integral expression in equation ( 2.9 )indicates that must lie in the range .it also demonstrates that will evolve in response to changes in the shape of the momentum distribution .the closure problem in the two - fluid model arises because is not calculated at all , and therefore must be imposed rather than computed self - consistently .the conservation equations can be rewritten in standard eulerian form as where the fluxes of mass , momentum , and total energy are given respectively by the momentum and energy fluxes can be expressed in dimensionless form as where is the asymptotic upstream flow velocity and the dimensionless quantities , , and are defined respectively by note that the definition of implies that the incident flow has .these relations can be used to rewrite equations ( 1.3 ) for the gas and cosmic - ray mach numbers as in this paper we shall adopt the two - fluid approximation in the form used by dv , and therefore we assume that the adiabatic indices and are each constant throughout the flow .the assumption of constant is probably reasonable since the background gas is expected to remain thermal and nonrelativistic at all locations .the assumption of constant is more problematic , since we expect the cosmic ray distribution to evolve throughout the flow in response to fermi acceleration , but it is justifiable if the `` seed '' cosmic rays are already relativistic in the far upstream region .we also assume that a steady state prevails , so that the fluxes , , and are all conserved . in this casethe quantities and express the pressures of the two species relative to the upstream ram pressure of the gas , where is the asymptotic upstream mass density .the eulerian frame in which we are working is necessarily the frame of the shock , since that is the only frame in which the flow can appear stationary ( becker 1998 ) . in a steady state ,the adiabatic variation of implied by equation ( 2.4 ) indicates that along any smooth section of the flow , the gas pressure can be calculated in terms of the velocity using where and denote fiducial quantities measured at an arbitrary , fixed location within the section of interest . according to equation ( 2.19 ) , the associated variation of the gas mach number along the smooth section of the flowis given by where denotes the gas mach number at the fiducial location .substituting for in equation ( 2.16 ) using equation ( 2.20 ) and differentiating the result with respect to yields the dynamical equation ( achterberg 1987 ; ko , chan , & webb 1997 ) critical points occur where the numerator and denominator vanish simultaneously .the vanishing of the denominator implies that at the critical point , and therefore the critical point is also a _ gas sonic point _( axford , leer , & mckenzie 1982 ) .the vanishing of the numerator implies that at the gas sonic point .we can rewrite the dimensionless momentum flux by using equations ( 2.19 ) to substitute for and in equation ( 2.16 ) , yielding similarly , equation ( 2.17 ) for the dimensionless energy flux becomes using equation ( 2.23 ) to eliminate in equation ( 2.24 ) yields for the gradient of the cosmic - ray pressure where along any smooth section of the flow , depends only on by virtue of equation ( 2.21 ) , which gives as a function of . 
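In the notation just introduced, the upstream fluxes, the adiabatic gas Mach number along a smooth section, and the cosmic-ray pressure implied by momentum conservation can be written compactly. The sketch below restates them with y = u/u0 and our own normalisation (momentum flux in units of rho0*u0^2, energy flux in units of rho0*u0^3); the diffusive contribution to the energy flux is omitted because it vanishes where the flow is gradient-free.

```python
def upstream_fluxes(m_g0, m_c0, gamma_g=5.0/3.0, gamma_c=4.0/3.0):
    """Dimensionless momentum and advective energy fluxes of the incident flow:
        I0 = 1   + 1/(gamma_g*M_g0^2)      + 1/(gamma_c*M_c0^2)
        J0 = 1/2 + 1/((gamma_g-1)*M_g0^2)  + 1/((gamma_c-1)*M_c0^2)."""
    i0 = 1.0 + 1.0 / (gamma_g * m_g0 ** 2) + 1.0 / (gamma_c * m_c0 ** 2)
    j0 = 0.5 + 1.0 / ((gamma_g - 1.0) * m_g0 ** 2) + 1.0 / ((gamma_c - 1.0) * m_c0 ** 2)
    return i0, j0

def gas_mach_adiabatic(y, m_g0, gamma_g=5.0/3.0):
    """Gas Mach number along a smooth section: with rho*u = const and
    P_g ~ rho^gamma_g one gets M_g(u) = M_g0 * (u/u0)^((gamma_g+1)/2)."""
    return m_g0 * y ** ((gamma_g + 1.0) / 2.0)

def cr_pressure_from_momentum(y, m_g0, m_c0, gamma_g=5.0/3.0, gamma_c=4.0/3.0):
    """Cosmic-ray pressure (in units of rho0*u0^2) implied by conservation of
    the total momentum flux along a smooth section of the flow."""
    i0, _ = upstream_fluxes(m_g0, m_c0, gamma_g, gamma_c)
    return i0 - y - y / (gamma_g * gas_mach_adiabatic(y, m_g0, gamma_g) ** 2)
```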
in the two - fluid model , the flow is assumed to become gradient - free asymptotically , so that in the far upstream and downstream regions ( dv ; ko , chan , & webb 1997 ) .the function must therefore vanish as , and consequently we can express the values of and in terms of the upstream mach numbers and using and where we have also employed the boundary condition .the critical nature of the dynamical equation ( 2.22 ) implies that at the gas sonic point .hence if the flow is to pass _ smoothly _ through a gas sonic point , then must vanish at _ three _ locations .it follows that one of the key questions concerning the flow structure is the determination of the number of points at which .we can address this question by differentiating equation ( 2.25 ) with respect to , which yields where is the total mach number , and we have used the result implied by equation ( 2.21 ) . for the second derivative of obtain since the cosmic rays have a higher average lorentz factor than the thermal background gas , ( cf .2.9 ) , and therefore , implying that is concave down as indicated in figure 1 .hence there are exactly _ two _ roots for that yield , one given by the upstream velocity and the other given by the downstream velocity , denoted by .we therefore conclude that if the flow includes a gas sonic point , then the velocity at that point must be either or .consequently the flow can not pass smoothly through a gas sonic point , as first pointed out by dv .the flows envisioned here are decelerating , and therefore the high - velocity root is associated with the incident flow . in this case , the fact that is concave - down implies that in the upstream region , and therefore based on equation ( 2.29 ) we conclude that in the upstream region and in the downstream region , with at the peak of the curve where . hence the flow _ must _ contain a sonic transition with respect to the _ total _ sound speed . in this sense ,the flow is a `` shock '' whether or not it contains an actual discontinuity .for the flow to decelerate through a shock transition , the total upstream mach number must therefore satisfy the condition .this constraint also implies that the upstream flow must be supersonic with respect to both the gas and cosmic - ray sound speeds ( i.e. , and ) , since .furthermore , for a given value of , the upstream cosmic - ray mach number must exceed the minimum value corresponding to the limit . the requirement that forces us to conclude that if a gas sonic point exists in the flow , then it must be identical to the gradient - free asymptotic _ downstreamconsequently the flow must either remain supersonic everywhere with respect to the gas , or it must cross a discontinuous , gas - mediated subshock . if everywhere , then the flow is completely smooth and the gas sonic point is `` virtual , '' meaning that it exists in the parameter space , but it does not lie along the flow trajectory . in this case , the gas pressure evolves in a purely adiabatic fashion according to equation ( 2.20 ) , although the total entropy of the combined system ( gas plus cosmic rays ) must increase as the flow crosses the shock , despite the fact that it is smooth . 
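The two gradient-free states can be located numerically as the zeros of the advective energy-flux residual built from the relations above; a hedged, self-contained sketch follows (it repeats the flux expressions for completeness). It yields only the candidate downstream state of a globally smooth solution, since the adiabatic gas law used here does not hold across a subshock, and the example parameters are chosen so that a smooth solution should be admissible according to the critical condition derived next.

```python
import numpy as np
from scipy.optimize import brentq

def smooth_downstream_state(m_g0, m_c0, gamma_g=5.0/3.0, gamma_c=4.0/3.0):
    """Candidate asymptotic downstream velocity y1 = u1/u0 of a *globally
    smooth* cosmic-ray modified shock: momentum conservation fixes P_c(y),
    the gas behaves adiabatically, and the gradient-free states are the two
    zeros of the advective energy-flux residual (our restatement)."""
    g = gamma_c / (gamma_c - 1.0)
    i0 = 1.0 + 1.0 / (gamma_g * m_g0 ** 2) + 1.0 / (gamma_c * m_c0 ** 2)
    j0 = 0.5 + 1.0 / ((gamma_g - 1.0) * m_g0 ** 2) + 1.0 / ((gamma_c - 1.0) * m_c0 ** 2)
    def residual(y):
        m_g2 = (m_g0 * y ** ((gamma_g + 1.0) / 2.0)) ** 2   # adiabatic gas Mach number
        p_c = i0 - y - y / (gamma_g * m_g2)                  # from momentum conservation
        return y ** 2 / 2.0 + y ** 2 / ((gamma_g - 1.0) * m_g2) + g * y * p_c - j0
    grid = np.linspace(1e-3, 0.999, 2000)
    y_peak = grid[np.argmax(residual(grid))]                 # residual is concave down
    return brentq(residual, 1e-3, y_peak)                    # root below the peak; y = 1 is the other

# example: upstream Mach numbers for which a smooth solution should exist
print(smooth_downstream_state(m_g0=3.0, m_c0=2.0))
```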
in 3 we derive the critical value for the upstream cosmic - ray mach number that determines whether or not smooth flow is possible for a given value of the upstream gas mach number .the overall structure of a cosmic - ray modified shock governed by the dynamical equation ( 2.22 ) can display a variety of qualitatively different behaviors , as first pointed out by dv .depending on the upstream parameters , up to three distinct steady - state solutions are possible , although only one of these can be realized in a given situation .the possibilities include globally smooth solutions as well as solutions containing a discontinuous , gas - mediated subshock .smooth flow is expected when the upstream cosmic - ray pressure is sufficiently large since in this case cosmic ray diffusion is able to smooth out the discontinuity . in this sectionwe utilize the critical nature of the dynamical equation to derive an analytic expression for the critical condition that determines when smooth flow is possible , as a function of the upstream ( incident ) mach numbers and . whether or not the flow contains a discontinuous , gas - mediated subshock, it must be smooth in the upstream region ( preceding the subshock if one exists ) .we can therefore apply equation ( 2.21 ) for the variation of in the upstream region , where it is convenient to use the incident parameters and as the fiducial quantities .the requirement that at the gas sonic point implies that the velocity there is given by which we refer to as the `` critical velocity . ''if a sonic point exists in the flow , then must correspond to the downstream root of the equation , i.e. , .note that the value of depends only on , and consequently it is independent of .the asymptotic states of the flow are assumed to be gradient - free , and the critical conditions associated with the dynamical equation ( 2.22 ) imply that at the gas sonic point .the constancy of and therefore allows us to link upstream quantities to quantities at the gas sonic point by using equations ( 2.23 ) , ( 2.24 ) , ( 2.27 ) , and ( 2.28 ) to write and respectively , where denotes the value of the cosmic - ray mach number at the gas sonic point .if we substitute for using equation ( 3.1 ) and eliminate by combining equations ( 3.2 ) and ( 3.3 ) , we can solve for to obtain an _ exact expression _ for the critical upstream cosmic - ray mach number required for the existence of a sonic point in the asymptotic downstream region , \gamma_c - \left[\left({r_0 ^ 2 \over 2 } - { 1 \over 2 } - { 1 \over \gamma_g-1 } \right ) m_{g0}^2 + { r_0 ^ 2 \over \gamma_g - 1 } \right ] \left(\gamma_c - 1 \right ) \over r_0 \ , ( r_0 - 1 ) \ , m_{g0}^2 } \right\}^{-1/2 } \ , , \eqno(3.4)\ ] ] where note that is an explicit function of the upstream gas mach number .the interpretation is that if for a given value of , then the flow is everywhere supersonic with respect to the gas sound speed except in the far - downstream region , where it asymptotically approaches the gas sonic point .surprisingly , this simple solution for has apparently never before appeared in the literature , probably due to the fact that the analytical form is lost when one works in terms of the alternative parameters and employed by dv .this can be clearly demonstrated by using the expressions to substitute for and in equations ( 3.4 ) and ( 3.5 ) and then attempting to solve the resulting equation for either or .it is easy to convince oneself that it is not possible to express either of these quantities explicitly in terms of the 
other . in figure 2we depict the curve in the parameter space using equations ( 3.4 ) and ( 3.5 ) for various values of and .we have determined that smooth flow into an asymptotic downstream gas sonic point is possible if .however , in order to obtain a complete understanding of the significance of the critical upstream cosmic - ray mach number , we must determine the nature of the flow when .the resulting flow structure can be analyzed by perturbing around the state by taking the derivative of the asymptotic downstream velocity with respect to , holding constant .the fact that is held fixed implies that the critical velocity also remains unchanged by virtue of equation ( 3.1 ) . upon differentiating ,we obtain ^{-1 } \ , , \eqno(3.7)\ ] ] which is always negative since the flow decelerates , and therefore .this indicates that if is _ decreased _ from the value for fixed , then the downstream velocity _ increases _ above the critical velocity , and therefore the flow is everywhere supersonic with respect to the gas sound speed . in this casethere is no gas sonic point in the flow , and consequently a globally smooth solution is possible .conversely , when , a gas sonic point exists in the flow , and therefore the flow can not be globally smooth because that would require smooth passage through a gas sonic point , which we have proven to be impossible . in this case, the flow _ must _ pass through a discontinuous , gas - mediated subshock .we conclude that globally smooth flow is possible in the region below each of the critical curves plotted in figure 2 .note that in each case there is a critical value for , denoted by , to the right of which smooth flow is _ always _possible for _ any _ value of .this critical value is the solution to the equation \gamma_c - \left[\left({r_a^2 \over 2 } - { 1 \over 2 } - { 1 \over \gamma_g-1 } \right ) m_{ga}^2 + { r_a^2 \over \gamma_g - 1 } \right ] \left(\gamma_c - 1 \right ) = 0 \ , , \eqno(3.8)\ ] ] where corresponding to the limit in equation ( 3.4 ) .we plot as a function of and in figure 3 .when and , we find that , in agreement with the numerical results of dv and heavens ( 1984b ) .dv discovered that two new discontinuous solutions become available in addition to either a smooth solution or another discontinuous solution when the upstream total mach number is sufficiently large .subsequent authors have confirmed the existence of multiple dynamical solutions within the context of the two - fluid model ( achterberg , blandford , & periwal 1984 ; axford , leer , & mckenzie 1982 ; kang & jones 1990 ; ko , chan , & webb 1997 ; zank , webb , & donohue 1993 ) .however , most of this work utilized numerical root - finding procedures and therefore it fails to provide much insight into the structure of the critical conditions that determine when multiple solutions are possible .we revisit the problem in this section by recasting the upstream boundary conditions using the same approach employed in 3 .in particular , we show that when the incident flow conditions are stated in terms of the upstream gas and cosmic - ray mach numbers and , respectively , it is possible to obtain exact , analytical formulae for the critical curves that form the border of the region of multiple solutions in the parameter space . 
the existence of multiple dynamical solutions is connected with the presence in the flow of a discontinuous subshock mediated by the pressure of the gas .we can therefore derive critical criteria related to the multiple - solution phenomenon by focusing on the nature of the flow in the post - subshock region , assuming that a subshock exists in the flow .as we demonstrate in the appendix , the energy , momentum , and particle fluxes for the cosmic rays and the background gas are independently conserved as the flow crosses the subshock .this implies that the quantities associated with the gas satisfy the usual rankine - hugoniot jump conditions , as pointed out by dv .far downstream from the subshock , the flow must certainly relax into a gradient - free condition if a steady state prevails .in fact , it is possible to demonstrate that the entire post - subshock region is gradient - free , so that downstream from the subshock .this has already been shown by dv using a geometrical approach , but it can also be easily established using the following simple mathematical argument . first we combine the dynamical equation ( 2.22 ) with the definition of given by equation ( 2.25 ) to obtain the alternative form in the post - subshock gas , and therefore regardless of the value of , since .it follows from equation ( 2.29 ) that downstream from the subshock . referring to figure 4, we wish to prove that the subshock transition must take the velocity directly to the gradient - free asymptotic downstream root denoted by , so that on the immediate downstream side of the subshock . to develop the proof ,let us suppose instead that in the post - subshock gas , corresponding to the post - subshock velocity in figure 4 .in this case , equation ( 4.1 ) implies that in the downstream region , so that increases along the flow direction , evolving _away _ from the gradient - free root in the post - subshock flow .conversely , if in the post - subshock gas ( corresponding to the velocity in figure 4 ) , then in the downstream region and consequently decreases , again evolving away from the gradient - free root . henceif the flow is to be stationary , then must jump _ directly _ to the value , and the entire post - subshock region must therefore be gradient - free .this conclusion is valid within the context of the `` standard '' two - fluid model studied by dv , but zank , webb , & donohue ( 1993 ) suggest that it may be violated in models including injection .we can derive an expression for suitable for use in the post - subshock region by employing equation ( 2.21 ) to eliminate in equation ( 2.25 ) .this yields where we have adopted the immediate post - subshock values and as the fiducial quantities in the smooth section of the flow downstream from the subshock .since we have already established that the post - subshock flow is gradient - free , we can obtain an equation satisfied by the post - subshock velocity by writing the gradient - free nature of the post - subshock flow also trivially implies where is the asymptotic downstream gas mach number associated with the downstream velocity .equation ( 4.3 ) can be interpreted as a relation for the immediate _ pre_-subshock velocity by utilizing the standard rankine - hugoniot jump conditions ( landau & lifshitz 1975 ) where the pre - subshock gas mach number is related to and via which follows from equation ( 2.21 ) . 
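Since the subshock, when present, is mediated by the gas alone (with the cosmic-ray pressure continuous across it), the jump itself is described by the classical single-fluid relations. The helper below encodes the standard Rankine-Hugoniot jump conditions; these formulas are textbook results (e.g., Landau & Lifshitz 1975) rather than anything specific to the paper's notation.

```python
def rankine_hugoniot(m1, gamma_g=5.0 / 3.0):
    """Gas-only jump conditions for a pre-subshock gas Mach number m1 > 1.

    Returns (u2/u1, p2/p1, m2): the velocity ratio, the gas pressure ratio,
    and the post-subshock gas Mach number. The cosmic-ray pressure is taken
    to be continuous across the subshock.
    """
    if m1 <= 1.0:
        raise ValueError("pre-subshock flow must be supersonic with respect to the gas")
    u_ratio = ((gamma_g - 1.0) * m1**2 + 2.0) / ((gamma_g + 1.0) * m1**2)
    p_ratio = (2.0 * gamma_g * m1**2 - (gamma_g - 1.0)) / (gamma_g + 1.0)
    m2 = (((gamma_g - 1.0) * m1**2 + 2.0)
          / (2.0 * gamma_g * m1**2 - (gamma_g - 1.0))) ** 0.5
    return u_ratio, p_ratio, m2

# example: a Mach-3 gas subshock
print(rankine_hugoniot(3.0))
```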
by using equations ( 4.5 ) and ( 4.6 ) to eliminate , , and ,we can transform equation ( 4.3 ) into a new equation for the pre - subshock velocity , which we write symbolically as where \ , \left(\gamma_g - 1 + 2 \ , m_{g0}^{-2 } \ ,u_-^{-1-\gamma_g } \over \gamma_g + 1 \right ) - { \cal e } \ . \eqno(4.8)\ ] ] recall that the constants and are functions of and by virtue of equations ( 2.27 ) and ( 2.28 ) .in addition to satisfying the condition , acceptable roots for the pre - subshock velocity must also exceed the critical velocity . this is because the flow must be supersonic with respect to the gas before crossing the subshock , if one exists .equation ( 4.7 ) allows us to solve for the pre - subshock velocity as a function of the upstream cosmic - ray and gas mach numbers and , respectively . in general, is a multi - valued function of and , and this results in the possibility of several distinct subshock solutions in certain regions of the parameter space .fortunately , additional information is also available that can be utilized to calculate the shape of the critical curve in space bordering the region of multiple subshock solutions .the nature of this information becomes clear when one examines the topology of the function as depicted in figures 5 and 6 for and .we consider a sequence of situations with held fixed and gradually increasing from the minimum value given by equation ( 2.32 ) , corresponding to the limit .note that since is held constant , the critical velocity also remains constant according to equation ( 3.1 ) .two qualitatively different behaviors are observed depending on whether or not exceeds , where is the critical upstream gas mach number for smooth flow calculated using equation ( 3.8 ) with and . in figure 5_a_ we plot as a function of and for , which yields for the critical velocity . in this casethe minimum upstream cosmic - ray mach number is .note that initially , for small , there is one root for corresponding to the single crossing of the line .since this root does not exceed , a subshock solution is impossible and instead the flow must be globally smooth as discussed in 3 .the choice satisfies the condition , and therefore as increases , the root for eventually equals , which occurs when . in this casethe subshock is located at the asymptotic downstream limit of the flow , and therefore it is identical to the gas sonic point .equations ( 3.1 ) and ( 4.6 ) indicate that the pre - subshock gas mach number as expected . as we continue to increase beyond the critical value in figure 5_a _ , the root for , and therefore the smooth solution is replaced by a subshock solution .we refer to this solution as the `` primary '' subshock solution .since in this region of the parameter space , the subshock plays a significant role in modifying the flow . if is increased further , the primary root for increases slowly , the concave - down shape changes , and a new peak begins to emerge at large .the peak touches the line when , and therefore at this point a new subshock root appears with the value .the development of this new root can be clearly seen in figure 5_b _ , where we replot at a much smaller scale .as continues to increase , the peak continues to rise , and the new root bifurcates into two roots . 
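The root structure illustrated in figures 5 and 6 can also be reproduced with a direct numerical scan: for each trial pre-subshock velocity, carry the smooth adiabatic precursor down to that velocity, apply the gas-only jump with the cosmic-ray pressure held continuous, and test whether the post-subshock state balances the conserved energy flux. The sketch below is our own reconstruction of that procedure (i.e., of the role played by equations (4.7)-(4.8)) under the normalization used in the earlier sketches; the Mach numbers are illustrative, and depending on the chosen upstream parameters the scan returns one or three admissible roots.

```python
import numpy as np

gamma_g, gamma_c = 5.0 / 3.0, 4.0 / 3.0
mg0, mc0 = 5.0, 20.0                      # illustrative upstream Mach numbers

I = 1.0 + 1.0 / (gamma_g * mg0**2) + 1.0 / (gamma_c * mc0**2)
F = 0.5 + 1.0 / ((gamma_g - 1.0) * mg0**2) + 1.0 / ((gamma_c - 1.0) * mc0**2)

def subshock_mismatch(v):
    """Energy-flux mismatch of the post-subshock state for pre-subshock speed v."""
    pg1 = v ** (-gamma_g) / (gamma_g * mg0**2)     # smooth adiabatic precursor
    pc1 = I - v - pg1                              # cosmic-ray pressure from momentum balance
    m1 = mg0 * v ** ((gamma_g + 1.0) / 2.0)        # pre-subshock gas Mach number
    if m1 <= 1.0 or pc1 < 0.0:
        return np.nan                              # not an admissible subshock location
    v2 = v * ((gamma_g - 1.0) * m1**2 + 2.0) / ((gamma_g + 1.0) * m1**2)
    pg2 = pg1 * (2.0 * gamma_g * m1**2 - (gamma_g - 1.0)) / (gamma_g + 1.0)
    pc2 = pc1                                      # continuous across the subshock
    e2 = (0.5 * v2**2 + gamma_g / (gamma_g - 1.0) * pg2 * v2
          + gamma_c / (gamma_c - 1.0) * pc2 * v2)
    return e2 - F                                  # zero => gradient-free downstream state

v_grid = np.linspace(0.05, 1.0, 4000)
g = np.array([subshock_mismatch(v) for v in v_grid])
ok = ~np.isnan(g)
cross = (np.sign(g[:-1]) != np.sign(g[1:])) & ok[:-1] & ok[1:]
print("approximate pre-subshock velocity roots:", v_grid[:-1][cross])
```

Refining each bracketed root (e.g., with scipy.optimize.brentq) and recording how the number of roots changes as the upstream Mach numbers are varied traces out, numerically, the same multiple-solution boundary that equations (4.10)-(4.12) give analytically.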
since the two new roots for are larger than the primary root , the corresponding pre - subshock gas mach numbers are also larger , and therefore the two new subshocks are stronger than the primary subshock .we conclude that in this region of the parameter space , _ three _ discontinuous subshock solutions are possible , although only one can occur in a given situation .another example is considered in figure 6_a _ , where we plot as a function of and for , which yields and . in this case , , and consequently there is _ always _ a root for below , indicating that globally smooth flow is possible for all values of .hence the `` primary '' subshock solution never appears in this example .however , for sufficiently large , a peak develops in just as in figure 5_b_. this peak rises with increasing and eventually crosses the line at when , corresponding to the appearance of a new physically acceptable subshock root for .this new root bifurcates into two roots as is increased , which can be clearly seen in figure 6_b _ , where is replotted on a much smaller scale .it follows that in this region of space , two discontinuous solutions are possible in addition to a single globally smooth solution .it is apparent from figures 5 and 6 that the onset of multiple solutions is connected with the vanishing of coupled with the additional , simultaneous condition which supplements equation ( 4.7 ) .equations ( 4.7 ) and ( 4.9 ) can be manipulated to obtain explicit expressions for the critical upstream gas and cosmic - ray mach numbers corresponding to the onset of multiple subshock solutions as functions of the pre - subshock velocity .these critical mach numbers are denoted by and , respectively .the logical procedure for obtaining the relations is straightforward , although the algebra required is somewhat tedious .the first step in the process is to solve equations ( 4.7 ) and ( 4.9 ) individually to derive two separate expressions for .equation ( 4.7 ) yields where \ , u_-^{2 \gamma_g } \ , , \eqno(4.10{\rm c})\ ] ] \ , , \eqno(4.10{\rm e})\ ] ] \ , .\eqno(4.10{\rm f})\ ] ] the solution to equation ( 4.9 ) is given by where \ , , \eqno(4.11{\rm e})\ ] ] \ , .\eqno(4.11{\rm f})\ ] ] the similarity of the dependences on in equations ( 4.10 ) and ( 4.11 ) for suggests that we can derive an exact solution for as a function of by equating these two expressions .the result obtained is where \ , u_- \bigg\ } \ , , \eqno(4.12{\rm b})\ ] ] \ , , \eqno(4.12{\rm d})\ ] ] equation ( 4.12 ) can be used to evaluate the critical upstream gas mach number as a function of the pre - subshock velocity .once is determined , we can calculate the corresponding critical upstream cosmic - ray mach number using either equation ( 4.10 ) or equation ( 4.11 ) , which both yield the same result .hence equations ( 4.10 ) , ( 4.11 ) , and ( 4.12 ) provide a direct means for calculating and as exact functions of . as an example , setting , , and yields and , in agreement with the results plotted in figure 5 . 
likewise , setting , , and yields and , in agreement with figure 6 .although our expressions for and simplify considerably in the special case , , we have chosen to derive results valid for general ( but constant ) values of and in order to obtain a full understanding of the sensitivity of the critical mach numbers to variations in the adiabatic indices .while admittedly somewhat complex , these equations can nonetheless be evaluated using a hand calculator , and replace the requirement of utilizing the root - finding techniques employed in most previous investigations of the multiple - solution phenomenon in the two - fluid model .in figure 7 we plot the critical curves for the occurrence of multiple solutions using various values of and .this is accomplished by evaluating and as parametric functions of the pre - subshock velocity using equation ( 4.12 ) along with either equation ( 4.10 ) or ( 4.11 ) .the critical curves denote the boundary of the wedge - shaped _ multiple - solution region_. inside this region , two new subshock solutions are possible , along with either the primary subshock solution or the globally smooth solution . outside this region ,the flow is either described by the primary subshock solution or else it is globally smooth .the lower - left - hand corner of the multiple - solution region curves to the left and culminates in a sharp cusp .the presence of the cusp implies that there is a minimum value of , below which multiple solutions are never possible for any value of .note that the multiple - solution region becomes very narrow as the values of and approach each other , suggesting that the physically allowed solutions converge on a single solution as the thermodynamic properties of the two populations of particles ( background gas and cosmic rays ) become more similar to each other . in the limit , multiple solutions are not allowed at all , and the single available solution is either smooth or discontinuous depending on the values of and . in figure 8 we plot all of the various critical curves for the physically important case of an ultrarelativistic cosmic - ray distribution ( ) combined with a nonrelativistic background gas ( ) .this is the most fully self - consistent example of the two - fluid model , since in this case we expect that the adiabatic indices will remain constant throughout the flow , as is assumed in the model ( see eq .[ 2.9 ] ) .the area of overlap between the multiple - solution region and the smooth - solution region implies the existence of four distinctly different domains within the parameter space . within domaini , which lies outside both the multiple - solution region and the smooth - solution region , the flow must be discontinuous , with exactly one ( i.e. 
, the primary subshock ) solution available .domain ii lies inside the multiple - solution region and outside the smooth - solution region , and therefore within this area of the parameter space , _ three _ distinct subshock solutions are possible , while globally smooth flow is impossible .domain iii is formed by the intersection of the multiple - solution region and the smooth - solution region , and therefore two discontinuous subshock solutions are available as well as one globally smooth solution .finally , domain iv lies outside the multiple - solution region and within the smooth - flow region , and therefore one globally smooth flow solution is possible .the existence of domain iv is consistent with our expectation that smooth flow will occur for sufficiently large values of the upstream cosmic - ray pressure due to diffusion of the cosmic rays .if we consider a trajectory through the parameter space that crosses into the multiple - solution region , then the sequence of appearance of the subshock roots for depends on whether the boundary is crossed through the `` top '' or `` bottom '' arcs of the wedge .we illustrate this phenomenon for and in figure 8 , where we consider two possible paths approaching the point inside the multiple - solution region , starting outside at either points or .the two paths cross the multiple - solution boundary on different sides of the wedge . in figure 9we plot along the segment ( with ) , and in figure 10 we plot along the segment ( with ) .note that the primary subshock root for already exists at points and since they both lie inside domain i. when the lower section of the multiple - solution boundary is crossed along segment ( in fig .9 ) , two new roots for appear at larger values than the primary root , which is the same sequence observed in figure 5 . however , when the _ upper _ section of the boundary is crossed along segment ( in fig .10 ) , the two new roots for appear at _ smaller _ values than the primary root .hence the order of appearance of the subshock roots for is different along each path . despite this path dependence ,the actual set of roots obtained at point is the same regardless of the approach path taken , and therefore there is no ambiguity regarding the possible subshock solutions available at any point in the parameter space . in figure 11 we present summary plots that include all of the various critical curves derived in 3 and 4 for several different values of and .note that the area of overlap between the multiple - solution region and the smooth - solution region observed when and rapidly disappears when the difference between and is reduced , due to the increasing similarity between the thermodynamic properties of the cosmic rays and the background gas .ko , chan , & webb ( 1997 ) and bulanov & sokolov ( 1984 ) have obtained similar parameter space plots depicting the critical curves for smooth flow and for the onset of multiple solutions .the parameters used to describe the incident flow conditions are in the case of ko , chan , & webb ( 1997 ) and for bulanov & sokolov ( 1984 ) , where and refer to the incident pressure ratio and total mach number , respectively ( see eqs . 
[ 1.1 ] and [ 1.2 ] ) .when these parameters are employed directly instead of the quantities utilized in our work , the critical curves must be determined by numerical root - finding .however , if desired , equations ( 1.5 ) can be used to transform our exact solutions for the critical curves from the space to the and spaces .this procedure was used to obtain the results depicted in figure 12 , which are identical to those presented by ko , chan , & webb ( 1997 ) and bulanov & sokolov ( 1984 ) .our interpretation of the various critical mach numbers derived in 3 and 4 can be verified by performing specific calculations of the flow dynamics for a few different values of the upstream mach numbers and .we shall focus here on cases with and , but this is not essential .along smooth sections of the flow , we can calculate the velocity profile by integrating the dynamical equation obtained by combining equations ( 2.21 ) and ( 4.1 ) , which yields ^{-1 } \ , \eqno(5.1)\ ] ] where and we have introduced the new spatial variable the quantities and are fiducial parameters measured at an arbitrary location along the smooth section of interest , and the constants and are functions of the incident mach numbers and via equations ( 2.27 ) and ( 2.28 ) . as discussed in 3 , if , then the flow is _ everywhere _ supersonic with respect to the gas , and therefore we can use equation ( 5.1 ) to describe the structure of the _ entire _ flow by setting the fiducial parameters and equal to the asymptotic upstream quantities and , respectively .conversely , if , then the flow must contain a discontinuous , gas - mediated subshock . in this casethe upstream region is governed by equation ( 5.1 ) with and , and the downstream region is governed by equation ( 5.1 ) with and .the pre - subshock and post - subshock regions are linked by the jump conditions given by equations ( 4.5 ) , which determine and and yield in the entire downstream region as expressed by equation ( 4.3 ) . in order to illustrate the various possible behaviors predicted by our critical conditions , we shall present one example from each of the four domains discussed in iv ( see fig . 8) calculated using and . in figure 13we plot the velocity profile obtained by integrating the dynamical equation ( 5.1 ) with and . according to figure 8 , this point lies within domain i , and therefore we expect to find a single subshock solution , and no globally smooth solution .equation ( 2.25 ) confirms that in this case the downstream root for is , and therefore smooth flow is impossible since this does not exceed the critical velocity .analysis of equation ( 4.7 ) yields a single acceptable value for the pre - subshock velocity root , with an associated pre - subshock gas mach number .figure 13 also includes plots of the gas mach number obtained using equation ( 2.21 ) , the dimensionless gas pressure obtained using equation ( 2.20 ) , and the dimensionless cosmic - ray pressure obtained using equation ( 2.16 ) .the first three quantities exhibit a jump at the subshock , whereas the cosmic - ray pressure is continuous since a jump in this quantity would imply an infinite energy flux .the overall compression ratio is and the pressures increase by the factors and between the asymptotic upstream and downstream regions .recall that and express the pressures of the two species divided by the upstream ram pressure of the gas . 
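For readers who want to reproduce velocity profiles like those in figures 13-16, the sketch below integrates a reconstructed form of the smooth-flow dynamical equation. It assumes a constant cosmic-ray diffusion coefficient kappa, so that length can be measured in units of kappa/u0; the precise normalization of the spatial variable (and hence of the right-hand side) is ours rather than the paper's equation (5.1), although the shape of the profile does not depend on that choice. The integration starts just below the upstream equilibrium and stops if a gas sonic point is reached, the signal that a gas-mediated subshock must be inserted and the downstream branch continued from the post-subshock state.

```python
import numpy as np
from scipy.integrate import solve_ivp

gamma_g, gamma_c = 5.0 / 3.0, 4.0 / 3.0
mg0, mc0 = 3.0, 1.5                      # a smooth, cosmic-ray dominated example

I = 1.0 + 1.0 / (gamma_g * mg0**2) + 1.0 / (gamma_c * mc0**2)
F = 0.5 + 1.0 / ((gamma_g - 1.0) * mg0**2) + 1.0 / ((gamma_c - 1.0) * mc0**2)

def dudxi(xi, y):
    """du/dxi along a smooth section; xi measures length in units of kappa/u0."""
    u = y[0]
    pg = u ** (-gamma_g) / (gamma_g * mg0**2)
    pc = I - u - pg
    n = (0.5 * u**2 + gamma_g / (gamma_g - 1.0) * pg * u
         + gamma_c / (gamma_c - 1.0) * pc * u - F)        # energy-flux mismatch
    mg2 = (mg0 * u ** ((gamma_g + 1.0) / 2.0)) ** 2        # (local gas Mach number)**2
    return [-(gamma_c - 1.0) * n / (1.0 - 1.0 / mg2)]

def sonic(xi, y):
    """Terminate if the gas sonic point is reached (the smooth branch breaks down)."""
    return mg0 * y[0] ** ((gamma_g + 1.0) / 2.0) - 1.0
sonic.terminal = True

# start just below the (gradient-free) upstream equilibrium u = 1
sol = solve_ivp(dudxi, (0.0, 200.0), [1.0 - 1e-4], events=sonic,
                max_step=0.5, rtol=1e-8)
print("asymptotic downstream velocity (approx.):", sol.y[0, -1])
```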
hence in this example the two species each absorb comparable fractions of the upstream ram pressure .note that the increase in the cosmic - ray pressure is entirely due to the smooth part of the transition upstream from the discontinuous subshock , while the gas pressure experiences most of its increase in crossing the subshock .in figure 14 we set and , corresponding to domain ii in figure 8 .we therefore expect to find three distinct , physically acceptable subshock solutions associated with these parameter values .analysis of equation ( 2.25 ) verifies that no smooth solutions are possible in this case because the downstream root is less than the critical velocity .equation ( 4.7 ) indicates that the three acceptable roots for the pre - subshock velocity are with associated pre - subshock gas mach numbers respectively .the gas and cosmic - ray pressures increase by the factors and and the overall compression ratios are .note that the solution with the _ largest _ cosmic - ray pressure has the _ largest _ overall compression ratio and the _ weakest _ subshock .this is due to the fact that the cosmic - ray pressure is amplified in the smooth - flow region upstream from the subshock , and this region is most extended when the flow does not encounter a subshock until the smallest possible value of .conversely , the solution with the largest gas pressure is the one with the strongest subshock and the smallest overall compression ratio . in figure 15we use the upstream parameters and .this point lies within domain iii in figure 8 , and therefore we expect to find two distinct subshock solutions along with one globally smooth solution .equation ( 2.25 ) confirms that in this case a smooth solution is possible since the downstream root exceeds the critical velocity .analysis of equation ( 4.7 ) yields two acceptable values for the pre - subshock velocity given by .the corresponding pre - subshock gas mach numbers are respectively . in the discontinuous ( subshock )solutions the gas and cosmic - ray pressures increase by the factors and and the overall compression ratios are .the subshocks in this example are strong , and therefore in the solutions containing a subshock most of the deceleration is due to the buildup of the pressure of the background gas , rather than the cosmic - ray pressure .hence both of the subshock solutions are gas - dominated .in the globally smooth solution , which is cosmic - ray dominated , the pressures increase by the factors and , and the compression ratio is . 
in this case the cosmic rays absorb almost all of the ram pressure of the upstream gas . finally , in figure 16 we set and . according to figure 8 , this point lies within domain iv , and therefore we expect that only a single , globally - smooth solution exists for these upstream parameters . this prediction is verified by equation ( 4.7 ) , which confirms that no acceptable subshock roots for exist . furthermore , equation ( 2.25 ) indicates that smooth flow is possible since the downstream root exceeds the critical velocity . hence the only acceptable solution is globally smooth , with the pressure increases given by and . the overall compression ratio is and the flow is cosmic - ray dominated . in this paper we have obtained a number of new analytical results describing the critical behavior of the two - fluid model for cosmic - ray modified shocks . it is well known that in this model , up to three distinct solutions are possible for a given set of upstream boundary conditions . the behaviors of the various solutions can be quite diverse , including flows that are smooth everywhere as well as flows that contain a discontinuous , gas - mediated subshock . the traditional approach to the problem of determining the types of possible solutions , employed by ko , chan , & webb ( 1997 ) and bulanov & sokolov ( 1984 ) , is based on stating the upstream boundary conditions in terms of the incident total mach number and the incident pressure ratio ( see eqs . [ 1.1 ] and [ 1.2 ] ) . in this approach the determination of the available solution types requires several steps of root - finding , and there is no possibility of obtaining analytical expressions for the critical relationships . the analysis presented here utilizes a fresh approach based upon a new parameterization of the boundary conditions in terms of the upstream gas and cosmic - ray mach numbers and , respectively . the analytical results we have obtained in 3 and 4 for the critical upstream mach numbers expressed by equations ( 3.4 ) , ( 4.11 ) , and ( 4.12 ) provide for the first time a systematic classification of the entire parameter space for the two - fluid model , which remains one of the most powerful and practical means available for studying the problem of cosmic - ray modified shocks . these expressions eliminate the need for complex root - finding procedures in order to understand the possible flow dynamics for a given set of upstream boundary conditions , and are made possible by the symmetry between the gas and cosmic - ray parameters as they appear in the expressions describing the asymptotic upstream and downstream states of the flow . we have compared our quantitative results with those of ko , chan , & webb ( 1997 ) and bulanov & sokolov ( 1984 ) , and they are found to be consistent . the results are valid for arbitrary ( but constant ) values of the gas and cosmic - ray adiabatic indices and , respectively .
in 5we have presented numerical examples of flow structures obtained in each of the four parameter space domains defined in figure 8 .these examples verify the predictions made using our new expressions for the critical mach numbers , and confirm that the largest overall compression ratios are obtained in the globally - smooth , cosmic - ray dominated cases .the existence of multiple distinct solutions for a single set of upstream boundary conditions demands that we include additional physics in order to determine which solution is actually realized in a given situation .this question has been addressed by numerous authors using various forms of stability analysis as well as fully time - dependent calculations .dv speculated that when three distinct solutions are allowed ( in domains ii and iii of the parameter space plotted in fig .8) , the solution with the intermediate value of the cosmic - ray pressure will be unstable .the argument is based on the idea that if the cosmic - ray pressure were to increase slightly due to a small perturbation , then the gas would suffer additional deceleration , leading to a further increase in the cosmic - ray pressure .this nonlinear process would drive the flow towards the steady - state solution with the largest value of .conversely , a small decrease in the cosmic - ray pressure would decrease the deceleration , leading to a smaller value for the cosmic - ray pressure . in this casethe flow would be driven towards the steady - state solution with the smallest value of .recently , mond & drury ( 1998 ) have suggested that this type of behavior may be realized as a consequence of a corrugational instability .other authors ( e.g. , drury & falle 1986 ; kang , jones , & ryu 1992 ; zank , axford , & mckenzie 1990 ; ryu , kang , & jones 1993 ) have argued that the globally smooth , cosmic - ray dominated solutions are unstable to the evolution of mhd waves in certain situations .jones & ellison ( 1991 ) suggest that even when formally stable , the smooth solutions may not be realizable in nature . on the other hand , donohue , zank , & webb ( 1994 ) report time - dependent simulations which seem to indicate that the smooth, cosmic - ray dominated solution is indeed the preferred steady - state solution in certain regions of the parameter space .hence , despite the fact that much effort has been expended in analyzing the stability properties of cosmic - ray modified shocks , there is still no clear consensus regarding which of the steady - state solutions ( if any ) is stable and therefore physically observable for an arbitrary set of upstream conditions . in light of the rather contradictory state of affairs regarding the stability properties of the various possible dynamical solutions , we propose a new form of _ entropy - based _ stability analysis . in this method ,the entropy of the cosmic - ray distribution is calculated by first solving the transport equation ( 2.1 ) for the cosmic - ray distribution and then integrating to obtain the boltzmann entropy per cosmic ray , where is boltzmann s constant , is the cosmic - ray number density , is planck s constant , is the system volume , and . 
according to equation ( 2.3 ) , gives the probability that a randomly selected cosmic ray at location has momentum in the range between and . the cosmic - ray entropy density is computed using where the final two terms stem from the fundamental indistinguishability of the cosmic ray particles ( of like composition ) , and is necessary in order to avoid the gibbs paradox ( reif 1965 ) . the inconvenient reference to the system volume can be removed by combining equations ( 6.1 ) and ( 6.2 ) to obtain the equivalent expression the total entropy per particle for the combined gas - particle system is calculated using where and denote the entropy density and the number density of the background gas , respectively . one may reasonably hypothesize that the state with the largest value for will be the preferred state in nature . this criterion may prove useful for identifying the most stable solution when multiple solutions are available . we plan to pursue this line of inquiry in future work . the results may shed new light on the structure of cosmic - ray modified shocks . the authors acknowledge several stimulating conversations with frank jones and truong le . the authors are also grateful to the anonymous referee for several useful comments . pab also acknowledges support and hospitality from nasa via the nasa / asee summer faculty fellowship program at goddard space flight center . in this section we demonstrate that the cosmic - ray energy flux must be continuous across a velocity discontinuity ( subshock ) , if one is present in the flow . this in turn implies that the subshock must be mediated entirely by the pressure of the background gas , and therefore the discontinuity is governed by the classical rankine - hugoniot jump conditions . the cosmic - ray energy flux in the direction is given by in a steady state , the cosmic - ray energy equation ( 2.7 ) reduces to which can be combined with equation ( a1 ) to express the derivative of as integration in the vicinity of the subshock located at yields in order to avoid unphysical divergence of at the subshock , must be continuous , and therefore the integrand on the right - hand side of equation ( a4 ) is no more singular than a step function . this implies that in the limit , the right - hand side vanishes , leaving hence remains constant across the subshock . the energy , momentum , and particle fluxes of the gas and the cosmic rays are therefore independently conserved across the discontinuity . this allows us to use the standard rankine - hugoniot jump conditions to describe the subshock transition in equations ( 4.5 ) . topology of the function given by eq . the quantities and respectively denote the upstream and downstream roots for the velocity in a globally smooth solution . critical upstream cosmic - ray mach number ( eq . [ 3.4 ] ) for smooth flow plotted as a function of the upstream gas mach number in the parameter space for ( _ a _ ) and various values of as indicated ; ( _ b _ ) and various values of as indicated . smooth flow is not possible in the region above each curve . critical upstream gas mach number for smooth flow ( eq . [ 3.8 ] ) plotted as a function of the gas and cosmic - ray adiabatic indices and , respectively . when , smooth flow is possible for any value of . schematic depiction of the function ( eq . [ 2.25 ] ) .
if the flow contains a discontinuous , gas - mediated subshock , then the velocity must jump _ directly _ to the final asymptotic value in crossing the shock .otherwise the flow is unstable ( see the discussion in the text ) . function ( eq . [ 4.8 ] ) is plotted for the parameters ( _ a _ ) , , .the value of is indicated for each curve . in this example , , and therefore the primary subshock solution appears when the low - velocity root , which occurs when .the same function is plotted on a smaller scale in ( _ b _ ) , where we see that two new subshock roots for appear when .fig . 6 . function ( eq . [ 4.8 ] ) is plotted for the parameters ( _ a _ ) , , .the value of is indicated for each curve . in this example , , and therefore the primary subshock solution never appears .hence smooth flow is possible for all values of .the same function is plotted on a smaller scale in ( _ b _ ) , where we see that two new subshock roots for appear when . fig . critical mach numbers and for the onset of multiple solutions ( eqs .[ 4.11 ] and [ 4.12 ] ) are plotted as parametric functions of the pre - subshock velocity in the parameter space for ( _ a _ ) and the indicated values of ; ( _ b _ ) and the indicated values of .the interior of each wedge is the multiple - solution region for the associated parameters .fig . 8 . critical upstream mach numbers for the occurrence of multiple solutions ( eqs .[ 4.11 ] and [ 4.12 ] ; _ solid line _ ) and for smooth flow ( eq . [ 3.4 ] ; _ dashed line _ ) are plotted together in the parameter space for the case and . the minimum upstream cosmic - ray mach number required for decelerating flowis also shown ( eq .[ 2.32 ] ; _ dotted line _ ) .there are four distinct domains in the parameter space as discussed in the text . function ( eq . [ 4.8 ] ) is plotted for the parameters ( _ a _ ) , , along the segment in fig .the values of the upstream cosmic - ray mach number are ( _ solid line _ ) , ( _ dashed line _ ) , ( _ dotted line _ ) .when , there are three distinct subshock solutions available . in panel( _ b _ ) the same function is plotted on a smaller scale . function ( eq . [ 4.8 ] ) is plotted for the parameters ( _ a _ ) , , along the segment in fig .. the values of the upstream gas mach number are ( _ solid line _ ) , ( _ dashed line _ ) , ( _ dotted line _ ) . when , there are three distinct subshock solutions available . in panel( _ b _ ) the same function is plotted on a smaller scale . critical upstream mach numbers for the occurrence of multiple solutions ( eqs .[ 4.11 ] and [ 4.12 ] ; _ solid line _ ) and for smooth flow ( eq . [ 3.4 ] ; _ dashed line _ ) are plotted together in the parameter space for ( _ a _ ) , ; ( _ b _ ) , ; ( _ c _ ) , ; ( _ d _ ) , . also indicated is the minimum value of required for decelerating flow ( eq . [ 2.32 ] ; _ dotted line _ ) . our analytical results for the critical curves generated using equations ( 3.4 ) , ( 4.11 ) , and ( 4.12 ) are combined with equations ( 1.5 ) to create corresponding curves in the alternative parameter spaces and employed by bulanov & sokolov ( 1984 ) and ko , chan , & webb ( 1997 ) , respectively .panel ( _ a _ ) , with and , is identical to fig . 4 of bulanov & sokolov ( 1984 ) .panel ( _ b _ ) , with and , is identical to fig .1(_a _ ) of ko , chan , & webb ( 1997 ) .note that these authors generated their curves using root - finding procedures .the interpretation of the line styles is the same as in fig .fig . 13 . 
numerical solutions for ( _ a _ ) , ( _ b _ ) , ( _ c _ ) , and ( _ d _ ) are plotted as functions of ( see eq . [ 5.3 ] ) . the solutions were obtained by integrating the dynamical eq . [ 5.1 ] with and , which corresponds to domain i in fig . 8 . in this case one discontinuous solution is possible , and smooth flow is impossible . fig . 14 . same as fig . 13 , except and , which corresponds to domain ii in fig . 8 . in this case three distinct discontinuous solutions are possible , and smooth flow is impossible . the values of the pre - subshock gas mach number are ( _ solid line _ ) , ( _ dashed line _ ) , ( _ dotted line _ ) . fig . 15 . same as fig . 13 , except and , which corresponds to domain iii in fig . 8 . in this case two distinct discontinuous solutions are possible in addition to one globally smooth solution ( _ solid line _ ) . the values of the pre - subshock gas mach number are ( _ dashed line _ ) , ( _ dotted line _ ) . fig . 16 . same as fig . 13 , except and , which corresponds to domain iv in fig . 8 . in this case one globally smooth solution is possible , and discontinuous flow is impossible .
modern x - ray observations show complex structures in both the spatial and spectral domains of various astrophysical sources . nonetheless , active galactic nuclei ( agn ) , including quasar nuclei , remain spatially unresolved even with the highest - resolution x - ray telescopes . most of their energy is released within the unresolved core , and only spectral and timing information is available to study the nature of the x - ray emission . generally speaking , emission and absorption lines constitute an important part of the x - ray spectrum in that they can provide information as to the state of the plasma . one of the goals of x - ray data analysis is to understand the components present in the spectrum , and to obtain information about the emission and absorption features , as well as their locations and relation to the primary quasar emission . the detection of weak lines in noisy spectra is the main statistical problem in such analyses : is a bump observed in the spectrum related to a real emission line , or is it simply an artifact of the poissonian noise ? although quasar x - ray spectra are usually featureless , as expected based on the comptonization process ( see for example ) , an important x - ray emission feature identified in agn and quasar spectra is the iron k emission line ( see the recent review by ) . determining the origin and the nature of this line is one of the main issues in agn and quasar research . this line is thought to come directly from the illuminated accretion flow as a fluorescence process . the location of the line in the spectrum indicates the ionization state of iron in the emitting plasma , while the width of the line tells us the velocity of the plasma . the iron line provides a direct probe of the innermost regions of the accretion flow and of matter in the close vicinity of a black hole . absorption features associated with outflowing matter ( warm wind , partial covering absorber ) have also been observed in recent x - ray observations . although the location and width of absorption lines provide information as to the velocity of the absorber and its distance from the quasar , this article focuses on statistical issues in fitting the spectral location of narrow emission lines , i.e. , identifying the ionization state . there are two parts to the fe - k - alpha emission line observed in agn : one is a broad component thought to be a signature of relativistic motion in the innermost regions of an accretion flow ; the other is a narrow component that is a result of reflection off material at larger distances from the central black hole . a detection of the broad component is challenging as it requires spectral coverage over a large energy range , so that the continuum emission is well determined and the broad line can be separated . the relativistic line profile is broad and skewed , and the two strong peaks of an emission line that originates in a relativistic disk can be prominent and narrow .
while the full profile of the broad line may not be easily separable from the continuum , these two peaks may provide a signature for this line in the x - ray spectrum .the broad fe - line gives an important diagnostic of the gas motion and can be used to determine the spin of a black hole ; see also an alternative model for the red wing " component by .the narrow component of the line gives diagnostics of the matter outside the accretion disk and conditions at larger distances from the black hole ; see fe - line baldwin effect discussion in .both line components are variable and the line may `` disappear '' from the spectrum .the spectral resolution of x - ray ccd detectors ( for example 100 - 200 ev in acis on _ chandra _ or epic on xmm-_newton _ ) is relatively low with respect to the predicted width of narrow ( km s ) emission or absorption lines in agn and quasars .observations with grating instruments ( rgs or heg ) can provide high resolution x - ray spectra , but the effective area of the present x - ray telescopes is too low for efficient agn detections , and only a handful of bright low redshift sources have been observed with gratings to date . therefore mainly the x - ray ccd spectra of lower resolution are used to study large samples of agn and quasars ( see for example * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) . using these relatively low resolution x - ray detectors, the fe - k - alpha emission line can be narrow enough to be contained entirely in a single detector bin . in some cases ( for example in _chandra _ ) the line may occupy a few bins . in this articlewe focus on the statistical problem of fitting the spectral location of an emission line or a set of emission lines that are narrow .this is a common objective in high - energy analyses , but as we shall discuss fitting these relatively narrow features poses significant statistical challenges .in particular we find evidence that using line profiles that are narrower than we actually expect the emission line to be can improve the statistical properties of the fitted emission line location .x - ray spectra , such as those available with the _ chandra x - ray observatory _ carry much information as to the quasar s physics . taking advantage of the spectral capacity of such instruments , however , requires careful statistical analysis .for example , the resolution of such instruments corresponds to a fine discretization of the energy spectrum . as a result, we expect a low number of counts in each bin of the x - ray spectrum .such low - count data make the gaussian assumptions that are inherent in traditional minimum fitting inappropriate .a better strategy , which we employ , explicitly models photon arrivals as an inhomogeneous poisson process . in addition, data are subject to a number of processes that significantly degrade the source counts , e.g. , the absorption , non - constant effective area , blurring of photons energy , background contamination , and photon pile - up .thus , we employ statistical models that directly account for these aspects of data collection .in particular , we design a highly structured multilevel spectral model with components for both the data collection processes and the complex spectral structures of the sources themselves . 
in this highly structured spectral model, a bayesian perspective renders straightforward methods that can handle the complexity of _ chandra _ data .as we shall illustrate , these methods allow us to use low - count data , to search for the location of a narrow spectral line , to investigate its location s uncertainty , and to construct statistical tests that measure the evidence in the data for including the spectral line in the source model .the energy spectrum can be separated into two basic parts : a set of continuum terms and a set of several emission lines .we begin with a standard spectral model that accounts for a single continuum term along with several spectral lines . throughout this paper, we use as a general representation of model parameters in the spectral model .the components of represent the collection of parameters for the continuum , ( emission ) lines , absorption , and background contamination , respectively .( notice that the roman letters in the superscripts serve as a mnemonic for these four processes . ) because the x - ray emission is measured by counting the arriving photons , we model the expected poisson counts in energy bin , where is the set of energy bins , as where and are the width and mean energy of bin , is the expected counts per unit energy due to the continuum term at energy , is the set of free parameters in the continuum model , is the number of emission lines , is the expected counts due to the emission line , and is the proportion of an emission line centered at energy and with width that falls into bin .there are a number of smooth parametric forms to describe the continuum in some bounded energy range ; in this article we parameterize the continuum term as a power law , i.e. , where and represent the normalization and photon index , respectively .the emission lines can be modeled via the proportions using narrow gaussian distributions , lorentzian distributions , or delta functions ; the counts due to the emission line are distributed among the bins according to these proportions .while the gaussian or lorentzian function parameterizes an emission line in terms of center and width , the center is the only free parameter with a delta function ; the width of the delta function is effectively the width of the energy bin in which it resides . while the model in equation [ park: eq : ideal ] is of primary scientific interest , a more complex statistical model is needed to address the data collection processes mentioned in [ park : sec : hea ] .we use the term _ statistical model _ to refer to the model that combines the _ source _ or _astrophysical model _ with a model for the stochastic processes involved in data collection and recording .thus , in addition to the source model , the statistical model describes such processes as instrument response and background contamination . specifically , to account for the data collection processes , equation [ park : eq : ideal ] is modified via where is the expected observed poisson counts in detector channel , is the set of detector channels , is the probability that a photon that arrives with energy corresponding to bin is recorded in detector channel ( i.e. , is the so - called redistribution matrix or rmf commonly used in x - ray analysis ) , is the effective area ( i.e. 
, arf , a calibration file associated with the x - ray observation ) of bin , is the probability that a photon with energy is _ not _ absorbed , is the collection of parameters for absorption , and is the poisson intensity of the background counts in channel . while the scatter probability and the effective area are presumed known from calibration , the absorption probability is parameterized using a smooth function ; see for details . to quantify background contamination , a second data set is collected that is assumed to consist only of background counts ; the background photon arrivals are also modeled as an inhomogeneous poisson process . unfortunately , the statistical methods and algorithms developed in can not be directly applied to fitting _ narrow _ emission lines . there are three obstacles that must be overcome in order to extend bayesian highly structured models to spectra containing narrow lines . in particular , we must develop ( 1 ) new computational algorithms , ( 2 ) statistical summaries and methods for inference under highly multimodal posterior distributions , and ( 3 ) statistical tests that allow us to quantify the statistical support in the data for including an emission line or lines in the model . our main objective in this paper is to extend the methods of in these three directions , and to evaluate and illustrate our proposals . here we discuss each of these challenges in detail .

challenge 1 : statistical computation .

fitting the location of narrow lines requires new and more sophisticated computational techniques than those developed by . indeed , the algorithms that we develop require a new theoretical framework for statistical computation : they are not examples of any existing algorithm with known properties . although the details of this generalization are well beyond the scope of this article , we can offer a heuristic description ; a more detailed description is given in appendix [ ap : alg ] . readers who are interested in the necessary theoretical development of the statistical computation techniques are directed to and . the algorithms used by to fit the structured bayesian model described in [ park : sec : model ] are based on the probabilistic properties of the statistical models . for example , the parameters of a gaussian line profile can be fit by iteratively attributing a subset of the observed photons to the line profile and using the mean and variance of these photon energies to update the center and width of the line profile . the updated parameters of the line profile are used to again attribute a subset of the photons to the line , i.e. , to stochastically select a subset of the photons that are likely to have arisen out of the physical processes at the source corresponding to the emission line . these algorithms are typically very stable . for example , they only return statistically meaningful parameters because the algorithms themselves mimic the probabilistic characteristics of the statistical model . the family of expectation / maximization ( em ) algorithms and markov chain monte carlo ( mcmc ) methods such as the gibbs sampler are examples of statistical algorithms of this sort . a drawback of these algorithms is that in some situations they can be slow to converge .
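Before turning to the difficulties that narrow lines create, here is a schematic python version of the folded spectral model just described and of its poisson log-likelihood. Everything instrument-related (the redistribution matrix, effective area, absorption curve, background level, and energy grid) is a stand-in for real calibration products, and the parameterization is deliberately simplified to a single power law plus one gaussian line; the sketch shows how the pieces of the statistical model combine rather than reproducing the cited implementation.

```python
import numpy as np
from scipy.stats import norm, poisson

def folded_model(pars, e_lo, e_hi, rmf, arf, absorb, bkg):
    """Expected detector-channel counts for a power-law continuum plus one
    Gaussian emission line, folded through a (placeholder) instrument response.

    pars = (norm, Gamma, line_flux, line_center, line_sigma); e_lo, e_hi are
    the true-energy bin edges; rmf has shape (n_energy_bins, n_channels).
    """
    a, gamma, f_line, mu, sig = pars
    e_mid = 0.5 * (e_lo + e_hi)
    cont = a * e_mid ** (-gamma) * (e_hi - e_lo)               # continuum counts per bin
    line = f_line * (norm.cdf(e_hi, mu, sig) - norm.cdf(e_lo, mu, sig))
    src = absorb * (cont + line)                               # absorbed source spectrum
    return (arf * src) @ rmf + bkg                             # fold and add background

def poisson_loglike(counts, expected):
    """Poisson log-likelihood of the observed channel counts."""
    return np.sum(poisson.logpmf(counts, expected))

# --- toy setup: all calibration products below are schematic stand-ins ------
edges = np.linspace(0.5, 8.0, 301)
e_lo, e_hi = edges[:-1], edges[1:]
rmf = np.eye(300)                    # ideal (diagonal) redistribution for the sketch
arf = np.full(300, 50.0)             # flat effective area times exposure, schematic
absorb = np.ones(300)                # no absorption in this toy example
bkg = np.full(300, 0.2)

truth = (10.0, 1.8, 30.0, 2.85, 0.02)
mu = folded_model(truth, e_lo, e_hi, rmf, arf, absorb, bkg)
y = np.random.default_rng(1).poisson(mu)
print(poisson_loglike(y, mu))
```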
when fitting the location of a gaussian emission line, for example, the location is updated more slowly if the line profile is narrower. this is because only photons with energies very close to the current value of the line location can be attributed to the line. updating the line location with the mean of the energies of these photons cannot result in a large change in the emission line location. the situation becomes severe when a delta function is used to model the line profile: the line location parameter sticks at its starting value throughout the iteration. it is to circumvent this difficulty that we develop both new em-type algorithms and new mcmc samplers specially tailored for fitting narrow lines. our new samplers are motivated by the gibbs sampler, but constitute a non-trivial generalization of gibbs sampling known as partially collapsed gibbs sampling; see appendix [ap:alg]. our updated versions of both classes of algorithms are able to fit narrow lines by avoiding the attribution of photons to the emission line during the iteration. such algorithms tend to require fewer iterations to converge regardless of the width of the emission line. because they involve additional evaluations of quantities involving the large dimensional redistribution matrix, M, however, each iteration of these algorithms can be significantly more costly in terms of computing time. a full investigation of the relative merits of the algorithms and a description of how the computational trade-offs can be played to derive optimal algorithms are beyond the scope of this paper. except in appendix [ap:alg], we do not discuss the details of the algorithms further in this article; interested readers are directed to the cited works.

challenge 2: multimodal likelihoods.
the likelihood function for the emission line location(s) is highly multimodal. each mode corresponds to a different relatively likely location for an emission line or a set of emission lines. standard statistical techniques such as computing estimates of the line locations with their associated error bars or confidence intervals implicitly assume that the likelihood function is unimodal and bell shaped. because this assumption is clearly and dramatically violated, these standard summary statistics are unreliable and inadequate. unfortunately, there are no readily available and generally applicable simple statistical summaries to handle highly multimodal likelihoods. instead we must develop summaries that are tailored to the specific scientific goals in a given analysis. because general strategies for dealing with multimodal likelihood functions are little known to astronomers, and specific strategies for dealing with multimodal likelihood functions for the location of narrow spectral lines do not exist, one of the primary goals of this article is to develop and illustrate these methods. a fully bayesian analysis of our spectral model with narrow emission lines is computationally demanding, even with our new algorithms. thus, we develop techniques that are much quicker and give similar results for the location of emission lines. these methods, based on the so-called _ profile posterior distribution _, do not stand on as firm a theoretical footing as a fully bayesian analysis, but are much quicker and thus better suited for _ exploratory data analysis_.
the profile posterior distribution along with our exploratory methods are fully described and compared with the more sophisticated bayesian analysis.

challenge 3: testing for the presence of narrow lines.
in addition to fitting the location of one or more emission lines, we often would like to perform a formal test for the inclusion of the emission lines in the statistical model. that is, we would like to quantify the evidence in a potentially sparse data set for a particular emission line in the source. testing for a spectral line is an example of a notoriously difficult statistical problem in which the standard theory does not apply. there are two basic technical problems. first, the simpler model that does not include a particular emission line is on the boundary of the larger model that does include the line. that is, the intensity parameter of an emission line is zero under the simpler model and cannot be negative under the larger model. an even more fundamental problem occurs if either the line location or width is fit, because these parameters have _ no value _ under the simpler model. the behavior (i.e., sampling distribution) of the likelihood ratio test statistic under the simpler model is not well understood and cannot be assumed to follow the standard chi-square distribution, even asymptotically. protassov _ et al_. propose a monte-carlo-based solution to this problem based on the method of posterior predictive p-values. in this article we extend the application of protassov _ et al_.'s solution to the case when we fit the location of a narrow emission line, a situation that was avoided in that work.

the remainder of the article is organized into four sections. [park:sec:model-based] reviews bayesian inference and monte carlo methods with an emphasis on multimodal distributions, outlines our computation methods, proposes new summaries of multimodal distributions, and describes exploratory statistical methods in this setting. we introduce illustrative examples in [park:sec:model-based], but detailed spectral analysis is postponed in order to allow us to focus on our proposed methods. in [park:sec:simul], a simulation study is performed to investigate the statistical properties of our proposed methods, with some emphasis placed on the potential benefits of model misspecification. [park:sec:quasar] presents the analysis of the high redshift quasar pg1634+706, and how to test for the inclusion of the line in the spectral model. concluding remarks appear in [park:sec:conclusion]. an appendix outlines the computational methods we developed specifically for fitting the location of narrow emission lines.

using a poisson model for the photon counts, the likelihood function of the parameter \theta in the spectral model described in [park:sec:model] is given by

L(\theta | Y) = \prod_{l \in \mathcal{L}} \Xi_l(\theta)^{Y_l}\, e^{-\Xi_l(\theta)} / Y_l!,

where Y_l is the observed count in detector channel l.

to investigate the statistical properties of our methods, we simulate spectra under six cases that differ in the number and strength of the emission lines:

case 1: there is no emission line in the spectrum.
case 2: there is a weak narrow gaussian emission line in the spectrum.
case 3: there is a moderate gaussian emission line at 2.85 kev in the spectrum.
case 4: there is a strong, narrow gaussian emission line at 2.85 kev in the spectrum.
case 5: there are two narrow gaussian emission lines of comparable strength, one at 1.20 kev and the other at 2.85 kev, in the spectrum.
case 6: there are two narrow gaussian emission lines, one at 1.20 kev and the other at 2.85 kev, with one line much stronger than the other, in the spectrum.

that is, each spectrum has either 0, 1, or 2 lines. data are recorded in an energy grid with bin width 0.01 kev. thus, the narrow emission lines (i.e., cases 2, 4, 5, and 6) correspond to about 17 energy bins and the moderate one (i.e., case 3) to about 85 energy bins. as compared to the _ chandra _ resolution, the delta function emission line profile corresponds to 1 energy bin and thus does not correctly specify the width of the gaussian emission lines in this simulation. through the simulation study, however, we illustrate a possible advantage of this model misspecification in producing valid and efficient estimates and associated uncertainties for the line location. we show that using delta-function emission lines in the model is a useful strategy even when the true line occupies multiple bins.

for each of the six spectra, we generate twenty test data sets (120 data sets in total), each with about 1500 counts, similar to the observed number of counts in the _ chandra _ x-ray spectrum of pg1634+706 analyzed in [park:sec:quasar], mimicking the real data situation. each spectrum has a power law continuum with the same normalization and photon index in every case. our simulation is done with the sherpa software in ciao, assuming the _ chandra _ responses (effective area and instrument response function) and no background contamination.

for each of the test data sets, we run state-of-the-art mcmc samplers to fit a spectral model with a single delta function emission line. based on the monte carlo draws collected from the multiple chains of the mcmc samplers, the top two rows of figure [park:fig:simul] present the marginal posterior distribution of the delta function line location for one simulation under each of the six cases; the vertical dashed lines represent the true line locations. the marginal posterior density is smoothed using gaussian kernel smoothing with standard deviation 0.01 kev, as described in [park:sec:mm]. when there is no emission line in the spectrum (i.e., case 1), the posterior distribution of the delta function line location is highly multimodal. in the case of a weak narrow gaussian emission line (i.e., case 2), the marginal posterior distribution often remains highly multimodal, but one mode typically identifies the true line location. in practice, the local mode(s) of such a highly multimodal posterior distribution may suggest plausible line locations and show evidence for multiple lines; see [park:sec:simul2]. even with a moderate line (i.e., case 3), the true line location appears well estimated with the marginal posterior distribution of the delta function line location. as we shall see in table [park:tbl:simulhpd], however, in this case the resulting posterior region under-covers the true values because the true line is 85 times wider than the specified model.
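The kernel-smoothed marginal posterior and the (possibly disjoint) HPD regions used throughout this discussion can be approximated from MCMC draws along the following lines; this is a minimal Python sketch that bins the draws on the 0.01 keV grid, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smoothed_marginal(draws, grid_edges, kernel_kev=0.01):
    """Bin MCMC draws of the line location on a fine energy grid and smooth
    the histogram with a Gaussian kernel of standard deviation kernel_kev."""
    counts, _ = np.histogram(draws, bins=grid_edges)
    bin_width = grid_edges[1] - grid_edges[0]
    dens = gaussian_filter1d(counts.astype(float), sigma=kernel_kev / bin_width)
    return dens / dens.sum()

def hpd_region(prob, grid_edges, level=0.95):
    """Highest posterior density region at the given level, returned as a list
    of (lo, hi) energy intervals; for a multimodal posterior the intervals
    can be disjoint."""
    order = np.argsort(prob)[::-1]
    keep = np.zeros(prob.size, dtype=bool)
    keep[order[np.cumsum(prob[order]) <= level]] = True
    intervals, start = [], None
    for j, inside in enumerate(keep):
        if inside and start is None:
            start = grid_edges[j]
        elif not inside and start is not None:
            intervals.append((start, grid_edges[j]))
            start = None
    if start is not None:
        intervals.append((start, grid_edges[-1]))
    return intervals
```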
with the strong narrow gaussian line (i.e., case 4), the posterior distribution of the line location tends to be unimodal, and the posterior mode correctly identifies the true line location. the posterior distribution for case 5 in figure [park:fig:simul] is bimodal, with the modes corresponding to the two true line locations. when multiple lines are present in a spectrum, the posterior distribution of the single line location can be multimodal, as shown in case 5 of figure [park:fig:simul]. thus the multiple modes may be indicative of multiple lines; see [park:sec:simul2] for details. when one of two narrow gaussian emission lines is much stronger (i.e., case 6), the single delta function line model tends to identify only one of the two true line locations. to visualize the uncertainty of the fitted delta function line location(s), the bottom two rows of figure [park:fig:simul] show the hpd graphs constructed with 100 hpd regions as described in [park:sec:mm].

the multimodality in the marginal posterior distribution of a single line location may indicate the existence of multiple lines in a spectrum, provided the lines are well separated. when a model is fitted with one emission line, modes in the likelihood function of the line location correspond to ranges of energy with excess emission relative to the continuum. multiple modes in the likelihood indicate that there are multiple ranges of energy with such excess emission. the height of a mode is indicative of the degree of excess. thus, if there are several emission lines, we might expect to see several corresponding modes in the likelihood. if there is one energy range that dominates in terms of excess emission, however, it corresponds to the dominant mode of the likelihood. thus, if there are lines of very different intensities, only the strongest ones may show up as a mode of the likelihood. this can be seen by comparing case 5 and case 6 in figure [park:fig:simul].

if there is evidence for multiple lines in a spectrum or if we suspect multiple lines a priori, we can fit a model with two or more lines. we illustrate this using simulated data under case 2 (one narrow line) and case 5 (two narrow lines). beginning with case 2, the actual spectrum has only one line, but we investigate what happens when we fit two lines to these data. a scatterplot of the two fitted line locations identified when fitting two emission lines to one of the data sets generated under case 2 is presented in the top left panel of figure [park:fig:simul-twolines]. there is a label switching problem between the two fitted line locations because of the symmetry of the emission lines in the model. we can remove the symmetry by imposing a constraint on the line locations. to do this, we first fit the model with a single delta function line profile and compute the posterior mode of its line location. returning to the model with two fitted delta functions, we separate the two fitted line locations by setting the "first" line location to be the one closest to the posterior mode.
in case 2 ,the posterior mode for the single line location is 2.815 kev , so that the first line location is the line location closest to 2.815 kev and the second line is the other location .the two panels in the top right corner of figure [ park : fig : simul - twolines ] show the resulting marginal posterior distributions of the two fitted line locations .as shown in the figure , the marginal posterior distribution of the first line location correctly identifies the true line location .the spectrum used to generate the data under case 2 has no second line , so that the marginal posterior distribution of the second line is highly multimodal . in practice, we may take the local mode(s ) in the second marginal posterior distribution as candidates for another line location .however , the resulting hpd regions for the second line are wide , indicating that either there is no second line or if there is , it can not be well identified .when two emission lines are present in a spectrum , we follow the same procedure as illustrated using the data generated under case 5 .the bottom left panel of figure [ park : fig : simul - twolines ] shows the scatterplot of two line locations identified in the spectrum of case 5 .label - switching is handled as above using the posterior mode for the single line location , which is computed as 2.855 kev .the fitted marginal posterior distributions of the first and second line locations are given in the bottom right corner of figure [ park : fig : simul - twolines ] .when there are two emission lines in a spectrum , the two true line locations are precisely specified by the two marginal posterior distributions .this is a verification of what is suggested by the multiple modes in the marginal posterior distribution of the single line location shown in figure [ park : fig : simul ] .the possibility of model misspecification when using a delta function to model an emission line depends on both the width of the true line and the resolution of the detector .misspecification only occurs when the line is not contained in one energy bin , and we shall illustrate that such misspecification only has statistical consequences for the fitted line location if it is very severe . 
indeed, there can be a possible statistical advantage to using a delta function rather than a gaussian line if we know the spectral line is not too wide. as a toy example, consider a simple gaussian model with known standard deviation: with n independent observations from N(\mu, \sigma^2), a 95% confidence interval for \mu is given by \bar{x} \pm 1.96\,\sigma/\sqrt{n}. if we misspecify the model as N(\mu, \sigma_0^2) with \sigma_0 < \sigma, the resulting interval for \mu is shorter and has lower coverage. we similarly underrepresent the error bars of an emission line location when we use a delta function for a line that is not contained in one energy bin. we expect this to reduce both the length and the coverage of the confidence regions. the advantage or disadvantage of this strategy is not immediately clear, however, since the nominal coverage of the intervals is based on an asymptotic gaussian approximation to the posterior distribution which clearly does not apply in this setting. nonetheless, our simulation study illustrates that the use of a delta function line profile can result in a shorter and more informative hpd region while maintaining good coverage.

we now turn to the computation of hpd regions and the possible statistical advantage of using delta functions in place of narrow gaussian emission lines. we fit a spectral model that includes a single delta function line or a single narrow gaussian line to the twenty simulated data sets generated under each of the six cases. after smoothing the marginal posterior distribution using gaussian kernel smoothing, we construct 95% hpd regions for the line location, as shown in figure [park:fig:simul-10datasets]. for visual clarity, we present results only for the first ten simulated data sets in figure [park:fig:simul-10datasets]; results from all twenty simulated data sets are discussed in table [park:tbl:simulhpd]. because there is no emission line in the spectrum used to simulate data under case 1, the 95% hpd regions for the line location are very wide and show large uncertainties for the fitted line location. when there is at least one strong emission line (i.e., cases 3, 4, and 6), both line models produce comparable hpd regions, although those computed under the gaussian line model appear somewhat wider. the tradeoff between the two line models becomes evident when there is no strong emission line in the spectrum (i.e., cases 2 and 5). in case 2, the 95% hpd regions for a single gaussian line location are somewhat wider. with the same nominal level, the delta function line model yields more compact and informative hpd regions. an added advantage of the delta function line model occurs in case 5, when the 95% hpd regions for a single delta function line location consist of two disjoint hpd intervals which simultaneously contain the two true line locations; this behavior is more often observed with the delta function line model (2 times out of 20) than with the gaussian line model (1 time out of 20). table [park:tbl:simulhpd] gives a summary of the 95% hpd regions for the line location in the simulation study.
for the _ chandra _ observations of pg1634+706, the 95% hpd regions of the delta function line location are presented in table [park:tbl:hpds], along with the local modes of the posterior distribution associated with each interval. each of the 95% hpd regions is composed of a number of disjoint intervals. only the intervals that have posterior probabilities greater than 5% are presented in table [park:tbl:hpds], so that the probabilities may sum to less than 95%. for example, the two intervals of obs-id 47 presented in table [park:tbl:hpds] have a combined posterior probability of 80.96% and the other eleven intervals not shown in the table have a posterior probability of about 14.04%, for a total of 95%. the posterior modes of the delta function line location that are located near 2.74 kev are indicated in bold face in table [park:tbl:hpds].

the six observations of pg1634+706 were independently observed with _ chandra_. thus, under a flat prior distribution on the line location \mu, the posterior distribution of \mu given all six observations is

p(\mu | Y_1, \ldots, Y_6) \propto \prod_{i=1}^{6} \int L(\mu, \psi_i | Y_i)\, p(\psi_i)\, d\psi_i \propto \prod_{i=1}^{6} p(\mu | Y_i),   [park:eq:combined]

where Y_1, ..., Y_6 denote the six observations, \mu denotes the delta function line location parameter, \psi_i denotes the set of model parameters other than \mu for observation i, and L represents the likelihood function of (\mu, \psi_i) given Y_i. (here we allow \psi_i to vary among the six observations; i.e., we do not exclude the possibility that the six observations have somewhat different power law normalizations and photon indexes.) the values of the posterior distribution given one of the individual data sets are sometimes indistinguishable from zero because of numerical inaccuracies. thus we add 1/15000 to the posterior probability of each energy bin and renormalize each of the posterior distributions. this allows the product given in equation [park:eq:combined] to be computed for each energy bin and is somewhat conservative, as it increases the posterior uncertainty corresponding to each of the individual data sets. figure [park:fig:combined] presents the marginal posterior distribution of the delta function line location given all six observations computed in this way; the left panel examines the whole range of the line location while the right panel focuses on the range near 2.74 kev. as shown in figure [park:fig:combined], the posterior distribution given all six observations is fairly unimodal and symmetric, except for a small local mode near 2.655 kev. thus, using a nominal 95% hpd region, the most probable delta function line location is summarized by an hpd interval centred near 2.74 kev, with posterior probability of 95.3%.

posterior predictive methods can be employed to check the specification of the spectral model. these methods aim to check the self-consistency of a model, i.e., the ability of the fitted model to predict the data to which the model is fit.
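A minimal sketch of the combination of the six per-observation marginal posteriors described above (flooring each at 1/15000, renormalizing, then multiplying bin by bin); the function and argument names are illustrative assumptions.

```python
import numpy as np

def combine_marginal_posteriors(posteriors, floor=1.0 / 15000.0):
    """Combine per-observation marginal posteriors for the line location,
    all evaluated on the same energy grid, under a flat prior: add a small
    floor to avoid numerical zeros, renormalize, multiply, renormalize."""
    combined = np.ones_like(posteriors[0])
    for p in posteriors:
        p = p + floor
        p = p / p.sum()
        combined *= p
    return combined / combined.sum()
```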
to evaluate and quantify evidence for the inclusion of an emission line in the spectrum, we extend the method of posterior predictive p-values proposed by protassov _ et al_. with the _ chandra _ observations of pg1634+706, we consider the same spectral model discussed in [park:sec:model] except that we compare three models for the emission line:

model 0: there is no emission line in the spectrum.
model 1: there is a delta function emission line with location fixed at 2.74 kev but unknown intensity in the spectrum.
model 2: there is a delta function emission line with unknown location and intensity in the spectrum.

we could equally well consider a gaussian line profile in models 1 and 2; either line profile model results in a valid test. we consider a delta function line profile simply because we are looking for evidence of a narrow emission line. we use ppp-values to compare the three models and quantify the evidence in the data for the delta function emission line; see the cited work for details of this method and its advantages over the standard f-test, the standard cash statistic (or likelihood ratio test statistic), and bayes factors. in the posterior predictive checks, model 1 fixes the delta function line location at 2.74 kev using prior information as to the location of the fe-k-alpha emission line. in order to combine the evidence for the line from all six observations, with their different exposure areas and exposure times, we base our comparisons on the test statistic that is the sum over the six observations of the loglikelihood ratio statistics for comparing model i (i = 1 or 2) with model 0, i.e.,

T_i = 2 \sum_{m=1}^{6} \big[ \sup_{\theta \in \Theta_i} \log L(\theta | Y_m) - \sup_{\theta \in \Theta_0} \log L(\theta | Y_m) \big],

where \Theta_0, \Theta_1, and \Theta_2 represent the parameter spaces under models 0, 1, and 2, respectively, and Y^{rep} denotes a collection of six data sets simulated under model 0. specifically, we generate 1000 replications of each data set from the posterior predictive distribution under model 0 and compute T_1(Y^{rep}) and T_2(Y^{rep}) for each replication. histograms of T_1(Y^{rep}) and T_2(Y^{rep}) appear in figure [park:fig:pp-check]. comparing the histogram of the simulated test statistics with the observed value of the test statistic yields the ppp-values shown in figure [park:fig:pp-check]. the ppp-value is the proportion of the simulated test statistics that are as extreme as or more extreme than the observed test statistic. smaller ppp-values give stronger evidence for the alternative model, i.e., model 1 or model 2, thereby supporting the inclusion of the line in the spectrum in our case.
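The posterior predictive calibration just described can be summarized schematically as follows; the helper names are hypothetical and the sketch assumes the per-replication maximized log-likelihoods have already been obtained by refitting the models to each simulated data set.

```python
import numpy as np

def loglik_ratio_statistic(maxloglik_alt, maxloglik_null):
    """T_i = 2 * sum over the six observations of (max log-likelihood under
    model i minus max log-likelihood under model 0)."""
    return 2.0 * (np.sum(maxloglik_alt) - np.sum(maxloglik_null))

def ppp_value(T_obs, T_rep):
    """Posterior predictive p-value: fraction of replicated test statistics
    (from data simulated under model 0) at least as extreme as the observed
    statistic; smaller values favour including the line."""
    return float(np.mean(np.asarray(T_rep) >= T_obs))
```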
as shown in figure [park:fig:pp-check], there is evidence for the presence of the spectral line given all six observations. the comparison between models 0 and 1 shows stronger evidence for the line location because we are using extra a priori information about the plausible line location.

this article presents methods to detect, identify, and locate narrow emission lines in x-ray spectra via a highly structured multilevel spectral model that includes a delta function line profile. modeling narrow emission lines with a delta function causes the em algorithms and mcmc samplers developed in earlier work to break down and thus requires more sophisticated statistical methods and algorithms. the marginal posterior distribution of the delta function emission line location tends to be highly multimodal when the emission line is weak or multiple emission lines are present in the spectrum. because basic summary statistics are not appropriate to summarize such a multimodal distribution, we instead develop and use hpd graphs along with a list of posterior modes. testing for an emission line in the spectrum is a notoriously challenging problem because the value of the line intensity parameter is on the boundary of the parameter space, i.e., zero, under a model that does not include an emission line. thus, we extend the posterior predictive methods proposed by protassov _ et al_. to test for the evidence of a delta function emission line with unknown location in the spectrum. using the simulation study in [park:sec:simul], we demonstrate the potential advantage of model misspecification using a delta function line profile in place of a gaussian line profile. we show that the delta function line profile may provide more precise and meaningful summaries for line locations if the true emission line is narrow. when multiple lines are present in the spectrum, the marginal posterior distribution of a single delta function line location may indicate multiple lines in the spectrum. our methods are applied to the six different _ chandra _ observations of pg1634+706 in order to identify a narrow emission line in the x-ray spectrum. given all six observations, the most probable delta function line is identified near 2.74 kev in the observed frame. the corresponding rest frame energy of the line may suggest a high ionization state of iron in the emitting plasma. there is some recent evidence that high-ionization iron lines can be variable on short timescales (see, for example, the case of mkn 766). such variability would explain the non-detection of the emission line in one of the six _ chandra _ observations.

the authors gratefully acknowledge funding for this project partially provided by nsf grant dms-04-06085 and by nasa contracts nas8-39073 and nas8-03060 (cxc). this work is a product of joint work with the california-harvard astrostatistics collaboration (chasc), whose members include j. chiang, a. connors, d. van dyk, v. l. kashyap, x.-l. meng, j. scargle, a. siemiginowska, e. sourlas, t. park, a. young, y. yu, and a. zezas. the authors also thank the referee for many helpful and constructive comments.

in this section we give an overview of the mode finding and posterior simulation methods used to fit the spectral model with a narrow emission line. our summary is brief, and thus readers who are interested in a more detailed description should refer to the cited works.
in order to illustrate our computational strategy, consider a simplified example of an _ ideal instrument _ that is not subject to the data contamination processes. in particular, the redistribution matrix is an identity matrix, the effective area is constant, there are no absorption features, and there is no background contamination. in addition, we assume that the continuum is specified with no unknown parameters and that there is a single gaussian emission line that has a known width \nu_0 and a known intensity \lambda. thus, the line location \mu is the only unknown parameter, the source model given in equation [park:eq:ideal] simplifies to

\Lambda_j(\mu) = \delta_j f(E_j) + \lambda \pi_j(\mu, \nu_0),

and the counts are modeled as Y_j \sim poisson(\Lambda_j(\mu)). this model can be fit using the method of data augmentation by setting Y_j = Y_j^C + Y_j^L, where Y_j^C and Y_j^L are the counts due to the continuum and the emission line in bin j, respectively. in particular, the em algorithm iteratively splits the counts into continuum counts and emission line counts. given the current iterate of the line location, \mu^{(t)}, the e-step updates the line counts and the m-step updates the line location:

e-step: compute E[Y_j^L | Y_j, \mu^{(t)}] for each bin j, that is,
E[Y_j^L | Y_j, \mu^{(t)}] = Y_j \frac{\lambda \pi_j(\mu^{(t)}, \nu_0)}{\delta_j f(E_j) + \lambda \pi_j(\mu^{(t)}, \nu_0)}.   [park:eq:toy-estep]

m-step: update the line location via
\mu^{(t+1)} = \sum_j E_j\, E[Y_j^L | Y_j, \mu^{(t)}] \big/ \sum_j E[Y_j^L | Y_j, \mu^{(t)}],
which is the weighted average of the bin energies and uses the expected emission line counts as weights.

although this em algorithm is simple, it breaks down when fitting the location of a _ narrow _ emission line, i.e., when \nu_0 is small relative to the size of the bins. in the extreme, the gaussian line profile becomes a delta function, so that \pi_j is zero for all bins except the bin containing \mu. this results in an e-step that computes zero line counts in all bins except the bin containing \mu^{(t)} and finally an m-step that returns \mu^{(t+1)} = \mu^{(t)}. this means that em will return the same line location at each iteration and that the algorithm will not converge to a mode. in this simplified example, this difficulty can be avoided by directly maximizing the posterior distribution of \mu computed from the observed-data likelihood,

p(\mu | Y) \propto \prod_{j} \Lambda_j(\mu)^{Y_j} e^{-\Lambda_j(\mu)}.   [park:eq:toy-obs-like]

because of the binning of the data, we can treat possible line locations within each bin as indistinguishable and compute the mode by evaluating equation [park:eq:toy-obs-like] on the fine grid that corresponds to the binning of the data. the situation is more complicated in the full spectral model described in [park:sec:model]. the method of data augmentation can be used to construct efficient algorithms that both fit the parameters in the continuum and the lines and account for instrument response and background contamination.
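A small Python sketch of the toy EM iteration and of the grid-based alternative; the inputs (bin energies `E`, bin widths, observed counts `y`, known continuum `cont` in counts per unit energy, line intensity `lam` and line width `nu0`) are assumptions made for the illustration, and this is not the authors' software.

```python
import numpy as np
from scipy.stats import norm

def em_line_location(y, E, widths, cont, lam, nu0, mu0, n_iter=200):
    """Toy EM: the E-step attributes a share of each bin's counts to the
    line, the M-step re-estimates mu as the line-count-weighted mean of the
    bin energies.  For a very narrow (delta-like) profile the weights vanish
    outside one bin and mu stalls, which is the failure mode described above."""
    mu = mu0
    for _ in range(n_iter):
        lo, hi = E - widths / 2.0, E + widths / 2.0
        pi = norm.cdf(hi, mu, nu0) - norm.cdf(lo, mu, nu0)
        line = lam * pi
        exp_line_counts = y * line / (widths * cont + line)      # E-step
        mu = np.sum(E * exp_line_counts) / np.sum(exp_line_counts)  # M-step
    return mu

def grid_posterior_mode(y, E, widths, cont, lam):
    """Delta-function alternative: place the line in each bin in turn,
    evaluate the Poisson log-likelihood, and keep the best bin energy."""
    best, best_ll = None, -np.inf
    for j in range(len(E)):
        rate = widths * cont
        rate[j] += lam
        ll = np.sum(y * np.log(rate) - rate)
        if ll > best_ll:
            best, best_ll = E[j], ll
    return best
```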
in the case of the narrow emission line, however, we must implement a strategy that uses less data augmentation when updating the line location/width than when updating the other model parameters. the expectation/conditional maximization either (ecme) algorithm allows us to use no data augmentation when updating the line location/width, but the resulting m-step is time consuming owing to the multiple evaluations of the conditional posterior distribution of the line location/width, which involve the large dimensional redistribution matrix, M. an intermediate strategy uses the standard data augmentation scheme to adjust for instrument response and background contamination but does not separate continuum and line photons when updating the line location/width. this strategy is an instance of the alternating expectation/conditional maximization (aecm) algorithm, and each iteration is much quicker than with ecme but more iterations are required for convergence. the algorithms we use for mode finding aim to combine the advantages of ecme and aecm by running one ecme iteration followed by several aecm iterations and repeating until convergence; this is called a _ rotation _ algorithm, and the cited works illustrate the computational advantage of the strategy.

returning to the simplified example of appendix [ap:toyem], we can formulate a gibbs sampler using the same data augmentation scheme. given the current iterate \mu^{(t)}, step 1 simulates the line counts in bin j via

Y_j^L \,|\, Y_j, \mu^{(t)} \sim binomial\!\left( Y_j,\; \frac{\lambda \pi_j(\mu^{(t)}, \nu_0)}{\delta_j f(E_j) + \lambda \pi_j(\mu^{(t)}, \nu_0)} \right).   [park:eq:step1]

when this algorithm is applied with a narrow emission line, it breaks down just as the em algorithm does. when a delta function is used for the line profile, the simulation in equation [park:eq:step1] results in no line counts in any bin except the one containing the line, and \mu again does not move from its starting value. when a narrow gaussian line profile is used, the situation is less extreme, but the sampler exhibits very high autocorrelations and typically cannot jump among the posterior modes. just as with em, the difficulty can be avoided by computing the posterior distribution of \mu on a fine grid and directly simulating \mu. when the method of data augmentation is used to account for the data contamination processes described in [park:sec:model], however, this approach should be modified in a manner analogous to the ecme and aecm algorithms. this leads to the strategy of using conditional distributions from different data augmentation schemes. in this case, however, the resulting set of conditional distributions used to construct the gibbs sampler may be _ incompatible _ and there may be no joint distribution that corresponds to this set of conditional distributions. although such a sampler may result in efficient computation, care must be taken to be sure the sampler delivers simulations from the target posterior distribution. this is formalized through the partially collapsed gibbs (pcg) sampler; the cited works outline the steps that should be taken to ensure proper convergence and provide applications and illustrations of pcg samplers. the pcg samplers can be viewed as stochastic versions of ecme and aecm, thereby allowing us to sample the line location with no data augmentation (as in ecme) or partial data augmentation (as in aecm). thus, the pcg sampler differs from the gibbs sampler developed in earlier work in how it samples the line location (and line width), and in the order of the sampling steps.
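The grid-based direct draw of the line location within a Gibbs/PCG step might look as follows; the precomputed array `rmf_times_line` (expected channel counts contributed by a line of the current intensity placed in each candidate bin) and the use of a NumPy random generator are assumptions made for the sketch.

```python
import numpy as np

def draw_line_location(y, expected_no_line, rmf_times_line, rng):
    """One grid-based update of a delta-function line location: for each
    candidate bin j, compute the Poisson log-likelihood of the observed
    channel counts `y` with the line folded into bin j, then draw j directly
    from the normalised probabilities (no photon attribution needed)."""
    nbin = rmf_times_line.shape[1]
    loglik = np.empty(nbin)
    for j in range(nbin):
        rate = expected_no_line + rmf_times_line[:, j]
        loglik[j] = np.sum(y * np.log(rate) - rate)
    p = np.exp(loglik - loglik.max())
    p /= p.sum()
    return rng.choice(nbin, p=p)

# usage sketch: j_new = draw_line_location(y, base_rate, line_columns,
#                                          np.random.default_rng(0))
```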
two pcg samplers are designed to fit the spectral model with either a delta function emission line or a narrow gaussian emission line; these are called pcg i and pcg ii, respectively. as compared to pcg i, pcg ii requires one additional sampling step for the line width, so that fitting the narrow gaussian emission line is computationally more demanding than fitting the delta function emission line.

yaqoob, t., george, i. m., nandra, k., turner, t. j., serlemitsos, p. j., & mushotzky, r. f. 2001, , 546, 759
yaqoob, t., padmanabhan, u., dotani, t., george, i. m., nandra, k., tanaka, y., turner, t. j., & weaver, k. a. 2001, x-ray emission from accretion onto black holes
yaqoob, t. 2007, astronomical society of the pacific conference series, 373, 109

abstract: the detection and quantification of narrow emission lines in x-ray spectra is a challenging statistical task. the poisson nature of the photon counts leads to local random fluctuations in the observed spectrum that often result in excess emission in a narrow band of energy resembling a weak narrow line. from a formal statistical perspective, this leads to a (sometimes highly) multimodal likelihood. many standard statistical procedures are based on (asymptotic) gaussian approximations to the likelihood and simply cannot be used in such settings. bayesian methods offer a more direct paradigm for accounting for such complicated likelihood functions, but even here multimodal likelihoods pose significant computational challenges. newly developed markov chain monte carlo (mcmc) methods, however, are able to fully explore the complex posterior distribution of the location of a narrow line, and thus provide valid statistical inference. even with these computational tools, standard statistical quantities such as means and standard deviations cannot adequately summarize inference, and standard testing procedures cannot be used to test for emission lines. in this paper, we use new efficient mcmc algorithms to fit the location of narrow emission lines, we develop new statistical strategies for summarizing highly multimodal distributions and quantifying valid statistical inference, and we extend the method of posterior predictive p-values proposed by protassov _ et al_. to test for the presence of narrow emission lines in x-ray spectra. we illustrate and validate our methods using simulation studies and apply them to the _ chandra _ observations of the high redshift quasar pg1634+706.
the optic nom is a deployable, long-arm, non-contact profilometer consisting of a scanning pentaprism and a digital autocollimator; this type of profilometer is also known as a deflectometer. a deflectometer can be used as a non-contact probe to provide accurate measurement of the height profile across an optical surface. the pentaprism traverses along a stiff linear guide-bar above an optical surface, relaying the autocollimator beam to provide a set of surface slope (angle) measurements along that path. pentaprism position information is provided via a linear encoder and so a height profile of the optical surface can be calculated. a diagram of the set-up is given in fig. [fig:nom]. the instrument is being used to confirm the base radius of curvature (R) and conic constant (K) of prototype european extremely large telescope (e-elt) segments that are being manufactured at optic glyndwr. this profilometry technique originated in the synchrotron community to provide accurate measurements of x-ray focussing optics. these mirrors for synchrotron x-rays generally fall under two categories: (1) long (up to 1.5 m) and narrow (a few 10s of mm) with very long focal lengths (i.e. very large radii of curvature, from 100s of metres up to a few kilometres) and (2) nano- or micro-focussing x-ray mirrors that can have much higher surface curvature but are only of order millimetres in length. in both these cases the change in slope over the surface of the mirrors is usually < 10 milliradians, so the limited angular measurement range of the digital autocollimator is not an issue and angular "stitching" techniques are not required in the data analysis. for our purpose we do not require the extreme accuracy and precision that is needed for x-ray optics, but we need to measure large optical surfaces (e.g. > 1 m diameter areas) with slope variations that exceed the range of most commercially available, sub-arcsecond accuracy, digital autocollimators. also, in contrast to the x-ray optic community, the optic nom measurements are subject to a significantly higher level of environmental noise due to the fact that the instrument is portable and used in-situ over a robotic polishing machine.

this paper describes a numerical method that can be used to process sets of deflectometer line data that have been taken over an optical surface, where the individual line scans in that data set have been taken in different coordinate frames, and fit an aspheric (conic-section) surface to them. simulated datasets have been used to test the fitting method in order to determine the accuracy of the method and any limiting behaviour. the software has been implemented using matlab. [figure: the segment is tilted in pitch and roll to enable measurements over the entire surface, e.g. along the dotted path.]

for a primary mirror (m1) as large as that proposed for the e-elt, it is not feasible to manufacture the mirror in one continuous piece; it must be constructed from many smaller mirrors that are of a size more practical to manufacture, measure and transport. the primary aperture of the e-elt at the time of this study was a 42 m diameter ellipsoid. the surface form of the primary mirror is given by the ellipsoid formula

z = \frac{(x^2 + y^2)/R}{1 + \sqrt{1 - (1 + K)(x^2 + y^2)/R^2}},   [eq:ellipform]
where R is the base radius of curvature, K is the conic constant and x, y, z are the coordinates in the m1 (primary mirror) reference frame. the following production and measurement accuracies were specified by eso:

* the first polished segment should have a base radius of curvature and conic constant which satisfy the specified tolerances on R and K.
* the accuracy of the knowledge of the nominal radius of curvature shall be better than or equal to 14 mm rms.
* the maximum allowable surface error (before removal of low and mid-spatial frequency terms) over the useful area of any prototype segment is 50 nm rms (40 nm goal).

the 42 m primary consists of 1148 hexagonal mirror segments of approximately 1.4 m diameter. the set of prototype segments to be manufactured and measured were a group of 7 segments towards the outer edge of the primary mirror assembly, i.e. each mirror segment is an off-axis section of the ellipsoid. one of the challenges in manufacturing the primary mirror is that each individual segment must appear to have been "cut out" from this larger ellipsoid - each segment must be made to the same R and K (within tolerance) and this surface form must be correctly aligned to the geometry of the segment (the segments are slightly irregular hexagons). interferometric testing provides accurate surface-form maps in relation to the geometry of the segment, however it cannot provide a direct measurement of R. a more direct measure (i.e. one not requiring a reference optic) can be provided using a deflectometer. therefore, the primary task of the optic nom is to provide a confirmation of the base radius of curvature (R) and conic constant (K) of prototype e-elt mirror segments during the end stages of optical polishing.

in the following discussion, reference is made to the m1 (primary mirror) coordinate reference frame and the segment (denoted by the segment number: e.g. s1, s4 etc.) coordinate reference frame. the position and surface form of the segments is defined in m1 coordinates as described by equation [eq:ellipform]. when measuring a segment by interferometry, the segment is supported in its gravitationally symmetric position: the mirror segment is oriented so that the normal to the surface at its centre coordinate is pointing vertically upwards (along the z-axis). in comparison, in the m1 coordinate reference frame, the normal vector of a general segment is not pointing along z; only a segment centred at the centre of m1 would possess this property. when measuring with the deflectometer the mirror segment is, on average, oriented with its centre-normal pointed vertically upwards. however, as previously described, the range of surface slope on a segment exceeds the measurement range of the autocollimator and so the segment must be pitched and rolled (see fig. [fig:nom]) to allow scans to be taken over the entire surface. the range of these angular re-positionings is of the order of degrees.
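For reference, the conic surface of equation [eq:ellipform] can be evaluated in a few lines of code; the authors' software is written in MATLAB, but an equivalent Python sketch (with a hypothetical function name) is:

```python
import numpy as np

def conic_sag(x, y, R, K):
    """Surface height of the conic (ellipsoid) in the M1 frame:
    z = (r^2/R) / (1 + sqrt(1 - (1+K) * r^2 / R^2)), with r^2 = x^2 + y^2.
    R is the base radius of curvature, K the conic constant."""
    r2 = x**2 + y**2
    return (r2 / R) / (1.0 + np.sqrt(1.0 - (1.0 + K) * r2 / R**2))
```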
with the segment tilted in various orientations for different line-scans, the measurement coordinate frame is no longer co-aligned with the segment coordinate frame. this makes co-locating all the separate line scans to a single surface - which needs to be a best fit to this data - non-trivial. figure [fig:scanlines](a) gives an example of a set of scan-lines taken over a segment surface, fig. [fig:scanlines](b) shows the effect that the tilting method used to overcome the limited measurement range has on the data, and fig. [fig:scanlines](c) shows how the line scans appear when each scan is properly corrected for z-height offsets and rotations after the optimisation/fitting procedure.

a suite of matlab programs have been written to read in, process and find the best-fit conic surface for the nom line-scan datasets. the data flowchart for these processes is shown in fig. [fig:dataflow]. as previously described, the challenge with fitting this data to a surface is that, due to the limited angular range of the autocollimator, each scan-line taken on the mirror surface is in an unknown, different coordinate reference frame and so the fitting routine must translate and rotate the scans to correct for this. also, given that the (height) data of the surface is obtained via integration of slope data (of the form dz/dx = tan(theta)), the absolute position of each scan is unknown (the unknown integration constant), so the relative height of all the scans with reference to each other must also be part of the optimisation routine. (the final step (10) of the flowchart is beyond the scope of this work.)

for a given set of scan-lines the parameters that require fitting are: R and K (globally) and also 6 parameters per scan (3 translations and 3 rotations). this means there are 6N + 2 parameters to fit per set of line-scans (where N is the number of line scans in a set). for the line-scan pattern shown in fig. [fig:scanlines], this means there are 146 parameters to optimise in the conic fitting. to perform this optimisation a trust-region-reflective algorithm is used to determine the minimum of a cost function. as expected, a gradient minimisation over so many dimensions is computationally costly and prone to stopping in local minima. to help the optimiser, "good" initial values of R and K must be provided and boundary limits applied to the individual scan-line transformations. in addition, a global search strategy is used to circumvent the local-minima problem, starting the optimiser with many different sets of initial conditions and selecting the most optimal result. this fitting routine can be run on a standard desktop pc (for this benchmark the processor is an intel xeon cpu e3-1240 v3 3.40 ghz). as an example of the run-time, a single run of the optimisation/fitting routine (i.e. using only a single (R, K) pair as an initial starting value) on a typical dataset simulated here (24 scan-lines containing 2562 data points in total), which requires 146 parameters to optimise, takes seconds to converge.

an important step in setting up this optimisation routine is the formulation of an appropriate cost function, F. the quantity that we minimise is the sum of the squared differences between the height of each point in each line scan (z) and the calculated height of an ellipsoid surface at the same (x, y) coordinate positions for a given R and K (called z_ref here).
to enable the comparison between z and z_ref, either the line scans must be transformed into the m1 coordinate frame or the ellipsoid formula (equation [eq:ellipform]) must be transformed into the relevant segment coordinate system. the more reliable of these for the cost function is the latter - a comparison in the segment coordinate frame. one of the reasons for this is that in the segment reference frame the cost function (F) is less weighted by the outer scan lines (furthest from the segment centre) for the same angular displacement; another is that the cost function is smoother, since most of the different parameter values are closer to zero than in the m1 frame. figure [fig:refframes] presents an illustration of this issue. the transformation of the ellipsoid formula from the m1 to the segment reference frame uses the pre-defined centre coordinate of the segment under manufacture and the measured centre coordinate determined experimentally from the segment geometry. the value of the segment centre in segment coordinates is set to be the origin. the conic equation used to calculate z_ref is the ellipsoid of equation [eq:ellipform] rotated through an angle theta about the x-axis, where R and K are the ellipsoid parameters, the coordinates are those of the segment frame, and theta is the rotation angle around the x-axis that brings the normal vector of the segment at its centre in the m1 frame to a vertical orientation (i.e. along the positive z-axis); this is also illustrated in fig. [fig:refframes]. the equation can be formulated in this way because the ellipsoid formula (equation [eq:ellipform]) is symmetric about the z-axis, and for the purposes of comparing the theoretical surface with the real data the data can be transformed so that the segment centre lies on the y-axis.

the optimization routine being used implicitly computes the sum of squares as part of the algorithm, so our cost function F is multi-valued and creates a vector of z differences, one for each measured (x, y, z) value:

F_i = z'_i - z_ref(x'_i, y'_i; R, K),   [eq:f]

where the index i runs over every data point in every line scan and (x'_i, y'_i, z'_i) are the rotated and translated line scan measurements given by

(x'_i, y'_i, z'_i)^T = R_z(\gamma_s) R_y(\beta_s) R_x(\alpha_s) (x_i, y_i, z_i)^T + (\Delta x_s, \Delta y_s, \Delta z_s)^T,

where (x_i, y_i, z_i) are the measured coordinate position and height data for point i and s is the line scan number which contains the point. R_x, R_y and R_z are the standard rotation matrices around the x, y and z axes respectively for the angle parameters \alpha_s, \beta_s and \gamma_s of line scan s.

a trust-region-reflective algorithm is used in the optimiser and it is set up as a bounded problem - initial values are allocated to the parameters that require fitting and limits are also defined for those parameters. the optimisation is based around a gradient minimisation; the cost function gradient is calculated using the central finite differencing method. for this problem the upper and lower boundary limits were chosen to be symmetric around the initial value of each parameter: for the 6 degrees of freedom in positioning each scan line into a common (the segment) coordinate frame, the x, y, z translations are limited to a symmetric range in mm and the 3 rotations to a symmetric range in radians; the initial values are set to zero. these limits have been chosen to be suitable for the measurement set-up as previously described, and the expected displacements and rotations of the line scans are well within these boundary limits. for R and K, the global parameters across all scan lines, the initial values R_0 and K_0 are set to a good estimate based on other measurements (e.g. interferometric measurements) and the boundary limits in the program are set to a symmetric range in mm about R_0 and a symmetric range about K_0.
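A schematic version of the bounded trust-region-reflective fit described above, written in Python/SciPy rather than the authors' MATLAB; the helper `z_ref`, the parameter packing, and the specific bound values are placeholders for illustration, and SciPy's `x_scale` option is used in place of the explicit [-1, 1] normalisation discussed later in the text.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, scans, z_ref):
    """Stacked residuals F_i = z'_i - z_ref(x'_i, y'_i; R, K), where each scan
    (an (n_i, 3) array of measured x, y, z points) is first rotated and
    translated by its own six fitted parameters."""
    R, K = params[0], params[1]
    out = []
    for s, pts in enumerate(scans):
        dx, dy, dz, ax, ay, az = params[2 + 6 * s: 8 + 6 * s]
        rot = Rotation.from_euler('xyz', [ax, ay, az]).as_matrix()
        moved = pts @ rot.T + np.array([dx, dy, dz])
        out.append(moved[:, 2] - z_ref(moved[:, 0], moved[:, 1], R, K))
    return np.concatenate(out)

def fit_surface(scans, z_ref, R0, K0, n_scans):
    """Bounded trust-region-reflective fit of R, K plus 6 terms per scan.
    The bound half-widths used here (1000 mm, 0.1 on K, 50 mm, 0.1 rad) are
    illustrative placeholders, not the published tolerances."""
    x0 = np.concatenate([[R0, K0], np.zeros(6 * n_scans)])
    lb = np.concatenate([[R0 - 1000.0, K0 - 0.1],
                         np.tile([-50.0, -50.0, -50.0, -0.1, -0.1, -0.1], n_scans)])
    ub = np.concatenate([[R0 + 1000.0, K0 + 0.1],
                         np.tile([50.0, 50.0, 50.0, 0.1, 0.1, 0.1], n_scans)])
    return least_squares(residuals, x0, args=(scans, z_ref), method='trf',
                         bounds=(lb, ub), x_scale='jac')
```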
section [sec:performance] demonstrates the method used to reach the correct solution even if good initial estimates for R and K are not available.

[fig. [fig:numnoise]: the gradient of the cost function with respect to R and K. both plots show the same jacobian, but calculated using different forms of the quadratic-equation solution - the left plot is calculated before the function calculations were optimised for numerical precision and the right plot after the reformulations to reduce numerical noise. these have been calculated for s1 (near the outer edge of m1), which is the worst case (in terms of the numerical noise) due to the magnitude of the y-coordinates. towards the centre of m1, the centre coordinates describing the segment positions are more moderate in magnitude, resulting in a smoother jacobian.]

in order to allow the optimisation algorithm to operate efficiently, it is important to present it with a cost function that is as smooth as possible. any noise in the cost function causes fluctuating gradient measurements which at best will cause the optimisation to take a longer path to the solution, and at worst may cause it to terminate prematurely in a false local minimum. an example of problems with numerical noise is shown in fig. [fig:numnoise] (left plot): the initial formulation of the problem results in such significant loss of numerical precision that there is no discernible gradient for the optimiser to follow and it terminates in an apparent local minimum very rapidly (it hardly moves from the initial starting values). figure [fig:numnoise] (right plot) shows the same gradient function after applying several improvements as described below. numerical noise in the cost function is still present, but it is now significantly below the level of the actual function evaluation values, and low enough that the derived gradients can be followed by the optimiser.

a standard formulation of the equations in the cost function can result in a significant loss of numerical precision due to cancellation between terms of the ellipsoid equation. by reformulating the calculation of the surface using a different form of the quadratic-equation solution (see the cited reference, ch. 5, pp. 227), the numerical accuracy of the calculation is vastly improved. the cost function (F) is then evaluated using this reformulated expression for z_ref. in addition, the order of arithmetic operations throughout the cost function was carefully analysed in the context of the expected magnitudes of the inputs and improved where possible to maximise the precision of the cost function.
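The two algebraic forms of the quadratic-equation root discussed above can be compared directly; this is a generic illustration of the rationalised (conjugate) reformulation, assuming the conic can be written implicitly as (1+K) z^2 - 2 R z + r^2 = 0, which is consistent with equation [eq:ellipform].

```python
import numpy as np

def z_subtractive(r2, R, K):
    """Root of (1+K) z**2 - 2 R z + r**2 = 0 written with a subtraction of
    nearly equal terms; loses precision when (1+K)*r2 is small compared
    with R**2."""
    return (R - np.sqrt(R**2 - (1.0 + K) * r2)) / (1.0 + K)

def z_rationalised(r2, R, K):
    """Same root after multiplying through by the conjugate; the troublesome
    subtraction is removed, so the evaluation (and finite-difference
    gradients built on it) are much less affected by rounding noise."""
    return r2 / (R + np.sqrt(R**2 - (1.0 + K) * r2))
```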
the choice of how the gradient is calculated has a significant effect on the solution convergence - the central (or symmetric) finite differencing method has been chosen, which is computationally more costly than the forward finite differencing method but has higher numerical precision. use of the forward method results in a solution of much lower accuracy and is insufficient for the solution accuracy required here. the default step size for jacobian calculations in matlab is quite small (1e-8), although it can be set to any value from 0 to inf. increasing the step size will sample over a size greater than the granularity of the numerical noise features and so a smoother function is produced, which is then less likely to present the optimiser with false (noise-based) local minima. however, the step size also needs to be minimised to provide an accurate estimation of the gradient function and ensure that the optimiser will converge on a solution of the required accuracy. with the initial cost function formulation, a normalised step size of 1e-3 was needed to produce the required smoothness, but this results in very poor gradient estimates. machine epsilon (eps) is the approximate relative error due to rounding in floating point arithmetic of a 64-bit double representing a given value. with the improved cost function, a step size of 1e-7 could be chosen, which provides a good balance between noise reduction and accuracy of estimation. the optimisation algorithm allows only a single value for the gradient estimation step size to be chosen to apply to all the parameters. the parameters in our problem differ by several orders of magnitude, so for the optimisation process the inputs are normalised such that their bounds map to [-1 1]. a step size of 1e-7 is then suitable for the entire n-dimensional gradient space minimisation.

to guard against solutions that may have converged in a local minimum rather than the global minimum, a global search strategy is used, starting the optimisation routine using many sets of different initial conditions and comparing the results. an efficient way to implement this strategy is to generate a randomised sobol sequence over the required n-dimensional search space. by using 2^m points (where 2^m is the number of sets of initial parameters) to generate the sobol set, the points will be uniformly distributed over the required search space. for this work we have generally been using this to generate a set of 32 initial values to verify the solution convergence.

[fig. [fig:contours]: contours in the (R, K) parameter space mapping solutions having the same rms surface error. the inner 10 contour levels are (in nm): 40, 50, 100, 150, 200, 300, 400, 500, 700 and 900. given R = 84000 mm and K = -0.993295 for a "perfect" first segment, the eso allowable surface error of 50 nm rms for subsequent segments corresponds to the plotted elliptical boundary limits in (R (mm), K) of (83990.7, -0.995295) and (84009.3, -0.991291).]
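A sketch of the Sobol multi-start strategy described above, using SciPy's quasi-Monte Carlo module; `fit_surface` refers to the illustrative fitting helper sketched earlier and is not the authors' routine, and the search ranges are those quoted later in the text.

```python
import numpy as np
from scipy.stats import qmc

def sobol_starts(n_pow, r_range=(83000.0, 85000.0), k_range=(-1.1, -0.8), seed=0):
    """Generate 2**n_pow scrambled-Sobol starting values for (R, K), spread
    uniformly over the search ranges (n_pow = 5 gives the 32 points used)."""
    sampler = qmc.Sobol(d=2, scramble=True, seed=seed)
    u = sampler.random_base2(m=n_pow)
    lo = np.array([r_range[0], k_range[0]])
    hi = np.array([r_range[1], k_range[1]])
    return qmc.scale(u, lo, hi)

def multi_start_fit(scans, z_ref, n_scans, starts):
    """Run the bounded fit from every Sobol start and report the median
    fitted (R, K), which is robust to the occasional poorly placed start."""
    results = [fit_surface(scans, z_ref, R0, K0, n_scans) for R0, K0 in starts]
    RK = np.array([res.x[:2] for res in results])
    return np.median(RK, axis=0), results
```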
in principlethese slope errors could be estimated using classical error propagation , but this alone does not give us the error on the parameters we are trying to determine : ( base radius of curvature ) and ( conic constant ) .the slope angles are converted into gradients and integrated to values . the cumulative integration error could be estimated and would vary depending on the numerical integration method .as previously described , the ellipsoid formula ( equation [ eq : ellipform ] ) is fitted to the set of line scans . due to the limited range of the autocollimator , line scans covering the full extent of the segmentcan only be obtained by tip , tilt and rotation of the segment on its support structure . due to this ,the parameters in the fitting routine include and , but also separate spatial translation and rotation parameters for each line scan , to allow for the fact that we only approximately know their relative locations . an analytical propagation of errors to the parameters and is not possible in this situation ( a non - linear iterative fitting process ) ; the error in and from this fitting process is better estimated using monte carlo methods and will depend on the optimisation algorithm and the exact cost function formulation as well as the errors in and . in theory , errors in our estimates for the translations / rotations of the line scans can be corrected by the optimiser but in practice they will also affect the ability of the algorithm to fully converge . to test the behaviour and accuracy of the fitting routine simulated nom scans are generated .as shown in fig .[ fig : scanlines](a ) , a set of scan coordinates defined in and in the segment reference frame are passed to the nom scan simulator along with the surface specification in and , the centre coordinate in the m1 reference frame and the 1-sigma level of z - noise to add to the generated values . based on a measurement procedure of ensuring each scan is centred within the surface slope range of the autocollimator, the data for each scan line is then rotated in the and axes around the centre point of the scan line so that the , slopes at this centre point are zero .note that the specification of the , coordinates of the scan lines are chosen to ensure that the slope measurement along its length does not exceed the total measurement range of the autocollimator .realistic levels of random noise on are chosen based on real data acquired with the optic nom .the simulated dataset then looks like fig .[ fig : scanlines](b ) and these data are passed to the fitting routine . in terms of the whole process shown in fig .[ fig : dataflow ] , the simulated dataset comes in at step 5 and represents the data as if it has gone through the previous steps and the data taking process ( not indicated ) .for all the performance tests presented here the simulated datasets were generated in the format as shown in fig .[ fig : scanlines](a ) ( i.e. a scan pattern consisting of a grid of 7 lines across each of the x and y axis to produce 24 scan lines that are within the measurement range of the nom autocollimator ) .the data points within a scan line are separated by 5 mm .since the data used in these tests are simulated , the and is known and the level of random noise injected into the data is known .sanity checks on the reconstructed / fitted data are made by checking that the final value of the cost function ( which is basically a sum of the residuals ) corresponds to the level of noise that was injected ( i.e. 
an rms of the residuals should be close to the 1-sigma level of noise that was used ) . if the residuals are higher than expected it is usually because the optimiser has terminated too early .this can happen if the initial parameters were beyond the bounds of the optimiser ( see section [ sec : outbounds ] ) , or the initial values were located unfavourably in the gradient - space so that the route taken by the optimiser needed more steps than the allocated maximum number of iterations ; it is for these reasons that a multi - start strategy is best used for unknown data . the reconstructed data ( as per fig . [fig : scanlines](c ) ) as well as the residual plot ( fig .[ fig : scanlines](d ) ) can also be plotted to double - check that the scan lines have been sensibly located on the best - fit surface and that there is nothing unexpected in the residuals ( i.e. that they are consistent with the expected noise ) .finally , in order to determine whether a solution has converged within the limits of the allowed rms surface error a contour plot of rms surface error was derived over the , parameter space .the eso allowable surface error is defined to be 50 nm rms . for segment s1, whose centre coordinate is positioned near the outer edge of the primary mirror this contour plot is given in fig .[ fig : contours ] . the rms surface error is calculated based on the 2562 data points in the line scan pattern ( fig .[ fig : scanlines](a ) ) of the typical simulated scan set used , with the minimum surface error ( injected noise only ) set at mm and . the rms surface error versus varies with position on m1 , so a different contour map must be calculated for each segment set to enable an accurate evaluation of the , solution convergence .an example of this for a segment closer to the inner edge ( named s12 here ) is given in section [ sec : innerseg ] .comparing the , limits on the 50 nm rms contour for s12 gives : rlimits = 83991.1 84008.9 mm ( diff = 17.8 mm ) and klimits = -1.009890 - -0.976677 ( diff = 0.033213 ) .the kdiff for s1 is 0.00392392 ( an order of magnitude smaller ) and the rdiff is 18.6 mm .when using this analysis method on a real data - set there are certain unknowns that the analysis must take into account and be robust to . it is assumed that approximate values are known for the key parameters that define the shape of the surface ( e.g. base radius of curvature and conic constant ) , so that these can be used as reasonable initial values in the optimisation process .however , even if there is no prior knowledge of these parameters a wider search strategy can be employed using more steps in the process ; the time taken to reach a satisfactory solution is increased accordingly . a number of different data sets were created to test the fitting method and these are described in the following sub - sections. the boundary box in the optimiser is set at mm and ( where and are the initial values provided to start the optimiser ) ; so if the initial values fall beyond the real solution the and boundary limits then the routine will fail to find the correct solution because it is beyond its search - space .it is unlikely that the user would ever have this level of uncertainty when providing a good initial value , however an example is given ( in section [ sec : outbounds ] ) of what happens in this case . 
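as an illustration of how such a contour map can be assembled, the sketch below evaluates the rms height difference between a nominal surface and candidate (r, k) surfaces on a scan-point cloud. note that the rotationally symmetric conic sag used here is only a stand-in for the off-axis ellipsoid formula of equation [ eq : ellipform ] (which is why, unlike the real map, it does not depend on the segment position on m1); the grid ranges and the nominal values 84000 mm and -0.993295 are those quoted in the text.

import numpy as np

def conic_sag(x, y, R, k):
    # on-axis conic sag z(r); a stand-in for the true off-axis segment surface
    r2 = x ** 2 + y ** 2
    c = 1.0 / R
    return c * r2 / (1.0 + np.sqrt(1.0 - (1.0 + k) * c ** 2 * r2))

def rms_error_map(x, y, R0, k0, R_vals, k_vals):
    # rms height difference between the nominal surface (R0, k0) and every
    # candidate (R, k), evaluated on the scan-point cloud (x, y)
    z0 = conic_sag(x, y, R0, k0)
    err = np.empty((len(k_vals), len(R_vals)))
    for i, k in enumerate(k_vals):
        for j, R in enumerate(R_vals):
            err[i, j] = np.sqrt(np.mean((conic_sag(x, y, R, k) - z0) ** 2))
    return err

# illustrative scan cloud (mm) and (R, k) grid around the nominal s1 values
xg, yg = np.meshgrid(np.linspace(-700.0, 700.0, 25), np.linspace(-700.0, 700.0, 25))
R_vals = np.linspace(83990.0, 84010.0, 41)
k_vals = np.linspace(-0.9953, -0.9913, 41)
err_nm = 1e6 * rms_error_map(xg.ravel(), yg.ravel(), 84000.0, -0.993295, R_vals, k_vals)
# err_nm (in nm, since the surface is in mm) can be contoured at the levels listed above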
and ( indicated by the green star on the inset top plot ) and at the same outer segment position ( s1 ) .each dataset is analysed using the same set of 32 points as initial values to demonstrate the multi - start strategy .the inset plot at the top is a zoom - in on the central region of the main plot ; the median result of each group is indicated with a `` + '' marker .the main plot is shown to demonstrate that all the results converged within the inner contour ( 40 nm ) , and within the eso allowable error . ] to investigate how the fitting routine behaves in the presence of different levels of noise , datasets were generated from a surface of a fixed base radius of curvature ( ) and conic constant ( ) ( as per equation [ eq : ellipform ] ) with only the 1-sigma level of random noise on the height data varying between the datasets . for these experiments : mm and and the 1-sigma level of random noise on the data = 1.2 nm , 3.0 nm , 6.0 nm and 12 nm .these noise levels are based around the maximum measurement error of the optic nom autocollimator which is arcsec and random position error of m for a single point measurement . for a 5 mm step size between measurements, this angle measurement error converts to a z error of 6.0 nm ( i.e. ) . in a real - life measurement25 autocollimator measurements are acquired in rapid succession for each data point reducing this error to 1.2 nm .the position uncertainty converts to a error of approximately 4 nm ( varies depending on segment and position on segment ) .setting the 1-sigma level of z - noise in the simulations to 6.0 nm is close to an expected maximum error level and using 3.0 nm represents a more modest level of noise .the lower and upper limits of noise ( 1.2 and 12 nm ) have been chosen to represent an environment / set - up of excellent stability and accuracy and a more noisy / less stable environment than we assume are typical of the optic nom .the data generated is for s1 which is positioned near the outer edge of m1 and so is the most off - axis ellipsoid . using the scan pattern as previously described, 5 datasets were generated using each of the 4 specified levels of random noise .a randomised sobol set containing 32 points was generated over an r - range of 83000 to 85000 mm and a k - range of -1.1 to -0.8 .these 32 points provide the initial , values for the multi - start strategy ; the same set of initial values was used to start the fitting / optimisation process for all the datasets .every dataset is run through the fitting process 32 times , each time using a different initial value for and as provided by the sobol set .the results of the final fitted , for each of these datasets is shown in fig .[ fig : diffnoise ] , over - plotted on the contour map of --rms surface error . the median result for each of the different noise groupsis indicated .the best - fit , to the simulated nom data is taken as this median value .the median , rather than the mean is used as it is a better representation of the majority of the fitting results since we know that some , initial values will have been poorly placed in the gradient space and that there will be a few spurious / outlier results .using the same and ( 84000 mm and -0.993295 ) , a z - noise level of 3.0 nm and the same set of 32 initial values , the fitting process was repeated on 5 datasets generated at a different m1 centre coordinate in order to see if there are any differences in fitting data from the inner edge of m1 i.e. 
closer to the edge of the central cut - out and the least off - axis ellipsoid compared to the outer edge of m1 ( s1 ) .the centre coordinate of s1 ( outer edge ) in the m1 frame is [ 0 , 18469.942 ] mm and of this ( inner edge ) segment is [ 0 , 6393.5 ] mm . as described in section 4 , a new contour map of --rms surface error was generated to compare results at this new m1 mirror position .the results are shown in fig .[ fig : innerseg ] . and values ( lower cost function value indicates a better fit ) given an over - wide search space for the multi - start optimisation strategy .the square outlined points show the initial values that were passed to the optimiser for this dataset ; the green shaded rectangular area indicates the boundary limit around the correct solution initial values outside of this boundary will not be able to converge to the correct solution .the small grey points show the end - point , values of the optimiser with a light - grey line linking it to its initial value .for the 8 start values within the boundary ( filled green squares ) the optimiser reached the correct solution ( on the contour plot there are 8 almost overlapping points at ( 84000 , -0.993295 ) ) . ] an extended range 32 point sobol set of initial values was generated to demonstrate the results when the routine is provided with initial values that are outside the boundary limits around the real solution .these boundary limits are set within the optimisation function to be : mm around and around .the extended sobol set was generated using a rrange of 80000 to 90000 and a krange of -1.5 to 0 .the nom dataset for testing was generated with the same fixed and as previous ( 84000 mm , -0.993295 ) , z - noise = 3.0 nm and at the s1 position .the results of fitting this dataset for each of the initial values in this extended range set is shown in fig .[ fig : extrange ] .it demonstrates the use of the multi - start strategy not only to avoid local minima issues , but also as a tool to `` zoom - in '' on the correct answer even if there is very poor knowledge of the shape of the optic from which the data was taken .firstly the results with the lowest value of the cost function will give a good indication of , values that are at or approaching the correct results .also , as in the case shown in fig .[ fig : extrange ] , there are 8 initial values that lie within the boundary box centred on the correct , value and so there is a `` point '' on fig .[ fig : extrange ] that consists of 8 almost overlapping results that started from those points and have converged on the correct solution .once an indication of the correct result is gleamed from an `` over - wide search '' , the optimisation can be repeated with a smaller range of and around the region where the correct solution is believed to lie .a 128-point set of pairs was generated using a randomised sobol set and used to generate 128 different nom datasets .the range for this sample was 60000 to 90000 mm and the range -1.5 to 0 .as a side - note , this range includes the current specification for the e - elt of mm and .an arbitrary off - axis position was chosen for the simulated segment data , this was fixed for all the 128 datasets at cm = [ 0 , 12000 ] mm , which is around the mid - section of a primary mirror of diameter 40000 mm .the 1-sigma level for random noise on the data - points was also fixed at 3.0 nm . 
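the randomised sobol sets used here and in the multi-start strategy can be generated, for example, with scipy's quasi-monte carlo module; the ranges below are the ones quoted in this paragraph, and the function names belong to scipy rather than to the matlab implementation used for the actual analysis.

from scipy.stats import qmc

# scrambled ("randomised") sobol set of 2**7 = 128 (r, k) pairs over the quoted
# ranges: r in [60000, 90000] mm and k in [-1.5, 0]
sampler = qmc.Sobol(d=2, scramble=True, seed=0)
rk_pairs = qmc.scale(sampler.random_base2(m=7),
                     l_bounds=[60000.0, -1.5], u_bounds=[90000.0, 0.0])

for r, k in rk_pairs[:3]:
    print(f"r = {r:8.1f} mm   k = {k:+.6f}")

using a power of two for the number of points preserves the balance properties of the sobol sequence, which is consistent with the 32- and 128-point sets used in the text.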
given the large amount of processing time that would be required to perform a many point multi - start strategy as demonstrated in section [ sec : fixrk ] ,it was assumed for these analyses that a reasonable knowledge of the value of the mirror is known and so the initial values given to the optimiser was set to : mm and ; where and are the values used to generate the simulated datasets .the results of these optimisations are shown in fig . [fig : varrk ] .the average of the residual rms surface error is 2.95 nm which corresponds well to the expected noise in the data ( 3.0 nm ) .the median value of the rdiff ( fitted r value ( ) - r value used to generate the data ( ) ) is 0.202 mm ; the median value of the kdiff is 6.65e-5 .these are well within acceptable tolerances . on a `` real '' optic ,the surface form is unlikely to be perfectly described by equation [ eq : ellipform ] with just random noise on the data points and line - scan positions .it is likely that the shape will be slightly modified by higher order terms such as astigmatism . for the manufacturing of the eso e - elt segmentsthere is a certain amount of allowable misfigure in terms of zernikes , since the active mirror support structure can remove a certain amount of form error . for these tests we chose two of the higher order allowable errors to add to the simulated datasets to investigate how this affected the result on fitted and ;the misfigure in zernike terms we added to this data are astigmatism ( z = 4 ) and higher order `` trefoil '' ( z = 9 ) . the misfigure terms were considered individually and were added in to the ellipsoid data either at the maximum allowed amount of error for that term or at half the allowed amount .a 32-point multi - start strategy was used to analyse each dataset , the results are plotted in fig . [fig : formerror ] . , position of the generated data .only 2 ( out of 160 ) results from the maximum allowed z9 multi - start set fall outside the boundary . ]as expected the presence of form error in the data modifies the best fit and .the presence of either astigmatism or trefoil results in a fitted and that is shifted towards a higher and than the values that were used to generate the data .again , as might be expected , astigmatism had a more significant effect on the fitted results .this is partly to do with the fact that more error in astigmatism is allowed in the data but also because it can be seen how the two axes of different curvature on an astigmatic surface can add to the ellipsoid surface ( also with two axes of different curvature ) to produce a surface which closely resembles an ellipsoid surface with slightly modified and parameters .figure [ fig : formerror ] shows that the rms surface error due to the difference between the fitted , and the actual , is within allowable limits . if the maximum allowed form errors are known to be present within the data ( i.e. 
it is known from the interferometric data ) it is advisable to subtract these form errors from the profilometer data and fit for and again to get a result with an improved accuracy .this work has demonstrated that it is possible to converge on an , solution of sufficient accuracy for the described purpose by using a gradient minimisation optimisation over many ( 100 + ) dimensions to place scan - lines in a common coordinate frame of reference and simultaneously fit the best aspheric surface .the simulations performed here have assumed a fixed scan pattern of 7 line sections across x and y with data points taken every 5 mm along each of these lines ( divided into 24 line scans due to the limited autocollimator measurement range ) .this pattern was chosen since it represented one of the best compromises between data acquisition time and adequate sampling of the optic being measured .the number of degrees of freedom in the optimisation is altered by the number of scan lines ( 6n + 2 where n is the number of scan lines ) .the behaviour of the optimisation process with respect to number of scan lines has not been completely investigated .the optimisation process is robust to a wide range of random noise on the z ( height ) data , a 1-sigma level of 1.2 to 12 nm has been investigated here and shown to satisfy requirements .the process can cope with small ( m ) random offsets of entire line scan positions since x , y , z position is part of the optimisation for each line ( in order to account for line shifts when rotating the line scans into their common reference frame and the unknown integration constant ) . exact knowledge ( i.e. within the noise ) of the segment centre position in the data is assumed in order to have accurate mapping to the required segment centre coordinate in the m1 reference frame .any large or systematic errors in the coordinate system or elsewhere in the data should be corrected or removed before fitting to obtain the most accurate result ; if this is not possible their influence on the fitting process will need to be investigated .although the focus of these fitting tests were for the old e - elt specification ( mm , , 42000 mm diameter primary ) this fitting / optimisation process has also been shown to be effective for surfaces described by different ( 60000 to 90000 mm ) and ( 0 to -1.5 ) and different off - axis positions ( i.e. different segment centre coordinates in the m1 frame ) .it also performs within specification in the presence of the allowed higher - order surface error terms ( only z4 and z9 tested here ) as defined for the e - elt segment manufacturing , although for the best accuracy it is recommended to subtract those terms from the data before ( re)fitting , especially if high levels of those errors are known to be present .assuming the optic to be measured can be supported on a suitably stiff structure that allows the optic to be rotated around its x and y axes without introducing significant ( i.e. below the measurement error ) distortion on the surface form then the fitting method described provides a useful way to effectively extend the angular measurement range of an autocollimator in a non - contact profilometer and allows the line scans taken across the optic to be reconstructed to the best - fit aspheric ( conic ) surface .s. alcock , k. sawhney , s. scott , u. pedersen , r. walton , f. siewert , t. zeschke , f. senf , t. noll , and h. 
lammert , `` the diamond nom : a non - contact profiler capable of characterizing optical figure error with sub - nanometre repeatability , '' nuclear instruments and methods in physics research a * 616 * , 224 ( 2010 ) . | we present a description of our method to process a set of autocollimator - based deflectometer 1-dimensional line - scans taken over a large optical surface and reconstruct them to a best - fit conic - section surface . the challenge with our task is that each line - scan is in a different ( unknown ) coordinate reference frame with respect to the other line - scans in the set . this problem arises due to the limited angular measurement range of the autocollimator used in the deflectometer and the need to measure over a greater range ; this results in the optic under measurement being rotated ( in pitch and roll ) between each scan to bring the autocollimator back into measurement range and therefore each scan is taken in a different coordinate frame . we describe an approach using a dimension optimisation ( where is the number of scan lines taken across the mirror ) that uses a gradient - based non - linear least squares fitting combined with a multi - start global search strategy to find the best - fit surface . careful formulation of the problem is required to reduce numerical noise and allow the routine to converge on a solution of the required accuracy . = 1 120.3940 metrology ; 120.6650 surface measurements , figure ; 220.1250 aspherics . 2016 optical society of america . one print or electronic copy may be made for personal use only . systematic reproduction and distribution , duplication of any material in this paper for a fee or for commercial purposes , or modifications of the content of this paper are prohibited . the version of this paper as published in applied optics can be viewed here : http://dx.doi.org/10.1364/ao.55.002827 |
in many practices , spectral problems are faced for differential equations which have discontinuous coefficient and discontinuity conditions in interval ( - ) .these problems generally emerge in physics , mechanics and geophysics in non - homogeneous and discontinuous environments .we consider a heat problem in a rod which is composed of materials having different densities . in the initial time , let the temperature is given arbitrary .let be the temperature is zero in one end of the rod and the heat is isolated at the other end of the rod . in this casethe heat flow in non - homogeneous rod is expressed with the following boundary problem : where , are physical parameters and have specific properties .for instance , defines the density of the material and piecewise - continuous function . applying the method of separation of variables to this problem, we get the spectral problem below : here is a real - valued function , piecewise - continuous function the following : is spectral parameter and . when or , that is , in continuous case , the solution of inverse problem is given in - .the spectral properties of sturm - liouville opeator with discontinuos coefficient in different boundary conditions are examined in - . in this study ,the main equation is obtained which has an important role in solution of inverse problem for boundary value problem and according to spectral data , the uniqueness of solution of inverse problem is proved .similar problems are examined for the equation ( [ 1 ] ) with different boundary conditions in .it was proved ( see ) , that the solution of the equation ( [ 1 ] ) with initial conditions can be represented as where belongs to the space for each fixed ] the kernel from the representation ( [ 4 ] ) satisfies the following linear functional integral equation are eigenvalues and are norming constants of the boundary value problem ( [ 1 ] ) , ( [ 2 ] ) when from ( [ 4 ] ) we have follows from ( [ 4 ] ) and ( [ 11 ] ) that the last two equalities , we obtain is easily found by using ( [ 9 ] ) and ( [ 10]) be an absolutely continuous function , then using expansion formula ( see ), ( [ 13 ] ) we have: obtain uniformly on ] ( [ 9 ] ) , uniformly on ] main equation ( [ 8 ] ) has a unique solution .we show that for each fixed the equation ( [ 8 ] ) is equivalent to the equation of the form where is a completely continuous operator , is an identity operator in the space .( when this fact is obvious . )when rewrite ( [ 8 ] ) as it is sufficient to prove that is invertible , i.e. has a bounded inverse in .consider the equation i.e. here it is easily to obtain show that in fact , put , when .then so the operator is invertible in . then according to theorem 3 from ( see p. 275 ) it is sufficient to prove that the equation only trivial solution let be a non - trivial solution of ( [ 22 ] ) .then from ( [ 9 ] ) we have using ( [ 7 ] ) and ( [ 16 ] ) we obtain substituting in third , fourth , ninth , and tenth double integrals and in fifth , sixth , eleventh and twelfth double integrals we get which we have we obtain the parseval s equality the function have the system is compete in we have .e . where the operator is defined by ( [ 21 ] ) . from invertibility of in get let and be two boundary value problems and .\ ] ] according to ( [ 9 ] ) and ( [ 10 ] ) and . 
then from the main equation ( [ 8 ] ) , we have .it follows from ( [ 5 ] ) that $ ] .using , we can transform the main equation ( [ 8 ] ) to the following equation : where and we assume that from the formula ( [ 24 ] ) , we obtain substituting ( [ 25 ] ) into the main equation ( [ 23 ] ) we obtain ,\ ] ] where substituting ( [ 26 ] ) into ( [ 27 ] ) we obtain using the ( [ 6 ] ) , we calculate the integral into last equation . then , we have where +\ ] ] .\ ] ] from ( [ 28 ] ) , ^{-1 } \quad , \quad 0<x < a \\\varphi_0(x,\lambda_1)\left[1+\frac{1}{\pi}\phi(x,\lambda_1)\right]^{-1 } \quad , \quad a < x<\pi ,\end{array } \right . \label{29}\ ] ] substituting ( [ 29 ] ) into ( [ 26 ] ) we get ^{-1 } \quad , \quad 0<x < a \\-\frac{1}{\pi}\rho(t)\varphi_0(t,\lambda_1)\varphi_0(x,\lambda_1)\left[1+\frac{1}{\pi}\phi(x , lambda_1)\right]^{-1 } \quad , \quad a < x<\pi , \end{array}\right . \label{30}\ ] ] where and .thus , we obtain the solution of main equation ( [ 8 ] ) .if we use the formula ( [ 5 ] ) then , we obtain the potential .this work is supported by the scientific and technological research council of turkey ( tubitak ) .anderssen rs .the effect of discontinuities in destiny and shear velocity on the asymptotic overtone structure of torsional eigenfrequencies of the earth ., geophys .j. r. astr . soc . ,1977 , 50 , 303 - 309 .aliev ba , yakubov ys .solvability of boundary value problems for second - order elliptic differential - operator equations with a spectral parameter and with a discontinuous coefficient at the highest derivative .differential equations , 2014 , 50(4 ) , 464475 .altinisik n , kadakal m , mukhtarov o. eigenvalues and eigenfunctions of discontinuous sturm - liouville problems with eigenparameter dependent boundary conditions .acta math . hung . , 2004 , 102(12 ) , 159175 . | in this work , a boundary value problem for sturm - liouville operator with discontinuous coefficient is examined . the main equation is obtained which has an important role in solution of inverse problem for boundary value problem and uniqueness of its solution is proved . uniqueness theorem for the solution of the inverse problem is given . [ multiblock footnote omitted ] [ multiblock footnote omitted ] |
nonlinear filters are important tools for dynamical data assimilation with applications in a variety of research areas , including biology , mathematical finance , signal processing , image processing , and multi - target tracking . to put it succinctly , nonlinear filtering is an extension of the bayesian framework to the estimation and prediction of nonlinear stochastic dynamics . in this effort, we consider the following nonlinear filtering model where and are two nonlinear functions , and are the stochastic state and observation processes , respectively , is a random vector representing the uncertainty in , and denotes the random measurement error in . in the discrete setting , the nonlinear filtering model in takes the form where and are mutually independent white noises .let denote the filed generated by the observational data up to the step .the goal of nonlinear filtering is to find the posterior probability density function ( pdf ) of the state , given the observation data , so as to compute the quantity of interest ( qoi ) , given by = \inf\left\ { \mathbb{e } [ | \phi({x}_k ) - z |^2 ] ; z \in \mathcal{z}_k \right\},\ ] ] where is a test function , and denotes the space of all -measurable and square integrable random variables .tremendous efforts have been made to solve nonlinear filtering problems in the last few decades .two of the well - known bayesian filters are extended kalman filters ( ekfs ) , and particle filters .the key ingredient of the ekfs is the linearization of both and in , so that the standard kalman filter can be applied directly .thus , if the nonlinearity of the state and the observation systems is not severe , then the ekfs can provide efficient and reasonable inferences about the state , otherwise , the performance of the ekfs can be very poor . for particle filters , the central theme is to approximate the desired posterior pdf of the state by the empirical distribution of a set of adaptively selected random samples ( referred to as `` particles '' ) .the particle filter method is essentially a sequential monte carlo approach , which requires no assumption on the linearity of the underlying system . as such , with sufficiently large number of particles, it is capable of providing an accurate approximation of the posterior pdf for a highly nonlinear filtering problems .however , there are some fundamental issues concerning the efficiency and robustness of particle filters . for example , since the empirical pdf is constructed based on particles with equal weights after resampling , the particle filter still needs a lot of samples in order to accurately approximate the target distribution . to overcome such a disadvantage , the authors proposed a new nonlinear filter named implicit filter " .this approach adopts the framework of bayesian filtering , which has two stages at each time step , i.e. , prediction and update . at the prediction stage, we estimate the prior pdf of the future state given the current available observation information ; at the update stage , we update the prior pdf by assimilating the newly received data to obtain the estimate of the posterior pdf . the implicit filter is distinguished from the particle filters by the use of interpolatory approximations to the prior and posterior pdfs .specifically , in the particle filter , is approximated by _explicitly _ propagating the samples of the current state through the nonlinear state equation , and constructing the empirical pdf of . 
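as a point of reference, one predict / weight / resample cycle of a bootstrap particle filter of the kind just described can be sketched as follows; the state map f, the observation map g and the gaussian noise levels are placeholders rather than the model of any particular example, and particles is an (n, d) array while g maps it to an (n, q) array of predicted observations.

import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, y_obs, f, g, sigma_w, sigma_v):
    # predict: push every particle through the state equation with fresh noise
    pred = f(particles) + sigma_w * rng.standard_normal(particles.shape)
    # weight: gaussian likelihood of the new observation given each particle
    resid = y_obs - g(pred)
    logw = -0.5 * np.sum(resid ** 2, axis=-1) / sigma_v ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # resample: draw with replacement according to the weights, so that the
    # resulting equally weighted cloud approximates the posterior pdf
    idx = rng.choice(len(pred), size=len(pred), p=w)
    return pred[idx]

because the posterior is carried only by the locations of equally weighted particles after resampling, a large number of particles is needed to resolve the target distribution; this is precisely the cost that the implicit filter discussed next avoids by carrying approximate pdf values on each state point.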
in the implicit filter, the interpolation of requires its function values at a set of grid points of the future state . under the condition that , we solve _ implicitly _ the state equation given a set of monte carlo samples of , so that the value of , at the grid point of , can be estimated by averaging the function values of at all the solutions of the state equation . as an implicit scheme , the implicit filter has a stabilizing effect which provides more accurate numerical approximations to the solution of the nonlinear filtering problem than the particle filter method .the main challenge of the implicit filter method is that the conditional pdf of the nonlinear filtering solution is estimated at grid points .as such the method suffers the so called `` the curse of dimensionality '' when the dimension of the state variable is high .in addition , the efficiency of the method may be significantly reduced when the domain of the pdf is unbounded . in this paper, we propose to construct a meshfree implicit filter algorithm to alleviate the aforementioned challenges .motivated by the particle filter method , we first generate a set of random particles and propagate these particles through the system model and use these particles to replace the grid points in the state space .after that we generate other necessary points through the shepard s method which constructs the interpolant by the weighted average of the values on state points . in order to prevent particle degeneracy in the generation of random state points, we introduce a resample step in the particle propagation .in addition we choose state points according to the system state , which make them adaptively located in the high probability region of the pdf of state . in this way , we solve the nonlinear filtering problem in a relatively small region in the state space at each time step and approximate the solution on a set of meshfree state points distributed adaptively to the desired pdf of the state .furthermore , since we approximate the pdf as a function on each state point , instead of using state points themselves to describe the empirical distribution , the implicit filter algorithm requires much fewer points than the particle filter method to depict the pdf of the state .the rest of this paper is organized as follows . in [ bf ] , we introduce the mathematical framework of the bayesian optimal filter . in [ algorithm ] , we construct meshfree implicit algorithm . in [ sec : ex ] , we demonstrate the efficiency and accuracy of our algorithm through numerical experiments . finally , [ sec : con ] contains conclusions and directions for the future research .for , let and denote the fields generated by and , respectively . for , we use to represent a realization of the random variable , and define for notational simplicity .it is easy to see that the dynamical model in is markovian in the sense that we also know that the measurements are conditionally independent given , i.e. , the bayesian optimal filter constructs the conditional distribution recursively in two stages : prediction stage and update stage .for , assume that is given .in the prediction stage is evaluated through the chapman - kolmogorov formula : in the update stage , the prior pdf obtained in is used to obtain the posterior pdf via the bayes formula : this section , we construct the meshfree implicit filter algorithm .the algorithm is based the implicit filter algorithm on grid points . 
the implicit filter algorithm introduced in developed from the general framework of the bayesian optimal filter discussed above , in which the primary computational challenge is the numerical approximation of the term in . for ,the goal of this stage is to approximate the prior distribution of the state , given the posterior distribution of the state . due to the the fact that = \int_{\mathbb{r}^{r } } p(x_k | x_{k-1 } , w_{k-1 } ) \cdot p(w_{k-1 } ) d w_{k-1 } , \ ] ] the prior pdf derived in identity can be rewritten as p(x_{k-1 } | y_{1:k-1 } ) d x_{k-1 } , \end{aligned}\ ] ] where ] . to this end , we first draw independent samples of the white noise , and define an approximation to as with which is essentially a restriction of in the subset . therefore , the expectation ] is the approximation error .then , by further fixing , the location of the mass of in the space of , denoted by , can be obtained by _implicitly _ solving the state equation which is the reason we named the approach the implicit filter . now substituting into , and using the same sample set as above to approximate the integral on the right hand side of, we obtain & = \sum_{j=1}^m \left({\frac}{1}{m}\sum_{j'=1}^m \delta_{w_{k-1}^{j}}(x_k^i| x_{k-1 } , w_{k-1}^{j'})\right)\\ & = { \frac}{1}{m}\sum_{j=1}^m \delta_{w_{k-1}^{j}}\left(x_k^i| x_{k-1 } , w_{k-1}^{j}\right ) , \end{aligned}\ ] ] then replacing ] , where , is a uniform partition of the interval ] needs to be defined sufficiently large , so as to capture the statistically significant region of the pdf .this may lead to a great waste of computation effort in the low probability region of .to alleviate such disadvantages , we propose to develop a distribution - informed meshfree interpolation approach to efficiently approximate the prior pdf .the central idea of the generation of random points for the state variable is to build a set of points , denoted by , according to the state distribution . to begin with ,we generate of random samples from the initial pdf of the initial state : if the initial pdf is close to the true state distribution , it s obvious that our random state points are more concentrated near the target state . for ,we propagate points to through the state equation : where are random samples according to the pdf of . denote and approximate the conditional pdf on with the scheme given by . in this way , the random points in move according to the state model .as opposed to particle filter methods , which use the number of particles to represent empirical distributions and require a large number of particles to follow the state distribution , in the implicit filter method we provide an approximation of the value of the pdf at each state point .therefore , much fewer points are needed to describe the state pdf and the random state points are not necessary to accurately follow the state distribution . by incorporating the new data , we update the prior pdf at each grid point , using the bayesian formula , to obtain where is given in , is the normalization factor , and is the approximation error . by neglecting the error term in , we obtain the following iterative numerical scheme for the update stage on , i.e. 
, where is desired the approximation of the posterior pdf .next , we use interpolation methods to construct the approximation of from values via where is the set of basis functions .since the state points in are generated randomly in the meshfree framework , standard polynomial interpolation is unstable due to the uncontrollable lebesgue constant .instead , we propose to use the shepard s method , which is an efficient meshfree interpolation technique , to construct the interpolant .the basic idea of the shepard s method is to use the weighted average of in the interpolating approximation .specifically , for a given point , we re - order the points in by the distances to to get a sequence such that where is the euclidean norm in .then , for a pre - chosen integer we use the first values in to approximate as follows where the weight is defined by note that . from, we have where is the error of the shepard s interpolation .we assume that has bounded first order derivative . for each pair and the approximation error is controlled by the distance and the derivative , where is a point between and .it is reasonable to assume that in high probability region of the derivative is large .it s worth pointing out that the random state points generated in this algorithm are concentrated in the high probability region .thus , if lies in the high probability region , the distance is small , which balances the error brought by the large derivative . on the other hand , if lies in the low probability region , although the distance is relatively large , the approximation error is still small due to the small value of the derivative .similar to the particle filter method , the above random state points generation suffers from the degeneracy problem for long term simulations , especially for high - dimensional problems .after several time steps , the probability density tends to concentrate on a few points which dramatically reduces the number of effective sample points in . in this work ,we propose an occasional resampling procedure to address these problems and rejuvenate the random points cloud . at the time step , the resampling procedure takes place after we obtain , in order to remove the degenerated points in using the information provided by .specifically , the first step is to develop a degeneracy metric to determine the necessity of doing resampling . to this end , we define the following degenerated subset , where is a user - defined threshold .we also define to be the index set of .then , the degeneracy of can be measured by the ratio ] , then we will skip the resampling step and propagate to get ; otherwise , the set is considered degenerated , and the resampling procedure is needed . in resampling , instead of propagating to , we aim at constructing an intermediate point set , denoted by and propagate through the state model to obtain . according to the definition of in, we consider the state points in are in the statistically significant region of , so that we first put those points in , i.e. , for the state points in , we replace them by generating new samples from using the importance sampling , i.e. 
, as a result , the resampling procedure helps us remove the state points with low probabilities , and makes the state point set concentrated in the high probability region of the posterior pdf at each time step .finally , we summarize the entire meshfree implicit filter algorithm introduced in [ prediction]-[resampling ] in algorithm 1 below .p0.95 : _ the meshfree implicit filter algorithm _ + 1.1 [ algorithm2 ] : set the number of samples for estimating $ ] , the number of state points , the resampling thresholds and compute the ratio propagate through the state model to obtain resample and construct the intermediate state set propagate through the state model to obtain : solve using , at each point in : solve using and +in this section , we present two numerical examples to examine the performance of our meshfree implicit filter method . in example 1, we use a two dimensional nonlinear filtering problem to show the distributions of the random points . in example 2 , we solve a three dimensional bearing - only tracking problem , which is a six dimensional nonlinear filtering problem . for this higher dimensional problem , we compare the accuracy and efficiency of our meshfree implicit filter method with the extended kalman filter and the particle filter . in this example , we consider the two dimensional noise perturbed tumoral growth model where is a two dimensional standard brownian motion and .the state process is a two dimensional vector , is defined as and here , models the gompertzian growth rate of the tumor and gives the degree of vascularization of the tumor which is also called `` angiogenic capacity '' . to approximate the state variables ,we discretize the dynamic system in time and obtain a discrete state model here , is a two dimensional zero mean gaussian white noise process with covariance , where is the identity matrix and is the time partition stepsize .the measurement of the state model is given by where is a two dimensional zero mean gaussian white noise process with covariance , is a identity matrix and . in the numerical experiment ,we use uniform time partition with stepsize and simulate the state process for with initial state and parameters , , . at time step , we initialize the prior pdf by , where and [ scatter_1 ] [ scatter_2 ] + [ scatter_3 ] [ scatter_10 ] + [ scatter_20 ] [ scatter_40 ] + in figure [ scatter_0 ] , we plot random samples generated from the initial pdf , which are our initial random points .figure [ 2d_grids ] illustrates the behavior of random state points at time steps , respectively . in figure[ 2d_grids ] , the blue dots in each figure plot the random state points obtained by using the dynamic state points generation method introduced in section [ algorithm ] and the red cross in each figure gives the true state at the corresponding time step . from the figures we can see that all the points are moving according to the state model andare concentrated around the true state . to present the accuracy of the algorithm, we show the simulation of the tumoral growth states in figure [ 2d_simulation ] .the black curves are the true and coordinate values of the tumoral growth states , respectively .the blue curves show the simulated states obtained by using the meshfree implicit filter method .[ 2d_simulation_x1 ] [ 2d_simulation_x2 ] in this example , we study a six dimensional target tracking problem . 
in figure[ model_6d ] , the target , denoted by the red line , moves in the three dimensional space and two platforms on the ground , denoted by pentagons , take angular observations of the moving target .the state process is described by the following dynamic model where describes the position of the moving target which is controlled by parameters .the system noise is a zero mean gaussian white noise process with covariance , is the identity matrix and is a given time period , is a constant vector and is given by the measurements of the state process from the two locations are given by where is a 4 dimensional zero mean gaussian white noise process with covariance , is a identity matrix , , and are locations of two observers .[ com_6d_x1 ] [ com_6d_x2 ] + [ com_6d_x3 ] [ com_6d_x4 ] + [ com_6d_x5 ] [ com_6d_x6 ] + we choose , , .also , we assume that platforms are located at , and the initial sate is given by a gaussian where and the target will be observed over the time period . in the numerical experiments ,we compare the performance of our meshfree implicit filter with the extended kalman filter and the particle filter .in particular , we compare the estimated mean values of the states process along each dimension in figure [ 6d_state_comparison ] . in the particle filter method , we choose particles . in the meshfree implicit filter method, we choose the number of state points to be and the number of random samples in the implicit filter monte carlo simulation to be .the black curves in figure [ 6d_state_comparison ] show the real states process along each direction , the green curves give the estimated means obtained by the extended kalman filter method , the red curves give the estimated means obtained by the particle filter method , and the blue curves give the estimated means obtained by the meshfree implicit filter .we also plot the error corresponding to all three methods in figure [ 6d_l2 ] .as we can see from figure [ 6d_state_comparison ] and [ 6d_l2 ] , the implicit filter and the particle filter are much more accurate than the extended kalman filter and the implicit filter is the most accurate approximation in this experiment . to further compare the efficiency between the meshfree implicit filter and the particle filter , we repeat the above experiment over realizations and show the average cpu time and the corresponding global root mean square error defined by where is the error of the -realization at time step . in table [ efficiency ] , we can see that with particles , the cpu time of the particle filter method is comparable to that of the implicit filter with random state points , but the global rmse of the particle filter is more than doubled the rmse of the implicit filter . with particles , the particle filter method achieves an accuracy comparable to the implicit filter , but at a significantly higher cost ..example 2 : efficiency comparison [ cols="^,^,^",options="header " , ]in this work , we proposed an efficient meshfree implicit filter algorithm by evaluating the conditional pdf on meshfree points in the state space .these meshfree points are chosen adaptively according to the system state evolution .we also apply shepard s method as the meshfree interpolation method to compute interplants with random state points . in order to address the degeneracy of the random points , we use importance sampling method to construct a resample step .numerical examples demonstrate the effectiveness and efficiency of our algorithm . 
in the future, we plan to perform a rigorous numerical analysis for the meshfree implicit filter algorithm . | in this paper , we propose a meshfree approximation method for the implicit filter developed in , which is a novel numerical algorithm for nonlinear filtering problems . the implicit filter approximates conditional distributions in the optimal filter over a deterministic state space grid and is developed from samples of the current state obtained by solving the state equation implicitly . the purpose of the meshfree approximation is to improve the efficiency of the implicit filter in moderately high - dimensional problems . the construction of the algorithm includes generation of random state space points and a meshfree interpolation method . numerical experiments show the effectiveness and efficiency of our algorithm . nonlinear filtering , implicit algorithm , meshfree approximation , shepard s method |
the basic problem in numerical probability is to _ optimize _ some way or another the computation by a monte carlo simulation of a real quantity known by a probabilistic representation \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi}}}\ ] ] where is a random vector having values in a banach space and is a borel function ( and is square integrable ) .the space is but can also be a functional space of paths of a process } ] , the one with the lowest variance is the one with the lowest quadratic norm : minimizing the variance amounts to finding the parameter solution ( if any ) to the following minimization problem where , for every , \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi } } } = { \e\ ! { } { { \ifcase 6\or [ \or \bigl[\or \bigl[\or \biggl[\or \biggl[\or \left[\fi } f^2(x ) \frac{p(x)}{p_\theta(x ) } { \ifcase 6\or ] \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi } } } \le + \infty.\ ] ] a typical situation is importance sampling by mean translation in a finite dimensional gaussian framework _i.e. _ \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi}}}.\ ] ] then the second equality in ( [ v = e ] ) is simply the cameron - martin formula .this specific framework is very important for applications , especially in finance , and was the starting point of the new interest for recursive importance sampling procedures , mainly initiated by arouna in ( see further on ) .in fact , as long as variance reduction is concerned , one can consider a more general framework without extra effort . as a matter of fact , if the distributions satisfy and satisfies \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi } } } < + \infty ] , then is strictly convex and is reduced to a single .these results follow from the second representation of as an expectation in ( [ v = e ] ) which is obtained by a second change of probability ( the reverse one ) . for notational conveniencewe will temporarily assume that in this introduction section , although our main result needs no such restriction .a classical procedure to approximate is the so - called robbins - monro algorithm .this is a recursive stochastic algorithm ( see ( [ rm ] ) below ) which can be seen as a stochastic counterpart of deterministic recursive zero search procedures like the newton - raphson one .it can be formally implemented provided the gradient of the ( convex ) target function admits a representation as an expectation . since we have no _a priori _ knowledge about the regularity of ( is smooth enough alternative approaches have been developed based on some large deviation estimates which provide a good approximation of by deterministic optimization methods ( see ) . ] ) and do not wish to have any , we are naturally lead to _ formally _ differentiate the second representation of in ( [ v = e ] ) to obtain a representation of as \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi}}}.\ ] ] then , if we consider the function such that naturally defined by ( [ grad1 ] ) , the derived robbins - monro procedure writes with a _ step _ sequence decreasing to 0 ( at an appropriate rate ) , a sequence of i.i.d . random variables with distribution . 
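for concreteness, a minimal sketch of one common form of this update in the gaussian mean-translation setting recalled above is given below. the i.i.d. innovations are standard normal draws, the toy payoff and the step constants are purely illustrative, and the cameron-martin exponential weight appearing in the estimate of the gradient is exactly the term whose growth is responsible for the instability discussed next.

import numpy as np

rng = np.random.default_rng(0)

def arouna_step(theta, g, F, gamma):
    # one robbins-monro step for the gaussian mean-translation case: H is an
    # unbiased estimate of grad V(theta), carrying the cameron-martin weight
    weight = np.exp(-np.dot(theta, g) + 0.5 * np.dot(theta, theta))
    H = F(g) ** 2 * weight * (theta - g)
    return theta - gamma * H

# toy payoff F(x) = exp(x_1 / 2) in dimension 2; the variance minimiser is then
# theta = (0.5, 0), since the optimal sampling density is proportional to |F| p
F = lambda x: np.exp(0.5 * x[0])
theta = np.zeros(2)
for n in range(1, 20001):
    theta = arouna_step(theta, rng.standard_normal(2), F, 1.0 / (100.0 + n))
print(theta)   # should settle near (0.5, 0) for this mild payoff

for payoffs with faster growth, or after an unlucky early draw, the weight exp(-<theta, g> + |theta|^2 / 2) can push the iterate far away and then blow up at the following steps; this is the explosion phenomenon that the repeated projections, and later the third change of probability, are designed to cure.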
to establish the convergence of a robbins - monro procedure to seemingly not so stringent assumptions .we mean by that : not so different from those needed in a deterministic framework .however , one of them turns out to be quite restrictive for our purpose : the sub - linear growth assumption in quadratic mean which is the stochastic counterpart of the classical non - explosion condition needed in a deterministic framework . in practice , this condition is almost never satisfied in our framework due to the behaviour of the term as goes to infinity .the origin of recursive importance sampling as briefly described above goes back to kushner and has recently been brought back to light in a gaussian framework by arouna in .however , as confirmed by the numerical experiments carried out by several authors ( ) , the regular robbins - monro procedure ( [ rm ] ) does suffer from a structural instability coming from the violation of ( [ nec ] ) .this phenomenon is quite similar to the behaviour of the explicit discretization schemes of an when has a super - linear growth at infinity .furthermore , in a probabilistic framework no implicit scheme " can be devised in general . then the only way out _ mutatis mutandis _ is to kill the procedure when it comes close to explosion and to restart it with a smaller step sequence .formally , this can be described as some repeated projections or truncations when the algorithm leaves a slowly growing compact set waiting for stabilization which is shown to occur .then , the algorithm behaves like a regular robbins - monro procedure .this is the so - called projection la chen " avatar of the robbins - monro algorithm , introduced by chen in and then investigated by several authors ( see ) formally , repeated projections la chen " can be written as follows : where denotes the projection on the convex compact ( is increasing to as ) . in established a a central limit theorem for this version of the recursive variance reduction procedure . some extensions to non gaussian frameworkhave been carried out by arouna in his phd thesis ( with some applications to reliability ) and more recently to the marginal distributions of a lvy processes by kawai in .however , convergence occurs for this procedure after a long stabilization phase " provided that the sequence of compact sets have been specified in an appropriate way .this specification turns out to be a rather sensitive phase of the tuning " of the algorithm to be combined with that of the step sequence . in this paper , we show that as soon as the growth of at infinity can be explicitly controlled , it is always possible to design a regular robbins - monro algorithm which converges to a variance minimizer with no risk of explosion ( and subsequently no need of repeated projections ) . to this endthe key is to introduce a _third _ change of probability in order to control the term . in a gaussian frameworkthis amounts to switching the parameter from the density to the function by a third mean translation .this of course corresponds to a new function but can also be interpreted _ a posteriori _ as a way to introduce an _ adaptive _ step sequence ( in the spirit of ) . in terms of formal importance sampling , we introduce a new positive density ( everywhere positive on ) so that the gradient writes \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi } } } = { \e\ ! 
{ } { { \ifcase 6\or [ \or \bigl[\or \bigl[\or \biggl[\or \biggl[\or \left[\fi } \widetilde h_v(\theta,\widetilde { x^{(\theta ) } } ) { \ifcase 6\or ] \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi}}},\ ] ] where .the weight " may seem complicated but the rle of the density is to control the critical term by a ( deterministic ) quantity only depending on .then we can replace by a function in the above robbins - monro procedure ( [ rm ] ) where is a positive function used to control the behaviour of for large values of ( note that \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi } } } = 0 { \ifcase 2\or \\or \bigr\\or \bigr\\or \biggr\\or \biggr\\or \right\\fi}\or \bigl\ { } { \e\ ! { } { { \ifcase 2\or [ \or \bigl[\or \bigl[\or \biggl[\or \biggl[\or \left[\fi } h(.,\widetilde { x^{(\theta ) } } ) { \ifcase 2\or ] \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi } } } = 0 { \ifcase 2\or \\or \bigr\\or \bigr\\or \biggr\\or \biggr\\or \right\\fi}\or \bigl\ { } { \e\ ! { } { { \ifcase 2\or [ \or \bigl[\or \bigl[\or \biggl[\or \biggl[\or \left[\fi } h(.,\widetilde { x^{(\theta ) } } ) { \ifcase 2\or ]\or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi } } } = 0 { \ifcase 2\or \\or \bigr\\or \bigr\\or \biggr\\or \biggr\\or \right\\fi}\or \biggl\ { } { \e\ ! { } { { \ifcase 2\or [ \or \bigl[\or \bigl[\or \biggl[\or \biggl[\or \left[\fi } h(.,\widetilde { x^{(\theta ) } } ) { \ifcase 2\or ] \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi } } } = 0 { \ifcase 2\or \\or \bigr\\or \bigr\\or \biggr\\or \biggr\\or \right\\fi}\or \biggl\ { } { \e\ ! { } { { \ifcase 2\or [ \or \bigl[\or \bigl[\or \biggl[\or \biggl[\or \left[\fi } h(.,\widetilde { x^{(\theta ) } } ) { \ifcase 2\or ] \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi } } } = 0 { \ifcase 2\or \\or \bigr\\or \bigr\\or \biggr\\or \biggr\\or \right\\fi}\or \left\ { } { \e\ ! { } { { \ifcase 2\or [ \or \bigl[\or \bigl[\or \biggl[\or \biggl[\or \left[\fi } h(.,\widetilde { x^{(\theta ) } } ) { \ifcase 2\or ]\or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi } } } = 0 { \ifcase 2\or \\or \bigr\\or \bigr\\or \biggr\\or \biggr\\or \right\\fi}\fi } } = { { \ifcase 6\or \ { } \nabla v = 0 { \ifcase 6\or \\or \bigr\\or \bigr\\or \biggr\\or \biggr\\or \right\\fi}\or \bigl\ { } \nabla v = 0 { \ifcase 6\or \\or \bigr\\or \bigr\\or \biggr\\or \biggr\\or \right\\fi}\or \bigl\ { } \nabla v = 0 { \ifcase 6\or \\or \bigr\\or \bigr\\or \biggr\\or \biggr\\or \right\\fi}\or \biggl\ { } \nabla v = 0 { \ifcase 6\or \\or \bigr\\or \bigr\\or \biggr\\or \biggr\\or \right\\fi}\or \biggl\ { } \nabla v = 0 { \ifcase 6\or \\or \bigr\\or \bigr\\or \biggr\\or \biggr\\or \right\\fi}\or \left\ { } \nabla v = 0 { \ifcase 6\or \\or \bigr\\or \bigr\\or \biggr\\or \biggr\\or \right\\fi}\fi}} ] where is a path - dependent diffusion process and is a functional defined on the space ,\r^d) ] .we consider a -dimensional it process } ] is a -dimensional standard brownian motion , } ] and \times { \cal c}([0,t],\r^d)\to { \cal m}(d , q) ] and continuous in \times { \cal c}([0,t],\r^d) ] with values in ( where is a free integral parameter ) . thena girsanov transform yields that for every ,\r^p) ] .let fixed in .the function is concave , hence is convex so that , owing to the young inequality , the function is convex since it is non - negative . to prove that tends to infinity as goes to infinity , we consider two cases : * if for every , the result is trivial by fatou s lemma . 
*if for every , we apply the reverse hlder inequality with conjugate exponents to obtain \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi}}}^3 { \e\ ! { } { { \ifcase 6\or [ \or \bigl[\or \bigl[\or \biggl[\or \biggl[\or \left[\fi } { { \ifcase 6\or ( \or \bigl(\or \bigl(\or \biggl(\or \biggl(\or \left(\fi } \frac{p(x)}{p_{\theta/2}(x ) } { \ifcase 6\or ) \or \bigr)\or \bigr)\or \biggr)\or \biggr)\or \right)\fi}}^{-1 } { \ifcase 6\or ] \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi}}}^{-2 } , \\ & \ge { \e\ ! { } { { \ifcase 5\or [ \or \bigl[\or \bigl[\or \biggl[\or \biggl[\or \left[\fi } f^{2/3}(x ) { { \ifcase 4\or ( \or \bigl(\or \bigl(\or \biggl(\or \biggl(\or \left(\fi } \frac{p^2_{\theta/2}(x)}{p(x)p_\theta(x ) } { \ifcase 4\or ) \or \bigr)\or \bigr)\or \biggr)\or \biggr)\or \right)\fi}}^{\frac{1}{3 } } { \ifcase 5\or ] \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi}}}^3,\end{aligned}\ ] ] ( and are probability density functions ) .one concludes again by fatou s lemma . the set , or to be precise, the random vectors taking values in will the target(s ) of our new algorithm .if is strictly convex , if \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi } } } > 0,\ ] ] then .nevertheless this will not be necessary owing to the combination of the two results that follow .[ un]let be a convex differentiable function , then furthermore , if is nonempty , it is a convex closed set ( which coincide with ) and a sufficient ( but in no case necessary ) condition for a nonnegative convex function to attain a minimum is that .now we pass to the statement of the convergence theorem on which we will rely throughout the paper .it is a slight variant of the regular robbins - monro procedure whose proof is rejected in an annex .[ thmrz ] ( extended robbins - monro theorem ) let a borel function and an -valued random vector such that \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi}}}<+\infty ] , all defined on the same probability space .then , the recursive procedure defined by satisfies : the convergence also holds in , .the proof is postponed to the appendix at the end of the paper .the natural way to apply this theorem for our purpose is the following : * step 1 : we will show that the convex function in ( [ v = e ] ) is differentiable with a gradient having a representation as an expectation formally given \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi}}} ] so that is well defined . in , arouna considers the function defined by it is clear that the condition is not satisfied even if we simplify this function by ( which does not modify the problem ) .when have finite moments of any order , a naive way to control directly by an explicit deterministic function of ( in order to rescale it ) is to proceed as follows : one derives from hlder inequality that for every couple , of conjugate exponents setting and , yields then , satisfies the condition and theoretically the standard robbins - monro algorithm implemented with converges and no projection nor truncation is needed .numerically , the solution is not satisfactory because the correcting factor goes to zero much too fast as goes to infinity : if at any iteration at the beginning of the procedure is sent too far " , then it is frozen instantly .if is too small it will simply not prevent explosion .the tuning of becomes quite demanding and payoff dependent .this is in complete contradiction with our aim of _ a self - controlled variance reducer_. 
a more robust approach needs to be developed .on the other hand this kind of behaviour suggests that we are not in the right asymptotics to control .note however that when is bounded with a compact support , then one can set and the above approach provides an efficient answer to our problem .we consider the density by , we have \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi}}},\ ] ] with , _i.e. _ .since is the gaussian density , we have . as a consequence ,the function defined by provides a representation \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi}}} ] ) yields one concludes by the point of .thus the normal distribution satisfies with and .moreover , note that the last inequality in the above proof holds as an equality .now we are in position to derive an unconstraint ( extended ) robbins - monro algorithm to minimize the function , provided the function satisfies a sub - multiplicative control property , in which is a real parameter and a function from to , such that , namely \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi}}}<+\infty .\end{array}\right . \ ] ] * remark .* assumption seems almost non - parametric . however , its field of application is somewhat limited by ( [ h2 ] ) for the following reason : if there exists a positive real number such that is concave , then for some real constant ; which in turn implies that the function in needs to satisfy for some and some .( then with if ] , and , and that the step sequence satisfies the usual decreasing step assumption then the recursive procedure defined by where is an i.i.d . sequence with the same distribution as and converges toward an -valued ( square integrable ) random variable .in order to apply theorem [ thmrz ] , we have to check the following fact : _ mean reversion _ : the mean function of the procedure defined by ( [ algorm ] ) reads \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi } } } = \frac{e^{-2\delta |\theta|^a}}{1+\widetilde f(-\theta)^{2c}}\nabla v(\theta)\ ] ] so that and if and , for every . _ linear growth of _ : all our efforts in the design of the procedure are motivated by this assumption ( [ lingrowth ] ) which prevents explosion .this condition is clearly fulfilled by since \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi}}}^2 & = \frac{e^{-4\delta |\theta|^a}}{(1+\widetilde f(-\theta)^{2c})^2 } { \e\ ! { } { { \ifcase 6\or [ \or \bigl[\or \bigl[\or \biggl[\or \biggl[\or \left[\fi } f^4(x-\theta)\left(\frac{p^2(x-\theta)}{p(x)p(x-2\theta)}\frac{|\nabla p(x-2\theta)|}{p(x-2\theta)}\right)^2 { \ifcase 6\or ] \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi } } } , \\ & \le c e^{-4\delta |\theta|^a } { \e\ ! { } { { \ifcase 6\or [ \or \bigl[\or \bigl[\or \biggl[\or \biggl[\or \left[\fi } ( 1+\widetilde f(x)^{2c})^2 ( a(|x|^{a-1}+|\theta|^{a-1})+b)^2 { \ifcase 6\or ] \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi}}},\end{aligned}\ ] ] where we used assumption in the first line and inequality ( [ ineqtech ] ) from lemma [ lemmetec ] in the second line .one derives that there exists a real constant such that \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi}}}^2 \le c { \e\ ! { } { { \ifcase 6\or [ \or \bigl[\or \bigl[\or \biggl[\or \biggl[\or \left[\fi } \widetilde f(x)^{4c}(1+|x|)^{2(a-1 ) } { \ifcase 6\or ] \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi } } } ( 1+|\theta|^{2(a-1)}).\ ] ] this provides the expected condition since holds . 
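before the examples listed next, here is a small numerical illustration of the resulting unconstrained procedure in the gaussian case (the first example below). the sketch is ours: it uses the translation representation of the gradient for a standard normal x, namely ∇v(θ) = e^{|θ|²} e[ f²(x−θ)(2θ−x) ], drops the positive factor e^{|θ|²}, and divides by the θ-only quantity 1 + f(−θ)² as a stand-in for the self-controlling weight of the theorem; the digital payoff and the step sizes are arbitrary illustrative choices (for a bounded payoff the correction is essentially inactive, as noted later in the text).

```python
import numpy as np

rng = np.random.default_rng(0)
F = lambda x: 1.0 if x >= 2.0 else 0.0      # illustrative (bounded) payoff

def h(theta, x):
    # gradient representation by translation for a standard normal X:
    #   grad V(theta) = exp(theta^2) * E[ F(X - theta)^2 * (2*theta - X) ],
    # with the positive factor exp(theta^2) dropped and a theta-only
    # normaliser standing in for the self-controlling weight of the text
    # (for this bounded payoff it is essentially inactive)
    return F(x - theta) ** 2 * (2.0 * theta - x) / (1.0 + F(-theta) ** 2)

theta, acc, n_mc = 0.0, 0.0, 200_000
for n in range(1, n_mc + 1):
    x = rng.standard_normal()
    # adaptive importance sampling: shift by the current theta and reweight by
    # the gaussian likelihood ratio, so every term is unbiased for E[F(X)]
    acc += F(x + theta) * np.exp(-theta * x - 0.5 * theta * theta)
    theta -= (1.0 / (n + 100.0)) * h(theta, x)      # robbins-monro step
print(acc / n_mc, theta)     # estimate of P(X >= 2) ~ 0.0228, learned drift
```

the running average is the purely adaptive monte carlo estimate discussed further on: each term uses the drift selected so far, so no samples are wasted on a separate optimization phase.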
* the normal distribution .its density is given on by so that is satisfied as well as for , .assumption is satisfied iff .then the function has a particularly simple form * _ the hyper - exponential distributions _ \ ] ] where is polynomial function .this wide family includes the normal distributions , the laplace distribution , the symmetric gamma distributions , etc . * _ the logistic distribution _ its density on the real line is given by is satisfied as well as for ( ) , .assumption is satisfied iff .a second classical approach is to consider an exponential change of measure ( or esscher transform ) .this transformation has already been consider for that purpose in to extend the procedure with repeated projections introduced in .we denote by the cumulant generating function ( or log laplace ) of _ i.e. _ \or\bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi}}} ] suppose satisfies and satisfies \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi } } } < + \infty.\ ] ] then is fulfilled and the function is differentiable on with a gradient given by \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi } } } , \\ & = { \e\ ! { } { { \ifcase 6\or [ \or \bigl[\or \bigl[\or \biggl[\or \biggl[\or \left[\fi } { { \ifcase 2\or ( \or \bigl(\or \bigl(\or \biggl(\or \biggl(\or \left(\fi } \nabla \psi(\theta ) - { x^{(-\theta)}}{\ifcase 2\or ) \or \bigr)\or \bigr)\or \biggr)\or \biggr)\or \right)\fi } } f^2({x^{(-\theta ) } } ) { \ifcase 6\or ]\or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi } } } e^{\psi(\theta)-\psi(-\theta ) } , \label{diff_esscher}\end{aligned}\ ] ] where \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi } } } } { { \e\ ! { } { { \ifcase 6\or [ \or \bigl[\or \bigl[\or \biggl[\or \biggl[\or \left[\fi } e^ { { { \ifcase 1\or \langle\or \bigl\langle\or \bigl\langle\or \biggl\langle\or \biggl\langle\or \left\langle\fi } \theta , x { \ifcase 1\or \rangle\or \bigr\rangle\or \bigr\rangle\or \biggr\rangle\or \biggr\rangle\or \right\rangle\fi } } } { \ifcase 6\or ] \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi}}}} ]we consider a -dimensional it process } ] is a -dimensional standard brownian motion , } ] and \times { \cal c}([0,t],\r^d)\to { \cal m}(d , q) ] . for further details we refer to , p. 124thus , if and for every ,\r^d) ] where , then is _ the continuous euler scheme with step _ of the above diffusion with drift and diffusion coefficient . an easy adaptation of standard proofs for regular sde s show( see ) that strong existence and uniqueness of solutions for ( [ sdext ] ) follows from the following assumption ,\ ; \forall\ , x,\ , y\!\in { \cal c}([0,t],\r^d),\ , { { \ifcase 1\or \lvert\or \bigl\lvert\or \bigl\lvert\or \biggl\lvert\or \biggl\lvert\or \left\lvert\fi } b(t , y)-b(t , x ) { \ifcase 1\or \rvert\or \bigr\rvert\or \bigr\rvert\or \biggr\rvert\or \biggr\rvert\or \right\rvert\fi}}+ { { \ifcase 6\or \lvert\or \bigl\lvert\or \bigl\lvert\or \biggl\lvert\or \biggl\lvert\or \left\lvert\fi } \sigma(t , y)-\sigma(t , x ) { \ifcase 6\or \rvert\or \bigr\rvert\or \bigr\rvert\or \biggr\rvert\or \biggr\rvert\or \right\rvert\fi } } \lec_{b,\sigma } { { \ifcase 6\or \lvert\or \bigl\lvert\or \bigl\lvert\or \biggl\lvert\or \biggl\lvert\or \left\lvert\fi } x - y { \ifcase 6\or \rvert\or \bigr\rvert\or \bigr\rvert\or \biggr\rvert\or \biggr\rvert\or \right\rvert\fi}_{\scriptscriptstyle{\infty}}}. 
\end{cases}\end{aligned}\ ] ] our aim is to devise an adaptive variance reduction method inspired from section [ dimfinie ] for the computation of \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi}}}\ ] ] where is an borel functional defined on ,\r^d) ] and by \times { \cal c}([0,t],\r^d ) \rightarrow { \cal m}(q , p),\ ] ] a bounded borel function and ( represented by a borel function ) for . in the sequel ,we use the following notations , where denotes the solution to .first we need the following standard abstract lemma .[ girs ] suppose holds .+ the sde satisfies the weak existence and uniqueness assumptions and for every non negative borel functional ,\r^{d+1 } ) \to \r_+ ] we have , with the above notations , \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi } } } = { \e\ ! { } { { \ifcase 4\or [ \or \bigl[\or \bigl[\or \biggl[\or \biggl[\or \left[\fi } g { { \ifcase 3\or ( \or \bigl(\or \bigl(\or \biggl(\or \biggl(\or \left(\fi } { x^{(\theta ) } } , \int_0^.\!\ ! { { \ifcase 1\or \langle\or \bigl\langle\or \bigl\langle\or \biggl\langle\or \biggl\langle\or \left\langle\fi } f(s , { x^{(\theta),s } } ) , \operatorname{d\!}w_s { \ifcase 1\or \rangle\or \bigr\rangle\or \bigr\rangle\or \biggr\rangle\or \biggr\rangle\or \right\rangle\fi } } + \int_0^. \!\ ! { { \ifcase 1\or \langle\or \bigl\langle\or \bigl\langle\or \biggl\langle\or \biggl\langle\or \left\langle\fi } f , \theta { \ifcase 1\or \rangle\or \bigr\rangle\or \bigr\rangle\or \biggr\rangle\or \biggr\rangle\or \right\rangle\fi}}(s , { x^{(\theta),s } } ) \operatorname{d\!}s { \ifcase 3\or ) \or \bigr)\or \bigr)\or \biggr)\or \biggr)\or \right)\fi } } \\ \times e^{- \int_0^t\!\ ! { { \ifcase 1\or \langle\or \bigl\langle\or \bigl\langle\or \biggl\langle\or \biggl\langle\or \left\langle\fi } { \theta^{(\theta)}}_s , \operatorname{d\!}w_s { \ifcase 1\or \rangle\or \bigr\rangle\or \bigr\rangle\or \biggr\rangle\or \biggr\rangle\or \right\rangle\fi } } -\frac{1}{2 } { { { \ifcase 6\or \lvert\or \bigl\lvert\or \bigl\lvert\or \biggl\lvert\or \biggl\lvert\or \left\lvert\fi } { \theta^{(\theta)}}{\ifcase 6\or \rvert\or \bigr\rvert\or \bigr\rvert\or \biggr\rvert\or \biggr\rvert\or \right\rvert\fi}_{\scriptscriptstyle{\!l^2_{t , q}}}}}^2 } { \ifcase 4\or ] \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi}}},\end{gathered}\ ] ] and \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi } } } = { \e\ ! { } { { \ifcase 4\or [ \or \bigl[\or \bigl[\or \biggl[\or \biggl[\or \left[\fi } g { { \ifcase 3\or ( \or \bigl(\or \bigl(\or \biggl(\or \biggl(\or \left(\fi } x , \int_0^. \!\ !{ { \ifcase 1\or \langle\or \bigl\langle\or \bigl\langle\or \biggl\langle\or \biggl\langle\or \left\langle\fi } f(s , x^s ) , \operatorname{d\!}w_s { \ifcase 1\or \rangle\or \bigr\rangle\or \bigr\rangle\or \biggr\rangle\or \biggr\rangle\or \right\rangle\fi } } - \int_0^. \!\ ! { { \ifcase 1\or \langle\or \bigl\langle\or \bigl\langle\or \biggl\langle\or \biggl\langle\or \left\langle\fi } f , \theta { \ifcase 1\or \rangle\or \bigr\rangle\or \bigr\rangle\or \biggr\rangle\or \biggr\rangle\or \right\rangle\fi}}(s , x^s ) \operatorname{d\!}s { \ifcase 3\or ) \or \bigr)\or \bigr)\or \biggr)\or \biggr)\or \right)\fi}}\\ \times e^{\int_0^t \!\ ! 
{ { \ifcase 1\or \langle\or \bigl\langle\or \bigl\langle\or \biggl\langle\or \biggl\langle\or \left\langle\fi } \theta_s , \operatorname{d\!}w_s { \ifcase 1\or \rangle\or \bigr\rangle\or \bigr\rangle\or \biggr\rangle\or \biggr\rangle\or \right\rangle\fi } } -\frac{1}{2 } { { { \ifcase 6\or \lvert\or \bigl\lvert\or \bigl\lvert\or \biggl\lvert\or \biggl\lvert\or \left\lvert\fi } \theta{ \ifcase 6\or \rvert\or \bigr\rvert\or \bigr\rvert\or \biggr\rvert\or \biggr\rvert\or \right\rvert\fi}_{\scriptscriptstyle{\!l^2_{t , q}}}}}^2 } { \ifcase 4\or ] \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi}}},\end{gathered}\ ] ] this is a straightforward application of theorem 1.11 , p.372 ( and the remark that immediately follows ) in once noticed that , and are predictable processes with respect to the completed filtration of . the dolans exponential } ] .it follows from the first identity in lemma [ girs ] that for every bounded borel function \times { \cal c}([0,t ] , \r^d ) \to { \cal m}(q , p) ] .we have then \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi}}}^3 ] .the second claim easily follows from assumption ( [ general ] ) . as a first step , we show that the random functional from into ( ) , is differentiable . indeed , it from the below inequality , where is clearly a bounded random functional from into , with an operator norm ( ( this follows from hlder and _ b.d.g . _inequalities ) .then , we derive that is differentiable form into every with differential .this follows from standard computation based on ( [ diffl2lp ] ) , the elementary inequality and the fact that where we used both hlder and _ b.d.g . _ inequality . one concludes that \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi}}} ] , let and let and denote a strong solutions of and driven by the same brownian motion .then , for every , there exists a real constant such that } { { \ifcase 2\or \lvert\or \bigl\lvert\or \bigl\lvert\or \biggl\lvert\or \biggl\lvert\or \left\lvert\fi } x_t - { x^{(\theta)}}_t { \ifcase 2\or \rvert\or \bigr\rvert\or \bigr\rvert\or \biggr\rvert\or \biggr\rvert\or \right\rvert\fi } } { \ifcase 4\or \rvert\or \bigr\rvert\or \bigr\rvert\or \biggr\rvert\or \biggr\rvert\or \right\rvert\fi}_{\scriptscriptstyle{\!r } } } \le c_{b,\sigma } e^{c_{b , \sigma } t } { { \ifcase 4\or \lvert\or \bigl\lvert\or \bigl\lvert\or \biggl\lvert\or \biggl\lvert\or \left\lvert\fi } \int_0^t \!\ ! { { \ifcase 2\or \lvert\or \bigl\lvert\or \bigl\lvert\or \biggl\lvert\or \biggl\lvert\or \left\lvert\fi } \sigma(s , { x^{(\theta),s } } ) { \theta^{(\theta)}}_s { \ifcase 2\or \rvert\or \bigr\rvert\or \bigr\rvert\or \biggr\rvert\or \biggr\rvert\or \right\rvert\fi } } \operatorname{d\!}s { \ifcase 4\or \rvert\or \bigr\rvert\or \bigr\rvert\or \biggr\rvert\or \biggr\rvert\or \right\rvert\fi}_{\scriptscriptstyle{\!r}}}. \ ] ] the proof follows the lines of the proof of the strong rate of convergence of the euler scheme ( see ) . the main result of this section is the following theorem .suppose that assumption ( [ nonvide ] ) and hold .let be a bounded borel -valued function ( with ) defined on \times { \cal c}([0,t ] , \r^d) ] , where for then the recursive sequence a.s .converges toward an -valued ( squared integrable ) random variable . 
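lemma [ girs ] is what makes the procedure implementable: simulating the shifted dynamics (drift b + σθ) with an euler scheme while accumulating the exponential weight leaves e[f(x)] unchanged. below is a minimal sketch, assuming a one-dimensional geometric-brownian-motion-type sde, a constant shift θ (i.e. φ ≡ 1) and a call-style functional of the terminal value; every numerical value is illustrative, and the adaptive selection of θ would be done by the robbins-monro recursion of the theorem above.

```python
import numpy as np

rng = np.random.default_rng(2)
T, nsteps, npaths = 1.0, 100, 200_000
dt = T / nsteps
x0, mu, sig, K, theta = 1.0, 0.05, 0.3, 1.4, 0.9    # all illustrative values

x = np.full(npaths, x0)
log_weight = np.zeros(npaths)
for _ in range(nsteps):
    dw = rng.standard_normal(npaths) * np.sqrt(dt)
    # euler step of the *shifted* sde: drift b(x) + sigma(x) * theta
    x += (mu * x + sig * x * theta) * dt + sig * x * dw
    # girsanov weight: exp(-int theta dW - 0.5 int theta^2 dt), same dW
    log_weight += -theta * dw - 0.5 * theta ** 2 * dt

payoff = np.maximum(x - K, 0.0)                      # F(X) = (X_T - K)^+
print(np.mean(payoff * np.exp(log_weight)))          # importance-sampled E[F(X)]
```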
for a practical implementation of this algorithm, we must have _ for all _ brownian motions a strong solution of .in particular , this is the case if the driver is locally lipshitz ( in space ) or if is the continuous euler scheme of a diffusion with step ( using the driver ) .note that if is continuous ( in space ) but not necessarily locally lipshitz , the euler scheme converges in law to the solution of the sde .when the diffusion coefficient is bounded , it follows from lemma [ xmoinsxtheta ] that , for every , } { { \ifcase 2\or \lvert\or \bigl\lvert\or \bigl\lvert\or \biggl\lvert\or \biggl\lvert\or \left\lvert\fi } x_t - { x^{(\theta)}}_t { \ifcase 2\or \rvert\or \bigr\rvert\or \bigr\rvert\or \biggr\rvert\or \biggr\rvert\or \right\rvert\fi } } { \ifcase 4\or \rvert\or \bigr\rvert\or \bigr\rvert\or \biggr\rvert\or \biggr\rvert\or \right\rvert\fi}_{\scriptscriptstyle{\!r } } } \le c_{b,\sigma , t } { { \ifcase 6\or \lvert\or \bigl\lvert\or \bigl\lvert\or \biggl\lvert\or \biggl\lvert\or \left\lvert\fi } \varphi { \ifcase 6\or \rvert\or \bigr\rvert\or \bigr\rvert\or \biggr\rvert\or \biggr\rvert\or \right\rvert\fi}_{\scriptscriptstyle{\infty } } } { { { \ifcase 6\or \lvert\or \bigl\lvert\or \bigl\lvert\or \biggl\lvert\or \biggl\lvert\or \left\lvert\fi } \theta { \ifcase 6\or \rvert\or \bigr\rvert\or \bigr\rvert\or \biggr\rvert\or \biggr\rvert\or \right\rvert\fi}_{\scriptscriptstyle{\!l^2_{t , p } } } } } { { \ifcase 6\or \lvert\or \bigl\lvert\or \bigl\lvert\or \biggl\lvert\or \biggl\lvert\or \left\lvert\fi } \sigma { \ifcase 6\or \rvert\or \bigr\rvert\or \bigr\rvert\or \biggr\rvert\or \biggr\rvert\or \right\rvert\fi}_{\scriptscriptstyle{\infty}}},\ ] ] where \times{\cal c}([0,t ] , \r^d ) } { { \ifcase 6\or \lvert\or \bigl\lvert\or \bigl\lvert\or \biggl\lvert\or \biggl\lvert\or \left\lvert\fi } \sigma(t , x ) { \ifcase 6\or \rvert\or \bigr\rvert\or \bigr\rvert\or \biggr\rvert\or \biggr\rvert\or \right\rvert\fi}} ] , elementary computation based on and lemma [ girs ] yield for every ( assumption implies that for every ) .following the same proof to the bounded case , we obtain easily the results with .we conclude by noting that is an arbitrary parameter to cancel the denominator . if the functional is bounded ( ) , we prove in the same way that the algorithm without correction , _i.e. _ build with , a.s .for the sake of simplicity we focus in this section on importance sampling by mean translation in a finite dimensional setting ( section [ translation ] ) although most of the comments below can also be applied at least in the path - dependent diffusions setting .as proved by arouna ( see ) , we can consider a purely adaptive approach to reduce the variance .it consists to perform the robbins - monro algorithm simultaneously with the monte carlo approximation .more precisely , estimate \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi}}} ] owing to the assumptions on and schwarz inequality which also implies that \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi } } } \le \frac 12 { { \ifcase 6\or ( \or \bigl(\or \bigl(\or \biggl(\or \biggl(\or \left(\fi } { \e\ ! { } { { \ifcase 6\or [ \or \bigl[\or \bigl[\or \biggl[\or \biggl[\or \left[\fi } { { \ifcase 1\or \lvert\or \bigl\lvert\or \bigl\lvert\or \biggl\lvert\or \biggl\lvert\or \left\lvert\fi } \theta_n-\theta^ * { \ifcase 1\or \rvert\or \bigr\rvert\or \bigr\rvert\or \biggr\rvert\or \biggr\rvert\or \right\rvert\fi}}^2 { \ifcase 6\or ] \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi } } } + { \e\ ! 
{ } { { \ifcase 6\or [ \or \bigl[\or \bigl[\or \biggl[\or \biggl[\or \left[\fi } { { \ifcase 1\or \lvert\or \bigl\lvert\or \bigl\lvert\or \biggl\lvert\or \biggl\lvert\or \left\lvert\fi } h(\theta_n , z_{n+1 } ) { \ifcase 1\or \rvert\or \bigr\rvert\or \bigr\rvert\or \biggr\rvert\or \biggr\rvert\or \right\rvert\fi}}^2 { \ifcase 6\or ] \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi } } } { \ifcase 6\or ) \or \bigr)\or \bigr)\or \biggr)\or \biggr)\or \right)\fi } } \le c(1 + { \e\ ! { } { { \ifcase 6\or [ \or \bigl[\or \bigl[\or \biggl[\or \biggl[\or \left[\fi } { { \ifcase 1\or \lvert\or \bigl\lvert\or \bigl\lvert\or \biggl\lvert\or \biggl\lvert\or \left\lvert\fi } \theta_n-\theta^ * { \ifcase 1\or \rvert\or \bigr\rvert\or \bigr\rvert\or \biggr\rvert\or \biggr\rvert\or \right\rvert\fi}}^2 { \ifcase 6\or ] \or \bigr]\or \bigr]\or \biggr]\or \biggr]\or \right]\fi}}},\ ] ] for an appropriate real constant .then , one shows by induction on from ( [ ineql2 ] ) that is square integrable for every and that is integrable , hence a true martingale increment .now , one derives from the assumptions ( [ stepcond ] ) and ( [ ineql2 ] ) that is a ( non negative ) super - martingale with .this uses the mean - reverting assumption .hence is - converging toward an integrable r.v . . consequently , using that , one gets the super - martingale being -bounded , one derives likewise that is -bounded since now , a series with nonnegative terms which is upper bounded by an ( ) converging sequence , converges in so that it follows from ( [ cvrz ] ) that , - , which is integrable since is -bounded and consequently finite . let . set it follows from the finiteness of that .now we consider the compact set .it is separable so there exists an everywhere dense sequence in , denoted for convenience .the above proof shows that - , for every , as .then set which satisfies .assume .up to two successive extractions , there exists a subsequence such that the function being continuous which implies that .hence .then any limiting value of the sequence will satisfy which in turn implies that by considering a subsequence .so , is the unique limiting value of the sequence as .the fact that the resulting random vector is square integrable follows from fatou s lemma and the -boundedness of the sequence . | we propose an _ unconstrained _ stochastic approximation method of finding the optimal measure change ( in an _ a priori _ parametric family ) for monte carlo simulations . we consider different parametric families based on the girsanov theorem and the esscher transform ( or exponential - tilting ) . in a multidimensional gaussian framework , arouna uses a projected robbins - monro procedure to select the parameter minimizing the variance ( see ) . in our approach , the parameter ( scalar or process ) is selected by a classical robbins - monro procedure without projection or truncation . to obtain this unconstrained algorithm we intensively use the regularity of the density of the law without assume smoothness of the payoff . we prove the convergence for a large class of multidimensional distributions and diffusion processes . we illustrate the effectiveness of our algorithm via pricing a basket payoff under a multidimensional nig distribution , and pricing a barrier options in different markets . _ key words : stochastic algorithm , robbins - monro , importance sampling , esscher transform , girsanov , nig distribution , barrier options . _ |
let be an edge - labeled directed graph ( referred to hereafter simply as a graph ) , where is the vertex set , is the edge set , and is the edge labeling taking values on a finite alphabet .we require that the labeling is deterministic : edges that start at the same vertex have distinct labels .we further assume that has finite memory . the one - dimensional ( 1-d ) _ constraint _ that is presented by is defined as the set of all words that are generated by paths in ( i.e. , the words are obtained by reading - off the edge labels of such paths ) .examples of 1-d constraints include runlength - limited ( rll ) constraints , symmetric runlength - limited ( srll ) constraints , and the charge constraints .the capacity of is given by an -track _ parallel encoder _ for at rate is defined as follows ( see figure [ fig : encoder ] ) .-track parallel encoder.,title="fig : " ] 1 . at stage ,the encoder ( which may be state - dependent ) receives as input ( unconstrained ) information bits . 2 .the output of the encoder at stage is a word of length over .3 . for , track _ of any given length , belongs to .4 . there are integers such that the encoder is _ -sliding - block decodable _ ( in short , -sbd ) : for , the information bits which were input at stage are uniquely determined by ( and can be efficiently calculated from ) .the decoding window size of the encoder is , and it is desirable to have a small window to avoid error propagation . in this work , we will be mainly focusing on the case where , in which case the decoding requires no look - ahead . in , it was shown that by introducing parallelism , one can reduce the window size , compared to conventional serial encoding .furthermore , it was shown that as tends to infinity , there are -sbd parallel encoders whose rates approach . a key step in using some perturbation of the conditional probability distribution on the edges of , corresponding to the maxentropic stationary markov chain on .however , it is not clear how this perturbation should be done : a naive method will only work for unrealistically large .also , the proof in of the -sbd property is only probabilistic and does not suggest encoders and decoders that have an acceptable running time . in this work , we aim at making the results of more tractable . at the expense of possibly increasing the memory of the encoder ( up to the memory of )we are able to define a suitable perturbed distribution explicitly , and provide an efficient algorithm for computing it .furthermore , the encoding and decoding can be carried out in time complexity , where the multiplying constants in the term are polynomially large in the parameters of .denote by the diameter of ( i.e. , the longest shortest path between any two vertices in ) and let be the adjacency matrix of , i.e. , is the number of edges in that start at vertex and terminate in vertex .our main result , specifying the rate of our encoder , is given in the next theorem .[ theo : main ] let be a deterministic graph with memory .for sufficiently large , one can efficiently construct an -track -sbd parallel encoder for at a rate such that where ( respectively , ) is the smallest ( respectively , largest ) _ nonzero _ entry in .the structure of this paper is as follows . 
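for an irreducible deterministic presentation g, the limit defining cap(s(g)) equals log₂ of the perron (largest) eigenvalue of the adjacency matrix — the quantity the rate bound of theorem [ theo : main ] is measured against. a two-line check; the "no two adjacent 1s" example is ours, not from the text.

```python
import numpy as np

def capacity(A):
    """cap(S(G)) = log2(perron eigenvalue of A) for an irreducible,
    deterministic presentation G with adjacency matrix A."""
    return float(np.log2(max(abs(np.linalg.eigvals(np.asarray(A, float))))))

# example (ours): binary sequences with no two adjacent 1s
print(capacity([[1, 1], [1, 0]]))   # ~0.6942, log2 of the golden ratio
```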
in section [ sec : twodimensionalconstraints ] we show how parallel encoding can be used to construct an encoder for a 2-d constraint .as we will show , a parallel encoder is essentially defined through what we term a multiplicity matrix .section [ sec : encoder ] defines how our parallel encoder works , assuming its multiplicity matrix is given .then , in section [ sec : computingmultiplicitymatrix ] , we show how to efficiently calculate a good multiplicity matrix .although 2-d constraints are our main motivation , section [ sec:1d ] shows how our method can be applied to 1-d constraints .section [ sec : twoopt ] defines two methods by which the rate of our encoder can be slightly improved .finally , in section [ sec : fastenumerativecoding ] we show a method of efficiently realizing a key part of our encoding procedure .our primary motivation for studying parallel encoding is to show an encoding algorithm for a family of two - dimensional ( 2-d ) constraints .the concept of a 1-d constraint can formally be generalized to two dimensions ( see ) .examples of 2-d constraints are 2-d rll constraints , 2-d srll constraints , and the so - called square constraint .let be a given 2-d constraint over a finite alphabet .we denote by ] . as a concrete example , consider the square constraint : its elements are all the binary arrays in which no two ` 1 ' symbols are adjacent on a row , column , or diagonal .we first partition our array into two alternating types of vertical strips : _ data strips _ having width , and _ merging strips _ having width . in our example , let and ( see figure [ fig : example : square ] ) . secondly , we select a graph with a labeling ] .we then fill up the data strips of our array with arrays corresponding to paths of length in .thirdly , we assume that the choice of allows us to fill up the merging strips in a row - by - row ( causal ) manner , such that our array is in .any 2-d constraint for which such , , and can be found , is in the family of constraints we can code for ( for example , the 2-d srll constraints belong to this family ) .consider again the square constraint : a graph which produces _ all _ arrays that satisfy this constraint is given in figure [ fig : example : g ] .also , for , we can take the merging strips to be all - zero .( there are cases , such as the 2-d srll constraints , where determining the merging strips may be less trivial . )whose paths generate all arrays satisfying the square constraint .the label of an edge is given by the label of the vertex it enters.,title="fig : " ] suppose we have an -sbd parallel encoder for at rate with tracks .we may use this parallel encoder to encode information in a row - by - row fashion to our array : at stage we feed information bits to our parallel encoder .let be the output of the parallel encoder at stage .we write to row of the data strip , and then appropriately fill up row of the merging strips .decoding of a row in our array can be carried out based only on the contents of that row and the previous rows . 
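the graph g for width-w data strips of the square constraint can be generated mechanically: a vertex is a legal row (no two horizontally adjacent 1s) and an edge u → v exists when v may be written directly below u without creating a vertical or diagonal pair of 1s; labeling each edge by the row it enters makes the presentation deterministic, and with all-zero merging strips the strips never interact. the sketch below builds this graph and reports the normalized capacity cap(s(g))/(w+s); the particular w and s are illustrative stand-ins for the values used in the text (for w = 2 the construction reproduces the three-vertex graph of the figure).

```python
import itertools
import numpy as np

def strip_graph(w):
    """graph for width-w data strips of the square constraint: vertices are rows
    with no two adjacent 1s; u -> v is an edge when writing v directly below u
    creates no vertical or diagonal pair of 1s (edge label = the entered row v)."""
    rows = [r for r in itertools.product((0, 1), repeat=w)
            if all(not (r[i] and r[i + 1]) for i in range(w - 1))]
    idx = {r: k for k, r in enumerate(rows)}
    A = np.zeros((len(rows), len(rows)), dtype=int)
    for u in rows:
        for v in rows:
            col = all(not (u[i] and v[i]) for i in range(w))
            diag = all(not (u[i] and v[i + 1]) and not (u[i + 1] and v[i])
                       for i in range(w - 1))
            if col and diag:
                A[idx[u], idx[v]] = 1
    return rows, A

w, s = 4, 1                               # illustrative strip widths (assumed)
rows, A = strip_graph(w)
lam = max(abs(np.linalg.eigvals(A.astype(float))))
print(len(rows), np.log2(lam) / (w + s))  # vertex count and normalized capacity
```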
since information bits are mapped to symbols in , the rate at which we encode information to the array is we note the following tradeoff : typically , taking larger values of ( while keeping constant ) will increase the right - hand side of the above inequality .however , the number of vertices and edges in will usually grow exponentially with .thus , is taken to be reasonably small .note that in our scheme , a single error generally results in the loss of information stored in the respective vertical sliding - block window .namely , a single corrupted entry in the array may cause the loss of rows .thus , our method is only practical if we assume an error model in which whole rows are corrupted by errors .this is indeed the case if each row is protected by an error - correcting code ( for example , by the use of unconstrained positions ) .let be a positive integer which will shortly be specified .the words , , that we will be writing to the first tracks are all generated by paths of length in . inwhat follows , we find it convenient to regard the arrays as ( column ) words of length of some new 1-d constraint , which we define next . kronecker power _ of , denoted by , is defined as follows .the vertex set is simply the cartesian power of ; that is , an edge goes from vertex to vertex and is labeled whenever for all , is an edge from to labeled .note that a path of length in is just a handy way to denote paths of length in .accordingly , the arrays are the words of length in .let be as in section [ sec : introduction ] and let be the adjacency matrix of . denote by the all - one row vector .the description of our -track parallel encoder for makes use of the following definition .a nonnegative integer matrix is called a ( valid ) _ multiplicity matrix _ with respect to and if ( while any multiplicity matrix will produce a parallel encoder , some will have higher rates than others . in section [ sec : computingmultiplicitymatrix ] , we show how to compute multiplicity matrices that yield rates close to . )recall that we have at our disposal tracks .however , we will effectively be using only the first tracks in order to encode information .the last tracks will all be equal to the first track , say .write .a vertex is a _ typical vertex _( with respect to ) if for all , the vertex appears as an entry in exactly times . also , an edge is a _ typical edge _ with respect to if for all , there are exactly entries which as edges in at vertex and terminate in vertex .a simple computation shows that the number of outgoing typical edges from a typical vertex equals ( where ) .for example , in the simpler case where does not contain parallel edges ( ) , we are in effect counting in permutations with repetitions , each time for a different vertex .the encoding process will be carried out as follows .we start at some fixed typical vertex . out of the set of outgoing edges from , we consider only typical edges .the edge we choose to traverse is determined by the information bits .after traversing the chosen edge , we arrive at vertex . 
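how many information bits one such step can absorb is governed by the number of typical edges leaving a typical vertex, and the bit-to-edge map can be realized by repeatedly unranking fixed-size subsets, in exactly the order described above. in the sketch below the closed-form count is our reading of the "permutations with repetitions" argument (one multinomial per state, times a factor a_{i,j} per copy routed from i to j to account for parallel edges).

```python
from math import comb, factorial, prod

def num_typical_edges(D, A):
    """our reading of the count: for each state i, route its r_i copies so that
    exactly D[i][j] of them go to state j (a multinomial), then pick one of the
    A[i][j] parallel edges for every routed copy."""
    delta = 1
    for i, row in enumerate(D):
        delta *= factorial(sum(row)) // prod(factorial(d) for d in row)
        delta *= prod(A[i][j] ** d for j, d in enumerate(row) if d)
    return delta

def unrank_subset(rank, n, k):
    """rank in [0, C(n, k)) -> the rank-th k-subset of range(n) in lexicographic
    order; feeding blocks of information bits to this routine realizes the
    'choose which of the remaining copies of state i go to state j' steps."""
    out, x = [], 0
    while k > 0:
        c = comb(n - x - 1, k - 1)
        if rank < c:
            out.append(x)
            k -= 1
        else:
            rank -= c
        x += 1
    return out

# the per-stage payload is floor(log2(num_typical_edges(D, A))) bits, i.e. a
# per-track rate of that quantity divided by the number of tracks m (our reading).
```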
by , is also a typical vertex , and the process starts over .this process defines an -track parallel encoder for at rate this encoder is -sbd , where is the memory of .consider now how we map information bits into an edge choice at any given stage .assuming again the simpler case of a graph with no parallel edges , a natural choice would be to use an instance of enumerative coding .specifically , suppose that for , a procedure for encoding information by an -bit binary vector with hamming weight were given .suppose also that .we could use this procedure as follows .first , for and , the binary word given as output by the procedure will define which of the possible entries in will be equal to the edge in from the vertex to itself ( if no such edge exists , then ) .having chosen these entries , we run the procedure with and to choose from the remaining entries those that will contain the edge in from to .we continue this process , until all entries in containing an edge outgoing from have been picked .next , we run the procedure with and , and so forth .the more general case of a graph containing parallel edges will include a preliminary step : encoding information in the choice of the edges used to traverse from to ( options for each such edge ) .a fast implementation of enumerative coding is presented in section [ sec : fastenumerativecoding ] .the above - mentioned preliminary step makes use of the schnhage strassen integer - multiplication algorithm , and the resulting encoding time complexity is proportional , with a negligible penalty in terms of rate : fix and , and let be an integer design parameter .assume for simplicity that .the number of vectors of length over an alphabet of size is obviously .so , we can encode bits through the choice of such a vector . repeating this process, we can encode bits through the choice of such vectors .the concatenation of these vectors is taken to represent our choice of edges .note that the encoding process is linear in for constant .also , our losses ( due to the floor function ) become negligible for modestly large . ] to .it turns out that this is also the decoding time complexity .further details are given in section [ sec : fastenumerativecoding ] .the next section shows how to find a good multiplicity matrix , i.e. , a matrix such that is close to .in order to enhance the exposition of this section , we accompany it by a running example ( see figure [ fig : running1 ] ) . throughout this section ,we assume a probability distribution on the edges of , which is the maxentropic stationary markov chain on . without real loss of generality, we can assume that is irreducible ( i.e. , strongly - connected ) , in which case is indeed unique .let the matrix be the transition matrix induced by , i.e. , is the probability of traversing an edge from to , conditioned on currently being at vertex .let be the row vector corresponding to the stationary distribution on induced by ; namely , and .let and define taking the number of tracks in our running example ( figure [ fig : running1 ] ) to be gives .also , our running example has and thus , and note that also , observe that hold when we substitute for . thus ,if all entries of were integers , then we could take equal to . 
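both the maxentropic chain on g and the real-valued target matrix are spectral objects: with λ the perron eigenvalue of the adjacency matrix and x, y its right and left perron eigenvectors, the maxentropic (parry) transition probabilities are q_{i,j} = a_{i,j} x_j /(λ x_i) and the stationary distribution is π_i ∝ x_i y_i. the last function of the sketch forms the target as p_{i,j} = r π_i q_{i,j}, which is consistent with the row- and column-sum constraints used in the quantization step of the next section but is our reading of the (stripped) definition.

```python
import numpy as np

def maxentropic_chain(A):
    """parry measure on the graph with adjacency matrix A:
    Q[i, j] = A[i, j] * x[j] / (lam * x[i]),   pi[i] ∝ x[i] * y[i],
    with lam the perron eigenvalue and x, y the right/left perron eigenvectors."""
    A = np.asarray(A, dtype=float)
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    lam, x = vals.real[k], np.abs(vecs[:, k].real)
    lvals, lvecs = np.linalg.eig(A.T)
    y = np.abs(lvecs[:, np.argmax(lvals.real)].real)
    Q = A * x[None, :] / (lam * x[:, None])
    pi = x * y / (x @ y)
    return Q, pi, lam

def target_matrix(A, r):
    # assumed reading: P[i, j] = r * pi[i] * Q[i, j], the expected number of the
    # r effective tracks that sit at state i and move to state j in one step
    Q, pi, _ = maxentropic_chain(A)
    return r * pi[:, None] * Q
```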
in a way , that would be the best choice we could have made : by using stirling s approximation , we could deduce that approaches as .however , the entries of , as well as , may be non - integers .we say that an _ integer _matrix is a _ good quantization _ of if { } } \;\;\leq\;\ ; { \left\lceil\sum_{j \in v } { p}_{i , j}\right\rceil } \ ; , \\ \label{eq : preflow_entries } { \left\lfloor{p}_{i , j}\right\rfloor } & \leq & \textstyle { \makebox[3.4em]{ } } \;\;\leq\;\ ; { \left\lceil{p}_{i , j}\right\rceil } \ ; , \quad \textrm{and---}\qquad \\ \label{eq : preflow_sumi } \textstyle { \left\lfloor\sum_{i \in v } { p}_{i , j}\right\rfloor } & \leq & \textstyle { \makebox[3.4em]{ } } \;\;\leq\;\ ; { \left\lceil\sum_{i \in v } { p}_{i , j}\right\rceil } \ ; .\end{aligned}\ ] ] namely , a given entry in is either the floor or the ceiling of the corresponding entry in , and this also holds for the sum of entries of a given row or column in ; moreover , the sum of entries in both and are exactly equal ( to ) .[ lemm : preflow ] there exists a matrix which is a good quantization of .furthermore , such a matrix can be found by an efficient algorithm . .an edge labeled has lower and upper bounds and , respectively.,title="fig : " ] we recast as an integer flow problem ( see figures [ fig : preflow ] and [ fig : running3 ] ) . consider the following flow network , with upper and lower bounds on the flow through the edges .the network has the vertex set with source and target .henceforth , when we refer to the upper ( lower ) bound of an edge , we mean the upper ( lower ) bound on the flow through it .there are four kinds of edges : 1 .an edge with upper and lower bounds both equaling to .2 . for every , with the upper and lower bounds and , respectively .3 . for every , with the upper and lower bounds and , respectively .4 . for every , with the upper and lower bounds and , respectively .we claim that can be satisfied if a legal integer flow exists : simply take as the flow on the edge from to .it is well known that if a legal _ real _ flow exists for a flow network with integer upper and lower bounds on the edges , then a legal _ integer _flow exists as well ( * ? ? ?* theorem 6.5 ) .moreover , such a flow can be efficiently found .to finish the proof , we now exhibit such a legal real flow : 1 .the flow on the edge is .the flow on an edge is .the flow on an edge is .the flow on an edge is . in running example 2 .an edge labeled has lower and upper bounds and , respectively .a legal real flow is given by .a legal integer flow is given by .the matrix resulting from the legal integer flow is given , as well as the matrix ( again).,title="fig : " ] for the remaining part of this section , we assume that is a good quantization of ( say , is computed by solving the integer flow problem in the last proof ) . the next lemma states that `` almost '' satisfies ( [ eq : sum_up ] ) .[ lemm : disc ] let and .then , for all , from , we get that for all , recall that is satisfied if we replace by . thus , by , we have that also holds if we replace by .we conclude that .the proof follows from the fact that entries of are integers , and thus so are those of and . the following lemma will be the basis for augmenting so that ( [ eq : sum_up ] ) is satisfied .[ lemm : fmat ] fix two distinct vertices .we can efficiently find a matrix with non - negative integer entries , such that the following three conditions hold . 1 .[ eq : fsumentries ] 2 . 
for all , [ eq : fsane ] 3 .denote and .then , for all , let be the vertices along a shortest path from to in . for all , define namely , is the number of edges from to along the path . conditions ( [ eq : fsumentries ] ) and ( [ eq : fsane ] ) easily follow from ( [ eq : f ] ) .condition ( [ eq : fsurp ] ) follows from the fact that ( ) is equal to the number of edges along the path for which is the start ( end ) vertex of the edge. the matrix will be the basis for computing a good multiplicity matrix , as we demonstrate in the proof of the next theorem .[ theo : dfromptilde ] let be a good quantization of .there exists a multiplicity matrix with respect to and , such that 1 .[ it : dagreaterthandalpha ] for all , and 2 . ( where is as defined in ) .moreover , the matrix can be found by an efficient algorithm .consider a vertex .if , then we say that vertex has a _ surplus _ of . in this case , by lemma [ lemm : disc ] , we have that the surplus is equal to 1 . on the other hand , if then vertex has a _ deficiency _ of , which again is equal to 1 . of course , since , the total surplus is equal to the total deficiency , and both are denoted by : denote the vertices with surplus as and the vertices with deficiency as .recalling the matrix from lemma [ lemm : fmat ] , we define we first show that is a valid multiplicity matrix .note that .thus , ( [ eq : sum_entries ] ) follows from ( [ eq : malpha ] ) , ( [ eq : preflow_sumij ] ) , and ( [ eq : fsumentries ] ) .the definitions of surplus and deficiency vertices along with ( [ eq : fsurp ] ) give ( [ eq : sum_up ] ) .lastly , recall that ( [ eq : sane_entries ] ) is satisfied if we replace by .thus , by ( [ eq : preflow_entries ] ) , the same can be said for .combining this with ( [ eq : fsane ] ) yields ( [ eq : sane_entries ] ) .since the entries of are non - negative for every , we must have that for all .this , together with ( [ eq : sum_entries ] ) and ( [ eq : preflow_sumij ] ) , implies in turn that . for the matrix in figure [ fig : running3 ], we have thus , .namely , the vertex has a surplus while the vertex has a deficiency .taking and we get now that theorem [ theo : dfromptilde ] is proved , we are in a position to prove our main result , theorem [ theo : main ] .essentially , the proof involves using the stirling approximation and taking into account the various quantization errors introduced into .the proof itself is given in the appendix .the main motivation for our methods is 2-d constrained coding . however , in this section , we show that they might be interesting in certain aspects of 1-d coding as well .given a labeled graph , a classic method for building an encoder for the 1-d constraint is the state - splitting algorithm .the rate of an encoder built by approaches the capacity of . also , the word the encoder outputs has a corresponding path in , with the following favorable property : the probability of traversing a certain edge approaches the maxentropic probability of that edge ( assuming an unbiased source distribution ) .however , what if we d like to build an encoder with a different probability distribution on the edges ?this scenario may occur , for example , when there is a requirement that all the output words of a given length that are generated by the encoder have a prescribed hamming weight yielding a stationary markov chain with largest possible entropy , subject to a set of edges ( such as the set of edges with label ` 1 ' ) having a prescribed cumulative probability . 
] .more formally , suppose that we are given a labeled graph ; to make the exposition simpler , suppose that does not contain parallel edges .let and be a transition matrix and a stationary probability distribution corresponding to a stationary ( but not necessarily maxentropic ) markov chain on .we assume w.l.o.g . that each edge in has a positive conditional probability .we are also given an integer , which we will shortly elaborate on .we first describe our encoder in broad terms , so as that its merits will be obvious .let and be as previously defined , and let be specified shortly .we start at some fixed vertex . given information bits ,we traverse a soon to be defined cyclic path of length in .the concatenation of the edge labels along the path is the word we output .of course , since the path is cyclic , the concatenation of such words is indeed in .moreover , the path will have the following key property : the number of times an edge from to is traversed equals .namely , if we uniformly pick one of the edges of the path , the probability of picking a certain edge is constant ( not a function of the input bits ) , and is equal to the probability of traversing on the markov chain , up to a small quantization error .the rate of our encoder will satisfy ( [ eq : rbound ] ) , where we replace by and by the entropy of .we would like to be able to exactly specify the path length as a design parameter .however , we specify and get an between and . our encoding process will make use of an _ oriented tree _ , a term which we will now define .a set of edges is an oriented tree of with root if and for each there exists a path from to consisting entirely of edges in ( see figure [ fig : orientedtree ] ) . note that if we reverse the edge directions of an oriented tree , we get a directed tree as defined in ( * ? ? ?* theorem 2.5 ) . since reversing the directions of all edges in an irreducible graph results in an irreducible graph, we have by ( * ? ? ?* lemma 3.3 ) that an oriented tree indeed exists in , and can be efficiently found .so , let us fix some oriented tree with root . by (* theorem 2.5 ) , we have that every vertex which is not the root has an out - degree equal to 1 .thus , for each such vertex we may define as the destination of the single edge in going out of ..,title="fig : " ] we now elaborate on the encoding process .the encoding consists of two steps . in the first step , we map the information bits to a collection of lists . in the second step ,we use the lists in order to define a cyclic path .first step : given information bits , we build for each vertex a list of length , the entries of each are vertices in .moreover , the following properties are satisfied for all : * the number of times is an entry in is exactly . * if , then the last entry of the list equals the parent of .namely , recalling ( [ eq : r_d ] ) , a simple calculation shows that the number of possible list collections is thus , we define the rate of encoding as also , note that as in the 2-d case , we may use enumerative coding in order to efficiently map information bits to lists .second step : we now use the lists , , in order to construct a cyclic path starting at vertex .we start the path at and build a length- path according to the following rule : when exiting vertex for the time , traverse the edge going into vertex . 
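the second step is the classical last-exit construction behind the van aardenne-ehrenfest and de bruijn count of eulerian cycles: fix an oriented tree toward v0, force the tree edge to be the last exit from every non-root vertex, and let the remaining exit order carry the information. a sketch, with vertices as integers and d[i][j] the prescribed number of i → j traversals; the oriented tree is obtained here by a bfs on the support of d (one admissible choice), and the sketch keeps a canonical ordering of the non-final entries — the encoder would instead order them according to the information bits, e.g. with the unranking routine shown earlier.

```python
from collections import deque

def parent_tree(D, root):
    """oriented tree toward `root`: bfs on the support of D gives, for every
    vertex, the next vertex on some shortest path to the root (assumes the
    support of D is irreducible, so every vertex reaches the root)."""
    parent, dq = {root: None}, deque([root])
    while dq:
        v = dq.popleft()
        for u in range(len(D)):
            if D[u][v] > 0 and u not in parent:
                parent[u] = v
                dq.append(u)
    return parent

def build_lists(D, parent, root):
    """list L_i contains vertex j exactly D[i][j] times; for i != root the last
    entry is i's parent (the last-exit property that makes the walk close up).
    the remaining entries stay in canonical order here; the encoder would order
    them according to the information bits."""
    lists = []
    for i, row in enumerate(D):
        entries = [j for j, d in enumerate(row) for _ in range(d)]
        if i != root:                       # requires D[i][parent[i]] >= 1,
            entries.remove(parent[i])       # as the text notes for n large enough
            entries.append(parent[i])
        lists.append(entries)
    return lists

def cyclic_path(lists, root):
    """exit rule: when leaving vertex i for the k-th time, go to lists[i][k]."""
    count, path, v = [0] * len(lists), [root], root
    while count[v] < len(lists[v]):
        nxt = lists[v][count[v]]
        count[v] += 1
        path.append(nxt)
        v = nxt
    return path

D = [[1, 2], [2, 0]]                        # balanced: row sums = column sums
print(cyclic_path(build_lists(D, parent_tree(D, 0), 0), 0))   # e.g. [0, 0, 1, 0, 1, 0]
```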
of course , our encoding method is valid ( and invertible ) iff we may always abide by the above - mentioned rule .namely , we do nt get `` stuck '' , and manage to complete a cyclic path of length .this is indeed the case : define an auxiliary graph with the same vertex set , , as and parallel edges from to ( for all ) .first , recall that for sufficiently large , the presence of an edge from to in implies that .thus , since was assumed to be irreducible , is irreducible as well .also , an edge in from to implies the existence of an edge in from to . secondly , note that by ( [ eq : sum_up ] ) , the number of times we are supposed to exit a vertex is equal to the number of times we are supposed to enter it .the rest of the proof follows from ( * ? ? ?* , claim 2 ) , applied to the auxiliary graph .namely , our encoder follows directly from van aardenne - ehrenfest and de bruijn s theorem on counting eulerian cycles in a graph .we now return to the rate , , of our encoder . from ( [ eq : malpha ] ) , ( [ eq : preflow_entries ] ) , ( [ eq : preflow_sumi ] ) and theorem [ theo : dfromptilde ] , we see that for sufficiently large , is greater than some positive constant times .thus , ( [ eq : rbound ] ) still holds if we replace by and by the entropy of .recall from section [ sec : twodimensionalconstraints ] the square constraint : its elements are all the binary arrays in which no two ` 1 ' symbols are adjacent on a row , column , or diagonal . by employing the methods presented in , we may calculate an upper bound on the rate of the constraint .this turns out to be .we will show an encoding / decoding method with rate slightly larger than ( about of the upper bound ) . in order to do this, we assume that the array has columns .our encoding method has a fixed rate and has a vertical window of size 2 and vertical anticipation 0 .we should point out now that a straightforward implementation of the methods we have previously defined gives a rate which is strictly than .namely , this section also outlines two improvement techniques which help boost the rate .we start out as in the example given in section [ sec : twodimensionalconstraints ] , except that the width of the data strips is now ( the width of the merging strips remains ) .the graph we choose produces all width- arrays satisfying the square constraint , and we take the merging strips to be all - zero .our array has columns , so we have tracks ( the last , say , column of the array will essentially be unused ; we can set all of its values to 0 ) .define the normalized capacity as the graph has vertices and normalized capacity this number is about from the upper bound on the capacity of our 2-d constraint .thus , as expected , there is an inherent loss in choosing to model the 2-d constraint as an essentially 1-d constraint .of course , this loss can be made smaller by increasing ( but the graph will grow as well ) . from theorem [ theo : main ], the rate of our encoder will approach the normalized capacity of as the number of tracks grows .so , once the graph is chosen , the parameter we should be comparing ourselves to is the normalized capacity .we now apply the methods defined in section [ sec : computingmultiplicitymatrix ] and find a multiplicity matrix .recall that the matrix defines an encoder . in our case, this encoder has a rate of about .this is of the normalized capacity , and is quite disappointing ( but the improvements shown in sections [ ssec : moore ] and [ ssec : breakmerge ] below are going to improve this rate ) . 
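for completeness, the two checks behind the multiplicity matrix used in this computation: the defining conditions of a good quantization of p (which the text obtains exactly via an integer flow with lower and upper bounds on every edge — not re-implemented here), and the repair step that adds one shortest path per pair of vertices with opposite ±1 row/column discrepancy. the pairing below and the orientation of the paths are our reading of the surplus/deficiency argument.

```python
import numpy as np
from collections import deque
from math import floor, ceil

def is_good_quantization(P, Pt):
    """every entry, row sum and column sum of the integer matrix Pt must lie
    between the floor and the ceiling of the corresponding quantity in P,
    and the two totals must coincide."""
    P, Pt = np.asarray(P, float), np.asarray(Pt, int)
    ok = lambda a, b: floor(a) <= b <= ceil(a)
    return (int(round(P.sum())) == int(Pt.sum())
            and all(ok(a, b) for a, b in zip(P.ravel(), Pt.ravel()))
            and all(ok(a, b) for a, b in zip(P.sum(1), Pt.sum(1)))
            and all(ok(a, b) for a, b in zip(P.sum(0), Pt.sum(0))))

def bfs_path(A, s, t):
    prev, dq = {s: None}, deque([s])
    while dq:
        v = dq.popleft()
        for u in range(len(A)):
            if A[v][u] and u not in prev:
                prev[u] = v
                dq.append(u)
    path, v = [], t
    while v is not None:
        path.append(v)
        v = prev[v]
    return path[::-1]

def repair(Pt, A):
    """add one shortest path in G from each vertex whose out-sum is one short of
    its in-sum to a vertex with the opposite discrepancy (for a good quantization
    every discrepancy is at most 1 in absolute value); each path has length at
    most diam(G), which is the source of the additive penalty in the rate."""
    D = np.array(Pt, dtype=int)
    disc = D.sum(axis=1) - D.sum(axis=0)
    for s, t in zip(np.where(disc == -1)[0], np.where(disc == +1)[0]):
        p = bfs_path(A, s, t)
        for a, b in zip(p[:-1], p[1:]):
            D[a, b] += 1
    return D
```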
on the other hand ,note that if we had limited ourselves to encode to each track independently of the others , then the best rate we could have hoped for with 0 vertical anticipation turns out to be ( see ( * ? ? ?* theorem 5 ) ) .we now define a graph which we call the reduction of .essentially , we will encode by constructing paths in , and then translate these to paths in . in both and ,the maxentropic distributions have the same entropy .the main virtue of is that it often has less vertices and edges compared to .thus , the penalty in ( [ eq : rbound ] ) resulting from using a finite number of tracks will often be smaller . for , we now recursively define the concept of -equivalence ( very much like in the moore algorithm ) . * for , any two vertices are 0-equivalent . * for , two vertices are -equivalent iff 1 ) the two vertices are -equivalent , and 2 ) for each -equivalence class , the number of edges from to vertices in is equal to the number of edges from to vertices in .denote by the partition induced by -equivalence .for the graph given in figure [ fig : example : g ] , note that , by definition , is a refinement of .thus , let be the smallest for which = .the set can be efficiently found ( essentially , by the moore algorithm ) .define a ( non - labeled ) graph as follows .the vertex set of is for each , let be a fixed element of ( if contains more than one vertex , then pick one arbitrarily ) .also , for each , let be the class such that .let ( ) and ( ) denote the start and end vertex of an edge in ( ) , respectively .the edge set is defined as where namely , the number of edges from to in is equal to the number of edges in from some fixed to elements of , and , by the definition of , this number does not depend on the choice of .the graph is termed the _ reduction _ of .the reduction of from figure [ fig : example : g ] is given in figure [ fig : example : greduced ] .note that since was assumed to be irreducible , we must have that is irreducible as well . from figure[ fig : example : g].,title="fig : " ] the entropies of the maxentropic markov chains on and are equal .let be the adjacency matrix of , and recall that is the adjacency matrix of .let and be the perron eigenvalue and right perron eigenvector of , respectively . next ,define the vector as it is easily verifiable that is a right eigenvector of , with eigenvalue .now , since is a perron eigenvector of an irreducible matrix , each entry of it is positive .thus , each entry of is positive as well .since is irreducible , we must have that is a perron eigenvector of .so , the perron eigenvalue of is also .the next lemma essentially states that we can think of paths in as if they were paths in .[ lemm : pathconversion ] let .fix some , and .there exists a one - to - one correspondence between the following sets .first set : paths of length in with start vertex and end vertex .second set : paths of length in with start vertex and end vertex in .moreover , for , the first edges in a path belonging to the second set are a function of only the first edges in the respective path in the first set .we prove this by induction on .for , we have to see this , note that we can assume w.l.o.g . that , and then recall ( [ eq : etag ] ) . for , combine the claim for with that for . 
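the reduction h is plain partition refinement (the moore algorithm): start from the single class v, repeatedly split classes according to how many edges a vertex sends into each current class, stop at the fixed point, and read the reduced adjacency matrix off one representative per class. a sketch, with the sanity check that the perron eigenvalue — hence the maxentropic entropy — is preserved; the three-vertex example is ours.

```python
import numpy as np

def moore_reduction(A):
    """collapse vertices that are k-equivalent for every k and return
    (class label per vertex, adjacency matrix of the reduced graph)."""
    A = np.asarray(A)
    n = len(A)
    labels = [0] * n                       # every pair is 0-equivalent
    while True:
        sigs = []
        for v in range(n):
            counts = {}
            for u in range(n):
                if A[v][u]:
                    counts[labels[u]] = counts.get(labels[u], 0) + int(A[v][u])
            sigs.append((labels[v], tuple(sorted(counts.items()))))
        new = {s: k for k, s in enumerate(sorted(set(sigs)))}
        new_labels = [new[s] for s in sigs]
        if len(set(new_labels)) == len(set(labels)):   # refinement stabilized
            break
        labels = new_labels
    K = len(set(labels))
    B = np.zeros((K, K), dtype=int)
    for c in range(K):
        rep = labels.index(c)              # any representative of the class
        for u in range(n):
            if A[rep][u]:
                B[c, labels[u]] += int(A[rep][u])
    return labels, B

A = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]      # here all three vertices are equivalent
labels, B = moore_reduction(A)
perron = lambda M: max(abs(np.linalg.eigvals(np.asarray(M, float))))
assert np.isclose(perron(A), perron(B))    # entropy of G and of its reduction agree
```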
notice that .we now show why is useful .let be the multiplicity matrix found by the methods previously outlined , where we replace by .let .we may efficiently encode ( and decode ) information to in a row - by - row manner at rate .we conceptually break our encoding scheme into two steps . in the first step ,we `` encode '' ( map ) the information into paths in , each path having length .we do this as previously outlined ( through typical vertices and edges in ) .note that this step is done at a rate of .in the second step , we map each such path in to a corresponding path in . by lemma [ lemm : pathconversion ] , we can indeed do this ( take as the first vertex in the path , as the last vertex , and ) . by lemma [ lemm : pathconversion ]we see that this two - step encoding scheme can easily be modified into one that is row - by - row . applying the reduction to our running example ( square constraint with and ) ,reduces the number of vertices from in to in .the computed increases the rate to about , which is of the normalized capacity .let be the kronecker power of the moore - style reduction .recall that the rate of our encoder is where is the number of typical edges in going out of a typical vertex .the second improvement involves expanding the definition of a typical edge , thus increasing .this is best explained through an example .suppose that has figure [ fig : breakmerge ] as a subgraph ; namely , we show all edges going out of vertices and .also , let the numbers next to the edges be equal to the corresponding entries in .the main thing to notice at this point is that if the edges to and are deleted ( `` break '' ) , then and have exactly the same number of edges from them to vertex , for all ( after the deletion of edges , vertices and can be `` merged '' ) .let be a typical vertex .a short calculation shows that the number of entries in that are equal to ( ) is ( ) .recall that the standard encoding process consists of choosing a typical edge going out of the typical vertex and into another typical vertex .we now briefly review this process . consider the entries in that are equal to .the encoding process with respect to them will be as follows ( see figure [ fig : breakmerge1 ] ) : * out of these entries , choose for which the corresponding entry in will be . since there is exactly one edge from the in , the corresponding entries in must be equal to that edge . * next , from the remaining entries , choose for which the corresponding entries in will be .there are two parallel edges from to , so choose which one to use in the corresponding entries in .* we are left with entries , the corresponding entries in will be . also , we have one option as to the corresponding entries in .a similar process is applied to the entries in that are equal to .thus , the total number of options with respect to these entries is , , where we got from to by the standard encoding process.,title="fig : " ] next , consider a different encoding process ( see figure [ fig : breakmerge2 ] ) . * out of the entries in that are equal to , choose for which the corresponding entry in will be . as before ,the corresponding entries in have only one option . * out of the entries in that are equal to ,choose for the corresponding entry in will be . again , one option for entries in . 
* now , of the remaining entries in that are equal to or , choose for which the corresponding entry in will be .we have two options for the entries in .* we are left with entries in that are equal to or .these will have as the corresponding entry in , and one option in ., , where we got from to by the improved encoding process .the shaded part corresponds to vertices that were merged.,title="fig : " ] thus , the total number of options is now the important thing to notice is that in both cases , we arrive at a typical vertex . to recap , we first `` broke '' the entries in that are equal to into two groups : those which will have as the corresponding entry in and those which will have or as the corresponding entry . similarly , we broke entries in that are equal to into two groups .next , we noticed that of these four groups , two could be `` merged '' , since they were essentially the same .namely , removing some edges from the corresponding vertices in resulted in vertices which were mergeable .of course , these operations can be repeated .the hidden assumption is that the sequence of breaking and merging is fixed , and known to both the encoder and decoder . the optimal sequence of breaking and merging is not known to us .we used a heuristic .namely , choose two vertices such that the sets of edges emanating from both have a large overlap. then , break and merge accordingly .this was done until no breaking or merging was possible .we got a rate of about , which is of the normalized capacity .recall from section [ sec : encoder ] that in the course of our encoding algorithm , we make use of a procedure which encodes information into fixed - length binary words of constant weight .a way to do this would be to use enumerative coding .immink showed a method to significantly improve the running time of an instance of enumerative coding , with a typically negligible penalty in terms of rate .we now briefly show how to tailor immink s method to our needs .denote by and the length and hamming weight , respectively , of the binary word we encode into .some of our variables will be _ floating - point _numbers with a mantissa of bits and an exponent of bits : each floating - point number is of the form where and are integers such that note that bits are needed to store such a number .also , note that every positive real such that has a floating point approximation with relative precision we assume the presence of two look - up tables .the first will contain the floating - point approximations of .the second will contain the floating - point approximations of , where in order to exclude uninteresting cases , assume that and is such that .also , take enough so that is less than the maximum number we can represent by floating point . thus , we can assume that and .notice that in our case , we can bound both and from above by the number of tracks .thus , we will actually build beforehand two look - up tables of size bits .let denote the floating - point approximation of , and let and denote floating - point multiplication and division , respectively .for we define note that since we have stored the relevant numbers in our look - up table , the time needed to calculate the above function is only .the encoding procedure is given in figure [ fig : enumerativecoder ] .we note the following points : * the variables , , and are integers ( as opposed to floating - point numbers ) . 
* in the subtraction of from in line 5 , the floating - point number is `` promoted '' to an integer ( the result is an integer ) .
figure [ fig : enumerativecoder ] :
name :
input : integers such that and .
output : a binary word of length and weight .
1 :  if ( = = )   // stopping condition
2 :      return ;
3 :  for ( ; ; ++ ) {
4 :      if ( )
5 :          ;
6 :      else
7 :          return ;
8 :  }
we must now show that the procedure is valid , namely , that given a valid input , we produce a valid output . for our procedure , this reduces to showing two things : 1 ) if the stopping condition is not met , a recursive call will be made .2 ) the recursive call is given valid parameters as well .namely , in the recursive call , is non - negative .also , for the encoding to be invertible , we must further require that 3 ) for .condition 2 is clearly met , because of the check in line 4 .denote ( and so , ) .condition 3 follows from the next lemma .[ lemm : boundroundoff ] fix .then , the proof is essentially repeated invocations of ( [ eq : fp ] ) on the various stages of computation .we leave the details to the reader .finally , condition 1 follows easily from the next lemma .fix .then , the claim will follow if we show that this is immediate from lemma [ lemm : boundroundoff ] and the binomial identity note that the penalty in terms of rate one suffers because of using our procedure ( instead of plain enumerative coding ) is negligible .namely , can be made arbitrarily close to .since we take and , we can show by amortized analysis that the running time of the procedure is .specifically , see ( , section 17.3 ) , and take the potential of the binary vector corresponding to as the number of entries in it that are equal to ` ' .the decoding procedure is a straightforward `` reversal '' of the encoding procedure , and its running time is also .let be as in , where we replace by and by . by the combinatorial interpretation of , and the fact that for all , it easily follows that .thus , denote by the base of natural logarithms . by stirling s formula we have and from we get that by ( [ eq : preflow_sumij ] ) and , since , we have moreover , by and , the rhs of the last equation equals we conclude that lastly , recall that and .thus , where is the entropy of the stationary markov chain with transition matrix . recall that was selected to be maxentropic : .this fact , along with and a short calculation , finishes the proof .the first author would like to thank roee engelberg for very helpful discussions .b. h. marcus , r. m. roth , and p. h. siegel , `` constrained systems and coding for recording channels , '' in _ handbook of coding theory _ , v. pless and w. huffman , eds . amsterdam : elsevier , 1998 , pp . | a constant - rate encoder decoder pair is presented for a fairly large family of two - dimensional ( 2-d ) constraints . encoding and decoding is done in a row - by - row manner , and is sliding - block decodable . essentially , the 2-d constraint is turned into a set of independent and relatively simple one - dimensional ( 1-d ) constraints ; this is done by dividing the array into fixed - width vertical strips . each row in the strip is seen as a symbol , and a graph presentation of the respective 1-d constraint is constructed .
the maxentropic stationary markov chain on this graph is next considered : a perturbed version of the corresponding probability distribution on the edges of the graph is used in order to build an encoder which operates _ in parallel _ on the strips . this perturbation is found by means of a network flow , with upper and lower bounds on the flow through the edges . a key part of the encoder is an enumerative coder for constant - weight binary words . a fast realization of this coder is shown , using floating - point arithmetic . the work of tuvi etzion was supported in part by the united states israel binational science foundation ( bsf ) , jerusalem , israel , under grant no . 2006097 . the work of ron m. roth was supported in part by the united states israel binational science foundation ( bsf ) , jerusalem , israel , under grant no . 2002197 . |
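As a complement to the encoder described above, the exact-arithmetic baseline that the floating-point procedure of figure [ fig : enumerativecoder ] approximates is easy to prototype. The sketch below implements classical lexicographic enumerative coding of constant-weight binary words with exact binomial coefficients; the function names are ours, and replacing comb() by the paper's table of floating-point approximations is what yields the faster, slightly lossier coder analyzed above.

```python
from math import comb

def encode_constant_weight(index, n, w):
    # map an integer index in [0, comb(n, w)) to a length-n binary word of weight w,
    # using the classical lexicographic (enumerative) ordering with 0 < 1
    assert 0 <= index < comb(n, w)
    word = []
    for remaining in range(n, 0, -1):
        zeros_block = comb(remaining - 1, w)   # number of words that continue with a '0'
        if index < zeros_block:
            word.append(0)
        else:
            word.append(1)
            index -= zeros_block
            w -= 1
    return word

def decode_constant_weight(word):
    # inverse map: recover the index from a constant-weight word
    n, w = len(word), sum(word)
    index = 0
    for pos, bit in enumerate(word):
        if bit == 1:
            index += comb(n - pos - 1, w)
            w -= 1
    return index

# round-trip sanity check of invertibility on a small parameter choice
assert all(decode_constant_weight(encode_constant_weight(i, 8, 3)) == i
           for i in range(comb(8, 3)))
```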
we are interested in this article in the large time behavior of solutions of first - order hamilton - jacobi equations , set in a bounded domain with nonlinear neumann boundary conditions , including the case of dynamical boundary conditions .the main originality of this paper is twofold : on one hand , we obtain results for these nonlinear neumann type problems in their full generality , with minimal assumptions ( at least we think so ) and , on the other hand , we provide two types of proofs following the two classical approaches for these asymptotic problems : the first one by the pde methods which has the advantages of allowing to treat cases when the hamiltonians are non - convex , the second one by an optimal control / dynamical system approach which gives a little bit more precise description of the involved phenomena .for cauchy - neumann problems with linear neumann boundary conditions , the asymptotic behavior has been established very recently and independently by the second author in by using the dynamical approach and the first and third authors in by using the pde approach . in order to be more specific ,we introduce the following initial - boundary value problems u_t+h(x , du)= 0 & in , + b(x , du)=0 & on , + u(x,0)=u_0(x ) & on and u_t+h(x , du)= 0 & in , + u_t+b(x , du)=0 & on , + u(x,0)=u_0(x ) & on , where is a bounded domain of with a -boundary and is a real - valued unknown function on .we , respectively , denote by and its time derivative and gradient with respect to the space variable .the functions are given real - valued continuous function on ; more precise assumptions on and will be given at the beginning of section [ preliminary ] . throughout this article , we are going to treat these problems by using the theory of viscosity solutions and thus the term viscosity " will be omitted henceforth .we also point out that the boundary conditions have to be understood in the viscosity sense : we refer the reader to the `` user s guide to viscosity solutions '' for a precise definition which is not recalled here .the existence and uniqueness of solutions of ( cn ) or ( dbc ) are already well known .we refer to the articles and the references therein .the standard asymptotic behavior , as , for solutions of hamilton - jacobi equations is the following : the solution is expected to look like where the constant and the function are solutions of an _ additive eigenvalue _ or _ ergodic problem_. in our case , we have two different ergodic problems for ( cn ) and ( dbc ) : indeed , looking for a solution of the form for ( cn ) , where is constant and a function defined on , leads to the equation ( e1 ) h(x , dw(x))= a & in , + b(x , dw(x))=0 & on while , for ( dbc ) , the function has to satisfy ( e2 ) h(x , dw(x))= a & in , + b(x , dw(x))= a & on .we point out that one seeks , here , for a pair where and such that is a solution of ( e1 ) or ( e2 ) . if is such a pair , we call an _ additive eigenfunction _ or _ ergodic function _ and an _ additive eigenvalue _ or _ ergodic constant_. a typical result , which was first proved for hamilton - jacobi equations set in in the periodic case by p .-lions , g. papanicolaou and s. r. s. 
varadhan , is that there exists a unique constant for which this problem has a _ bounded _solution , while the associated solution may not be unique , even up to an additive constant .this non - uniqueness feature is a key difficulty in the study of the asymptotic behavior .the main results of this article are the following : under suitable ( and rather general ) assumptions on and + ( i ) there exists a unique constant such that ( e1 ) ( resp . , ( e2 ) ) has a solution in .+ ( ii ) if is a solution of ( cn ) ( resp . , ( dbc ) ) , then there exists a solution of ( e1 ) ( resp . , ( e2 ) ) , such that the rest of this paper consists in making these claims more precise by providing the correct assumptions on and , by recalling the main existence and uniqueness results on ( cn ) and ( dbc ) , by solving ( e1 ) and ( e2 ) , and proving ( i ) and finally by showing the asymptotic result ( ii ) . in an attempt to make the paper concise ,we have decided to present the full proof of ( ii ) for ( cn ) only by the optimal control / dynamical system approach while we prove the ( dbc ) result only by the pde approach . to our point of view , these proofs are the most relevant one , the two other proofs following along the same lines and being even simpler . in the last decade , the large time behavior of solutions of hamilton - jacobi equation in compact manifold ( or in , mainly in the periodic case ) has received much attention and general convergence results for solutions have been established .g. namah and j .- m .roquejoffre in are the first to prove under the following additional assumption where is a smooth compact -dimensional manifold without boundary .then a. fathi in proved the same type of convergence result by dynamical systems type arguments introducing the `` weak kam theory '' .contrarily to , the results of use strict convexity ( and smoothness ) assumptions on , i.e. , for all and ( and also far more regularity ) but do not require . afterwardsj .- m . roquejoffre and a. davini and a. siconolfi in refined the approach of a. fathi and they studied the asymptotic problem for hamilton - jacobi equations on or -dimensional torus . the second author ,y. fujita , n. ichihara and p. loreti have investigated the asymptotic problem specially in the whole domain without the periodic assumptions in various situations by using the dynamical approach which is inspired by the weak kam theory .see . the first author and p. e. 
souganidis obtained in more general results , for possibly non - convex hamiltonians , by using an approach based on partial differential equations methods and viscosity solutions , which was not using in a crucial way the explicit formulas of representation of the solutionslater , by using partially the ideas of but also of , results on the asymptotic problem for unbounded solutions were provided in .there also exists results on the asymptotic behavior of solutions of convex hamilton - jacobi equation with boundary conditions .the third author studied the case of the state constraint boundary condition and then the dirichlet boundary conditions .roquejoffre in was also dealing with solutions of the cauchy - dirichlet problem which satisfy the dirichlet boundary condition pointwise ( in the classical sense ) : this is a key difference with the results of where the solutions were satisfying the dirichlet boundary condition in a generalized ( viscosity solutions ) sense .these results were slightly extended in by using an extension of pde approach of .we also refer to the articles for the large time behavior of solutions to time - dependent hamilton - jacobi equations .recently e. yokoyama , y. giga and p. rybka in and the third author with y. giga and q. liu in has gotten the large time behavior of solutions of hamilton - jacobi equations with noncoercive hamiltonian which is motivated by a model describing growing faceted crystals .we refer to the article for the large - time asymptotics of solutions of nonlinear neumann - type problems for viscous hamilton - jacobi equations .this paper is organized as follows : in section [ preliminary ] we state the precise assumptions on and , as well as some preliminary results on ( cn ) , ( dbc ) , ( e1 ) and ( e2 ) .section [ pde ] is devoted to the proof of convergence results by the pde approach . in section [ dsa ]we devote ourselves to the proof of convergence results by the optimal control / dynamical system approach .then we need to give the variational formulas for solutions of ( cn ) and ( dbc ) and results which are related to the weak kam theory , which are new and interesting themselves .in appendix we give the technical lemma which is used in section [ pde ] and the proofs of basic results which are presented in section [ preliminary ] . before closing the introduction, we give a few comments about our notation .we write for , and . for for denote by , , , the space of real - valued continuous , lower semicontinuous , upper semicontinuous , lipschitz continuous and on with values in , respectively . for denote by and the set of all measurable functions whose absolute value raised to the -th power has finite integral and which are bounded almost everywhere on with values in , respectively .we write for the sets of -th continuous differentiable functions for . forgiven and , we use the symbol ,b) ] with values in .we call a function a modulus if it is continuous and nondecreasing on and vanishes at the origin .in this section , we introduce the key assumptions on and we present basic pde results on ( cn ) and ( dbc ) ( existence , comparison , ... , etc . ) which will be used throughout this article . the proofsare given in the appendix .we use the following assumptions .* is a bounded domain of with a -boundary . in the sequel ,we denote by a -defining function for , i.e. 
a -function which is negative in , positive in the complementary of and which satisfies on .such a function exists because of the regularity of if , we have where is the unit outward normal vector to at in order to simplify the presentation and notations , we will use below the notation for , even if is not on .of course , if , is still an outward normal vector to at , by assumption does not vanish on but it is not anymore a unit vector .* the function is continuous and coercive , i.e. , , there exists a constant such that for all and .* there exists such that for all , and with .* there exists a constant such that for any and .* the function is convex for any .we briefly comment these assumptions .assumption ( a1 ) is classical when considering the large time behavior of solutions of hamilton - jacobi equations since it is crucial to solve ergodic problems .assumption ( a2 ) is a non - restrictive technical assumption while ( a3)-(a4 ) are ( almost ) the definition of a nonlinear neumann boundary condition .finally the convexity assumption ( a5 ) on will be necessary to obtain the convergence result .we point out that the requirements on the dependence of and in are rather weak : this is a consequence of the fact that , because of ( a1 ) , we will deal ( essentially ) with lipschitz continuous solutions ( up to a regularization of the subsolution by sup - convolution in time .therefore the assumptions are weaker than in the classical results ( cf . ) .a typical example for is the boundary condition arising in the optimal control of processes with reflection which has control parameters : where is a compact metric space , are given continuous functions and is a continuous vector field which is oblique to , i.e. , for any and .our first result is a comparison result .[ thm : comparison ] let and be a subsolution and a supersolution of ( cn ) ( resp ., ( dbc ) ) , respectively .if on , then on . then , applying carefully perron s method ( cf . ) , we have the existence of lipschitz continuous solutions .[ thm : existence ] for any , there exists a unique solution of ( cn ) or ( dbc ) .moreover , if , then is lipschitz continuous on and therefore and are uniformly bounded . finally , if and are the solutions which are respectively associated to and , then finally we consider the additive eigenvalue / ergodic problems .[ thm : additive]there exists a solution of ( e1 ) ( resp . , ( e2 ) ) . moreover , the additive eigenvalue is unique and is represented by the following proposition shows that , taking into account the ergodic effect , we obtain bounded solutions of ( cn ) or ( dbc ) . this result is a straightforward consequence of theorems [ thm : additive ] and [ thm : comparison ] . [prop : bound ] let be the additive eigenvalue for ( e1 ) ( resp . , ( e2 ) ) .let be the solution of ( cn ) ( resp ., ( dbc ) ). then is bounded on . from now on , replacing by , we can normalize the additive eigenvalue to be . as a consequence also replaced by and by in the ( dbc)-case . in order to obtain the convergence result, we use the following assumptions . * either of the following assumption ( a6) or ( a6) holds .* there exists such that , for any ] , * there exists such that , for any ] , there exists such that if and ( or if and ) for some and , then for any ] , there exists such that and for all , with .+ ( ii ) * ( asymptotically decreasing property ) * assume that ( a6) holds .for any ] . 
by the uniform continuity of and , we have .it is easily seen that and for all and ] and some , where and ( ii ) assume that ( a6) holds .the function is a subsolution of for any ] and let be the function given by .we recall that for any , .let and be a strict local minimum of , i.e. , for all \setminus\{(\xi,{\sigma})\} ] at some point which converges to when .then there are two cases : either ( i ) or ( ii ) .we only consider case ( ii ) here too since , again , the conclusion follows by the same argument as in in case ( i ) . in case ( ii ) , since , the -term vanishes and we have by the strict minimum point property . forany let and be , respectively , the functions given in lemma [ lem : func - c ] with and , and let and be , respectively , the functions given in lemma [ lem : coercivity ] with and for .we set \} ] .+ ( i ) assume that ( a6) holds .the function is a supersolution of for any ] .as we mentioned in the introduction , we mainly concentrate on problem ( cn ) in this section .we begin this section with an introduction to the skorokhod problem .let and set by the convex duality , we have note that we set and observe that , for , is a convex subset of and is a closed convex subset of .observe as well that if belongs to for some , then and hence .for example , if for some functions , then \infty&\text { otherwise , } \end{cases}\ ] ] and let and , and let ,{\mathbb{r}}^n) ] and ,{\mathbb{r}}) such that } \lab{i:2 - 5}\ ] ] .\ ] ] observe here that the inclusion is equivalent to the condition that if , and and if .condition is therefore equivalent to the condition that ,\lab{i:2 - 6 } \\ & \text{and } \\dot\eta(t)=v(t ) \\text { if } \ l(t)=0 \\text { for a.e . } t\in[0,\,t ] .\notag\end{aligned}\ ] ] here we have used the fact that is lower semi - continuous ( hence borel ) function bounded from below by the constant .the expression in is actually defined only for those ] by we remark that under assumption we have for a.e . ] .now , given a point , a constant and a function ,{\mathbb{r}}^n) ] and ,\,{\mathbb{r}}) ] .there exists a solution of the skorokhod problem .moreover , there exists a constant , independent of , and , such that , for any solution of the skorokhod problem , the inequalities and hold for a.e . ] , ,{\mathbb{r}}^n) ] such that conditions , are satisfied , and denotes the set of the triples of functions , and on such that , for all , the restriction of to the interval ] so that for a.e . ] , then we have for a.e . ] , and we are done .henceforth we assume that the set \mid l(s)>0\} ] , we find that for any , that is , for all . fix any . since , we have and .accordingly , we get and hence , .finally , we note that latexmath:[ ] . due to lemma[ i : selec ] , there exists a such that for all . according to ( * ? ? ?* theorem 4.1 ) , there exists a pair ,{\mathbb{r}}^n){\times}l^1([0,\,t],{\mathbb{r}}) ] and for a.e . ] , and observe that we have for a.e . ] . by replacing needed , we may assume that and for all , where and are assumed to be defined and continuous on .we may assume that .set , and note that and for .we choose a point )\cap q ] satisfying and , with the function in place of .we apply ( * ? ? ?* lemma 5.5 ) , to find a triple such that for a.e . , note here that , since , we have and for a.e. ] .here we may assume that and .set .we may choose a point so that . 
we select a triple so that where .we set .it is clear that and for all ] , where .consequently , in view of we get which is a contradiction .the function is thus a supersolution of , .it remains to show the continuity of on . in view of theorem [ thm : comparison ] , we need only to prove that indeed , once this is done , we see by theorem [ thm : comparison ] that on , which guarantees that . to show , fix any .we may select a function such that for all and indeed , we can first approximate by a sequence of functions and then modify the normal derivative ( without modifying too much the function itself ) by adding a function of the form where is a , increasing function such that , and for all . then we may choose a constant so that the function is a ( classical ) subsolution of , .then , for any and , we have setting and for ] , combining these observations , we obtain which ensures that for all , and moreover , for all . next , fix any and set , and for . observe that and that and this shows that for all .thus we find that is valid , which completes the proof .next we present the variational formula for the solution of ( dbc ) .the basic idea of obtaining this formula is similar to that for ( cn ) , and thus we just outline it or skip the details .we define the function on by where the infimum is taken all over , , and ] a pair of functions ,{\mathbb{r}}^{n+1}) ] such that , for ] , if for a.e . ] and for some ,{\mathbb{r}}) ] , the pair of functions ,{\mathbb{r}}^{n+1}) ] is a solution of the skorokhod problem for and if and only if and for all ] and .then there is a triple such that for a.e . , where and for .the above lemma can be proved in a parallel fashion as in the proof of ( * ? ? ?* lemma 5.5 ) , and we leave it to the reader to prove the lemma . in this sectionwe establish the existence of extremal curves ( or optimal controls ) for the variational formula .we set .[ i : exist - extremal - en ] let and let be the unique solution of _( cn)_. let . then there exists a triple such that where .moreover , ,{\mathbb{r}}^n) ] .fix . in view of formula, we may choose a sequence such that for , where .we show that the sequence is uniformly integrable on ] .if we choose a constant so that , then for all and hence , for a.e . ] , which implies that for a.e . ] , and , using the above estimate with and , observe that and hence , }|u|+1+c(c_1)t+c(2c_1+a)|e|,\end{aligned}\ ] ] where denotes the lebesgue measure of . from this , we easily deduce that is uniformly integrable on ] .next , we show that is uniformly integrable on ] .we apply the dunford - pettis theorem to the sequence , to find an increasing sequence and functions ,{\mathbb{r}}^n) ] such that , as , weakly in ,{\mathbb{r}}^{2n+2}) ] , we have uniformly on ] and for a.e . ] .setting and on ] satisfy for all ] , and conclude that .next , we set for ] , we see that for a.e . ] . using and , we get therefore , we have for a.e . ] , and observe as in that here we may choose a constant so that for a.e . ] as well as ,{\mathbb{r}}^{n+1}) ] and ,{\mathbb{r}}^{n+2}) ] , from which we conclude that . throughout this sectionwe fix a subsolution of ( e1 ) with , and a lipschitz curve in , i.e. , ,{\mathbb{r}}^n) ] . 
henceforth in this sectionwe assume that there is a bounded , open neighborhood of for which , and are defined and continuous on , and , respectively .moreover , we assume by replacing and in ( a3 ) , ( a4 ) respectively by other positive numbers if needed that ( a1 ) , with in place of , and ( a3)(a5 ) , with in place of , are satisfied .( of course , these are not real additional assumptions . )[ i : exist - p - n ] there exists a function ,{\mathbb{r}}^n) ] , , , and if . to prove the above theorem, we use the following lemmas .[ i : basic - p ] let , and ,{\mathbb{r}}^n) ] , then there exists a function ,{\mathbb{r}}^n) ] , , , and if .observe first that for all ] , and then , in view of the banach - sack theorem , we may choose a sequence and a function ,{\mathbb{r}}^n) ] as and for a.e . ] , , and if .moreover , we have , for all ] , , and if , and , for all ] such that for a.e . ] , there exist a neighborhood of , relative to ] .consider first the case where .there is a such that , where \cap[0,\,t] ] , then . set for . then , for a.e . ] , and observe that and .there is a sequence ] as in the dynamical approach .we fix and introduce the function on given by .define the constant ] for some . finally , as , .therefore it is enough to consider for small enough , and we follow the classical proof by introducing for .this maximum is achieved at ] by let achieve its maximum at ] by let achieve its maximum at ^{2}$ ] and set derivating ( formally ) with respect to each variable at , we have we remark that we should interpret and in the viscosity solution sense here .we consider the case where .note that and then we have for which are small enough compared to . in the case where we similarly obtain for which are small enough compared to .the existence part being standard by using the perron s method ( see ) , we mainly concentrate on the regularity of solutions when .we may choose a sequence so that and for some which is uniform for all .we fix .we claim that and are , respectively , a sub and supersolution of ( cn ) or ( dbc ) with for a suitable large .we can easily see that are a sub and supersolution of ( cn ) or ( dbc ) in if .we recall ( see for instance ) that if , then where denotes the super - differential of at .we need to show that for all . by ( a2 ) it is clear enough that there exists such that , if , then . then choosing , the above inequality holds .a similar argument shows that is a supersolution of ( cn ) for large enough .it is worth pointing out that such is independent of . we can easily check that are a sub and supersolution of ( dbc ) on too . by perron s method ( see ) and theorem [ thm : comparison ] , we obtain continuous solutions of ( cn ) or ( dbc ) with that we denote by . as a consequence of perron s method, we have to conclude , we use a standard argument : comparing the solutions and for some and using the above property on the , we have as a consequence we have and , by using the equation together with ( a1 ) , we obtain that is also bounded . finally sending by taking a subsequenceif necessary we obtain the lipschitz continuous solution of ( cn ) or ( dbc ) .we finally remark that , if , we can obtain the existence of the uniformly continuous solution on by using the above result for and ( [ eq : contsol ] ) which is a direct consequence of theorem [ thm : comparison ] .we first prove ( i ) . 
for any we consider following similar arguments as in the proof of theorem [ thm : existence ] , it is easy to prove that , for large enough and are , respectively , a subsolution and a supersolution of or .we remark that , because of ( a1 ) and the regularity of the boundary of , the subsolutions of such that on satisfy in for some and therefore they are equi - lipschitz continuous on . with these informations ,perron s method provides us with a solution of .moreover , by construction , we have for a fixed . because of andthe regularity of the boundary , is a sequence of equi - lipschitz continuous and uniformly bounded functions on . by ascoli - arzela s theorem, there exist subsequences and such that as for some and . by a standard stability result of viscosity solutionswe see that is a solution of ( e1 ). * acknowledgements . *this work was partially done while the third author visited mathematics department , university of california , berkeley .he is grateful to professor lawrence c. evans for useful comments and his kindness .g. barles and j .-roquejoffre , _ ergodic type problems and large time behaviour of unbounded solutions of hamilton - jacobi equations _ , comm .partial differential equations * 31 * ( 2006 ) , no . 7 - 9 , 12091225 .y. giga , q. liu and h. mitake , _ large - time behavior of one - dimensional dirichlet problems of hamilton - jacobi equations with non - coercive hamiltonians _, j. differential equations 252 ( 2012 ) , 12631282 . .h. ishii , _asymptotic solutions for large time of hamilton - jacobi equations in euclidean n space _ , ann .h. poincar anal .non linaire , * 25 * ( 2008 ) , no 2 , 231266 . h. ishii , weak kam aspects of convex hamilton - jacobi equations with neumann type boundary conditions , j. math .pures appl .( 9 ) * 95 * ( 2011 ) , no .1 , 99135 .e. yokoyama , e , y. giga , and p. rybka , _ a microscopic time scale approximation to the behavior of the local slope on the faceted surface under a nonuniformity in supersaturation _d * 237 * ( 2008 ) , no .22 , 28452855 . | in this article , we study the large time behavior of solutions of first - order hamilton - jacobi equations , set in a bounded domain with nonlinear neumann boundary conditions , including the case of dynamical boundary conditions . we establish general convergence results for viscosity solutions of these cauchy - neumann problems by using two fairly different methods : the first one relies only on partial differential equations methods , which provides results even when the hamiltonians are not convex , and the second one is an optimal control / dynamical system approach , named the `` weak kam approach '' which requires the convexity of hamiltonians and gives formulas for asymptotic solutions based on aubry - mather sets . |
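The large-time convergence established above can also be watched numerically on a one-dimensional toy problem. The sketch below is an illustration under stated assumptions, not a reconstruction of the paper's setting: it takes H(x, p) = p^2/2 - f(x) on (0, 1) with the homogeneous linear Neumann condition u_x = 0, discretizes with a monotone Lax-Friedrichs scheme using mirrored ghost points, and estimates the ergodic constant from the asymptotic slope -u_t. The grid sizes, the potential f, and the heuristic expectation that the constant equals -min f for this mechanical Hamiltonian are all our own choices.

```python
import numpy as np

# illustrative model problem (not from the paper):
#   u_t + H(x, u_x) = 0   on (0, 1),   u_x = 0 on the boundary (linear Neumann case),
#   H(x, p) = 0.5 * p**2 - f(x)        (convex and coercive).
# expected large-time behaviour: u(x, t) + a*t -> w(x), with (a, w) solving the ergodic problem.

nx = 201
dx = 1.0 / (nx - 1)
x = np.linspace(0.0, 1.0, nx)
f = np.cos(2.0 * np.pi * x)          # smooth potential; weak KAM heuristics suggest a = -min f here
u = np.sin(3.0 * np.pi * x)          # arbitrary Lipschitz initial datum u_0

theta = 4.0                          # artificial viscosity, taken >= max |dH/dp| on the relevant range
dt = 0.4 * dx / theta                # CFL restriction keeping the scheme monotone

def step(u):
    # mirrored ghost points enforce the homogeneous Neumann condition u_x = 0
    ue = np.concatenate(([u[1]], u, [u[-2]]))
    ux = (ue[2:] - ue[:-2]) / (2.0 * dx)                 # centred gradient
    diss = (ue[2:] - 2.0 * ue[1:-1] + ue[:-2]) / dx      # Lax-Friedrichs dissipation
    return u - dt * (0.5 * ux**2 - f) + 0.5 * dt * theta * diss

t, t_final = 0.0, 20.0
while t < t_final:
    u = step(u)
    t += dt

# the asymptotic slope -u_t estimates the ergodic constant a;
# u(x, t) + a*t should then be (numerically) stationary.
a_est = -(step(u) - u).mean() / dt
print("estimated ergodic constant:", a_est, " (heuristic expectation:", -f.min(), ")")
```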
we study here different compositions of point processes , say where and are mutually independent and can possibly be birth , death or homogeneous poisson processes .these compositions arise in the analysis of different population models , when , in some specific experimental circumstances , the time must be replaced by a stochastic process in our analysis we concentrate our attention on the case where the time increments are unit - valued and thus are properly represented by point processes , such as linear birth and poisson processes . in these cases the composed process can be regarded as a randomly sampled population model .the simplest case of the composition of two independent homogeneous poisson processes ( with different rates ) has been already studied in .more recently , the iterated poisson process has been considered in .our investigation concentrates here on the distribution of the first - passage times , i.e. , in this context , substantially differ from the upcrossing and downcrossing times , i.e. , the composed processes jump over ( resp . under ) any level with positive probability .the hitting probabilities display different behaviors .for example , for the iterated birth process ( with independent linear birth processes ) , the probabilities attain their maximal value at . in the linear death process at poisson distributed time the hitting probabilities display an oscillating behavior , which can be perceived when the initial size of the population is sufficiently large .furthermore , we observe that in all the cases considered here the hitting probabilities depend only on the parameter of the outer process. moreover , the processes display sample paths with upward or downward jumps of arbitrary size . other models with the same feature , such as the space - fractional poisson processes or , in general , time - changed poisson processes with berntein subordinators , have been analyzed in .multiple jumps are also displayed by the generalized fractional birth processes studied in .as in this last work , this property is reflected in the form of the equation governing the state probabilities where the time derivative is shown to depend , also here , on all , for in particular , our analysis concerns the following specific cases : * (section 3 ) * (section 4 ) * (section 5 ) * (section 6 ) , where and are independent linear birth ( yule - furry ) processes , is the homogeneous poisson process , is the linear death process and is the sublinear death process .the latter is characterized by the fact that the probability of a further death in depends on the number of deaths recorded up to time ( unlike the linear case where this probability depends on the number of surviving individuals ) .the sublinear birth process was introduced in don , while the sublinear death process has been investigated in , in a fractional context .the processes considered here can be useful to model the population evolution sampled at poisson times. 
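Such randomly sampled population models are straightforward to simulate. The sketch below draws one value of a linear birth (Yule-Furry) process observed at an independent Poisson number of unit time steps, and checks the Monte Carlo mean against the standard identity E[exp(alpha N_lambda(t))] = exp(lambda t (e^alpha - 1)) for a single progenitor; the helper names and parameter values are illustrative, while alpha and lambda follow the notation of the surviving formulas below.

```python
import math
import random

def yule_value_at(time, alpha, n0=1, rng=random):
    # value of a linear birth (Yule-Furry) process with rate alpha at a given time:
    # while the population size is k, the next birth arrives after an Exp(alpha * k) wait
    k, t = n0, 0.0
    while True:
        t += rng.expovariate(alpha * k)
        if t > time:
            return k
        k += 1

def birth_at_poisson_times(t, alpha, lam, rng=random):
    # one draw of B_alpha(N_lambda(t)): count the Poisson 'ticks' up to time t,
    # then evaluate an independent Yule process at that integer time
    n, s = 0, rng.expovariate(lam)
    while s <= t:
        n += 1
        s += rng.expovariate(lam)
    return yule_value_at(n, alpha, rng=rng)

alpha, lam, t, runs = 0.5, 1.0, 2.0, 20000
mc_mean = sum(birth_at_poisson_times(t, alpha, lam) for _ in range(runs)) / runs
exact = math.exp(lam * t * (math.exp(alpha) - 1.0))   # E[exp(alpha * N_lambda(t))] for one progenitor
print(mc_mean, exact)
```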
the iterated birth process can be more appropriate for the cases where the time separating occurrences is rapidly decreasing .this structure can be applied , for example , in some experimental studies of diseases or epidemic diffusions .time - changed birth processes of different forms have been studied in din and , with various applications to finance .time - changed poisson and birth processes arise also in the study of fractional point processes ( see , for example , , , and cah2 ) .* list of the main symbols * where is the number of the initial components of the population and us first consider the composition of a linear birth ( yule - furry ) process , with one initial progenitor and birth rate with an independent process of the same kind , with parameter i.e. a consequence of the definition ( [ def ] ) , the initial number of progenitors of the iterated linear birth process is random , since a.s .the probability mass function of this process can be written , for any as observe that prove now that the probabilities decrease as increases , since we have that \\ & = & e^{-\lambda t-\alpha } \sum_{l=1}^{k-1}\binom{k-2}{l-1}\frac{(-e^{-\alpha } ) ^{l}}{1-e^{-\alpha ( l+1)}(1-e^{-\lambda t } ) } \\ & = & -e^{-\lambda t-2\alpha } \sum_{m=0}^{\infty } ( 1-e^{-\lambda t})^{m}e^{-2\alpha m}\sum_{l=0}^{k-2}\binom{k-2}{l}(-1)^{l}e^{-\alpha l(m+1 ) } \\ & = & -e^{-\lambda t-2\alpha } \sum_{m=0}^{\infty } ( 1-e^{-\lambda t})^{m}e^{-2\alpha m}(1-e^{-\alpha ( m+1)})^{k-2}<0,\end{aligned}\]]for any the sample paths of the iterated birth process display upward jumps of size larger or equal to one . for this reasonthe analysis of the first - passage time through an arbitrary level , i.e. of a certain importance .moreover , we shall prove that is strictly less than one , as it happens for the iterated poisson process ( see ) and thus any level can be avoided with positive probability , because of `` multiple jumps '' .we present the explicit distribution of in the next theorem .the distribution of reads , \label{bb}\]]for any by a conditioning argument we get coincides with ( [ bb ] ) .we note that , in the special case , we get ^{2}}% dt=\lambda e^{\lambda t}(1-e^{-\alpha } ) [ q_{1}^{\mathcal{z}}(t)]^{2}dt\ ] ] by integrating ( [ bb ] ) , we get that , \label{cc}\]]which can be alternatively written as in order to analyze some special cases , we supply also the following finite sum form of the hitting probabilities given in ( [ cc]): \notag \\ & = & e^{-\alpha k}\sum_{r=0}^{k-2}\binom{k-1}{r}(-1)^{r}\frac{e^{-\alpha ( r+1)}% } { 1-e^{-\alpha ( r+1)}}\left ( e^{\alpha ( k - r-1)}-1\right ) .\label{ic}\end{aligned}\]]it is now easy to show that the following relationships hold , for : we have that \\\pr \{t_{4}^{\mathcal{z}}<\infty \ } & = & e^{-2\alpha } \left [ 1-e^{-\alpha } % \frac{(1-e^{-\alpha } ) ( 1+e^{-3\alpha } ) } { 1+e^{-\alpha } + e^{-2\alpha } } \right ] .\end{aligned}\]]the following figures ( produced by the software r ) describe the behavior of the probabilities ( [ ic ] ) , for different values of we remark that the scales are not the same in different figures . 
the figures above show that the probabilities , for , are decreasing functions which , for vanish more rapidly than for the rate of decreasing is slow , for all since , from ( [ pre ] ) , we get that}{1-e^{-\alpha } + e^{-\alpha ( j+1)}}-\frac{% e^{-3\alpha } ( 1-e^{-\alpha } ) } { 1-e^{-\alpha } + e^{-2\alpha } } \\ & = & ( 1-e^{-\alpha } ) \left\ { \sum_{j=2}^{\infty } \frac{(1-e^{-\alpha j})[1+e^{-\alpha ( j+1)}]}{1-e^{-\alpha } + e^{-\alpha ( j+1)}}-\frac{% e^{-3\alpha } } { 1-e^{-\alpha } + e^{-2\alpha } } \right\ } = \infty , \end{aligned}\]]since}{% 1-e^{-\alpha } + e^{-\alpha ( j+1)}}=\frac{1}{1-e^{-\alpha } } .\ ] ]
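The fact, noted above, that these hitting probabilities involve only the parameter of the outer process has a simple path-level reading: the inner birth process climbs through every positive integer, so the iterated process visits exactly the values taken by the outer process at the integer times 1, 2, 3, ..., whatever the inner rate is. The Monte Carlo cross-check sketched below estimates Pr{ T_k < infinity } from this observation and can be compared with the finite-sum expression ( [ ic ] ); the function names and the sample parameter values are ours.

```python
import random

def outer_yule_hits_level(level, alpha, rng=random):
    # does the outer Yule process equal `level` at some integer time n = 1, 2, ... ?
    # since the process is non-decreasing, we can stop as soon as it exceeds `level`
    size = 1
    next_birth = rng.expovariate(alpha)            # absolute time of the next birth
    n = 1
    while True:
        while next_birth <= n:                     # advance the population up to the observation time n
            size += 1
            next_birth += rng.expovariate(alpha * size)
        if size == level:
            return True
        if size > level:
            return False
        n += 1

alpha, runs = 0.8, 50000
for k in (2, 3, 4, 10):
    estimate = sum(outer_yule_hits_level(k, alpha) for _ in range(runs)) / runs
    print(k, estimate)   # to be compared with the finite-sum hitting probabilities ( [ ic ] ) above
```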
with parameter andthis clearly shows that for the first - passage time through the level the following result holds for any , by integrating formula ( [ two ] ) , we get\sum_{m=0}^{\infty } e^{-\alpha ( l+1)m } \notag \\ & = & e^{-\alpha k}\sum_{m=0}^{\infty } e^{-\alpha m}\sum_{l=0}^{k-2}(-1)^{l}% \binom{k-1}{l}[e^{\alpha ( k-1)-\alpha l(m+1)}-e^{-\alpha lm } ] \notag \\ & = & \sum_{m=0}^{\infty } e^{-\alpha ( m+1)}\left [ ( 1-e^{-\alpha ( m+1)})^{k-1}-(-1)^{k-1}e^{-\alpha ( m+1)(k-1)}\right ] + \notag \\ & & -e^{-\alpha k}\sum_{m=0}^{\infty } e^{-\alpha m}\left [ ( 1-e^{-\alpha m})^{k-1}-(-1)^{k-1}e^{-\alpha m(k-1)}\right ] \notag \\ & = & \sum_{m=0}^{\infty } \left [ e^{-\alpha ( m+1)}(1-e^{-\alpha ( m+1)})^{k-1}-e^{-\alpha k}e^{-\alpha m}(1-e^{-\alpha m})^{k-1}\right ] \notag \\ & = & ( 1-e^{-\alpha k})\sum_{m=1}^{\infty } e^{-\alpha m}(1-e^{-\alpha m})^{k-1 } \notag \\ & = & ( 1-e^{-\alpha k})\sum_{m=1}^{\infty } \pr \{b_{\alpha } ( m)=k\}. \notag\end{aligned}\]]as in the case of the iterated birth process , the hitting probabilities ( for ) are not affected by the parameter of the inner process ( , in this case) moreover , we remark that the relationship given in the last line is equal to the analogous one presented in ( [ pre ] ) ( except for the additional term in ( [ pre ] ) , which is due to the different starting point of the birth process with respect to the poisson one): any thus , for the iterated linear birth process , the probability of reaching any level in a finite time is strictly smaller than the corresponding probability for .this difference decreases monotonically with for the first line of formula ( [ for ] ) reduces to ( [ rel]) in the special case we have that \\ & = & e^{-\alpha } \frac{1+e^{-2\alpha } } { 1+e^{-\alpha } } \\ & = & \pr \{\left .t_{2}^{\mathcal{x}}<\infty \right\vert \mathcal{x}(0)=1\}% \frac{1+e^{-2\alpha } } { 1+e^{-\alpha } } < \pr \{\left .t_{2}^{\mathcal{x}% } < \infty \right\vert \mathcal{x}(0)=1\}<1.\end{aligned}\]]we show now that , for any the distribution of the first - passage time for the level is monotonically decreasing : we present the following heuristic argument \\ & = & \frac{(1-e^{-\alpha } ) } { \alpha } \left\ { \left ( 1+ ... +e^{-\alpha ( k-2)}\right ) \left [ \frac{1}{k}-\frac{1}{k-1}\right ] + \frac{e^{-\alpha ( k-1)}}{k}\right\ } \\ & \leq & \frac{(1-e^{-\alpha } ) } { \alpha } \left [ -\frac{e^{-\alpha ( k-2)}}{% k(k-1)}(k-1)+\frac{e^{-\alpha ( k-1)}}{k}\right ] < 0,\end{aligned}\]]for the plots of fig.2 confirm the decreasing structure of the probabilities ] from formula ( [ for ] ) we derive the following equality is clear from ( [ io ] ) that , for large values of , we get thus the hitting probabilities slowly change with as fig . 2 confirms . by applying formula ( [ io ] )we can prove also the following relationship + \sum_{m=1}^{\infty } e^{-3\alpha m}(1-e^{-\alpha m})^{k-3}. \notag\end{aligned}\]]formula ( [ in ] ) shows that , for large values of , here consider linear and sublinear death processes at poisson times .the linear death process , with initial population of individuals and death rate , has a binomial distribution , i.e. 
probabilities ( [ i ] ) satisfy the difference - differential equations moreover , we denote the sublinear death process , with death rate , as and its distribution is given by probabilities satisfy the following difference - differential equations with in order to catch the probabilistic mechanism underlying ( [ mo ] ) we should consider that note that in the linear death process the probability of cancellation of an individual in is proportional to the number of existing components at time on the other hand , in the sublinear case , this probability is proportional to the number of deaths recorded up to time this property makes the sublinear process more suitable for describing epidemics and the diffusion of rumors . the expected value of can be evaluated as follows -\frac{1}{\mu } % ( 1-e^{-\mu t})\frac{d}{dt}\left [ \sum_{k=0}^{n_{0}}(1-e^{-\mu t})^{k}\right ] \notag \\ & = & n_{0}\left [ 1-\left ( 1-e^{-\mu t}\right ) ^{n_{0}+1}\right ] -(1-e^{-\mu t})% \left [ e^{\mu t}-e^{\mu t}\left ( 1-e^{-\mu t}\right ) ^{n_{0}+1}-(n_{0}+1)\left ( 1-e^{-\mu t}\right ) ^{n_{0}}\right ] \notag \\ & = & n_{0}+\left ( 1-e^{-\mu t}\right ) ^{n_{0}+1}+e^{\mu t}\left ( 1-e^{-\mu t}\right ) ^{n_{0}+2}+1-e^{\mu t } \notag \\ & = & n_{0}+1-e^{\mut}[1-(1-e^{-\mu t})^{n_{0}+1}],\qquad n_{0}\geq 1 .\notag\end{aligned}\]]it is easy to check that as expected . for the linear death process, the hitting time of the -th event , i.e. the following distribution we get any on the other hand , in the sublinear case , the distribution of we get any let is a linear death process with parameter and is an independent homogeneous poisson process with parameter then , for the probability mass function reads waiting time of the first death of is exponentially distributed with parameter and has the same form of the corresponding probability of the subordinated linear birth process , which was given in formula ( [ st ] ) .the probability generating function of can be written as follows:^{n_{0 } } \label{due } \\ & = & \sum_{m=0}^{n_{0}}\binom{n_{0}}{m}(-1)^{m}(1-u)^{m}\exp \{-\lambda t(1-e^{-\mu m}\}. \notag\end{aligned}\]]the mean value and variance are respectively given by + \mathbb{e}var\left [ \left . d_{\mu } ( n_{\lambda } ( t))\right\vert n_{\lambda } ( t)\right ] \\ & = & var\left [ n_{0}e^{-\mu n_{\lambda } ( t)}\right ] + \mathbb{e}\left [ n_{0}e^{-\mu n_{\lambda } ( t)}(1-e^{-\mu n_{\lambda } ( t)})\right ] \notag \\ & = & n_{0}(n_{0}-1)e^{\lambda t(e^{-2\mu } -1)}-n_{0}^{2}e^{2\lambda t(e^{-\mu } -1)}+n_{0}e^{\lambda t(e^{-\mu } -1)}. \notag\end{aligned}\ ] ] we are interested now in the differential equation satisfied by the distribution of the process as a preliminary result we prove the following lemma .the probability generating function given in ( [ due ] ) satisfies the following initial - value problem: g^{\mathcal{y}}(u , t ) \\g^{\mathcal{y}}(u,0)=u^{n_{0}}% \end{array}% \right . ,\quad t , u\geq 0 , \label{tre}\]]where is the shift operator . by considering the equation governing the state probabilities of the poisson process, we can write is the probability generating function of . since , for any g^{d}(u , t ) \\& = & -\mu ^{2}(1-u)\frac{\partial } { \partial u}g^{d}(u , t)+\mu ^{2}(1-u)^{2}% \frac{\partial ^{2}}{\partial u^{2}}g^{d}(u , t)\end{aligned}\]] , analogously can write are the stirling numbers of the second kind ( see ) . thus we get ( by considering that ) g^{\mathcal{y}% } ( u , t),\end{aligned}\]]where we have applied the dobinski formula for the bell polynomial , i.e. 
( [ tre ] ) follows by applying the well - known exponential generating function of the bell s polynomial ( see e.g. ) , i.e. the probability mass function of satisfies the following equation the initial condition in order to derive the differential equation satisfied by ( [ uno ] ) we rewrite the first line in ( [ tre ] ) as follows: \\ & = & \lambda \sum_{k=0}^{n_{0}}q_{k}^{\mathcal{y}}(t)\sum_{j=0}^{k}(1-e^{-\mu } ) ^{j}\binom{k}{j}\sum_{l^{\prime } = k - j}^{k}\binom{j}{k - l^{\prime } } % ( -1)^{j - k+l^{\prime } } u^{l^{\prime } } -\lambda g^{\mathcal{y}}(u , t ) \\ & = & \lambda \sum_{k=0}^{n_{0}}q_{k}^{\mathcal{y}}(t)\sum_{l=0}^{k}u^{l}\binom{% k}{l}\sum_{j = k - l}^{k}(1-e^{-\mu } ) ^{j}\binom{l}{k - j}(-1)^{j - k+l}-\lambda g^{% \mathcal{y}}(u , t ) \\ & = & \lambda \sum_{l=0}^{n_{0}}u^{l}\sum_{k = l}^{n_{0}}\binom{k}{l}q_{k}^{% \mathcal{y}}(t)\sum_{j = k - l}^{k}(1-e^{-\mu } ) ^{j}\binom{l}{l - k+j}% ( -1)^{j - k+l}-\lambda g^{\mathcal{y}}(u , t ) \\ & = & [ j = k - l+j^{\prime } ] \end{aligned}\ ] ] \\ & = & \lambda \sum_{l=0}^{n_{0}}u^{l}\sum_{r=0}^{n_{0}-l}\binom{r+l}{l}q_{r+l}^{% \mathcal{y}}(t)(1-e^{-\mu } ) ^{r}e^{-\mu l}-\lambda g^{\mathcal{y}}(u , t).\end{aligned}\ ] ] by comparing equation ( [ bo ] ) with ( [ acc ] ) , we can see that , in the subordinated case the time - derivative of depends on the probabilities , for any , while , in the standard case , the derivative of depends only on this is due to the fact that the process performs downward jumps of arbitrary size , whose law can be derived from ( [ uno ] ) and can be written as follows: first line in ( [ iss ] ) means that , during the infinitesimal interval either the poisson process ( the `` time '' ) does not change or it changes and all the individuals survive during a time span of unit length . in our view the main result of this section is the density of the first passage - time through the level * * * * , i.e. is presented in the following theorem .the probability density of reads , for any ^{n_{0}-k}-\left [ 1-e^{-\mu j}\right ] ^{n_{0}-k}\right\ } .\notag\end{aligned}\ ] ] we start by considering that , for ^{n_{0}-k}-\left [ 1-e^{-\mu j}\right ] ^{n_{0}-k}\right\ } + \notag \\ & & + \lambda dte^{-\lambda t}\binom{n_{0}}{k}e^{-\mu k}(1-e^{-\mu } ) ^{n_{0}-k } , \notag\end{aligned}\ ] ] which coincides with ( [ fr ] ) .by integrating ( [ fr ] ) we get that , for ^{n_{0}-k}-\left [ 1-e^{-\mu j}\right ] ^{n_{0}-k}\right\ } .\label{ic3}\]]we note that the probability ( [ ic3 ] ) has the same structure of formula ( 36 ) in , which is related to the iterated poisson process , despite the fact that the outer process has decreasing paths .an alternative form of ( [ fr ] ) , as a finite sum , can be obtained as follows ,\]]which , by integration , gives any note that , for the extinction probability is given by , we have instead that ] unexpectedly enough fig.3 ( which is obtained here for ) shows that the probabilities do not display a monotonic behavior for sufficiently large values of .let is a sublinear death process with parameter and is an independent poisson process with parameter then the probability mass function reads , while , for , is given by is a decreasing function of the initial number of individuals , as can be directly checked , by considering ( [ mo ] ) .the expected value of can be evaluated , by considering ( [ e ] ) , as follows\right\ } \frac{% ( \lambda t)^{j}}{j ! 
} \\ & = & n_{0}+1-e^{-\lambda t(1-e^{\mu } ) } + \sum_{j=0}^{\infty } ( 1-e^{-\mu j})^{n_{0}+1}\frac{(\lambda te^{\mu } ) ^{j}}{j ! } \\ & \leq & n_{0}-\left ( 1-e^{-\lambda t(1-e^{-\mu } ) } \right ) .\end{aligned}\]]in the case of the subordinated sublinear death process , we have no markovianity and thus we can not evaluate explicitly the distribution of the hitting times for this reason we consider here the instant of the first downcrossing of the level , i.e. , we can write \\ & = & \sum_{l = k}^{n_{0}}\sum_{r=0}^{n_{0}-l}\binom{n_{0}-l}{r}(-1)^{r}\exp \left\ { -\lambda t(1-e^{-\mu ( 1+r)})\right\ } \\ & = & \sum_{r=0}^{n_{0}-k}(-1)^{r}\exp \left\ { -\lambda t(1-e^{-\mu ( 1+r)})\right\ } \sum_{l = k}^{n_{0}-r}\binom{n_{0}-l}{r}.\end{aligned}\]]the inner sum can be treated as follows \\ & = & \binom{n_{0}-k+1}{r+1},\end{aligned}\]]so that we get vanishes , for , for any . for by applying ( [ no2 ] ) we have , instead , the following extinction probability: , for , is equal to thank dr .bruno toaldo for providing the figures presented in this paper . | in this paper we study the iterated birth process of which we examine the first - passage time distributions and the hitting probabilities . furthermore , linear birth processes , linear and sublinear death processes at poisson times are investigated . in particular , we study the hitting times in all cases and examine their long - range behavior . the time - changed population models considered here display upward ( birth process ) and downward jumps ( death processes ) of arbitrary size and , for this reason , can be adopted as adequate models in ecology , epidemics and finance situations , under stress conditions . * keywords and phrases * : yule - furry process , linear and sublinear death processes , hitting times , extinction probabilities , first - passage times , stirling numbers , bell polynomials . _ ams mathematical subject classification ( 2010 ) : 60g55 , 60j80 . _ |
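A quick numerical cross-check of the subordinated linear death process discussed above: conditionally on N_lambda(t) = j, the linear death law recalled in the text makes each of the n0 initial individuals survive independently with probability e^(-mu j), so the one-dimensional distributions are Poisson mixtures of binomials. The sketch below (illustrative parameter values, helper names ours) evaluates that mixture directly and compares it with simulation.

```python
import math
import random

def pmf_death_at_poisson_times(k, t, n0, mu, lam, jmax=200):
    # P{ D_mu(N_lambda(t)) = k }: average the binomial(n0, e^(-mu*j)) law over a Poisson(lam*t) index j
    total = 0.0
    weight = math.exp(-lam * t)                  # Poisson weight for j = 0
    for j in range(jmax):
        p_surv = math.exp(-mu * j)
        total += weight * math.comb(n0, k) * p_surv**k * (1.0 - p_surv) ** (n0 - k)
        weight *= lam * t / (j + 1)              # move the Poisson weight to j + 1
    return total

def sample_death_at_poisson_times(t, n0, mu, lam, rng=random):
    # draw N_lambda(t), then thin the n0 initial individuals with survival probability e^(-mu*j)
    j, s = 0, rng.expovariate(lam)
    while s <= t:
        j += 1
        s += rng.expovariate(lam)
    p_surv = math.exp(-mu * j)
    return sum(rng.random() < p_surv for _ in range(n0))

n0, mu, lam, t, runs = 10, 0.3, 1.0, 2.0, 50000
draws = [sample_death_at_poisson_times(t, n0, mu, lam) for _ in range(runs)]
for k in range(n0 + 1):
    print(k, draws.count(k) / runs, round(pmf_death_at_poisson_times(k, t, n0, mu, lam), 4))
```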
in the last decade , systematic measurements of the cosmic microwave background anisotropies as well as large - scale structure of the universe have led to the establishment of the `` standard cosmological model '' ( e.g. , ) .the universe is close to a flat geometry , and is filled with the hypothetical cold dark matter ( cdm ) particles , together with a small fraction of baryons , which serve as the seeds of structure formation of the universe .the most striking feature in the standard cosmological model is that the energy contents of the universe is dominated by the mysterious energy component called dark energy , which is supposed to drive the late - time cosmic acceleration discovered by the observation of distant supernovae ( e.g. , ) .currently , our understanding of the nature of dark energy is still lacking . although the observation is roughly consistent with cosmological constant and with no evidence for time dependence of dark energy , long - distance modifications of general relativity have been proposed alternative to the dark energy and these reconcile with the observation of late - time acceleration ( see for reviews ) .while a fully consistent model of modified gravity has not yet been constructed ( see for popular models ) , a possibility of break - down of general relativity still remains and should be tested .to understand deeply the nature of dark energy or origin of cosmic acceleration , a further observational study is definitely important .there are two comprehensive ways to distinguish between many models of dark energy and discriminate the dark energy from modified gravity .one is to precisely measure the expansion history of the universe , and the other is to observe the growth of structure . among various observational techniques ,baryon acoustic oscillations ( baos ) imprinted on the matter power spectrum or two - point correlation function can be used as a standard ruler to measure the cosmic expansion history ( e.g. , , see also for recent bao measurements ) .the characteristic scale of baos , which is determined by the sound horizon scale of primeval baryon - photon fluid at the last scattering surface , is thought to be a robust measure and it lies on the linear to quasi - linear regimes of the gravitational clustering of large - scale structure . with a percent - level determination of the characteristic scale of baos , the expansion history can be tightly constrained , and the equation - of - state parameter of the dark energy , , defined by the ratio of pressure to energy density of dark energy , would be precisely determined within the precision of a few % level .this is the basic reason why most of the planned and ongoing galaxy redshift surveys aim at precisely measuring the baos ( e.g. , ) . while the robustness of the baos as a standard ruler has been repeatedly stated and emphasized in the literature , in order to pursue an order - of - magnitude improvement , a precise theoretical modeling of baos definitely plays an essential role for precision measurement of bao scale , and it needs to be investigated taking account of the various systematic effects . 
among these , the non - linear clustering and redshift - space distortion effects as well as the galaxy biasing can not be neglected , and affect the characteristic scale , although their effects are basically moderate at the relevant wavenumber , .recently , several analytic approaches to deal with the non - linear clustering have been developed , complementary to the n - body simulations .in contrast to the standard analytical calculation with perturbation theory ( pt ) , these have been formulated in a non - perturbative way with techniques resumming a class of infinite series of higher - order corrections in perturbative calculation .thanks to its non - perturbative formulation , the applicable range of the prediction is expected to be greatly improved , and the non - linear evolution of baryon acoustic oscillations would be accurately described with a percent - level precision .the purpose of this paper is to investigate the viability of this analytic approach , focusing on a specific improved treatment . in the previous paper ,we have applied a non - linear statistical method , which is widely accepted in the statistical theory of turbulence , to the cosmological perturbation theory of large - scale structure .we have derived the non - perturbative expressions for the power spectrum , coupled with non - linear propagator , which effectively contain the information on the infinite series of higher - order corrections in the standard pt expansion .based on this formalism , the analytic treatment of the non - perturbative expression is developed employing the born approximation , and the leading - order calculation of power spectrum is compared with n - body simulations in real space , finding that a percent - level agreement is achieved in a mildly non - linear regime ( see also ) . here, we extend the analysis to those including the next - to - leading order corrections of born approximation .in addition to the power spectrum , we will consider the two - point correlation function , paying a special attention on the baryon acoustic peak , i.e. , a fourier counterpart of baos in power spectrum .further , we also discuss the non - linear clustering in redshift space , and the predictions of improved pt are compared with n - body results , combining a non - linear model of redshift - space distortion .we examine how well the present non - linear model accurately describe the systematic effects on baos and/or baryon acoustic peak .this paper is organized as follows . in sec .[ sec : preliminaries ] , we briefly mention the basic equations for cosmological pt as our fundamental basis to deal with the non - linear gravitational clustering .we then discuss in some details in sec . [sec : cla ] how to compute the non - linear power spectrum or two - point correlation functions .starting from the discussions on standard treatment of perturbative calculation and its non - perturbative reformulation called renormalized pt , we introduce the closure approximation , which gives a consistent non - perturbative scheme to treat the infinite series of renormalized pt expansions , and obtain a closed set of non - perturbative expressions for power spectrum . 
based on this, we present a perturbative treatment of the closed set of equations while keeping the important non-perturbative properties. section [ sec : pt_vs_n-body ] gives the main result of this paper, in which a detailed comparison between the improved pt calculation and n-body simulations is made, especially focusing on the non-linear evolution of baos. we compute the power spectrum and two-point correlation function in both real and redshift spaces, and investigate the accuracy of both predictions by comparing the improved pt with the n-body results. finally, section [ sec : conclusion ] is devoted to the discussion and conclusion. throughout the paper, we consider the evolution of the cold dark matter ( cdm ) plus baryon system, neglecting the tiny fraction of ( massive ) neutrinos. owing to the single-stream approximation of the collisionless boltzmann equation, which is thought to be quite accurate on large scales, the evolution of the cdm plus baryon system can be treated as an irrotational and pressureless fluid whose governing equations are the continuity and euler equations in addition to the poisson equation ( see ref. for a review ). in the fourier representation, these equations are further reduced to a more compact form. let us introduce the two-component vector ( e.g., ): where the subscript selects the density and the velocity components of cdm plus baryons, with and , where and are the scale factor of the universe and the hubble parameter, respectively. the function is given by and the quantity being the linear growth factor. then, in terms of the new time variable , the evolution equation for the vector quantity becomes \left[\delta_{ab}\,\partial_\eta+\Omega_{ab}(\eta)\right]\phi_b({\mbox{\boldmath$k$}};\eta)=\int\frac{d^3{\mbox{\boldmath$k$}}_1\,d^3{\mbox{\boldmath$k$}}_2}{(2\pi)^3}\,\delta_d({\mbox{\boldmath$k$}}-{\mbox{\boldmath$k$}}_1-{\mbox{\boldmath$k$}}_2)\,\gamma_{abc}({\mbox{\boldmath$k$}}_1,{\mbox{\boldmath$k$}}_2)\,\phi_b({\mbox{\boldmath$k$}}_1;\eta)\,\phi_c({\mbox{\boldmath$k$}}_2;\eta), \label{eq:vec_fluid_eq} where is the dirac delta function. here and in what follows, we use the summation convention that repetition of the same subscripts indicates a sum over the vector components. the time-dependent matrix is given by , the quantity being the density parameter of cdm plus baryons at a given time. each component of the vertex function is given explicitly by eq. ( [ eq : def_gamma ] ). note that the formal solution of eq. ( [ eq : vec_fluid_eq ] ) is expressed as ( e.g., ) , where the quantity is the constant vector which specifies the initial condition ( see next section ), and the quantity denotes the linear propagator satisfying \left[\delta_{ab}\,\partial_\eta+\Omega_{ab}(\eta)\right]g_{bc}(\eta,\eta')=0, \label{eq:linear_prop} with the boundary condition . the quantity is the random density field given at an early time, which is assumed to obey gaussian statistics. the power spectrum of the density field is defined by eq. ( [ eq : def_p_0 ] ). eq. ( [ eq : vec_fluid_eq ] ) or ( [ eq : formal_sol ] ) is the fundamental building block of large-scale structure, and the three quantities , and introduced here constitute the basic pieces of standard pt. their graphical representation is shown in fig. [ fig : diagram_basic ] ( see also ref. ). [ fig. [ fig : diagram_basic ] : diagrammatic representation of the basic pieces of standard pt ; the explicit expression of the vertex function is given by eq. ( [ eq : def_gamma ] ). ]
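to make the background quantities entering the matrix concrete, the following short sketch ( not taken from the paper ; a minimal illustration assuming a flat lambda-cdm background with an illustrative matter density, and taking the time variable as the logarithm of the growth factor ) integrates the standard linear growth equation:

```python
# a minimal sketch (not from the paper): linear growth factor D(a) and
# omega_m(a) for an assumed flat lambda-cdm background, from the standard
# growth equation  D'' + (2 - 1.5*om(a)) D' - 1.5*om(a) D = 0,  ' = d/dln a.
import numpy as np
from scipy.integrate import solve_ivp

omega_m0 = 0.28                                  # illustrative value only

def omega_m(a):
    e2 = omega_m0 / a**3 + (1.0 - omega_m0)      # flat universe, w = -1
    return omega_m0 / a**3 / e2

def growth_ode(lna, y):
    d, dp = y
    om = omega_m(np.exp(lna))
    return [dp, -(2.0 - 1.5 * om) * dp + 1.5 * om * d]

lna = np.linspace(np.log(1e-3), 0.0, 400)
# deep in matter domination D grows like a, so start with D = a and D' = a
sol = solve_ivp(growth_ode, (lna[0], lna[-1]), [1e-3, 1e-3],
                t_eval=lna, rtol=1e-8)
d_plus = sol.y[0] / sol.y[0][-1]                 # normalise D(a=1) = 1
eta = np.log(d_plus)                             # assumed time variable, ln D
print("D(z=1) =", float(np.interp(np.log(0.5), lna, d_plus)))
```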
in this paper, we are especially concerned with the non-linear evolution of the two-point statistics, defined as the ensemble average of : in the above, there are four types of power spectra, , , and , which respectively correspond to the auto- and cross-power spectra, , , and . note that in general we have unless . consider how to compute the power spectrum based on the analytic treatment. in the standard treatment of perturbation theory, we first assume that the field is a small perturbed quantity and expand it as . the explicit functional form of the quantity is systematically derived through an order-by-order treatment of eq. ( [ eq : vec_fluid_eq ] ). substituting the above expansion into the definition ( [ eq : def_pk ] ) and evaluating it perturbatively, the power spectrum, abbreviated as , is schematically expressed as , where we chose , which implies that the growing-mode solution is imposed as the initial condition. the function is the linear power spectrum given at an early time, obtained from the first-order quantity ( see eq. ( [ eq : def_p_0 ] ) for the definition ). the subsequent terms and represent the corrections to the linear-order perturbation, arising from the higher-order quantities , , , . in terms of the basic pieces of the diagrams shown in fig. [ fig : diagram_basic ], the corrections and can be diagrammatically written as the one-loop and two-loop diagrams, i.e., connected diagrams including one and two closed loops ( e.g., see fig. 5 in ref. ), and they are roughly proportional to and , where . the explicit expressions for the power spectra, together with the solutions of the higher-order perturbations, are summarized in appendix [ app : spt ]. it should be noted that in the standard pt expansion, the positivity of the perturbative corrections is not guaranteed. as we show later, the one- and two-loop contributions change sign depending on the scale, and the absolute values of their amplitudes become comparable at lower redshift. in this respect, the standard pt has a poor convergence property, and an improvement of the pt predictions is not always guaranteed even when the higher-order corrections are included. by contrast, renormalized pt re-organizes the naive expansion of the standard pt by introducing non-perturbative statistical quantities. in terms of these quantities, a partial resummation of the naive expansion series is made, and the resultant convergence of the expansion is dramatically improved. in the renormalized pt, the power spectrum is expressed in the form , with being the time at which the initial condition is imposed.
here, is the power spectrum given at an early time. the quantity is one of the non-perturbative statistical quantities, called the non-linear propagator, together with the non-linear power spectrum . it is defined by , where stands for a functional derivative. the propagator describes the influence of an infinitesimal disturbance of on , and it coincides with the linear propagator in the limit . note that there is another non-perturbative statistical quantity called the full vertex , which is the non-linear counterpart of the vertex function . in the expression ( [ eq : rpt_expansion ] ), the term represents the corrections coming from the loop diagrams. in contrast to the standard pt, the loop diagrams in are wholly _ irreducible _ , as a result of the renormalization or re-organization. further, each of the irreducible diagrams consists of the non-perturbative quantities : the non-linear power spectrum, the non-linear propagator and the full vertex. in this respect, renormalized pt is a fully non-perturbative formulation, and even the expansions truncated at some level still contain the higher-order effects of non-linear gravitational evolution. this is the basic reason why the convergence properties of the renormalized pt are expected to be improved. as a trade-off, however, a straightforward application of renormalized pt seems difficult because of its non-perturbative formulation. while the term collects only the irreducible diagrams, it is expressed as an infinite sum of loop diagrams, each of which involves the non-linear power spectrum itself. in practice, some approximation or simplification is needed to evaluate the expression ( [ eq : rpt_expansion ] ), which we will discuss in the next subsection. in this subsection, taking advantage of the formulation of renormalized pt, we discuss how to approximately treat eq. ( [ eq : rpt_expansion ] ) without losing its non-perturbative aspects as much as possible. in the framework of renormalized pt, the non-perturbative effects on the power spectrum are largely attributed to the non-linear propagator. thus, it seems essential to give a framework that treats both the non-linear propagator and the power spectrum on an equal footing. as has been pointed out by ref. , a renormalized expansion similar to that for the power spectrum ( [ eq : rpt_expansion ] ) can be made for the non-linear propagator: , where the term represents the mode-coupling correction, which is also made of an infinite sum of irreducible loop diagrams. in order to give a self-consistent treatment of both eqs. ( [ eq : rpt_expansion ] ) and ( [ eq : rpt_expansion2 ] ), a simple but transparent approach is to first ( i ) adopt the tree-level approximation of the full vertex function, and then to ( ii ) apply a truncation procedure to the mode-coupling terms. this treatment has frequently been used in the statistical theory of turbulence to deal with the navier-stokes equation, and is called the _ closure approximation _ . in the first approximation ( i ), the full vertex function is simply replaced with the linear-order one, i.e., defined in eq. ( [ eq : def_gamma ] ). as for the truncation ( ii ), the simplest choice is to keep only the one-loop renormalized diagram and to discard all other contributions.
with this approximation, the mode-coupling terms in and are simply described by and . the analytical expressions for the one-loop contributions are given by eqs. ( [ eq : p_mc_cla ] ) and ( [ eq : g_mc_cla ] ); their integrands contain the function , which represents the non-linear mode-coupling between different fourier modes, given by . note that the mode-coupling function possesses the following symmetry : . the diagrams corresponding to the integral expressions for the power spectrum and the non-linear propagator, i.e., eqs. ( [ eq : rpt_expansion ] ) and ( [ eq : rpt_expansion2 ] ) with the mode-coupling terms ( [ eq : p_mc_cla ] ) and ( [ eq : g_mc_cla ] ), are shown in fig. [ fig : diagram_cla ]. [ fig. [ fig : diagram_cla ] : closed system of equations for the power spectrum and the propagator ; in the renormalized pt the mode-coupling term is an infinite sum of irreducible loop corrections, and truncating this sum at one-loop order while adopting the tree-level approximation of the full vertex function yields the closed system shown in the figure. ] it is worth mentioning that the integral equations ( [ eq : rpt_expansion ] ) and ( [ eq : rpt_expansion2 ] ) with the truncated mode-coupling terms ( [ eq : p_mc_cla ] ) and ( [ eq : g_mc_cla ] ) can be recast in the form of integro-differential equations, and both the power spectrum and the non-linear propagator can then be computed by solving the evolution equations. this forward treatment seems especially suited for a fully non-linear treatment of the closure approximation and would be faster than directly treating the integral equations. a numerical algorithm to solve the evolution equations, together with preliminary results, is presented in detail in ref. ( see also ). in the present paper, we are especially concerned with the evolution of baos around , where the non-linearity of gravitational clustering is rather mild, and an analytical treatment, even involving some approximations, is still useful. here, employing the born approximation, we analytically evaluate the integral equations ( [ eq : rpt_expansion ] ) and ( [ eq : p_mc_cla ] ). a fully numerical study of baos without the born approximation will be discussed in a separate paper. the born approximation is an iterative approximation scheme in which the leading-order solutions are first obtained by replacing the quantities in the non-linear integral terms with their linear-order counterparts. the solutions can then be improved by repeatedly substituting the leading-order solutions back into the non-linear integral terms. consider the time evolution of the power spectrum starting from the time . for a sufficiently small value of , the early-time evolution of the power spectrum is well approximated by linear theory. assuming the growing-mode initial condition, we have with .
then, substituting eq. ( [ eq : initial_pk ] ) into ( [ eq : rpt_expansion ] ), the iterative evaluation of the integral equations ( [ eq : rpt_expansion ] ) with ( [ eq : p_mc_cla ] ) by the born approximation leads to , where we define . the terms and respectively represent the leading- and next-to-leading order results of the born approximation to the mode-coupling term ( [ eq : p_mc_cla ] ). the explicit expressions become . the kernels and are respectively given by . the diagram corresponding to the above expressions is shown in fig. [ fig : born ]. note that in deriving the expression ( [ eq : pk_born ] ) we do not expand the propagators, so their non-perturbative properties still hold. in order to evaluate eq. ( [ eq : pk_born ] ), we use the analytic solution of derived in ref. , where the non-linear propagator was constructed approximately by matching the asymptotic behaviors at low- and high-k modes, based on eqs. ( [ eq : rpt_expansion2 ] ) with ( [ eq : g_mc_cla ] ). the resultant analytic solution behaves like at , where the quantity is the bessel function with its argument , and the velocity dispersion is approximately described by linear theory, i.e., . note that the final results for the power spectrum are somewhat sensitive to the high- behavior of the propagator, and a naive application of the approximate solution leads to a slight shift in the amplitude of the power spectrum. while this is not at all serious for the leading-order calculation, it amounts to a percent-level shift when we consider the higher-order correction, . as discussed by crocce & scoccimarro ( 2008 ), one possible reason for this may be a small contribution from the sub-leading corrections in the propagator. in order to remedy the effect of these small corrections, we follow the method proposed by ref. . we define ^{1/2}, where denotes the non-linear matter power spectrum. then, the sub-leading correction can be accounted for by simply multiplying by the factor , i.e., . note that this treatment is applied only to the propagator in the lowest-order term in eq. ( [ eq : pk_born ] ), which most sensitively affects the power spectrum amplitude on small scales. for simplicity, we use ` halofit ` to compute and adopt the cutoff wavenumber , , where is the non-linear scale defined by ref. . in the rest of this paper, we present the results of the analytic treatment based on the expression ( [ eq : pk_born ] ). in computing the mode-coupling terms and , we must first evaluate the functions and for a given set of arguments, which involve one- and two-dimensional integrals over time. we use gaussian quadrature for these time integrations. as for the momentum integrals in the mode-coupling terms, thanks to the symmetry of the functions and , the multi-dimensional integrals in and can be reduced to two- and four-dimensional integrals, respectively. we use gaussian quadratures for the momentum integral in . the four-dimensional momentum integration in the mode-coupling term is performed with a quasi-random monte carlo technique using the library ` cuba `. finally, we note that the formulation and analytic treatment presented here have several distinctions from, and similarities to, other non-perturbative calculations proposed recently.
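the full bessel-function form of the propagator and the correction factor are not reproduced above ; purely as a rough, clearly labeled stand-in, the sketch below only computes the linear velocity dispersion that controls the high- damping, together with a gaussian-damped toy propagator ( the gaussian form and the function names are illustrative assumptions, not the paper's expressions ):

```python
# schematic stand-in (not the paper's expression): the linear velocity
# dispersion,  sigma_v^2 = (1/6 pi^2) \int dq P_lin(q),  which sets the
# high-k damping scale of the propagator, and a gaussian-damped toy
# propagator; the paper instead matches low- and high-k asymptotics with a
# bessel-function form and rescales the lowest-order term using `halofit`.
import numpy as np

def sigma_v(P_lin, qmin=1e-4, qmax=10.0, nq=4000):
    q = np.logspace(np.log10(qmin), np.log10(qmax), nq)
    return np.sqrt(np.trapz(P_lin(q), q) / (6.0 * np.pi**2))

def toy_propagator(k, P_lin, growth=1.0):
    sv = sigma_v(P_lin)
    return np.exp(-0.5 * (k * sv * growth) ** 2)   # illustrative damping only
```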
in appendix [ app : comparison ], we compare the present work with a subset of these treatments, and discuss how the approach developed here is complementary to or expands on these studies. in this section, particularly focusing on the baos, we compare the improved pt predictions from the analytic treatment of the closure approximation with the results of n-body simulations. we use the publicly available cosmological n-body code ` gadget2 `. we ran two sets of simulations, ` wmap3 ` and ` wmap5 `, in which we adopt the standard lambda cdm model with cosmological parameters determined from wmap3 and wmap5, respectively. the ` wmap3 ` run is basically the same n-body run as described in ref. , and a quantitative comparison between the leading-order results of the improved pt and simulations has been made previously. we mainly use the results of the ` wmap3 ` run to check the consistency of the present calculations with the previous work. the ` wmap3 ` run is also helpful to cross-check the convergence properties of the new simulation, ` wmap5 `, which increases the number of realizations to . table [ tab : n-body_params ] summarizes the parameters used in the simulations. the initial conditions were created with the ` 2lpt ` code at initial redshift , based on the linear transfer function calculated with ` camb `. the number of meshes used in the particle-mesh computation is . we adopt a softening length of for the tree forces. we store three output redshifts for the ` wmap3 ` run, whereas we select four output redshifts for the ` wmap5 ` run ; , , and ( ` wmap3 ` ) : , , , and ( ` wmap5 ` ). using these outputs, we compute the power spectrum and two-point correlation function in both real and redshift spaces. the calculation of the matter power spectrum adopted here is basically the same treatment as in ref. . the standard method to compute the power spectrum is to square the fourier transform of the density field and to take an average over realizations and fourier modes. this is given by , where and are the number of fourier modes in the -th wavenumber bin and the number of realizations, and and are the minimum and maximum wavenumbers of the -th bin, respectively. the quantity denotes the density field in fourier space obtained from the -th realization. we use cloud-in-cells interpolation for the density assignment of particles onto the mesh, and correct for the window function. note that the power spectra measured with the standard treatment above suffer from the effect of finite-mode sampling discussed in ref. : the resultant power spectrum deviates from the prediction for the ideal ensemble average, and exhibits an anomalous growth of the power spectrum amplitude on large scales. in order to reduce the effect of finite-mode sampling at , we multiply the measured power spectrum by the ratio , , where the quantity is calculated from perturbation theory up to third order in the density field, and is the input linear power spectrum extrapolated to a given output redshift. note that in computing , we use the gaussian-sampled density field used to generate the initial condition of each n-body run. with this treatment, the individual random nature of each n-body run is weakened, and the errors associated with the anomalous growth are reduced. the same correction is applied to the power spectrum in redshift space ; to be precise, we compute the multipole moments of the redshift-space power spectrum, and the ratio , , is multiplied for each multipole spectrum ( see sec. [ subsubsec : pk_in_red ] ).
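for orientation, a stripped-down version of such a power-spectrum measurement might look as follows ( a sketch with assumed array names, not the analysis code used in the paper ; mode counting in the hermitian half-space and the finite-mode correction are omitted ):

```python
# minimal sketch: shell-averaged power spectrum from a periodic density
# contrast field `delta` on an n^3 mesh of box size `boxsize`, with a simple
# deconvolution of the cloud-in-cells assignment window.  illustrative only.
import numpy as np

def measure_pk(delta, boxsize, nbins=40):
    n = delta.shape[0]
    kf = 2.0 * np.pi / boxsize                       # fundamental mode
    kny = np.pi * n / boxsize                        # nyquist wavenumber
    delta_k = np.fft.rfftn(delta) * (boxsize / n) ** 3
    kx = np.fft.fftfreq(n, d=1.0 / n) * kf
    kz = np.fft.rfftfreq(n, d=1.0 / n) * kf
    gx, gy, gz = np.meshgrid(kx, kx, kz, indexing="ij")
    kmag = np.sqrt(gx**2 + gy**2 + gz**2)
    # cic window: product of squared sinc factors along each axis
    w = (np.sinc(gx / (2 * kny)) * np.sinc(gy / (2 * kny))
         * np.sinc(gz / (2 * kny))) ** 2
    pk3d = np.abs(delta_k / np.where(w > 0, w, 1.0)) ** 2 / boxsize**3
    edges = np.linspace(kf, kny, nbins + 1)
    idx = np.digitize(kmag.ravel(), edges)
    pk = np.array([pk3d.ravel()[idx == i].mean() if np.any(idx == i) else np.nan
                   for i in range(1, nbins + 1)])
    return 0.5 * (edges[1:] + edges[:-1]), pk
```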
for the estimation of the two-point correlation function, we adopt a grid-based calculation using the fast fourier transform ( fft ). in this treatment, similar to the power spectrum analysis, we first compute the square of the density field on each grid point in fourier space. then, applying the inverse fourier transform, we take the average over realizations and distance, and obtain the two-point correlation function. schematically, this is expressed as , \label{eq : estimator_xi} where the operation stands for the inverse fft of the squared density field on each grid. note here that is simply chosen at the center of the -th radial bin, i.e., . eq. ( [ eq : estimator_xi ] ) usually suffers from an ambiguity in the zero-point normalization of the amplitude of the two-point correlation function, because of the lack of low- power due to the finite boxsize of the simulations. with the grids and the boxsize of , however, we can safely evaluate the two-point correlation function around the baryon acoustic peak. a comparison between different computational methods, together with a convergence check of this method, is presented in appendix [ app : computing_tpcf ]. finally, similar to the estimation of the power spectrum, the finite-mode sampling also affects the calculation of the two-point correlation function. we thus correct it by subtracting and adding the extrapolated linear density field as , , where is the correlation function estimated from the gaussian density field, and is the linear theory prediction of the two-point correlation function. [ table [ tab : n-body_params ] : parameters of the n-body simulations. ] [ fig. [ fig : convergence ] : each contribution to the power spectrum is separately plotted. in the left panel, the one-loop and two-loop corrections in the standard pt, and , are plotted, while in the right panel the mode-coupling corrections and in the improved pt, given in eqs. ( [ eq : cla_mc1 ] ) and ( [ eq : cla_mc2 ] ), are shown ( labeled mc1 and mc2 ), together with the first term in eq. ( [ eq : pk_born ] ) ( labeled g ). dashed lines indicate negative values. ] before addressing a quantitative comparison between the n-body simulations and the improved pt, we first discuss the convergence properties of the improved pt, and consider how much the calculation based on the improved pt improves the prediction compared to the standard pt. fig. [ fig : convergence ] plots the overall behavior of the non-linear power spectrum of density fluctuations, , given at , adopting the ` wmap3 ` cosmological parameters. in the left panel, the results of standard pt are shown, and the contributions to the total power spectrum up to the two-loop diagrams are separately plotted. on the other hand, the right panel shows the results of the improved pt. we plot the contributions up to the second-order born approximation, labeled mc1 and mc2. in fig. [ fig : convergence ], there are clear distinctions between the standard and improved pts.
while the loop corrections in the standard pt change sign depending on the scale and exhibit an oscillatory feature, the corrections coming from the born approximation in the improved pt are all positive and are mostly smooth functions of . further, the higher-order corrections in the improved pt have a remarkable scale-dependent property compared to those in the standard pt ; their contributions are well localized around some characteristic wavenumbers, and they are shifted to higher modes as the order of pt increases. these trends clearly indicate that the improved pt with the closure approximation has a better convergence property. the qualitative behavior of the higher-order corrections quite resembles the predictions of rpt by crocce & scoccimarro ( 2008 ). now, let us focus on the behavior of baos, and discuss how the convergence properties seen in fig. [ fig : convergence ] affect the predictions of bao features. in fig. [ fig : ratio_pk_real ], adopting the ` wmap3 ` cosmological parameters, we plot the ratio , , where the function is the linear power spectrum from the smooth transfer function neglecting the bao feature in ref. . in the left panel, the n-body simulations are compared with the leading-order pt predictions, i.e., standard pt including the one-loop correction ( dashed ), and improved pt with the first-order born correction ( solid ). apart from the wiggle structure, the amplitude of the standard pt prediction monotonically increases with wavenumber and tends to overestimate the results of the n-body simulations. on the other hand, the amplitude of the improved pt prediction rapidly falls off beyond a certain wavenumber, and the deviation from the n-body results becomes significant. however, a closer look at the behavior on large scales reveals that the improved pt prediction gives better agreement with the simulations. the results are indeed consistent with the previous findings in ref. . the situation becomes more impressive when we add the next-to-leading order corrections. as shown in the right panel, the improved pt gains power on smaller scales and reproduces the n-body results over a wider range of wavenumbers. by contrast, the prediction of the standard pt, depicted as dashed lines, is a little more subtle. compared to the one-loop results, the amplitude of the standard pt prediction including the two-loop correction is slightly reduced, and the agreement with the n-body simulations appears to improve a bit at higher redshift. at lower redshift, however, the correction coming from the two-loop order becomes significant, and the prediction eventually underestimates the simulations. the reason for this behavior basically comes from the competition between the positive and negative contributions of the one-loop and two-loop corrections, respectively ( see the left panel of fig. [ fig : convergence ] ). these are consistent with the findings in ref. ( see fig. 1 of their paper ). [ fig. [ fig : ratio_pk_real ] : ratio of power spectra , given at redshifts (top), (middle) and (bottom). cosmological parameters used in the wmap3 simulations are adopted to compute the power spectrum from standard pt and improved pt, and the results are compared with n-body simulations ( symbols with error bars ). the reference spectrum is calculated from the no-wiggle formula of the linear transfer function in ref. . in each panel, dotted, dashed and solid lines represent the linear, standard pt and improved pt results, respectively ; the left panel shows the leading-order results, and the right panel includes the higher-order corrections. ]
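as a rough illustration of how such a wiggle-only comparison can be produced ( this sketch is not from the paper, which uses the analytic no-wiggle transfer function of ref. ; here a simple log-log polynomial fit to the linear spectrum serves as a stand-in reference ):

```python
# stand-in sketch: build a smooth reference by fitting a low-order polynomial
# to log P_lin(log k) and divide the spectra by it so that the bao wiggles
# stand out.  `k`, `p_lin` and `p_model` are assumed arrays.
import numpy as np

def no_wiggle_reference(k, p_lin, order=7):
    coeff = np.polyfit(np.log(k), np.log(p_lin), order)
    return np.exp(np.polyval(coeff, np.log(k)))

def bao_ratio(k, p_model, p_lin):
    return p_model / no_wiggle_reference(k, p_lin)
```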
[ fig. [ fig : ratio_pk_real2 ] : fractional difference between the n-body results and the pt predictions divided by the smoothed reference spectrum. the left panel shows the results for standard pt up to the two-loop order ; the right panel presents the case of improved pt including the corrections up to the second-order born approximation of the mode-coupling term. in both panels, vertical arrows represent the wavenumbers of standard and improved pt ( from left to right ), below which the leading-order pt predictions reproduce the n-body simulations well within the quoted accuracy ( see text for details ). ] in fig. [ fig : ratio_pk_real2 ], to clarify the range of agreement in a more quantitative way, we plot the fractional difference divided by the smoothed reference spectra, /p_{\rm no\mbox{-}wiggle}. [ fig. [ fig : ratio_pk2_alpha ] ( caption, in part ) : in each panel, vertical arrows represent the wavenumbers for the leading-order predictions of standard and improved pt ( from left to right ). ] [ figure ( label not preserved ) : results at , , and . the improved pt predictions plotted here include the corrections up to the second-order born approximation of the mode-coupling term, . left : ratio of the power spectrum to the smoothed reference spectrum, . solid and dotted lines are the improved pt and linear theory predictions, respectively. right : difference between the n-body and improved pt results normalized by the no-wiggle formula, /p_{\rm no\mbox{-}wiggle}(k). in both panels, the symbols with error bars indicate the n-body results averaged over the realizations, with the finite-mode sampling correction applied : ( open stars ), ( open squares ), ( filled triangles ), and ( crosses ). ] having confirmed the excellent properties of the improved pt, we now turn to the baryon acoustic peak in the two-point correlation function. the two-point correlation function can be computed from the power spectrum as . the top panel of fig. [ fig : xi_real ] shows the two-point correlation functions around the baryon acoustic peak at different redshifts and ( from top to bottom ), adopting the cosmological parameters. the lower panel plots the fractional differences between the n-body and improved pt results, i.e., /\xi_{\rm pt}(r).
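the quadrature behind this fourier transform can be sketched as follows ( illustrative assumptions : a hypothetical callable for the spectrum and a mild gaussian cutoff to control the oscillatory high- tail ):

```python
# minimal sketch: xi(r) = (1/2 pi^2) \int dk k^2 P(k) j0(kr), with
# j0(x) = sin(x)/x, evaluated by direct quadrature; `P` is a hypothetical callable.
import numpy as np

def xi_from_pk(r, P, kmin=1e-4, kmax=10.0, nk=20000, kdamp=2.0):
    k = np.linspace(kmin, kmax, nk)
    pk = P(k) * np.exp(-(k / kdamp) ** 2)      # soften the uv oscillations
    j0 = np.sinc(np.outer(r, k) / np.pi)       # np.sinc(x) = sin(pi x)/(pi x)
    return np.trapz(k**2 * pk * j0, k, axis=1) / (2.0 * np.pi**2)

# e.g. r = np.linspace(60.0, 160.0, 100); xi = xi_from_pk(r, P)
```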
[ fig. [ fig : ratio_pk2_error_red ] : redshift-space power spectra ; the left and right panels respectively show the results for the monopole and quadrupole power spectra, and the improved pt predictions are computed based on the model ( [ eq : redshift_model ] ) adopting the fitted value of . for comparison, the statistical errors limited by the cosmic variance of survey volumes roughly corresponding to those of a wfmos-like survey and boss are shown as shaded regions in the panels of , and , assuming survey volumes of , and , respectively. in each panel, the vertical arrow indicates the maximum wavenumber determined from fig. [ fig : ratio_pk2_alpha ] by comparison between the n-body and improved pt results. ] [ figure ( label not preserved ) : fractional differences of the redshift-space multipoles, / p_{\ell,{\rm no\mbox{-}wiggle}}^{\rm(s)}(k), for different redshifts at ( open stars ), ( open squares ), ( filled triangles ), and ( crosses ). ] [ figure ( label not preserved ) : redshift-space correlation function multipoles ; the solid and dotted lines are the predictions from the improved pt based on the model ( [ eq : redshift_model ] ) and linear theory, respectively. only the leading-order born approximation to the mode-coupling term is included in the improved pt : ( red ) ; ( magenta ) ; ( cyan ) ; ( green ). for comparison, the statistical errors limited by the cosmic variance of survey volumes , and are estimated from eq. ( [ eq : delta_xi ] ) and are depicted as shaded regions around the n-body results at , and , respectively. the cosmic-variance error for the hexadecapole is not shown because of its large scatter. bottom : fractional differences between the n-body simulations and the improved pt predictions, /\xi_{\rm pt}(s), for different redshifts at ( open stars ), ( open squares ), ( filled triangles ), and ( crosses ). ] in this paper, we have presented the improved pt calculations of the matter power spectrum and two-point correlation function in real and redshift spaces. based on the closure approximation of the renormalized pt treatment, a closed set of non-perturbative expressions for the power spectrum and propagator is obtained. the resultant expression includes the effect of resummation of a class of loop diagrams at infinite order, and thereby the convergence of the higher-order contributions is expected to be improved. employing the born approximation, we have analytically calculated the non-linear power spectrum and compared the convergence properties of the improved pt with those of the standard pt by explicitly computing the higher-order corrections. we have also made a detailed comparison between the improved pt results and n-body simulations. with the large boxsize and the many realizations of the n-body simulations, the statistical errors of the two-point statistics are greatly reduced by correcting for the effect of finite-mode sampling, and this enables us to check the convergence of the numerical and analytic calculations at the percent level. then, specifically focusing on the behavior of baos, the power spectrum and two-point correlation functions are calculated in both real and redshift spaces. in redshift space, the effect of the redshift-space distortion, which changes the clustering pattern of the mass distribution, should be incorporated into the improved pt predictions.
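the detailed form of the model ( [ eq : redshift_model ] ) is not reproduced here ; purely as a schematic illustration of how a velocity-dispersion parameter can be fitted to measured multipoles, the sketch below uses a generic gaussian-damped kaiser-like form ( the model itself and all function and array names are assumptions for illustration only ):

```python
# schematic only: fit a velocity dispersion sigma_v by least squares to
# measured monopole/quadrupole data, using a generic damped kaiser-like
# stand-in for the redshift-space model; not the paper's eq. ([eq:redshift_model]).
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre
from scipy.optimize import minimize_scalar

mu, wmu = leggauss(32)

def multipoles(k, f, pdd, pdt, ptt, sigma_v, ells=(0, 2)):
    damp = np.exp(-(np.outer(k, mu) * f * sigma_v) ** 2)
    pkmu = damp * (pdd[:, None] + 2 * f * mu**2 * pdt[:, None]
                   + f**2 * mu**4 * ptt[:, None])
    return [(2 * l + 1) / 2.0 * np.sum(pkmu * Legendre.basis(l)(mu) * wmu, axis=1)
            for l in ells]

def fit_sigma_v(k, f, pdd, pdt, ptt, p0_data, p2_data):
    def chi2(sv):
        p0, p2 = multipoles(k, f, pdd, pdt, ptt, sv)
        return np.sum((p0 - p0_data) ** 2 + (p2 - p2_data) ** 2)
    return minimize_scalar(chi2, bounds=(0.1, 15.0), method="bounded").x
```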
in this paper, adopting the model proposed by ref. ( eq. ( [ eq : redshift_model ] ) ), we have quantified the extent to which the current model description faithfully reproduces the n-body results, and clarified the key ingredients toward an improved prescription of the redshift-space distortion. our important findings are summarized as follows :
* the improved pt expansion based on the born approximation has better convergence properties, in marked contrast with the standard pt expansion. the corrections coming from the mode-coupling term are well-localized, positive functions of wavenumber, and their contributions tend to be shifted to higher as the order of perturbation increases. thus, the inclusion of higher-order corrections stably improves the prediction, and the range of agreement with the n-body results becomes wider in wavenumber.
* for the real-space power spectrum, the improved pt prediction including up to the second-order born correction seems essential for modeling the baos precisely. we estimated the maximum wavenumber , below which the results of both the n-body simulation and the improved pt calculation converge well within the accuracy . the resultant value of can be summarized as eq. ( [ eq : k_criterion ] ) with the constant value , which provides a way to estimate in a cosmology-independent manner. on the other hand, if we consider the two-point correlation function in real space, the leading-order calculation turns out to be sufficiently accurate, and no higher-order correction is needed to describe the non-linear evolution of the baryon acoustic peak seen in the n-body simulations.
* modeling the redshift-space power spectrum with eq. ( [ eq : redshift_model ] ) gives results broadly consistent with the n-body simulations, if we regard the velocity dispersion as a fitting parameter. however, a discrepancy between the improved pt predictions and the n-body results appears in the quadrupole power spectrum, and it becomes larger than the statistical errors limited by the cosmic variance of a survey volume of a few . this is true even in the valid range of the improved pt, . on the other hand, while a small discrepancy has also been found in the two-point correlation function, it turns out that the discrepancy is well within the cosmic-variance error, and even the leading-order prediction using the linear theory estimate of can be used as an accurate theoretical template for future ground-based bao measurements.
the recently proposed techniques to deal with the non-linear gravitational clustering, including the present treatment, have been developed considerably, and they would be a promising cosmological tool to precisely model the shape and amplitude of the power spectrum and/or the correlation functions with sub-percent accuracy. combined with a model of redshift-space distortion, we are now able to discuss the non-linear clustering in redshift space. although the present paper is especially concerned with the analytical work, we note that the non-perturbative formulation with the closure approximation is suited to a forward treatment in time, in which all orders of the born approximation can be fully incorporated into the predictions by numerically solving the evolution equations. this approach would be particularly useful to study the non-linear matter power spectrum in general cosmological models, including modified theories of gravity.
finally, in practical applications to precision bao measurements, there are several remaining issues to be addressed in future work. the improvement of the model of redshift-space distortion is, of course, a very important and urgent task. the effect of galaxy biasing is also one of the key ingredients for an accurate theoretical template, and several attempts to take account of this effect have been made recently. another interesting direction is to develop a fast computation of the non-linear power spectrum or correlation function for an arbitrary cosmological model. recently, a statistical sampling method for precise power spectrum emulation has been proposed. in this treatment, only a limited set of cosmological models is needed to predict the power spectrum at the required accuracy over the prior parameter ranges. analytic approaches combined with this method may provide a fast and reliable way to estimate the two-point statistics, and the development of this method would be valuable. we would like to thank yasushi suto and alan heavens for comments and discussion, and thierry sousbie for teaching us an efficient computational method for the two-point correlation function. at is supported by a grant-in-aid for scientific research from the japan society for the promotion of science ( jsps ) ( no. 21740168 ). tn and ss acknowledge support from jsps fellows. this work was supported in part by grant-in-aid for scientific research on priority areas no. 467 `` probing the dark energy through an extremely wide and deep survey with subaru telescope '' , and the jsps core-to-core program `` international research network for dark energy '' . in this appendix, we briefly summarize the standard pt and derive a set of perturbative solutions. based on these solutions, we obtain the analytic expressions for the power spectrum up to the two-loop order. as we mentioned in sec. [ subsec : spt_vs_rpt ], standard pt is the straightforward expansion of the quantity , and the perturbative solutions are obtained by an order-by-order treatment of eq. ( [ eq : vec_fluid_eq ] ). in order to systematically derive the solutions, the einstein-de sitter ( eds ) approximation is often used in the literature. in the eds approximation, the matrix given by eq. ( [ eq : matrix_m ] ) is replaced with the one in the eds universe, i.e., and . this means that all the non-linear growth factors appearing in the higher-order solutions are expressed in terms of the linear growth factor. neglecting the contributions from the decaying mode, the resultant solution for is then expanded as , and the solution for each order of perturbation is expressed as , where is the initial density field, which we assume to obey gaussian statistics. the function is the symmetrized kernel of the -th order solution. the explicit expressions for the kernel are obtained from the recursion relation, which can be derived by substituting the expansion ( [ eq : expand ] ) with ( [ eq : pt_sol ] ) into eq. ( [ eq : vec_fluid_eq ] ) ( e.g., ): with and . here, the matrix is given by . note that the kernel given above is not yet symmetric under permutations of the arguments , , and it should be symmetrized : . using the perturbative solutions, the power spectrum defined by ( [ eq : def_pk ] ) is expanded as . here, the quantity denotes the ensemble average obtained from the -th and -th order perturbative solutions. in the above expression, the first term on the right-hand side is the linear power spectrum, while the second and third terms, proportional to the growth factors and , are respectively the so-called one-loop and two-loop corrections. the explicit expressions for these corrections become ( e.g., ) , where is the initial power spectrum of the density field defined by eq. ( [ eq : def_p_0 ] ), and we set . note that the expressions for the one-loop power spectra can be further reduced to one-dimensional and two-dimensional integrals for and , respectively ( e.g., ). in the results presented in sec. [ subsubsec : real_pk ], we used gaussian quadratures for the numerical integration of the one-loop power spectra. on the other hand, for the two-loop power spectra, the integration cannot be simplified except for the first term in , and we need to evaluate the six-dimensional integration directly. we adopted monte carlo integration for the two-loop power spectra. the integration kernels for each term are generated numerically using the recursion relation ( [ eq : recursion ] ) and the condition ( [ eq : symmetrize ] ).
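as a concrete illustration of the leading ( one-loop ) piece of these corrections, the density-density mode-coupling part can be reduced to a two-dimensional integral over the well-known symmetrized second-order kernel ; the sketch below ( illustrative only, with a hypothetical linear-spectrum interpolator `P_lin` ) evaluates it by quadrature:

```python
# illustrative sketch of the one-loop 'p22'-type correction for the density
# field,  p22(k) = (1/2 pi^2) \int q^2 dq \int_{-1}^{1} dmu
#                  [f2(q, k-q)]^2 P_lin(q) P_lin(|k-q|),
# using the standard symmetrized second-order kernel f2.  the p13 term and
# the two-loop integrals are evaluated analogously (the latter by monte
# carlo in the paper).  `P_lin` is a hypothetical interpolator.
import numpy as np

def p22(k, P_lin, qmin=1e-4, qmax=10.0, nq=300, nmu=64):
    q = np.logspace(np.log10(qmin), np.log10(qmax), nq)
    mu, wmu = np.polynomial.legendre.leggauss(nmu)
    qq, mm = np.meshgrid(q, mu, indexing="ij")
    kq = np.sqrt(k**2 + qq**2 - 2.0 * k * qq * mm)        # |k - q|
    c12 = (k * mm - qq) / kq                              # cos(q, k-q)
    f2 = 5.0 / 7.0 + 0.5 * c12 * (qq / kq + kq / qq) + (2.0 / 7.0) * c12**2
    integrand = qq**2 * f2**2 * P_lin(qq) * P_lin(kq)
    inner = np.sum(integrand * wmu, axis=1)               # gauss-legendre in mu
    return np.trapz(inner, q) / (2.0 * np.pi**2)
```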
in this appendix, we collect several recent works that attempt to improve the prediction of the power spectrum and/or two-point correlation function, and discuss their qualitative differences. a quantitative comparison of various analytic methods has recently been carried out in ref. . here, we specifically comment on the approaches proposed in refs. , which are very close to our treatment. crocce & scoccimarro ( 2008 ) : : : first let us mention the work by ref. . although the treatment presented in that paper is often quoted as rpt, strictly speaking it is an approximate treatment, which differs from the exact renormalized pt. as we mentioned in sec. [ subsec : spt_vs_rpt ], renormalized pt is an exact non-perturbative formulation without any approximations, and the power spectrum given by eq. ( [ eq : rpt_expansion ] ) is expressed as an infinite series of irreducible loop diagrams constructed from the non-linear propagator, the full vertex, and the non-linear power spectrum. to make the analysis tractable, they adopted the following approximations : ( i ) the renormalized vertex is well described by the ( linear ) vertex function ; ( ii ) the non-linear power spectra that enter the calculation of are all replaced with the linear-order ones. in our language, this corresponds to the first-order born approximation. then, using the approximate solution for the propagator in ref. , they explicitly calculated the power spectrum including the corrections up to the two-loop order. the diagrams that they actually computed are shown in fig. [ fig : rpt_born ]. + compared to our analytical treatment with the born approximation, there are two main differences. one is the higher-order corrections that appear in the diagrams ( see fig. [ fig : born ] ). another important difference is the asymptotic behavior of the non-linear propagator. at , the propagator used in their paper behaves like , which contrasts with in our closure approximation, where is the linear propagator and is defined by . these distinctive features come from the partial resummation of a different class of higher-order terms in constructing the approximate solution of the non-linear propagator ( see ref. for details ). despite these remarkable differences, it has been shown in ref.
that the leading-order calculations neglecting the higher-order terms ( the two-loop diagram or the second-order born correction ) produce results which are indistinguishable from each other. this is true at least on large scales, where the agreement between the n-body simulations and the improved pt predictions is better than a few percent. pietroni ( 2008 ) : : : next consider the method proposed by ref. , called the time-rg method. this method is based on a moment-based approach, and we first write down the moment equations. in general, this produces an infinite hierarchy of equations ; however, ref. assumes a vanishing trispectrum in order to truncate the hierarchy. as a result, a closed set of equations for the power spectrum is obtained, which couples to the evolution of the bispectrum in a non-perturbative way. the diagrammatic representation of these closed equations is shown in fig. [ fig : pietroni ], which can be compared with fig. [ fig : diagram_cla ] in our treatment. note that in the statistical theory of turbulence this truncation procedure is referred to as the _ quasi-normal approximation _ ( e.g., ), and it is known to have several drawbacks ; positivity of the energy spectrum is not ensured, and it fails to recover the kolmogorov spectrum in the inertial range of turbulence. + nevertheless, the advantage of this treatment, similar to our closure approximation, is that the power spectrum can be computed numerically by solving the evolution equations. this forward treatment seems quite efficient for bringing out the non-perturbative effects incorporated into the formalism, and it has a wide applicability to include various physical effects. recently, the formalism has been extended to deal with the effect of massive neutrinos. valageas ( 2007 ) : : : the method proposed by ref. is based on the path-integral formalism. starting from the action for the cosmological fluid equation ( [ eq : vec_fluid_eq ] ), which describes the statistical properties of the vector field , the large-n expansions known from quantum field theory have been applied to derive the governing equations for the power spectrum and propagator. in ref. , two kinds of expansions have been presented, leading to two different non-perturbative schemes, i.e., the steepest-descent method and the 2pi effective action method. although both methods consistently reproduce the standard pt at the one-loop level, the latter includes non-perturbative contributions which are not properly taken into account by the former. thus, the 2pi effective action method is expected to provide a better result. it is interesting to note that, despite the field-theoretical derivation, the resultant governing equations of the 2pi effective action method turn out to be mathematically equivalent to those obtained from the closure approximation. hence, the diagrammatic representation of this formalism is exactly the same as shown in fig. [ fig : diagram_cla ]. matsubara ( 2008a ) : : : finally, we briefly mention the treatment proposed by ref.
this is a lagrangian-based approach, and one begins by writing down the exact expression for the matter power spectrum in terms of the displacement vectors. the resultant expression is in exponential form, and perturbative expansions are then applied for the explicit calculation of the ensemble average. while a naive expansion of the displacement vectors, together with the solutions of lagrangian pt, merely reproduces the ( standard ) eulerian pt results, ref. applied a partial expansion in which some of the terms are kept in exponential form. this can be interpreted as a partial resummation of a class of infinite diagrams. the resultant expression for the power spectrum is quite similar to the one-loop result of standard pt, but differs slightly in that an exponential prefactor appears. as a consequence, the prediction reasonably recovers the damping behavior of the baos seen in the n-body simulations, and it also explains the smearing of the baryon acoustic peak in the two-point correlation function. + one noticeable point of this method is that it is rather straightforward to generalize the calculations in real space to those in redshift space, since the displacement vectors in redshift space are simply given by a linear mapping from those in real space. further, the computational cost is lower than for the other analytic methods. although the validity range of this method is restricted to a narrow range of low- modes, it would be very powerful for a fast computation of the two-point correlation function. in this paper, the grid-based calculation with fft has been used for computing the two-point correlation functions from the n-body data. here, we compare it with other computational methods and check their convergence. fig. [ fig : convergence_xi ] shows the two-point correlation functions measured at from a single realization of the ` wmap5 ` simulations. the upper-left panel shows the results from direct pair-counting. for each particle, we randomly select pairs, which are accumulated in each bin of separation, allowing for oversampling. the estimated values of the two-point correlation function are then plotted for different numbers of samples : , , and . the resultant total number of pairs, , indicated in the panel is given by , with being the total number of particles. note that the actual number of pairs that enters the plotted range is less than . on the other hand, the upper-right panel shows the results from the grid-based pair-counting introduced by barriga & gaztañaga ( 2002 ) ( see also ref. ). in this method, we first construct the density field on a grid of cells, and then estimate the correlation function through a pair count on the grids. compared to the direct pair-counting, this method is computationally efficient when we store the list of neighboring particles which contribute to a given bin of separation. we plot the results adopting two different numbers of cells, and .
in the lower-left panel, the grid-based calculation with fft ( see eq. ( [ eq : estimator_xi ] ) ) is used to compute the two-point correlation function, with different numbers of cells , , , and . note that we adopt in the analysis presented in sec. [ sec : pt_vs_n-body ]. finally, in the lower-right panel, the results for the three different methods with the largest number of pairs or grids are collected and compared with each other. to check the convergence, we further evaluate the residuals from the mean values, , and plot the results in each panel of fig. [ fig : convergence_xi ]. here, the mean values are estimated from the ensemble average over the three different results using the largest number of pairs or grids. as the numbers or increase, the results for the three different methods all approach the mean values, and agreement at the few-percent level is achieved over the range of our interest ( except for the vicinity of the zero-crossing point, ). it is interesting to note that the residuals obtained from the grid-based pair-count and fft methods almost coincide with each other and the differences are hard to distinguish, indicating that both methods are equivalent in practice. these experiments suggest that the grid-based calculation with fft is a reliable estimation method comparable to the other methods. it should be emphasized that the method using fft is much more efficient than the other pair-count methods. for example, using cores of processors, the direct pair-counting with takes about two weeks to produce the results shown in fig. [ fig : convergence_xi ]. the grid-based pair-counting is computationally less expensive than the direct pair-counting, but it still requires time-consuming calculations, especially for a large number of grids. by contrast, the method using fft requires only a few minutes even with , and this can be achieved with a single-node calculation.
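a compact version of this fft-based estimator, with assumed array names and illustrative normalization conventions, might look like:

```python
# minimal sketch of the grid-based estimator (eq. [eq:estimator_xi]): inverse
# fft of the squared fourier density field, then an average over radial bins.
# `delta` is the density contrast on an n^3 periodic mesh of box size `boxsize`.
import numpy as np

def xi_fft(delta, boxsize, rbins):
    n = delta.shape[0]
    dk = np.fft.rfftn(delta)
    # circular autocorrelation: <delta(x) delta(x+r)> averaged over the box
    xi_grid = np.fft.irfftn(np.abs(dk) ** 2, s=delta.shape) / delta.size
    x = np.fft.fftfreq(n, d=1.0 / n) * (boxsize / n)   # distance to nearest image
    gx, gy, gz = np.meshgrid(x, x, x, indexing="ij")
    r = np.sqrt(gx**2 + gy**2 + gz**2)
    idx = np.digitize(r.ravel(), rbins)
    xi = np.array([xi_grid.ravel()[idx == i].mean() if np.any(idx == i) else np.nan
                   for i in range(1, len(rbins))])
    return 0.5 * (rbins[1:] + rbins[:-1]), xi
```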
| we study the non-linear evolution of baryon acoustic oscillations in the matter power spectrum and correlation function from the improved perturbation theory ( pt ). based on the framework of renormalized pt, which provides a non-perturbative way to treat the gravitational clustering of large-scale structure, we apply the _ closure approximation _ that truncates the infinite series of loop contributions at one-loop order, and obtain a closed set of integral equations for the power spectrum and non-linear propagator. the resultant integral expressions are basically equivalent to those previously derived in the form of evolution equations, and they keep important non-perturbative properties which can dramatically improve the prediction of the non-linear power spectrum. employing the born approximation, we then derive analytic expressions for the non-linear power spectrum, and predictions are made for the non-linear evolution of baryon acoustic oscillations in the power spectrum and correlation function. we find that the improved pt possesses a better convergence property compared with the standard pt calculation. a detailed comparison between the improved pt results and n-body simulations shows that percent-level agreement is achieved over a certain range in the power spectrum and over a rather wider range in the correlation function. combining a model of non-linear redshift-space distortion, we also evaluate the power spectrum and correlation function in redshift space. in contrast to the results in real space, the agreement between the n-body simulations and the improved pt predictions tends to be worse, and a more elaborate modeling of redshift-space distortion needs to be developed. nevertheless, with the currently existing model, we find that the prediction of the correlation function is sufficiently accurate compared with the cosmic-variance errors for future galaxy surveys with volumes of a few at . |
the problem of reconstructing a signal from non-uniformly spaced measurements arises in areas as diverse as geophysics, medical imaging, communication engineering, and astronomy. a successful reconstruction of from its samples requires a priori information about the signal, otherwise the reconstruction problem is ill-posed. this a priori information can often be obtained from physical properties of the process generating the signal. in many of the aforementioned applications the signal can be assumed to be ( essentially ) band-limited. recall that a signal ( function ) is band-limited with bandwidth if it belongs to the space , given by where is the fourier transform of defined by . for convenience and without loss of generality we restrict our attention to the case , since any other bandwidth can be reduced to this case by a simple dilation. therefore we will henceforth use the symbol for the space of band-limited signals. it is now more than 50 years since shannon published his celebrated sampling theorem. his theorem implies that any signal can be reconstructed from its regularly spaced samples by . in practice however we seldom enjoy the luxury of equally spaced samples. the solution of the nonuniform sampling problem poses many more difficulties, the crucial questions being : * under which conditions is a signal uniquely defined by its samples ? * how can be stably reconstructed from its samples ? these questions have led to a vast literature on nonuniform sampling theory with deep mathematical contributions, see to mention only a few. there is also no lack of methods claiming to efficiently reconstruct a function from its samples. these numerical methods naturally have to operate in a finite-dimensional model, whereas theoretical results are usually derived for the infinite-dimensional space . from a numerical point of view the `` reconstruction '' of a bandlimited signal from a finite number of samples amounts to computing an approximation to ( or ) at sufficiently dense ( regularly ) spaced grid points in an interval . hence in order to obtain a `` complete '' solution of the sampling problem the following questions have to be answered : * does the approximation computed within the finite-dimensional model actually converge to the original signal when the dimension of the model approaches infinity ? * does the finite-dimensional model give rise to fast and stable numerical algorithms ? these are the questions that we have in mind when presenting an overview of recent advances and new results on the nonuniform sampling problem from a numerical analysis point of view. in section [ ss : truncated ] it is demonstrated that the celebrated frame approach leads to fast and stable numerical methods only when the finite-dimensional model is carefully designed. the approach usually proposed in the literature leads to an ill-posed problem even in very simple situations. we discuss several methods to stabilize the reconstruction algorithm in this case.
in section [ ss : trigpol ]we derive an alternative finite - dimensional model , based on trigonometric polynomials .this approach leads to a well - posed problem that preserves important structural properties of the original infinite - dimensional problem and gives rise to efficient numerical algorithms .section [ s : numeric ] describes how this approach can be modified in order to reconstruct band - limited signals for the in practice very important case when the bandwidth of the signal is not known .furthermore we present regularization techniques for ill - conditioned sampling problems .finally section [ s : applications ] contains numerical experiments from spectroscopy and geophysics .before we proceed we introduce some notation that will be used throughout the paper .if not otherwise mentioned always denotes the -norm ( -norm ) of a function ( vector ) . for operators ( matrices ) is the standard operator ( matrix ) norm .the condition number of an invertible operator is defined by and the spectrum of is . denotes the identity operator .the concept of frames is an excellent tool to study nonuniform sampling problems .the frame approach has the advantage that it gives rise to deep theoretical results and also to the construction of efficient numerical algorithms _ if _ ( and this point is often ignored in the literature ) the finite - dimensional model is properly designed .following duffin and schaeffer , a family in a separable hilbert space is said to be a frame for , if there exist constants ( the _ frame bounds _ ) such that we define the _ analysis operator _ by and the _ synthesis operator _ , which is just the adjoint operator of , by the _ frame operator _ is defined by , hence . is bounded by and hence invertible on .we will also make use of the operator in form of its gram matrix representation with entries .on the matrix is bounded by and invertible .on this inverse extends to the _ moore - penrose inverse _ or pseudo - inverse ( cf . ) . given a frame for ,any can be expressed as where the elements form the so - called dual frame and the frame operator induced by coincides with .hence if a set establishes a frame for , we can reconstruct any function from its moments .one possibility to connect sampling theory to frame theory is by means of the _ sinc_-function its translates give rise to a _ reproducing kernel _ for via combining with formulas and we obtain following well - known result .if the set is a frame for , then the function is uniquely defined by the sampling set . in this casewe can recover from its samples by or equivalently by with being the frame gram matrix with entries and .the challenge is now to find easy - to - verify conditions for the sampling points such that ( or equivalently the exponential system ) is a frame for .this is a well - traversed area ( at least for one - dimensional signals ) , and the reader should consult for further details and references .if not otherwise mentioned from now on we will assume that is a frame for . 
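for a concrete feel of the objects just introduced , the sketch below builds the gram matrix with entries sinc(t_j - t_k) for a finite nonuniform sampling set and looks at its extreme eigenvalues ; for a finite snapshot these only give a rough indication of the frame bounds , and the sampling density used here is an arbitrary choice of ours :

```python
import numpy as np

rng = np.random.default_rng(1)
# nonuniform sampling points with spacings strictly below the Nyquist gap (= 1 here)
t = np.cumsum(rng.uniform(0.5, 0.9, size=200))
R = np.sinc(t[:, None] - t[None, :])     # gram matrix of the sinc translates
ev = np.linalg.eigvalsh(R)
print(ev.min(), ev.max())                # crude proxies for the frame bounds A and B
print(ev.max() / ev.min())               # their ratio drives the numerical conditioning
```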
of course, neither of the formulas and can be actually implemented on a computer , because both involve the solution of an infinite - dimensional operator equation , whereas in practice we can only compute a finite - dimensional approximation .although the design of a valid finite - dimensional model poses severe mathematical challenges , this step is often neglected in theoretical but also in numerical treatments of the nonuniform sampling problem .we will see in the sequel that the way we design our finite - dimensional model is crucial for the stability and efficiency of the resulting numerical reconstruction algorithms . in the next two sections we describe two different approaches for obtaining finite - dimensional approximations to the formulas and .the first and more traditional approach , discussed in section [ ss : truncated ] , applies a finite section method to equation .this approach leads to an ill - posed problem involving the solution of a large unstructured linear system of equations .the second approach , outlined in section [ ss : trigpol ] , constructs a finite model for the operator equation in by means of trigonometric polynomials .this technique leads to a well - posed problem that is tied to efficient numerical algorithms .according to equation we can reconstruct from its sampling values via , where with . 0 since is a compact operator it can be diagonalized via its singular system ( or eigensystem since is self - adjoint ) as follows with a corresponding complete orthogonal set of vectors .the moore - penrose inverse can be expressed as where , as usual , only the non - zero singular values of are used in the above sum . in orderto compute a finite - dimensional approximation to we use the finite section method . for and define the orthogonal projection by and identify the image of with the space . setting and , we obtain the -th approximation to by solving it is clear that using the truncated frame in for an approximate reconstruction of leads to the same system of equations . if is an exact frame( i.e. , a riesz basis ) for then we have following well - known result .let be an exact frame for with frame bounds and and as defined above .then converges strongly to and hence for .since the proof of this result given in is somewhat lengthy we include a rather short proof here .note that is invertible on and .let with , then .in the same way we get , hence the matrices are invertible and uniformly bounded by and the lemma of kantorovich yields that strongly .if is a non - exact frame for the situation is more delicate .let us consider following situation .* example 1 : * let and let the sampling points be given by , i.e. , the signal is regularly oversampled at times the nyquist rate. in this case the reconstruction of is trivial , since the set is a tight frame with frame bounds .shannon s sampling theorem implies that can be expressed as where and the numerical approximation is obtained by truncating the summation , i.e. 
, using the truncated frame approach one finds that is a toeplitz matrix with entries in other words , coincides with the prolate matrix .the unpleasant numerical properties of the prolate matrix are well - documented .in particular we know that the singular values of cluster around and with singular values in the transition region .since the singular values of decay exponentially to zero the finite - dimensional reconstruction problem has become _ severely ill - posed _ , although the infinite - dimensional problem is `` perfectly posed '' since the frame operator satisfies , where is the identity operator .of course the situation does not improve when we consider non - uniformly spaced samples . in this caseit follows from standard linear algebra that \} ] .this coincides very well with numerical experiments .if the noise level is not known , it has to be estimated .this difficult problem will not be discussed here .the reader is referred to for more details .although we have arrived now at an implementable algorithm for the nonuniform sampling problem , the disadvantages of the approach described in the previous section are obvious . in generalthe matrix does not have any particular structure , thus the computational costs for the singular value decomposition are which is prohibitive large in many applications .it is definitely not a good approach to transform a well - posed infinite - dimensional problem into an ill - posed finite - dimensional problem for which a stable solution can only be computed by using a `` heavy regularization machinery '' .the methods in coincide with or are essentially equivalent to the truncated frame approach , therefore they suffer from the same instability problems and the same numerical inefficiency . as mentioned above one way to stabilizethe solution of is a truncated singular value decomposition , where the truncation level serves as regularization parameter . for large costs of the singular value decomposition become prohibitive for practical purposes .we propose the conjugate gradient method to solve .it is in general much more efficient than a tsvd ( or tikhonov regularization as suggested in ) , and at the same time it can also be used as a regularization method .the standard error analysis for cg can not be used in our case , since the matrix is ill - conditioned .rather we have to resort to the error analysis developed in .when solving a linear system by cg for noisy data following happens .the iterates of cg may diverge for , however the error propagation remains limited in the beginning of the iteration .the quality of the approximation therefore depends on how many iterative steps can be performed until the iterates turn to diverge .the idea is now to stop the iteration at about the point where divergence sets in . 
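the severe ill - conditioning described above for the regularly oversampled example is easy to reproduce numerically ; the sketch below builds the standard prolate matrix with bandwidth parameter w ( the normalization may differ from the matrix in the text , whose exact entries are not reproduced here ) and shows its singular values clustering at the two extremes :

```python
import numpy as np
from scipy.linalg import toeplitz

def prolate(n, w):
    """standard prolate matrix: symmetric toeplitz with first column
    [2w, sin(2*pi*w)/pi, sin(4*pi*w)/(2*pi), ...]; normalization may differ
    from the Gram matrix discussed in the text."""
    k = np.arange(1, n)
    col = np.concatenate(([2.0 * w], np.sin(2.0 * np.pi * w * k) / (np.pi * k)))
    return toeplitz(col)

A = prolate(200, 0.1)                       # w = 1/(2m) for m-fold oversampling, here m = 5
s = np.linalg.svd(A, compute_uv=False)
print(s[:3], s[-3:])                        # clusters near 1 and near 0
print(np.sum(s > 0.5), "of", len(s), "singular values are not tiny")
```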
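and since the remedy just described ( and made precise in the next paragraph ) is to stop cg early , a minimal sketch of conjugate gradients with such a discrepancy - type stopping rule , with the tolerance factor tau and the noise level delta as user - supplied parameters , might read :

```python
import numpy as np

def cg_regularized(A, b, delta, tau=1.5, maxiter=500):
    """cg on A x = b for symmetric positive semi-definite A, terminated as soon
    as the residual drops to tau*delta, delta being an estimate of the noise
    norm in b; the iteration count then acts as the regularization parameter."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(maxiter):
        if np.sqrt(rs) <= tau * delta:
            break
        Ap = A @ p
        alpha = rs / np.vdot(p, Ap).real
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = np.vdot(r, r).real
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```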
in other wordsthe iterations count is the regularization parameter which remains to be controlled by an appropriate stopping rule .in our case assume , where denotes a noisy sample .we terminate the cg iterations when the iterates satisfy for the first time for some fixed .it should be noted that one can construct `` academic '' examples where this stopping rule does not prevent cg from diverging , see , `` most of the time '' however it gives satisfactory results .we refer the reader to for a detailed discussion of various stopping criteria .there is a variety of reasons , besides the ones we have already mentioned , that make the conjugate gradient method and the nonuniform sampling problem a `` perfect couple '' .see sections [ ss : trigpol ] , [ ss : ml ] , and [ ss : regul ] for more details . by combining the truncated frame approach with the conjugate gradient method ( with appropriate stopping rule ) we finally arrive at a reconstruction method that is of some practical relevance .however the only existing method at the moment that can handle large scale reconstruction problems seems to be the one proposed in the next section .in the previous section we have seen that the naive finite - dimensional approach via truncated frames is not satisfactory , it already leads to severe stability problems in the ideal case of regular oversampling . in this sectionwe propose a different finite - dimensional model , which resembles much better the structural properties of the sampling problem , as can be seen below .the idea is simple . in practice only a finite number of samples given , where without loss of generality we assume ( otherwise we can always re - normalize the data ) . since no data of are available from outside this region we focus on a local approximation of on ] or ] is more convenient .since the dual group of the torus is , periodic band - limited functions on reduce to trigonometric polynomials ( of course technically does then no longer belong to since it is no longer in ) .this suggests to use trigonometric polynomials as a realistic finite - dimensional model for a numerical solution of the nonuniform sampling problem .we consider the space of trigonometric polynomials of degree of the form the norm of is since the distributional fourier transform of is we have ] and compute the least squares approximation with degree and period as in theorem [ th : act ] .it is shown in that if ] .the period of the polynomial becomes with where is the number of given samples .then for , where is kronecker s symbol with the usual meaning if and else .hence we get where is the identity matrix on , thus resembles the structure of the infinite - dimensional frame operator in this case ( including exact approximation of the frame bounds ) .recall that the truncated frame approach leads to an `` artificial '' ill - posed problem even in such a simple situation .the advantages of the trigonometric polynomial approach compared to the truncated frame approach are manifold . in the one case we have to deal with an ill - posed problem which has no specific structure ,hence its solution is numerically very expensive . in the other casewe have to solve a problem with rich mathematical structure , whose stability depends only on the sampling density , a situation that resembles the original infinite - dimensional sampling problem . 
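a compact numerical sketch of this finite - dimensional model ( our own illustration : the weights , degree and period are left to the user , and the toeplitz system is solved densely here , whereas an efficient implementation would use the fft / usfft and cg , as discussed next ) :

```python
import numpy as np

def trig_poly_fit(t, s, M, w=None):
    """weighted least-squares fit of a degree-M trigonometric polynomial
    p(t) = sum_{|k|<=M} a_k exp(2*pi*i*k*t), period 1, to samples s at
    nonuniform points t in [0,1).  the normal-equation matrix T has entries
    T_{kl} = sum_j w_j exp(-2*pi*i*(k-l)*t_j) and is Toeplitz."""
    t = np.asarray(t, float)
    s = np.asarray(s)
    w = np.ones_like(t) / len(t) if w is None else np.asarray(w, float)
    k = np.arange(-M, M + 1)
    E = np.exp(-2j * np.pi * np.outer(k, t))     # (2M+1) x N
    T = (E * w) @ E.conj().T                     # Toeplitz normal-equation matrix
    b = (E * w) @ s
    return np.linalg.solve(T, b)                 # coefficients a_k, k = -M..M

def trig_poly_eval(a, t):
    M = (len(a) - 1) // 2
    k = np.arange(-M, M + 1)
    return np.exp(2j * np.pi * np.outer(np.asarray(t, float), k)) @ a
```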
in principlethe coefficients of the polynomial that minimizes could also be computed by directly solving the vandermonde type system where for and is a diagonal matrix with entries , cf .several algorithms are known for a relatively efficient solution of vandermonde systems .however this is one of the rare cases , where , instead of directly solving , it is advisable to explicitly establish the system of normal equations where and .the advantages of considering the system instead of the vandermonde system are manifold : * the matrix plays a key role in the analysis of the relation of the solution of and the solution of the infinite - dimensional sampling problem , see and above .* is of size , independently of the number of sampling points . moreover , since , it is of toeplitz type .these facts give rise to fast and robust reconstruction algorithms . *the resulting reconstruction algorithms can be easily generalized to higher dimensions , see section [ ss : multi ] .such a generalization to higher dimensions seems not to be straightforward for fast solvers of vandermonde systems such as the algorithm proposed in .0 an interesting finite - dimensional model is proposed in .the bernstein - boas formula yields an explicit way to reconstruct a function from its ( sufficiently dense ) nonuniform samples , cf .this formula involves the numerically intractable computation of infinite products .however since only a finite number of samples can be used in a numerical reconstruction one may assume that the sequence of sampling points has regular structure outside a finite interval .this allows to replace the infinite products by finite products which yields following approximation formula for and an estimate for the approximation error .although their approach is computationally more expensive than the algorithm proposed in section [ s : trigpol ] their approach may be an attractive alternative if only a small number of samples in a short interval ] .we point out that other finite - dimensional approaches are proposed in .these approaches may provide interesting alternatives in the few cases where the algorithm outlined in section [ ss : trigpol ] does not lead to good results .these cases occur when only a few samples of the signal are given in an interval ] .however the computational complexity of the methods in is significantly larger .the approach presented above can be easily generalized to higher dimensions by a diligent book - keeping of the notation .we consider the space of -dimensional trigonometric polynomials as finite - dimensional model for . for given samples of , where , we compute the least squares approximation similar to theorem [ th : act ] by solving the corresponding system of equations . in 2-d for instance the matrix becomes a block toeplitz matrix with toeplitz blocks . for a fast computation of the entries of can again make use of beylkin s usfft algorithm . and similar to 1-d , multiplication of a vector by be carried out by 2-d fft .also the relation between the finite - dimensional approximation in and the infinite - dimensional solution in is similar as in 1-d .the only mathematical difficulty is to give conditions under which the matrix is invertible . 
since the fundamental theorem of algebra does not hold in dimensions larger than one , the condition is necessary but no longer sufficient for the invertibility of .sufficient conditions for the invertibility , depending on the sampling density , are presented in .in this section we discuss several numerical aspects of nonuniform sampling that are very important from a practical viewpoint , however only few answers to these problems can be found in the literature . in almost all theoretical results and numerical algorithms for reconstructing a band - limited signal from nonuniform samplesit is assumed that the bandwidth is known a priori .this information however is often not available in practice .a good choice of the bandwidth for the reconstruction algorithm becomes crucial in case of noisy data .it is intuitively clear that choosing a too large bandwidth leads to over - fit of the noise in the data , while a too small bandwidth yields a smooth solution but also to under - fit of the data . andof course we want to avoid the determination of the `` correct '' by trial - and - error methods .hence the problem is to design a method that can reconstruct a signal from non - uniformly spaced , noisy samples without requiring a priori information about the bandwidth of the signal .the multilevel approach derived in provides an answer to this problem .the approach applies to an infinite - dimensional as well as to a finite - dimensional setting .we describe the method directly for the trigonometric polynomial model , where the determination of the bandwidth translates into the determination of the polynomial degree of the reconstruction .the idea of the multilevel algorithm is as follows .let the noisy samples of be given with and let denote the orthogonal projection from into .we start with initial degree and run algorithm [ th : act ] until the iterates satisfy for the first time the _ inner _ stopping criterion for some fixed .denote this approximation ( at iteration ) by . if satisfies the _outer _ stopping criterion we take as final approximation .otherwise we proceed to the next level and run algorithm [ th : act ] again , using as initial approximation by setting . at level inner level - dependent stopping criterion becomes while the outer stopping criterion does not change since it is level - independent .stopping rule guarantees that the iterates of cg do not diverge .it also ensures that cg does not iterate too long at a certain level , since if is too small further iterations at this level will not lead to a significant improvement .therefore we switch to the next level .the outer stopping criterion controls over - fit and under - fit of the data , since in presence of noisy data is does not make sense to ask for a solution that satisfies . since the original signal is not known , the expression in can not be computed . 
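a stripped - down sketch of the multilevel idea follows ( the inner cg iterations and their warm starts are replaced here by a direct least - squares solve at each level ; tau and the noise estimate delta are user inputs , and the recursive estimation of the unknown residual mentioned next is not modelled ) :

```python
import numpy as np

def multilevel_fit(t, s, delta, M_max=64, tau=1.1):
    """increase the polynomial degree M level by level and stop as soon as the
    data are fitted to within the noise level (outer criterion), so that no a
    priori bandwidth is needed."""
    t = np.asarray(t, float)
    s = np.asarray(s)
    a = None
    for M in range(1, M_max + 1):
        k = np.arange(-M, M + 1)
        E = np.exp(2j * np.pi * np.outer(t, k))          # N x (2M+1) design matrix
        a, *_ = np.linalg.lstsq(E, s, rcond=None)        # level-M least-squares fit
        if np.linalg.norm(E @ a - s) <= tau * delta:     # outer stopping rule
            return M, a
    return M_max, a
```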
in reader can find an approach to estimate recursively .a variety of conditions on the sampling points are known under which the set is a frame for , which in turn implies ( at least theoretically ) perfect reconstruction of a signal from its samples .this does however not guarantee a stable reconstruction from a numerical viewpoint , since the ratio of the frame bounds can still be extremely large and therefore the frame operator can be ill - conditioned .this may happen for instance if in goes to 1 , in which case may become large .the sampling problem may also become numerically unstable or even ill - posed , if the sampling set has large gaps , which is very common in astronomy and geophysics .note that in this case the instability of the system does _ not _ result from an inadequate discretization of the infinite - dimensional problem .there exists a large number of ( circulant ) toeplitz preconditioners that could be applied to the system , however it turns out that they do not improve the stability of the problem in this case .the reason lies in the distribution of the eigenvalues of , as we will see below .following , we call two sequences of real numbers and _ equally distributed _ , if = 0 \label{defdist}\ ] ] for any continuous function with compact support and are required to belong to a common interval . ] .let be a circulant matrix with first column , we write .the eigenvalues of are distributed as .observe that the toeplitz matrix with first column can be embedded in the circulant matrix thms 4.1 and 4.2 in state that the eigenvalues of and are equally distributed as where the partial sum of the series is to understand the clustering behavior of the eigenvalues of in case of sampling sets with large gaps , we consider a sampling set in , that consists of one large block of samples and one large gap , i.e. , for for .( recall that we identify the interval with the torus ) .then the entries of the toeplitz matrix of ( with ) are to investigate the clustering behavior of the eigenvalues of for , we embed in a circulant matrix as in . then becomes }\ ] ]whence } ] , if and 0 else .thus the eigenvalues of are asymptotically clustered around zero and one .for general nonuniform sampling sets with large gaps the clustering at 1 will disappear , but of course the spectral cluster at 0 will remain . in this caseit is known that the preconditioned problem will still have a spectral cluster at the origin and preconditioning will not be efficient .fortunately there are other possibilities to obtain a stabilized solution of .the condition number of essentially depends on the ratio of the maximal gap in the sampling set to the nyquist rate , which in turn depends on the bandwidth of the signal .we can improve the stability of the system by adapting the degree of the approximation accordingly .thus the parameter serves as a regularization parameter that balances stability and accuracy of the solution .this technique can be seen as a specific realization of _ regularization by projection _ , see chapter 3 in .in addition , as described in section [ ss : regul ] , we can utilize cg as regularization method for the solution of the toeplitz system in order to balance approximation error and propagated error . the multilevel method introduced in section [ ss : ml ]combines both features . 
by optimizing the level ( bandwidth ) and the number of iterations in each level it provides an efficient and robust regularization technique for ill - conditioned sampling problems .see section [ s : applications ] for numerical examples . 0 in many applications the physical process that generates the signal implies not only that the signal is ( essentially ) band - limited but also that its spectrum of the signal has a certain rate of decay .for instance geophysical potential fields have exponentially decaying fourier transform .this a priori knowledge can be used to improve the accuracy of the approximation . instead of the usual regularization methods , such as tikhonov regularization , we propose a different , computationally much more efficient method .assume that the decay of the fourier transform of can be bounded by .typical choice in practice are or . for a given system define the diagonal matrix by .instead of solving we consider the `` weighted problem '' or in the first case the solution is and in the second case we have of course , if is invertible both solutions coincide with the solution of .however if is not invertible , then both equations lead to a weighted minimal norm least squares solution .note that is not chosen to minimize the condition number of the problem , since as outlined above standard preconditioning will not work in this case .systems and can be solved by conjugate gradient methods .hence the computational effort of such an approach is of the same order as algorithm [ th : act ] .a detailed numerical analysis of the convergence properties of this approach has still to be completed .for a numerical example see section [ ss : geo ] .we present two numerical examples to demonstrate the performance of the described methods .the first one concerns a 1-d reconstruction problem arising in spectroscopy . in the second example we approximate the earth s magnetic field from noisy scattered data .the original spectroscopy signal is known at 1024 regularly spaced points .this discrete sampling sequence will play the role of the original continuous signal .to simulate the situation of a typical experiment in spectroscopy we consider only 107 randomly chosen sampling values of the given sampling set .furthermore we add noise to the samples with noise level ( normalized by division by ) of .since the samples are contaminated by noise , we can not expect to recover the ( discrete ) signal completely . the bandwidth is approximately which translates into a polynomial degree of .note that in general and ( hence ) may not be available .we will also consider this situation , but in the first experiments we assume that we know .the error between the original signal and an approximation is measured by computing .first we apply the truncated frame method with regularized svd as described in section [ ss : truncated ] .we choose the truncation level for the svd via formula .this is the optimal truncation level in this case , providing an approximation with least squares error .figure [ fig : spect](a ) shows the reconstructed signal together with the original signal and the noisy samples . 
without regularizationwe get a much worse `` reconstruction '' ( which is not displayed ) .we apply cg to the truncated frame method , as proposed in section [ ss : cgtrunc ] with stopping criterion ( for ) .the algorithm terminates already after 3 iterations .the reconstruction error is with slightly higher than for truncated svd ( see also figure [ fig : spect](b ) ) , but the computational effort is much smaller . also algorithm [ th : act ] ( with ) terminates after 3 iterations .the reconstruction is shown in figure [ fig : spect](c ) , the least squares error ( ) is slightly smaller than for the truncated frame method , the computational effort is significantly smaller .we also simulate the situation where the bandwidth is not known a priori and demonstrate the importance of a good estimate of the bandwidth .we apply algorithm [ th : act ] using a too small degree ( ) and a too high degree ( ) .( we get qualitatively the same results using the truncated frame method when using a too small or too large bandwidth ) .the approximations are shown in figs .[ fig : spect](d ) and ( e ) , the approximation errors are and , respectively .now we apply the multilevel algorithm of section [ ss : ml ] which does not require any initial choice of the degree .the algorithm terminates at `` level '' , the approximation is displayed in fig .[ fig : spect](f ) , the error is , thus within the error bound , as desired .hence without requiring explicit information about the bandwidth , we are able to obtain the same accuracy as for the methods above .exploration geophysics relies on surveys of the earth s magnetic field for the detection of anomalies which reveal underlying geological features .geophysical potential field - data are generally observed at scattered sampling points .geoscientists , used to looking at their measurements on maps or profiles and aiming at further processing , therefore need a representation of the originally irregularly spaced data at a regular grid .the reconstruction of a 2-d signal from its scattered data is thus one of the first and crucial steps in geophysical data analysis , and a number of practical constraints such as measurement errors and the huge amount of data make the development of reliable reconstruction methods a difficult task . it is known that the fourier transform of a geophysical potential field has decay .this rapid decay implies that can be very well approximated by band - limited functions .since in general we may not know the ( essential ) bandwidth of , we can use the multilevel algorithm proposed in section [ ss : ml ] to reconstruct .the multilevel algorithm also takes care of following problem .geophysical sampling sets are often highly anisotropic and large gaps in the sampling geometry are very common .the large gaps in the sampling set can make the reconstruction problem ill - conditioned or even ill - posed . as outlined in section [ ss : regul ] the multilevel algorithm iteratively determines the optimal bandwidth that balances the stability and accuracy of the solution .figure [ fig : geo](a ) shows a synthetic gravitational anomaly .the spectrum of decays exponentially , thus the anomaly can be well represented by a band - limited function , using a `` cut - off - level '' of for the essential bandwidth of .we have sampled the signal at 1000 points and added 5% random noise to the sampling values . 
the sampling geometry shown in figure [ fig : geo ] as black dots exhibits several features one encounters frequently in exploration geophysics .the essential bandwidth of would imply to choose a polynomial degree of ( i.e. , spectral coefficients ) . with this choice of corresponding block toeplitz matrix would become ill - conditioned , making the reconstruction problem unstable . as mentioned above, in practice we usually do not know the essential bandwidth of .hence we will not make use of this knowledge in order to approximate .we apply the multilevel method to reconstruct the signal , using only the sampling points , the samples and the noise level as a priori information .the algorithm terminates at level .the reconstruction is displayed in figure [ fig : geo](c ) , the error between the true signal and the approximation is shown in figure [ fig : geo](d ) . the reconstruction error is ( or mgal ) , thus of the same order as the data error , as desired .k. grchenig .non - uniform sampling in higher dimensions : from trigonometric polynomials to band - limited functions . in j.j .benedetto and p.j.s.g ferreira , editors , _ modern sampling theory : mathematics and applications_. birkhuser , boston , to appear . | we give an overview of recent developments in the problem of reconstructing a band - limited signal from non - uniform sampling from a numerical analysis view point . it is shown that the appropriate design of the finite - dimensional model plays a key role in the numerical solution of the non - uniform sampling problem . in the one approach ( often proposed in the literature ) the finite - dimensional model leads to an ill - posed problem even in very simple situations . the other approach that we consider leads to a well - posed problem that preserves important structural properties of the original infinite - dimensional problem and gives rise to efficient numerical algorithms . furthermore a fast multilevel algorithm is presented that can reconstruct signals of unknown bandwidth from noisy non - uniformly spaced samples . we also discuss the design of efficient regularization methods for ill - conditioned reconstruction problems . numerical examples from spectroscopy and exploration geophysics demonstrate the performance of the proposed methods . 0 subject classification : 65t40 , 65f22 , 42a10 , 94a12 + non - uniform sampling , band - limited functions , frames , regularization , signal reconstruction , multi - level method . |
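as a footnote to the spectroscopy experiment above , the truncated - svd baseline can be sketched generically as follows ; the truncation rule below ( discard singular values under a relative threshold ) is a common stand - in and not the specific noise - dependent formula used in the paper :

```python
import numpy as np

def tsvd_solve(A, b, rel_tol=1e-3):
    """truncated-SVD solution of A x = b: singular values below rel_tol * s_max
    are discarded instead of being inverted and amplifying the noise; rel_tol is
    a heuristic stand-in for a noise-based truncation level."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > rel_tol * s[0]
    return Vt[keep].conj().T @ ((U[:, keep].conj().T @ b) / s[keep])
```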
a non trivial problem that appears in several applications , e.g. in multivariate regression and bayesian statistics , is the estimation of the mean of a truncated normal distribution .the problem arises in cases where a random vector ^t ] .the method relies on the sampling of the one - dimensional conditionals of the truncated normal distribution .more specifically , letting ] denoting the ( one dimensional ) truncated normal distribution which results from the truncation of a normal distribution with mean and variance in ] , where for each interval ] ,\sigma_i^{*2}) ] ) . as it is well known ( see e.g. ) , the mean of a truncated one dimensional normal distribution } ( \mu^*,\sigma^{*2}) ] , that is , the most recent information about s is utilized .more formally , we can say that the above scheme performs sequential updating and ( following the terminology used in ) it is a _ gauss seidel _ updating scheme .it is reminded that , due to the type of truncation considered here ( only left truncation or only right truncation per coordinate ) , the bracketed expression in each equation of ( [ new ] ) contains _ only one _ non identically equal to zero term . in the sequel , we consider separately the cases corresponding to and , i.e. , and where and , respectively , and is defined as in eq .( [ f ] ) .in this section we provide sufficient conditions under which the proposed scheme is proved to converge . before we proceed , we give some propositions and remind some concepts that will be proved useful in the sequel . _proposition 1 : _ assume that is a symmetric positive definite matrix , is the -th column of , after removing its -th element and results from after removing its -th row and its -th column .also , let be the element of and be the -dimensional vector resulting from the -th row of after ( i ) removing its -th element , , and ( ii ) multiplying the remaining elements by .then , it holds \(i ) and \(ii ) .the proof of this proposition is straightforward from the inversion lemma for block partitioned matrices ( , p. 53 ) and the use of permutation matrices , in order to define the schur complement for each row of ._ proposition 2 : _ it is where denotes the derivative of , which is defined in eq .( [ f ] ) .the proof of proposition 2 is given in the appendix ._ definition 1 : _ a mapping , where , is called _ contraction _ if for some norm there exists some constant ( called _ modulus _ ) such that is called _ contracting iteration_. _ proposition 3 ( , pp .182 - 183 ) : _ suppose that is a contraction with modulus and that is a closed subset of .then \(a ) the mapping has a unique fixed point . is called _ fixed point _ of a mapping if it is . ]\(b ) for every initial vector , the sequence , generated by converges to geometrically .in particular , let us define the mappings , as \sigma_i^ * \label{t - i}\end{aligned}\ ] ] where and and the mapping as let us define next the mapping as performing the sequential updating as described by eq .( [ new ] ) ( one at a time and in increasing order ) is equivalent to applying the mapping , defined as where denotes function composition . following the terminology given in , is called _ the gauss seidel mapping based on the mapping _ and the iteration is called _ the gauss seidel algorithm based on mapping . a direct consequence of ( * ? ? 
?1.4 , pp.186 ) is the following proposition : _ proposition 4 : _ if is a contraction with respect to the norm , then the gauss - seidel mapping is also a contraction ( with respect to the norm ) , with the same modulus as . in particular ,if is closed , the sequence of the vectors generated by the gauss - seidel algorithm based on the mapping converges to the unique fixed point of geometrically . having given all the necessary definitions and results, we will proceed by proving that ( a ) for each mapping it holds , , where , are the and norms , respectively , ( b ) if is diagonally dominant then is a contraction and ( c ) provided that is a contraction , the algorithm converges geometrically to the unique fixed point of .we remind here that the -dimensional vector results from the -th row of , exluding its -th element and dividing each element by the negative of ._ proposition 5 : _ for the mappings , , it holds _ proof : _ ( a ) we consider first the case where .let us consider the vectors . since is constant , utilizing eq .( [ mu - i ] ) it follows that also , it is taking the difference we have since is continuous in , the mean value theorem guarantees that there exists ] such that substituting eq .( [ mvt1 ] ) to ( [ diff - t - b ] ) we get substituting eqs .( [ diff - mu - i ] ) and ( [ diff - b - i ] ) into ( [ diff - t1-b ] ) , we obtain from this point on , the proof is exactly the same with that of ( a ) ._ proposition 6 : _ the mapping is a contraction in , with respect to the norm , provided that is diagonally dominant . _ proof : _ let .taking into account proposition 5 , it easily follows that now , ( a ) taking into account that the -dimensional vector results from the -th row of , exluding its -th element and dividing each element by the negative of and ( b ) recalling that is the -th row of excluding its -th element , it is provided that is diagonally dominant , it is or which proves the claim ._ theorem 1 : _ the algorithm converges geometrically to the unique fixed point of , provided that is diagonally dominant ._ proof : _ the proof is a direct consequence of the propositions 3 , 4 and 6 exposed before , applied for .an issue that naturally arises with the proposed method is how accurate the estimate of the mean is . since it is very difficult to give a theoretical analysis of this issue , mainly due to the highly complex nature of the propsoed iterative scheme ( see eq .( [ new ] ) ) , we will try to gain some insight for this subject via experimentation . to this end, we set equal to the _ exponential correlation _ matrix , which is frequently met in various fields of applications , e.g. , in signal processing applications .its general form is , \ \ ( 0 \leq \rho < 1)\ ] ] it is easy to verify that the inverse of is expressed as \ ] ] also , it is straightforward to see that is diagonally dominant for all values of . thus , it is a suitable candidate for our case .in addition , it is `` controlled '' by just a single parameter ( ) , which facilitates the extraction of conclusions .note that for , becomes the identity matrix , while as increases towards the diagonal dominancy of decreases ( while its condition number increases ) . for close to , is alomost singular . 
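a sketch of the whole pipeline for this experiment ( our own illustration , assuming left truncation of every coordinate at points a_i , as in the one - sided case treated in the text ; the gaussian conditional - moment formulas and scipy's truncnorm are standard , and all parameter values below are arbitrary ) :

```python
import numpy as np
from scipy import stats

def exp_corr(n, rho):
    """exponential correlation matrix Sigma_{ij} = rho**|i-j|, 0 <= rho < 1."""
    i = np.arange(n)
    return rho ** np.abs(i[:, None] - i[None, :])

def truncated_normal_mean(mu, cov, a, n_sweeps=200, tol=1e-10):
    """gauss-seidel-type fixed-point iteration for the mean of N(mu, cov)
    truncated to {x : x_i >= a_i}: each coordinate of the running estimate m is
    replaced by the mean of the 1-d conditional normal (given the other
    coordinates fixed at their current values) truncated to [a_i, +inf)."""
    mu, a = np.asarray(mu, float), np.asarray(a, float)
    P = np.linalg.inv(cov)                         # precision matrix
    n = len(mu)
    m = np.maximum(mu, a)                          # start inside the support
    for _ in range(n_sweeps):
        m_old = m.copy()
        for i in range(n):
            others = np.arange(n) != i
            s2 = 1.0 / P[i, i]                     # conditional variance
            mi = mu[i] - s2 * P[i, others] @ (m[others] - mu[others])
            alpha = (a[i] - mi) / np.sqrt(s2)
            m[i] = stats.truncnorm.mean(alpha, np.inf, loc=mi, scale=np.sqrt(s2))
        if np.max(np.abs(m - m_old)) < tol:
            break
    return m

Sigma = exp_corr(8, 0.5)                           # its inverse is diagonally dominant
print(truncated_normal_mean(np.zeros(8), Sigma, np.zeros(8)))
```

for moderate values of rho the diagonal dominance condition of theorem 1 holds , so the iteration is expected to settle in a handful of sweeps , consistent with the rapid convergence reported in the experiments below .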
in the sequel, we consider the case of a zero mean normal distribution with covariance matrix as in ( [ exp - corr-1 ] ) , which is truncated in the region ^n ] , =, while its truncation point is ^t ] and =, while the truncation point is ^t ] , which is quite close to the estimate obtained by the mcmc ( the mean absolute difference per coordinate is ) .it is worth noting that in this case , although eq .( [ norm - t ] ) is not satisfied ( , since is not diagonally dominant ) , the quantities , , are less than for all pairs of points is through . ] .thus , the ( tighter ) bound of eq .( [ norm - t1 ] ) is satisfied for all the pairs , which guarantees the rapid convergence of the method .however , note that , since depends on the data ( through ) , it can not be used as an upper bound in the proof of the contraction . _3rd experiment : _ we consider now the case where ^t ] . the matrix ^ -1=, is non - diagonally dominant and , in addition , the diagonal dominance condition is strongly violated ( in the last two rows ) .we run the proposed method for several different initial conditions .it is noted that , in contrast to the 2nd experiment , the quantity is now greater than for all pairs of .the algorithm in all cases converges to the vector ^t$ ] .however , in this case , the mean absolute difference per coordinate between this estimate and the one resulting from mcmc is , which is significantly greater than that in the previous experiment . _4th experiment : _ in order to exhibit the scalability properties of the proposed method with respect to the dimensionality , an additional experiment has been conducted for .the mean , and the covariance matrix of the corresponding normal distribution , as well as the truncation points have been selected as in the 1st experiment ( note that the inverse of the covariance matrix is diagonally dominant ) .the algorithm gives its response in less than one minute , a time that is substantially smaller than that required from the mcmc method .analysing the previous results we may draw the following conclusions : * provided that the inverse of the covariance matrix of the untrucated normal distribution is diagonally dominant , the proposed method gives very accurate estimates of the mean of the truncated normal distribution , using as benchmark the estimates of the mcmc method . *the proposed method converges much faster ( in very few iterations ) compared to the mcmc method . *even if the diagonal dominance constraint is slightly violated , the algorithm seems to converge to an accurate estimate of the mean of the truncated distribution , as the 2nd experiment indicates .* when the diagonal dominance condition is strongly violated , the algorithm still ( seems to ) converge to a vector .however , this vector is a ( much ) less accurate estimation of the mean of the truncated normal ( see the third experiment ) .* in the special case , where ( and , as a consequence ) is diagonal , the matrix is ( obviously ) equal to zero .thus , both eqs .( [ trunc - mean ] ) and ( [ trunc - mean2 ] ) implies that and eq .( [ trunc - var ] ) imply that . in other words , in this case ,both the proposed and the mcmc methods solve independent one - dimensional problems , with the deterministic method giving its estimate in a single iteration , since in this case , it reduces to ( [ case - i ] ) or ( [ case - ii ] ) .an additional observation is that the estimates provided by the two methods in this case are almost identical . 
letting intuition enter into the scene and generalizing a bit, one could claim that as approaches `` diagonality '' , the estimates of the mean of both methods are expected to be even closer to each other .this observation , may be an explanation of the decreasing trend observed in figure 1 , since , as the dimension increases , moves closer to `` diagonality '' , due to the diagonal dominance condition .* finally , the proposed method scales well with the dimensionality of the problem .in this paper , a new iterative algorithmic scheme is proposed for the approximation of the mean value of a one - sided truncated multivariate normal distribution .the algorithm converges in very few iterations and , as a consequence , it is much faster than the mcmc based algorithm proposed in .in addition , the algorithm is an extension of the one used in .the quality of the approximation of the mean is assessed through the case where the exponential correlation matrix is used as covariance matrix .the proof of convergence of the proposed scheme is provided for the case where is diagonally dominant . however , experimental results indicate that , even if this condition is softly violated , the method still provides estimates of the mean of the truncated normal , however , less accurate .finally , the method exhibits good scalablity properties with respect to the dimensinality ._ proof of proposition 2 : _ in order to prove ( [ q3 ] )it suffices to show that , or , since in this case the sign of the second degree polynomial will be the opposite of that of the coefficient of ( i.e. , ) .we proceed again be considering separately the cases and .\(i ) let .we set . in this case( [ prove ] ) becomes where .taking into account ( [ ineq ] ) , the right hand side inequality of ( [ prove1 ] ) holds . in order to prove the left hand inequlity of ( [ prove1 ] ) ( again taking into account ( [ ineq ] ) ) , it suffices to prove that \(ii ) let .the left hand side inequality of ( [ prove ] ) holds trivially , since in this case .we focus now on the right hand side inequality of ( [ prove ] ) . taking into account that ( a ) , for ( see e.g. ) and ( b ) , it is combining the right hand side inequality of ( [ ineq ] ) with ( [ q4 ] ), the right hand side inequality of ( [ prove ] ) holds if or or since , for ( from the taylor series expansion ) , the previous inequality holds if or since , the previous inequality holds if the discriminant of the above second degree polynomial of is .thus , the above second degree polynomial is always positive since the coefficient of the term is positive . in other words , ( [ q5 ] ) holds .therefore , the right hand side inequality of ( [ prove ] ) also holds .thus , for , it is also and therefore . as a consequence ( [ q3 ] ) also holds. q.e.d .m.chiani , d.dardari , m.k .simon , `` new exponential bounds and approximations for the computation of error probability in fading channels '' , _ ieee transactions on wireless communications _ , 2(4 ) , 840 - 845 ( 2003 ) .r. nabben , r.s .varga , `` a linear algebra proof that the inverse of strictly ultrametric matrix is a strictly diagonally dominant stieltjes matrix '' , _ siam journal of matrix anal ._ , 15 , 107 - 113 ( 1994 ) .k. themelis , a. rontogiannis , k. koutroumbas , `` a novel hierarchical bayesian approach for sparse semisupervised hyperspectral unmixing '' , ieee transactions on signal processing , 60(2 ) , 585 - 599 ( 2012 ) . 
| a non trivial problem that arises in several applications is the estimation of the mean of a truncated normal distribution . in this paper , an iterative deterministic scheme for approximating this mean is proposed , motivated by an iterative markov chain monte carlo ( mcmc ) scheme that addresses the same problem . conditions are provided under which it is proved that the scheme converges to a unique fixed point . the quality of the approximation obtained by the proposed scheme is assessed through the case where the exponential correlation matrix is used as covariance matrix of the initial ( non truncated ) normal distribution . finally , the theoretical results are also supported by computer simulations , which show the rapid convergence of the method to a solution vector that ( under certain conditions ) is very close to the mean of the truncated normal distribution under study . _ keywords : _ truncated normal distribution , contraction mapping , diagonally dominant matrix , mcmc methods , exponential correlation matrix |
the advent of the new class of 10-m ground based telescopes is having a strong impact on the study of galaxy evolution .for instance , instruments as lris at the keck allow observers to regularly secure redshifts for dozens of galaxies in several hours of exposure .technical advances in the instrumentation , combined with the proliferation of similar telescopes in the next years guarantees a vast increase in the number of galaxies , bright and faint , for which spectroscopical redshifts will be obtained in the near future . notwithstandingthis progress in the sheer numbers of available spectra , the ` barrier ' ( for reasonably complete samples ) is likely to stand for a time , as there are not foreseeable dramatic improvements in the telescope area or detection techniques . despite the recent spectacular findings of very high redshift galaxies , ( , , ) , it is extremely difficult to secure redshifts for such objects . on the other hand , even moderately deep ground based imaging routinely contain many high redshift galaxies ( although hidden amongst myriads of foreground ones ) , not to mention the hubble deep field or the images that will be available with the upcoming advanced camera .to push further in redshift the study of galaxy evolution is therefore very important to develop techniques able to extract galaxy redshifts from multicolor photometry data .this paper applies the methods of bayesian probability theory to photometric redshift estimation . despite the efforts of thomas loredo , who has written stimulating reviews on the subject ( loredo 1990 , 1992 ) , bayesian methods are still far from being one of the staple statistical techniques in astrophysics .most courses and monographs on statistics only include a small section on bayes theorem , and perhaps as a consequence of that , bayesian techniques are frequently used _ad hoc _ , as another tool from the available panoply of statistical methods . however , as any reader of the fundamental treatise by e.t .jaynes ( 1998 ) can learn , bayesian probability theory represents an unified look to probability and statistics , which does not intend to complement , but to fully substitute the traditional , ` frequentist ' statistical techniques ( see also bretthorst 1988 , 1990 ) one of the fundamental differences between ` orthodox ' statistics and bayesian theory , is that the probability is not defined as a frequency of occurrence , but as a reasonable degree of belief .bayesian probability theory is developed as a rigorous full flegded alternative to traditional probability and statistics based on this definition and three _ desiderata _ : a)degrees of belief should be represented by real numbers , b)one should reason consistently , and c)the theory should reduce to aristotelian logic when the truth values of hypothesis are known .one of the most attractive features of bayesian inference lies on its simplicity .there are two basic rules to manipulate probability , the product rule and the sum rule where `` '' means `` and are true '' , and `` '' means `` either or or both are true '' . 
from the product rule , and taking into account that the propositions `` '' and `` '' are identical , it is straightforward to derive bayes theorem . if the set of propositions is mutually exclusive and exhaustive , using the sum rule one can write the expression known as bayesian marginalization . these are the basic tools of bayesian inference . properly used and combined with the rules to assign prior probabilities , they are in principle enough to solve most statistical problems . there are several differences between the methodology presented in this paper and that of , the most significant being the treatment of priors ( see sec . [ bpz ] ) . the procedures developed here offer a major improvement in the redshift estimation , and based on them it is possible to generate new statistical methods for applications which make use of photometric redshifts ( sec . [ appli ] ) . the layout of the paper is the following : sec . 2 reviews the current methods of photometric redshift estimation , with emphasis on their main sources of error . sec . 3 introduces an expression for the redshift likelihood slightly different from the one used by other groups when applying the sed fitting technique . in sec . 4 it is described in detail how to apply bayesian probability to photometric redshift estimation ; the resulting method is called bpz . sec . 5 compares the performance of traditional statistical techniques , such as maximum likelihood , with bpz by applying both methods to the hdf spectroscopic sample and to a simulated catalog . sec . 6 briefly describes how bpz may be developed to deal with problems in galaxy evolution and cosmology which make use of photometric redshifts . sec . 7 briefly summarizes the main conclusions of the paper . there are two basic approaches to photometric redshift estimation . using the terminology of , they may be termed ` sed fitting ' and ` empirical training set ' methods . the first technique ( , , , , , etc . ) involves compiling a library of template spectra , empirical or generated with population synthesis techniques . these templates , after being redshifted and corrected for intergalactic extinction , are compared with the galaxy colors to determine the redshift which best fits the observations . the training set technique ( , , ) starts with a multicolor galaxy sample with apparent magnitudes and colors which has been spectroscopically identified . using this sample , a relationship of the kind is determined using a multiparametric fit . it should be said that these two methods are more similar than is usually considered . to understand this , let us analyze how the empirical training set method works .
for simplicity ,let s forget about the magnitude dependence and let s suppose that only two colors are enough to estimate the photometric redshifts , that is , given a set of spectroscopic redshifts and colors , the training set method tries to fit a surface to the data .it must be realized that this method makes a very strong assumption , namely that the surface is a _ function _ defined on the color space : each value of is assigned one and only one redshift .visually this means that the surface does not ` bend ' over itself in the redshift direction .although this functionality of the redshift / color relationship can not be taken for granted in the general case ( at faint magnitudes there are numerous examples of galaxies with very similar colors but totally different redshifts ) , it seems to be a good approximation to the real picture at redshifts and bright magnitudes ( ) .a certain scatter around this surface is allowed : galaxies with the same value of may have slightly different redshifts and it seems to be assumed implicitly that this scatter is what limits the accuracy of the method .the sed fitting method is based on the color / redshift relationships generated by each of the library templates , . a galaxy at the position is assigned the redshift corresponding to the closest point of any of the curves in the color space .if these functions are inverted , one ends up with the curves , which , in general , are not functions ; they may present self crossings ( and of course they may also cross each other ) .if we limit ourselves to the region in the color / redshift space in which the training set method defines the surface , for a realistic template set the curves would be embedded in the surface , conforming its ` skeleton ' and defining its main features .the fact that the surface is continuous , whereas the template - defined curves are sparsely distributed , does not have a great practical difference .the gaps may be filled by finely interpolating between the templates ( ) , but this is not strictly necessary : usually the statistical procedure employed to search for the best redshift performs its own interpolation between templates .when the colors of a galaxy do not exactly coincide with one of the templates , or the maximum likelihood method will assign the redshift corresponding to the nearest template in the color space .this is equivalent to the curves having extended ` influence areas ' around them , which conform a sort of step like surface which interpolates across the gaps , and also extends beyond the region limited by them in the color space .therefore , the sed - fitting method comes with a built - in interpolation ( and extrapolation ) procedure . for this reason ,the accuracy of the photometric redshifts does not change dramatically when using a sparse template set as the one of ( ) or a fine grid of template spectra ( ) .the most crucial factor is that the template library , even if it contains few spectra , adequately reflects the main features of real galaxy spectra and therefore the main ` geographical accidents ' of the surface the intrinsic similarity between both photometric redshift methods explains their comparable performance , especially at redshift ( ) . 
when the topology of the color redshift relationship is simple , as apparently happens at low redshift , the training set method will probably work slightly better than the template fitting procedure , if only because it avoids the possible systematics due to mismatches between the predicted template colors and the real ones , and also partially because it includes not only the colors of the galaxies , but also their magnitudes , what helps to break the color / redshift degeneracies ( see below ) .however , it must be kept in mind that although the fits to the spectroscopic redshifts give only a dispersion ( ) , there is not a strong guarantee that the predictive capabilities of the training set method will keep such an accuracy , even within the same magnitude and redshift ranges . as a matter of fact , they do not seem to work spectacularly better than the sed fitting techniques ( ) , even at low and intermediate redshifts .however , the main drawback of the training set method is that , due to its empirical and _ ad hoc _ basis , in principle it can only be reliably extended as far as the spectroscopic redshift limit . because of this , it may represent a cheaper method of obtaining redshifts than the spectrograph , but which can not really go much fainter than it . besides it is difficult to transfer the information obtained with a given set of filters , to another survey which uses a different set .such an extrapolation has to be done with the help of templates , what makes the method lose its empirical purity . and last but not least, it is obvious that as one goes to higher redshifts / fainter magnitudes the topology of the color - redshift distribution displays several nasty degeneracies , even if the near - ir information is included , and it is impossible to fit a single functional form to the color - redshift relationship .although the sed fitting method is not affected by some of these limitations , it also comes with its own set of problems .several authors have analyzed in detail the main sources of errors affecting this method ( , ) .these errors may be divided into two broad classes : fig .. shows vs for the morphological types employed in sec [ test ] and .the color / redshift degeneracies happen when the line corresponding to a single template intersects itself or when two lines cross each other at points corresponding to different redshifts for each of them ( these cases correspond to `` bendings '' in the redshift / color relationship ) .it is obvious that the likelihood of such crossings increases with the extension of the considered redshift range and the number of templates included .it may seem that even considering a very extended redshift range , such confusions could in principle be easily avoided by using enough filters .however , the presence of color / redshift degeneracies is highly increased by random photometric errors , which can be visualized as a blurring or thickening of the relationship ( fig .[ colors]b ) : each point of the curves in fig .[ colors]a is expanded into a square of size , the error in the measured color .the first consequence of this is a ` continuous ' ( ) increase in the rms of the ` small - scale ' errors in the redshift estimation , and , what it is worse , the overlaps in the color - color space become more frequent , with the corresponding rise in the number of ` catastrophic ' redshift errors . 
in addition , multicolor information may often be degenerate , so increasing the number of filters does not break the degeneracies ; for instance , by applying a simple pca analysis to the photometric data of the hdf spectroscopic sample it can be shown that the information contained in the seven filters for the hdf galaxies can be condensed using only three parameters , the coefficients of the principal components of the flux vectors ( see also ) .therefore , if the photometric errors are large , it is not always possible to get totally rid of the degeneracies even increasing the number of filters .this means that the presence of color / redshift degeneracies is unavoidable for faint galaxy samples .the training set method somehow alleviates this problem by introducing an additional parameter in the estimation , the magnitude , which in some cases may break the degeneracy .however , it is obvious that color / redshift degeneracies also affect galaxies with the same magnitude , and the training set method does not even contemplate the possibility of their existence ! the sed fitting method at least allows for the existence of this problem , although it is not extremely efficient in dealing with it , especially with noisy data .its choice of redshift is exclusively based on the goodness of fit between the observed colors and the templates . in cases as the one described above , where two or more redshift / morphological type combinations have practically the same colors , the value of the likelihood would have two or more approximately equally high maxima at different redshifts ( see fig . [ peaks ] ) .depending on the random photometric error , one maximum would prevail over the others , and a small change in the flux could involve a catastrophic change in the estimated redshift ( see fig .[ peaks ] ) .however , in many cases there is additional information , discarded by ml , which could potentially help to solve such conundrums .for instance , it may be known from previous experience that one of the possible redshift / type combinations is much more likely than any other given the galaxy magnitude , angular size , shape , etc . in that case , and since the likelihoods are not informative enough , it seems clear that the more reasonable decision would be to choose the option which is more likely _ a priori _ as the best estimate .plain common sense dictates that one should compare all the possible hypotheses with the data , as ml does , but simultaneously keeping in mind the degrees of plausibility assigned to them by previous experience .there is not a simple way of doing this within ml , at best one may remove or change the redshift of the problematic objects by hand or devise _ ad hoc _solutions for each case .in contrast , bayesian probability theory allows to include this additional information in a rigorous and consistent way , effectively dealing with this kind of errors ( see sec [ bpz ] ) in some cases , the spectra of observed galaxies have no close equivalents in the template library .such galaxies will be assigned the redshift corresponding to the nearest template in the color / redshift space , no matter how distant from the observed color it is in absolute terms .the solution is obvious , one has to include enough templates in the library so that all the possible galaxy types are considered . 
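Regarding the PCA remark made earlier in this passage, the check that a handful of principal components capture almost all of the color information can be sketched as follows; the flux table below is synthetic stand-in data, not the actual HDF photometry, and the number of filters and galaxies are placeholders.

    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic stand-in for an (n_galaxies, n_filters) flux table; the test in
    # the text uses the 7-band photometry of the HDF spectroscopic sample.
    n_gal, n_filt = 100, 7
    latent = rng.normal(size=(n_gal, 3))                 # three hidden degrees of freedom
    mixing = rng.normal(size=(3, n_filt))
    fluxes = latent @ mixing + 0.05 * rng.normal(size=(n_gal, n_filt))

    # Normalize each flux vector (so only the 'colors' matter) and do a plain PCA.
    norm = fluxes / np.linalg.norm(fluxes, axis=1, keepdims=True)
    centered = norm - norm.mean(axis=0)
    _, s, _ = np.linalg.svd(centered, full_matrices=False)
    explained = s ** 2 / np.sum(s ** 2)
    print("cumulative explained variance:", np.cumsum(explained)[:4])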
as was explained above , the sed fitting techniques perform their own ` automatic ' interpolation and extrapolation , so once the main spectral types are included in the template library , the results are not greatly affected if one finely interpolates among the main spectra .the effects of using a correct but incomplete set of spectra are shown in sec [ test ] .both sources of errors described above are exacerbated at high redshifts .high redshift galaxies are usually faint , therefore with large photometric errors , and as the color / redshift space has a very extended range in , the degeneracies are more likely ; in addition the template incompleteness is worsened as there are few or no empirical spectra with which compare the template library .the accuracy of any photometric redshift technique is usually established by contrasting its output with a sample of galaxies with spectroscopic redshifts .it should be kept in mind , though , that the results of this comparison may be misleading , as the available spectroscopic samples are almost ` by definition ' especially well suited for photometricredshift estimation : relatively bright ( and thus with small photometric errors ) and often filling a privileged niche in the color - redshift space , far from degeneracies ( e.g. lyman - break galaxies ) .thus , it is risky to extrapolate the accuracy reached by current methods as estimated from spectroscopic samples ( and this also applies to bpz ) to fainter magnitudes .this is especially true for the training set methods , which deliberately minimize the difference between the spectroscopic and photometric redshifts .photometric redshift techniques based on template fitting look for the best estimate of a galaxy redshift from the comparison of its measured fluxes in filters , , with a set of template spectra which try to represent the different morphological types , and which have fluxes .these methods find their estimate by maximizing the likelihood ( or equivalently minimizing ) over all the possible values of the redshift , the templates and the normalization constant . since the normalization constant is considered a free parameter , the only information relevant to the redshift determination is contained in the ratios among the fluxes , that is , in the galaxy colors . the definition of the likelihood in eq .( [ li ] ) is not convenient for applying bayesian methods as it depends on a normalization parameter , which is not convenient to define useful priors either theoretically or from previous observations .here we prefer to normalize the total fluxes in each band by the flux in a ` base ' filter , e.g. the one corresponding to the band in which the galaxy sample was selected and is considered to be complete .then the ` colors ' , are defined as , where is the base flux .the exact way in which the colors are defined is not relevant , other combinations of filters are equally valid .hereinafter the magnitude ( corresponding to the flux ) will be used instead of in the expressions for the priors . 
andso , assuming that the magnitude errors are gaussianly distributed , the likelihood can be defined as where [c_j - c_{tj}(z)]\ ] ] and the matrix of moments can be expressed as by normalizing by instead of , one reduces the computational burden as it is not necessary to maximize over , which is already the ` maximum likelihood ' estimate for the value of the galaxy flux in that filter .it is obvious that this assumes that the errors in the colors are gaussian , which in general is not the case , even if the flux errors are .fortunately , the practical test performed below ( sec .[ test ] ) shows that there is little change between the results using both likelihood definitions ( see fig .[ comparison]a ) .within the framework of bayesian probability , the problem of photometric redshift estimation can be posed as finding the probability , i.e. , the probability of a galaxy having redshift given the data , _ and _ the prior information . as it was mentioned in the introduction ,bayesian theory states that _ all _ the probabilities are conditional ; they do not represent frequencies , but states of knowledge about hypothesis , and therefore always depend on other data or information ( for a detailed discussion of this and many other interesting issues see jaynes , 1998 ) .the prior information is an ample term which in general should include any knowledge that may be relevant to the hypothesis under consideration and is not already included in the data .note that in bayesian probability the relationship between the prior and posterior information is _ logical _ ; it does not have to be temporal or even causal .for instance , data from a new observation may be included as prior information to estimate the photometric redshifts of an old data set .although some authors recommend that the should not be dropped from the expressions of probability ( as a remainder of the fact that all probabilities are conditional and especially to avoid confusions when two probabilities based on different prior informations are considered as equal ) , here the rule of simplifying the mathematical notation whenever there is no danger of confusion will be followed , and from now will stand for , for etc . as a trivial example of the application of bayes s theorem ,let s consider the case if which there is only one template and the likelihood only depends on the redshift . then ,applying bayes theorem the expression is simply the likelihood : the probability of observing the colors if the galaxy has redshift ( it is assumed for simplicity that only depends on the redshift and morphological type , and not on ) the probability is a normalization constant , and usually there is no need to calculate it . 
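A minimal sketch of the color likelihood just defined is given below. The exact moment matrix in the text is not fully legible here, so the sketch simply propagates independent Gaussian flux errors to the colors at first order, which is one standard way to obtain such a covariance; the fluxes, errors and template fluxes are placeholder numbers.

    import numpy as np

    def color_likelihood(f, sigma_f, f_template, base=0):
        """Gaussian likelihood in 'colors' c_j = f_j / f_base.

        f, sigma_f : observed fluxes and their (assumed independent) errors
        f_template : template fluxes at a trial (z, T), same filter order
        base       : index of the base filter used for the normalization
        """
        f = np.asarray(f, float)
        sigma_f = np.asarray(sigma_f, float)
        ft = np.asarray(f_template, float)
        others = [j for j in range(len(f)) if j != base]
        c = f[others] / f[base]
        ct = ft[others] / ft[base]
        # First-order propagation of flux errors to the colors:
        # Cov(c_i, c_j) = delta_ij sigma_i^2/f0^2 + c_i c_j sigma_0^2/f0^2
        var0 = sigma_f[base] ** 2 / f[base] ** 2
        cov = np.diag(sigma_f[others] ** 2 / f[base] ** 2) + np.outer(c, c) * var0
        r = c - ct
        chi2 = r @ np.linalg.solve(cov, r)
        return np.exp(-0.5 * chi2)

    print(color_likelihood([10.0, 5.0, 2.0], [1.0, 0.5, 0.3], [9.0, 6.0, 2.5]))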
the first factor , the _ prior _ probability , is the redshift distribution for galaxies with magnitude .this function allows to include information as the existence of upper or lower limits on the galaxy redshifts , the presence of a cluster in the field , etc .the effect of the prior on the estimation depends on how informative it is .it is obvious that for a constant prior ( all redshifts equally likely _ a priori _ ) the estimate obtained from eq .( [ 1 ] ) will exactly coincide with the ml result .this is also roughly true if the prior is ` smooth ' enough and does not present significant structure .however , in other cases , values of the redshifts which are considered very improbable from the prior information would be `` discriminated '' ; they must fit the data much better than any other redshift in order to be selected .note that in rigor , one should write the prior in eq .( [ 1 ] ) as where is the ` true ' value of the observed magnitude , is proportional to the number counts as a function of the magnitude and $ ] , i.e , the probability of observing if the true magnitude is . the above convolution accounts for the uncertainty in the value of the magnitude , which has the effect of slightly ` blurring ' and biasing the redshift distribution .to simplify our exposition this effect would not be consider hereinafter , and just and its equivalents will be used .it may seem from eq . [ 1 ] ( and unfortunately it is quite a widespread misconception ) that the only difference between bayesian and ml estimates is the introduction of a prior , in this case , . however , there is more to bayesian probability than just priors .the galaxy under consideration may belong to different morphological types represented by a set of templates .this set is considered to be _ exhaustive _ , i.e including all possible types , and _ exclusive _ : the galaxy can not belong to two types at the same time . in that case , using bayesian marginalization ( eq . [ mar ] ) the probability can be ` expanded ' into a ` basis ' formed by the hypothesis ( the probability of the galaxy redshift being _ and _ the galaxy type being ) .the sum over all these ` atomic ' hypothesis will give the total probability .that is , is the likelihood of the data given and .the prior may be developed using the product rule .for instance where is the galaxy type fraction as a function of magnitude and is the redshift distribution for galaxies of a given spectral type and magnitude .( [ bas ] ) and fig .[ peaks ] clearly illustrate the main differences between the bayesian and ml methods .ml would just pick the highest maximum over all the as the best redshift estimate , without looking at the plausibility of the corresponding values of or . on the contrary ,bayesian probability averages all these likelihood functions after weighting them by their prior probabilities . in this waythe estimation is not affected by spurious likelihood peaks caused by noise as it is shown in fig .[ peaks ] ( see also the results of sec .[ test ] ) .of course that in an ideal situation with perfect , noiseless observations ( and a nondegenerate template space , i.e , only one for each pair ) the results obtained with ml and bayesian inference would be the same . instead of a discrete set of templates ,the comparison library may contain spectra which are a function of continuous parameters .for instance , synthetic spectral templates depend on the metallicity , the dust content , the star formation history , etc . 
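The marginalization over templates described above admits a very compact numerical sketch: the posterior in redshift is the prior-weighted average of the per-template likelihoods. The two "templates", their likelihood peaks and the prior shapes below are toy placeholders; the example only illustrates how the prior can suppress a spurious secondary likelihood peak.

    import numpy as np

    z_grid = np.linspace(0.01, 6.0, 600)

    def posterior_z(likelihoods, prior_T_given_m, prior_z_given_Tm):
        """p(z | C, m0)  propto  sum_T  p(T|m0) p(z|T,m0) p(C|z,T).

        likelihoods      : array (n_templates, n_z) of p(C|z,T) on z_grid
        prior_T_given_m  : array (n_templates,) of p(T|m0)
        prior_z_given_Tm : array (n_templates, n_z) of p(z|T,m0), rows normalized
        """
        post = np.einsum('t,tz,tz->z', prior_T_given_m, prior_z_given_Tm, likelihoods)
        return post / np.trapz(post, z_grid)

    # Illustrative inputs only: two fake templates with likelihood peaks at different z.
    like = np.vstack([np.exp(-0.5 * ((z_grid - 0.8) / 0.05) ** 2),
                      np.exp(-0.5 * ((z_grid - 2.9) / 0.05) ** 2)])
    pT = np.array([0.7, 0.3])
    pz = np.vstack([z_grid ** 2 * np.exp(-(z_grid / 0.6) ** 2),
                    z_grid ** 2 * np.exp(-(z_grid / 1.5) ** 2)])
    pz /= np.trapz(pz, z_grid, axis=1)[:, None]
    post = posterior_z(like, pT, pz)
    print("posterior mode:", z_grid[np.argmax(post)])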
even starting from a set of a few templates, they may be expanded using the principal component analysis ( pca ) technique ( ) . in general ,if the spectra are characterized by possible parameters ( which may be physical characteristics of the models or just pca coefficients ) , the probability of given can be expressed as sometimes , instead of finding a ` point ' estimate for a galaxy redshift , one needs to establish if that redshift belongs within a certain interval .for instance , the problem may be to determine whether the galaxy has , where is a given threshold , or whether its redshift falls within a given , e.g. in the selection of cluster members or background galaxies for lensing studies .as an example , let s consider the classification of galaxies into the background - foreground classes with respect to a certain redshift threshold .one must choose between the hypothesis and its opposite , .the corresponding probabilities may be written as and the ( ` bookmaker ' ) odds of hypothesis are defined as the probability of being true over the probability of being false ( jaynes 1998 ) when , there is not enough information to choose between both hypothesis .a galaxy is considered to have if , where is a certain decision threshold .there are no fixed rules to choose the value of , and the most appropriate value depends on the task at hand ; for instance , to be really sure that no foreground galaxy has sneaked into the background sample , would have to be high , but if the main goal is selecting all the background galaxies and one does not mind including some foreground ones , then would be lower , etc .basically this is a problem concerning decision theory . in the same way, the cluster galaxies can be selected by choosing a redshift threshold which defines whether a galaxy belongs to the cluster .the corresponding hypothesis would be . and similarly , the odds of are defined as in those cases where the prior information is vague and does not allow to choose a definite expression prior probability , bayesian inference offers the possibility of `` calibrating '' the prior , if needed using the very data sample under consideration . let s suppose that the distribution is parametrized using continuous parameters .they may be the coefficients of a polynomial fit , a wavelet expansion , etc . in that case , including in eq .( [ bas ] ) , the probability can be written as where is the prior probability of , and is the prior probability of and as a function of the parameters .the latter have not been included in the likelihood expression since is totally determined once the values of and are known .now let s suppose that the galaxy belongs to a sample containing galaxies .each galaxy has a ` base ' magnitude and colors .the sets and contain respectively the colors and magnitudes of all the galaxies in the sample .then , the probability of the galaxy having redshift given the full sample data and can be written as the sets and , are identical to and except for the exclusion of the data and . applying bayes theorem , the product rule and simplifying where as before it has been considered that the likelihood of only depends on and that the probability of and only depend on and through . 
the expression to which we arrived is very similar to eq .( [ 20 ] ) only that now the shape of the prior is estimated from the data .this means that even if one starts with a very sketchy idea about the shape of the prior , the very galaxy sample under study can be used to determine the value of the parameters , and thus to provide a more accurate estimate of the individual galaxy characteristics . assuming that the data ( as well as ) are independent among themselves where if the number of galaxies in our sample is large enough , it can be reasonably assumed that the prior probability will not change appreciably with the inclusion of the data belonging to a single galaxy . in that case , a time - saving approximation is to use as a prior the probability , calculated using the whole data set , instead of finding for each galaxy . in addition, it should be noted that represents the bayesian estimate of the parameters which define the shape of the redshift distribution ( see fig .[ nz ] ) . in some cases spectroscopical redshifts are available for a fraction of the galaxy sample .it is straightforward to include them in the prior calibration procedure described above , using a delta function likelihood weighted by the probability of the galaxy belonging to a morphological type , as it is done to determine the priors in sec [ test ] .this gives the spectroscopical subsample a ( deserved ) larger weight in the determination of the redshift and morphological priors in comparison with the rest of the galaxies , at least within a certain color and magnitude region , but , unlike what happens with the training set method , the information contained in the rest of the sample is not thrown away .if nevertheless one wants to follow the training set approach and use only the spectroscopic sample , it is easy to develop a bayesian variant of this method .as before , the goal is to find an expression of the sort , which would give us the redshift probability for a galaxy given its colors and magnitude .if the color / magnitude / redshift multidimensional surface were infinitely thin , the probability would just be , where is a delta - function .but in the real world there is always some scatter around the surface defined by ( even without taking into account the color / redshift degeneracies ) , and it is therefore more appropriate to describe as e.g. a gaussian of width centered on each point of the surface .let s assume that all the parameters which define the shape of this relationship , together with are included in the set . using the prior calibration method introduced above, the probability distribution for these parameters can be determined from the training set . the expression for the redshift probability of a galaxy with colors and would then be the redshift probability obtained from eq .( [ 25 ] ) is compatible with the one obtained in eq .( [ bas ] ) using the sed fitting procedure .therefore it is possible to combine them in a same expression . 
As an approximation, let us suppose that both of them are given equal weights, then . In fact, due to the redundancy described above between the SED-fitting method and the training set method (Sec. [sed]), it would be more appropriate to combine both probabilities using weights which take these redundancies into account in a consistent way, roughly using eq. ([25]) at brighter magnitudes, where the galaxies are well studied spectroscopically, and leaving eq. ([bas]) for fainter magnitudes. The exploration of this combined training set/SED-fitting approach will be left for a future paper, and in the practical tests performed below the procedure followed uses the SED-fitting likelihood. The Hubble Deep Field (HDF; ) has become _the_ benchmark in the development of photometric redshift techniques. In this section BPZ will be applied to the HDF and its performance contrasted with the results obtained with the standard 'frequentist' (in the Bayesian terminology) method, the procedure usually applied to the HDF ( , , , etc.). The photometry used for the HDF is that of , which, in addition to magnitudes in the four HDF filters, includes JHK magnitudes from the observations of . is chosen as the base magnitude. The colors are defined as described in Sec. [ml]. The template library was selected after several tests with the HDF subsample which has spectroscopic redshifts (108 galaxies), spanning the range . The set of spectra which worked best is similar to that used by . It contains four templates (E/S0, Sbc, Scd, Irr), that is, the same spectral types used by , plus the spectra of 2 starbursting galaxies from ( used two very blue SEDs from GISSEL). All the spectra were extended to the UV using a linear extrapolation and a cutoff at , and to the near-IR using GISSEL synthetic templates. The spectra are corrected for intergalactic absorption following . It could seem in principle that a synthetic template set which takes (at least tentatively) into account galaxy evolution is more appropriate than a 'frozen' template library obtained at low redshift and then extrapolated to very high redshifts. However, as has convincingly shown, the extended CWW set offers much better results than the GISSEL synthetic models. I have also tried to use the RVF set of spectra, and the agreement with the spectroscopic redshifts is considerably worse than when using the empirical template set. And if the synthetic models do not work well within the magnitude range corresponding to the HDF spectroscopic sample, which is relatively bright, there is little reason to suppose that their performance will improve at fainter magnitudes. However, even working with empirical templates, it is important to be sure that the template library is complete enough. Fig. [comparison] illustrates the effects of template incompleteness in the redshift estimation. The left plot displays the results obtained using ML (Sec. [ml]) redshift estimation using only the four CWW templates (this plot is very similar to the diagram shown in , which confirms the validity of the expression for the likelihood introduced in Sec. [ml]).
on the right , the results obtained also using ml ( no bpz yet ) but including two more templates , sb2 and sb3 from .it can be seen that the new templates almost do not affect the low redshift range , but the changes at are quite dramatic , the ` sagging ' of the cww only diagram disappears and the general scatter of the diagram decreases by .this shows how important it is to include enough galaxy types in the template library .no matter how sophisticated the statistical treatment is , it will do little to improve the results obtained with a deficient template set .the first step in the application of bpz is choosing the shape of the priors . due to the depth of the hdfthere is little previous information about the redshift priors , so this is a good example in which the prior calibration procedure described in sec[bpz ] has to be applied .it will be assumed that the early types ( e / s0 ) and spirals ( sbc , scd ) have a spectral type prior ( eq . [ pri ] ) of the form with for early types and for spirals .the irregulars ( the remaining three templates ; ) complete the galaxy mix .the fraction of early types at is assumed to be and that of spirals .the parameters and are left as free .based on the result from redshift surveys the following shape for the redshift prior has been chosen : ^{\alpha_t } \ } \label{par1}\ ] ] where and , and are considered free parameters . in total , 11 parameters have to be determined using the calibration procedure . for those objects with spectroscopic redshifts ,a ` delta - function ' located at the spectroscopic redshift of the galaxy has been used instead of the likelihood .table 1 shows the values of the ` best ' values of the parameters in eq .( [ par1],[par2 ] ) found by maximizing the probability in eq .( [ 25 ] ) using the subroutine _ amoeba _ ( ) .the errors roughly indicate the parameter range which encloses of the probability .the values of the parameters in eq .( [ par ] ) are and .the prior in redshift can obviously be found by summing over the ` nuisance ' parameter ( jaynes 1998 ) , in this case : fig .[ priors ] plots this prior for different magnitudes .with the priors thus found , one can proceed with the redshift estimation using eq .( [ 20 ] ) . here the multiplication by the probability distribution and the integration over will be skipped . as it can be seen from table 1 ,the uncertainties in the parameters are rather small and it is obvious that the results would not change appreciably , so the additional computational effort of performing a 11-dimensional integral is not justified .there are several options to convert the continuous probability to a point estimate of the ` best ' redshift . herethe ` mode ' of the final probability is chosen , although taking the ` median ' value of , corresponding to of the cumulative probability , or even the ` average ' is also valid .it was mentioned in sec [ bpz ] that bayesian probability offers a way to characterize the accuracy of the redshift estimation using the odds or a similar indicator , for instance by analogy with the gaussian distribution a ` ' error may be defined using a interval with contains of the integral of around , etc .here it has been chosen as an indicator of the redshift reliability the quantity , the probability of , where is the galaxy redshift . in this way ,when the value of is low , we are warned that the redshift prediction is unreliable . 
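The functional forms of the priors are only partly legible after extraction, so the following sketch should be read with caution: it uses a magnitude-dependent exponential fraction for the type prior and a z^alpha exp(-(z/z_m(m0))^alpha) shape for the redshift prior, which is what the surviving fragments suggest, together with the redshift-reliability indicator described at the end of the paragraph above. All numerical values are illustrative placeholders, not the calibrated values of Table 1.

    import numpy as np

    z = np.linspace(0.01, 6.0, 600)
    M_REF = 20.0   # reference magnitude appearing in the parametrization above

    def prior_type(m0, f_t, k_t):
        # p(T|m0): an exponentially decreasing type fraction with magnitude (assumed form)
        return f_t * np.exp(-k_t * (m0 - M_REF))

    def prior_z_given_type(m0, alpha_t, z0_t, km_t):
        # p(z|T,m0) ~ z**alpha * exp(-(z/z_m(m0))**alpha), z_m(m0) = z0 + km*(m0-20) (assumed)
        zm = z0_t + km_t * (m0 - M_REF)
        p = z ** alpha_t * np.exp(-(z / zm) ** alpha_t)
        return p / np.trapz(p, z)

    def reliability(posterior, dz0=0.2):
        # probability mass within |z - z_mode| < dz0*(1 + z_mode); the window width
        # growing with redshift is an assumption, cf. the indicator described above
        z_mode = z[int(np.argmax(posterior))]
        window = np.abs(z - z_mode) < dz0 * (1.0 + z_mode)
        return np.trapz(np.where(window, posterior, 0.0), z)

    # illustrative numbers only
    pz = prior_z_given_type(24.0, alpha_t=2.5, z0_t=0.4, km_t=0.1)
    print(prior_type(24.0, f_t=0.35, k_t=0.45), reliability(pz))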
as it will be shown below, is extremely efficient in picking out galaxies with ` catastrophic errors ' in their redshifts .the photometric redshifts resulting from applying bpz to the spectroscopic sample are plotted in fig .galaxies with a probability ( there are three of them ) have been discarded , where is chosen to be , to take into account that the uncertainty grows with the redshift of the galaxies .it is evident from fig .[ hdf ] that the agreement is very good at all redshifts .the residuals have .if is divided by a factor , as suggested in , the rms of the quantity is only .there are no appreciable systematic effects in the residuals .one of the three objects discarded because of their having is the only clear outlier in our ml estimation , with and ( see fig .[ comparison]b ) , evidence of the usefulness of to generate a reliable sample . from the comparison of fig .[ comparison]b with fig .[ hdf ] , it may seem that , apart from the exclusion of the outlier , there is not much profit in applying bpz with respect to ml .this is not surprising in the particular case of the hdf spectroscopic sample , which is formed mostly by galaxies either very bright or occupying privileged regions in the color space .the corresponding likelihood peaks are thus rather sharp , and little affected by smooth prior probabilities .to illustrate the effectiveness of bpz under worse than ideal conditions , the photometric redshifts for the spectroscopic sample are estimated again using ml and bpz but restricting the color information to the ubvi hst filters .the results are plotted in fig .the ml redshift diagram displays 5 ` catastrophic errors ' ( ) .note that these are the same kind of errors pointed out by in the first hdf photometric redshifts estimations .bpz with a threshold ( which eliminates a total of 7 galaxies ) totally eliminates those outliers .this is a clear example of the capabilities of bpz ( combined with an adequate template set ) to obtain reliable photometric redshift estimates .note that even using near ir colors , the ml estimates shown in fig .[ comparison ] presented outliers .this shows that applying bpz to uv only data may yield results more reliable than those obtained with ml including near - ir information ! although of course no more accurate ; the scatter of fig .[ comparison]b , once the outliers are removed is , whereas fig .[ 4f]b has a scatter of , which incidentally is approximately the scatter of fig .[ comparison]a . another obvious way of testing the efficiency of bpz is with a simulated sample .the latter can be generated using the procedure described in .each galaxy in the hdf is assigned a redshift and type using ml ( this is done deliberately to avoid biasing the test towards bpz ) and then a mock catalog is created containing the colors corresponding to the best fitting redshifts and templates . 
To represent the photometric errors present in observations, random photometric noise of the same amplitude as the photometric error is added to each object. Fig. [90]b shows the ML estimated redshifts for the mock catalog ( ) against the 'true' redshifts; although in general the agreement is not bad (as could be expected), there are a large number of outliers ( ), whose positions illustrate the main source of color/redshift degeneracies: high galaxies which are erroneously assigned redshifts and vice versa. This shortcoming of the ML method is analyzed in detail in . In contrast, Fig. [90]a shows the results of applying BPZ with a threshold of . This eliminates of the initial sample (almost half of which have catastrophically wrong redshifts), but the number of outliers is reduced to a remarkable . Is it possible to define some 'reliability estimator', similar to , within the ML framework? The obvious choice seems to be . Fig. [odds]b plots the value of vs. the ML redshift error for the mock catalog. It is clear that is almost useless to pick out the outliers. The dashed line marks the upper quartile in ; most of the outliers are below it, at smaller values. In stark contrast, Fig. [odds]a plots the errors in the BPZ redshifts _vs._ the values of . The lower quartile, under the dashed line, contains practically all the outliers. By setting an appropriate threshold one can virtually eliminate the 'catastrophic errors'. Fig. [oddsmz] shows the numbers of galaxies above a given threshold in the HDF as a function of magnitude and redshift. It shows how risky it is to estimate photometric redshifts using ML for faint, objects; the fraction of objects with possible catastrophic errors grows steadily with magnitude. There is one caveat regarding the use of or similar quantities as a reliability estimator. They provide a safety check against the color/redshift degeneracies, since basically they tell us if there are other probability peaks comparable to the highest one, but they cannot protect us from template incompleteness. If the template library does not contain any spectra similar to the one corresponding to the galaxy, there is no indicator able to warn us about the unreliability of the prediction. Because of this, no matter how sophisticated the statistical methods become, it is fundamental to have a good template set, which contains, even if only approximately, all the possible galaxy types present in the sample. Finally, Fig. [nz] shows the redshift distributions for the HDF galaxies with . No objects have been removed on the basis of , so the values of the histogram bins should be taken with care. The overplotted continuous curves are the distributions used as priors, which simultaneously are the Bayesian fits to the final redshift distributions. The results obtained from the HDF will be analyzed in more detail, using a revised photometry, in a forthcoming paper. As we argue above, the use of BPZ for photometric redshift estimation offers obvious advantages over standard ML techniques. However, quite often obtaining photometric redshifts is not an end in itself, but an intermediate step towards measuring other quantities, like the evolution of the star formation rate ( ), the galaxy-galaxy correlation function ( , ), galaxy or cluster mass distributions ( ), etc. The usual procedure consists in obtaining photometric redshifts for all the galaxies in the sample, using ML or the training set method, and then working with them as if these estimates were accurate,
reliable spectroscopic redshifts .the results of the previous sections alert us to the dangers inherent in that approach , as it hardly takes into account the uncertainties involved in photometric redshift estimation .in contrast , within the bayesian framework there is no need to work with the discrete , point like ` best ' redshift estimates .the whole redshift probability distribution can be taken into account , so that the uncertainties in the redshift estimation are accounted for in the final result . to illustrate this point ,let s outline how bpz can be applied to several problems which use photometric redshift estimation .if , instead of working with a discrete set of templates , one uses a spectral library whose templates depend of parameters as the metallicity , the star - formation history , initial mass function , etc . , represented by in sec [ bpz ] , it is obvious from equation ( [ cont ] ) that the same technique used to estimate the redshift can be applied to estimate any of the parameters which characterize the galaxy spectrum .for instance , let s suppose that one want to estimate the parameter . then defining , we have that is , the likelihoods and the weights are the same ones used for the redshift estimation ( eq . [ cont ] ) , only that now the integration is performed over the variables and instead of . in this way ,depending on the template library which is being used , one can estimate galaxy characteristics as the metallicity , dust content , etc .an important advantage of this method over ml is that the estimates of the parameter automatically include the uncertainty of the redshift estimation , which is reflected in the final value of . besides , by integrating the probability over all the parameters , one precisely includes the uncertainties caused by possible parameter degeneracies in the final result for .it should also be noted that as many of the results obtained in this paper , this method can be almost straightforwardly applied to spectroscopical observations ; one has only to modify the likelihood expression which compares the observed fluxes with the spectral template .the rest of the formalism remains practically identical .one frequent application of photometric redshift techniques is the study of galaxy cluster fields .the goals may be the selection of cluster galaxies to characterize their properties , especially at high redshifts , or the identification of distant , background galaxies to be used in gravitational lensing analysis ( ) .bpz offers an effective way of dealing with such problems . to simplify the problem , the effects of gravitational lensing on the background galaxies ( magnification , number counts depletion , etc. ) will be neglected ( see however the next subsection ) .let s suppose that we already have an estimate of the projected surface density of cluster galaxies ( which can roughly be obtained without any photometric redshifts , just from the number counts surface distribution ) , where is the position with respect to the cluster center .the surface density of ` field ' , non cluster galaxies is represented by . for each galaxy in the samplewe know its magnitude and colors and also its position , which is now a relevant parameter in the redshift estimation .following eq .( [ bas ] ) we can write a dependence on the magnitude ( e.g. 
for the early types cluster sequence ) could easily be included in the likelihood if needed .the prior can be divided into the sum of two different terms : where represents the prior probability of the galaxy belonging to the cluster , whereas corresponds to the prior probability of the galaxy belonging to the general field population .the expression for can be written as the probability corresponds to the expected galaxy mix fraction in the cluster , which in general will depend on the magnitude and will be different from that of field galaxies .the function is the redshift profile of the cluster ; a good approximation could be a gaussian with a width corresponding to the cluster velocity dispersion .the second prior takes the form which uses the priors for the general field galaxy population ( sec [ test ] ) . finally , the hypothesis that the galaxy belongs to the cluster or not can be decided about with the help of a properly defined , or with the odds introduced in sec [ bpz ] .we have assumed above that the cluster redshift and its galaxy surface density distribution are known . however , in some cases , there is a reasonable suspicion about the presence of a cluster at a certain redshift , but not total certainty , and our goal is to confirm its existence .an example using ml photometric redshift estimation is shown in .an extreme case with minimal prior information occur in optical cluster surveys , galaxy catalogs covering large areas of the sky are searched for clusters . in those cases there are no previous guesses about the position or redshift of the cluster , and a ` blind ' , automatized search algorithm has to be used ( ) .the prior expression used in the previous subsection offers a way to build such a searching method . instead of assuming that the cluster redshift and its surface distribution are known , the redshift can be left as a free parameter and the expression characterizing the cluster galaxy surface density distribution can be parametrized using the quantities .for simplicity , let s suppose that where is the cluster ` amplitude ' , is the number counts distribution expected for the cluster ( which in general will depend on the redshift ) and represents the cluster profile , centered on and with a scale width of .this expression , except for the dependence on the redshift is very similar to that used by to define their likelihood .then for a multicolor galaxy sample with data , and , the probability can be developed analogously to how it was done in sec .[ bpz ] . the probability assigned to the existence of a cluster at a certain redshift and position may be simply defined as .it seems that the most obvious application of bpz to cluster lensing analysis is the selection of background galaxies with the technique described in the previous subsection in order to apply the standard cluster mass reconstruction techniques ( , , , ) .however , using bayesian probability it is possible to develop an unified approach which simultaneously considers the lensing and photometric information in an optimal way . in a simplified fashion, the problem of determining the mass distribution of a galaxy cluster from observables can be stated as finding the probability where represent the parameters which describe the cluster mass distribution ; their number may range from a few , if the cluster is described with a simplified analytical model or as many as wanted if the mass distribution is characterized by e.g. fourier coefficients ( ) . 
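A hedged sketch of the cluster/field decomposition of the prior described above is given below: the prior in redshift is a mixture of a narrow cluster term and a field term, weighted by the projected surface densities at the galaxy position. The surface-density profile, the cluster width and all numbers are placeholders, and the magnitude dependence of the galaxy mix is omitted for brevity.

    import numpy as np

    def cluster_field_prior(z, R, z_cl, sigma_cl,
                            sigma_cluster_of_R, sigma_field, prior_z_field):
        """p(z | R): mixture of a cluster term (Gaussian around z_cl, width set by
        the velocity dispersion) and a field term, weighted by the projected
        surface densities at radius R. All ingredients are placeholders for the
        quantities named in the text."""
        n_cl = sigma_cluster_of_R(R)
        w_cl = n_cl / (n_cl + sigma_field)
        cluster_term = np.exp(-0.5 * ((z - z_cl) / sigma_cl) ** 2)
        cluster_term /= np.trapz(cluster_term, z)
        p = w_cl * cluster_term + (1.0 - w_cl) * prior_z_field
        return p / np.trapz(p, z)

    z = np.linspace(0.01, 3.0, 600)
    field = z ** 2 * np.exp(-(z / 0.8) ** 2)
    field /= np.trapz(field, z)
    prior = cluster_field_prior(z, R=0.2, z_cl=0.4, sigma_cl=0.03,
                                sigma_cluster_of_R=lambda R: 50.0 * np.exp(-R / 0.5),
                                sigma_field=20.0, prior_z_field=field)
    print("prior mass within |z - z_cl| < 0.05:",
          np.trapz(np.where(np.abs(z - 0.4) < 0.05, prior, 0.0), z))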
represents the cosmological parameters , which sensitively affect the lensing strength .the parameter set represents the properties of the background galaxy population which affect the lensing , as its redshift distribution , number counts slope , etc .and it is assumed to be known previously .the data correspond to the galaxy ellipticities , to their angular positions . as above , correspond to their colors and magnitudes . for simplicity , it will be assumed that the cluster and foreground galaxies have been already removed and we are dealing only with the background galaxy population .analogous to eq .( [ 23 ] ) , we can develop eq .( [ 38 ] ) as where the last factor may be written as the meaning of the three factors on the right side of the equation is the following : represents the likelihood of measuring a certain ellipticity in a galaxy given its redshift , position , etc .the second factor corresponds to the so called `` broadhurst effect '' , the number counts depletion of background galaxies caused by the cluster magnification ( broadhurst 1995 , bentez & broadhurst 1998 ) .the last factor , is the redshift probability , but including a correction which takes into account that the observed magnitude of a galaxy is affected by the magnification .it is clear that the simplified method outlined here is not the only way of applying bayesian probability to cluster mass reconstruction .my purpose here is to show that this can be done considering the photometric redshifts in a integrated way with the rest of the information .as it has been shown in section ( [ bpz ] ) , bpz can be used to estimate the parameters characterizing the joint magnitude redshift morphological type galaxy distribution . for small fields, this distribution may be dominated by local perturbations , and the redshift distribution may be ` spiky ' , as it is observed in redshift surveys of small fields .however , if one were to average over a large number of fields , the resulting distribution would contain important information about galaxy evolution and the fundamental cosmological parameters . included galaxy counts as one of the four fundamental tests of observational cosmology , although noting that the number - redshift distribution is in fact more sensitive to the value of .as also notes , the color distribution of the galaxies in a survey hold also much more information about the process of galaxy evolution that the raw number counts . however , quite often the only method of analyzing multicolor observations is just comparing them with the number counts model predictions , or at most , with color distributions .there are several attempts at using photometric redshifts to study global galaxy evolution parameters ( e.g. , ) , but so far there is not an integrated statistical method which would simultaneously considers all the information , magnitudes and colors , contained in the data , and set it against the model predictions .it is then clear that eq .( [ 23 ] ) can be used to estimate these parameters from large enough samples of multicolor data . 
if it is assumed that all the galaxies belong to a few morphological types , the joint redshift - magnitude-`type ' distribution can be written as where is the comoving volume as a function of redshift , which depends on the cosmological parameters and , and is the schecter luminosity function for each morphological type , where the absolute magnitude has been substituted by the apparent magnitude ( a transformation which depends on the redshifts , cosmological parameters and morphological type ) .schecter s function also depend on , and , and on the evolutionary parameters , such as the merging rate , the luminosity evolution , etc .therefore , the prior probability of , and depends on the parameters , and . as an example , let s suppose that one wants to estimate , independently of the rest of the parameters , given the data .then the prior can be derived from in eq .( [ 5 ] ) .the prior allows to include the uncertainties derived from previous observations or theory in the values of these parameters , even when they are strongly correlated among themselves , as in the case of the schecter function parameters .the narrower the prior is , the less ` diluted ' the probability of and the more accurate the estimation .despite the remarkable progress of faint galaxy spectroscopical surveys , photometric redshift techniques will become increasingly important in the future .the most frequent approaches , the template fitting and empirical training set methods , present several problems related which hinder their practical application .here it is shown that by consistently applying bayesian probability to photometric redshift estimation , most of those problems are efficiently solved . the use of prior probabilities and bayesian marginalization allows the inclusion of valuable information as the shape of the redshift distributions or the galaxy type fractions , which is usually ignored by other methods .it is possible to characterize the accuracy of the redshift estimation in a way with no equivalents in other statistical approaches ; this property allows to select galaxy samples for which the redshift estimation is extremely reliable . in those caseswhen the _ a priori _ information is insufficient , it is shown how to ` calibrate ' the prior distributions , using even the data under consideration . in this way it is possible to determine the properties of individual galaxies more accurately and simultaneouslyestimate their statistical properties in an optimal fashion .the photometric redshifts obtained for the hubble deep field using optical and near - ir photometry show an excellent agreement with the spectroscopic redshifts published up to date in the interval , yielding a rms error and no outliers .note that these results , obtained with an empirical set of templates , have not been reached by minimizing the difference between spectroscopic and photometric redshifts ( as for empirical training set techniques , which may lead to an overestimation of their precision ) and thus offer a reasonable estimate of the predictive capabilities of bpz .the reliability of the method is also tested by estimating redshifts in the hdf but restricting the color information to the ubvi filters ; the results are shown to be more reliable than those obtained with the existing techniques even including the near - ir information .the bayesian formalism developed here can be generalized to deal with a wide range of problems which make use of photometric redshifts .several applications are outlined , e.g. 
the estimation of individual galaxy characteristics as the metallicity , dust content , etc . , or the study of galaxy evolution and the cosmological parameters from large multicolor surveys .finally , using bayesian probability it is possible to develop an integrated statistical method for cluster mass reconstruction which simultaneously considers the information provided by gravitational lensing and photometric redshift estimation .i would like to thank tom broadhurst and rychard bouwens for careful reading the manuscript and making valuable comments .thanks also to alberto fernndez - soto and collaborators for kindly providing me with the hdf photometry and filter transmissions , and to brenda frye for help with the intergalactic absorption correction .the author acknowledges a basque government postdoctoral fellowship .jaynes , `` probability theory : the logic of science '' , to be published in cambridge university press . a preliminary version can be obtained from thomas loredo s web page at _ http://astrosun.tn.cornell.edu / staff / loredo / bayes/_. | photometric redshift estimation is becoming an increasingly important technique , although the currently existing methods present several shortcomings which hinder their application . here it is shown that most of those drawbacks are efficiently eliminated when bayesian probability is consistently applied to this problem . the use of prior probabilities and bayesian marginalization allows the inclusion of valuable information , e.g. the redshift distributions or the galaxy type mix , which is often ignored by other methods . it is possible to quantify the accuracy of the redshift estimation in a way with no equivalents in other statistical approaches ; this property permits the selection of galaxy samples for which the redshift estimation is extremely reliable . in those cases when the _ a priori _ information is insufficient , it is shown how to ` calibrate ' the prior distributions , using even the data under consideration . there is an excellent agreement between the hdf spectroscopic redshifts and the predictions of the method , with a rms error up to and no systematic biases nor outliers . note that these results have not been reached by minimizing the difference between spectroscopic and photometric redshifts ( as is the case with empirical training set techniques ) , which may lead to an overestimation of the accuracy . the reliability of the method is further tested by restricting the color information to the ubvi filters . the results thus obtained are shown to be more reliable than those of standard techniques even when the latter include near - ir colors . the bayesian formalism developed here can be generalized to deal with a wide range of problems which make use of photometric redshifts . several applications are outlined , e.g. the estimation of individual galaxy characteristics as the metallicity , dust content , etc . , or the study of galaxy evolution and the cosmological parameters from large multicolor surveys . finally , using bayesian probability it is possible to develop an integrated statistical method for cluster mass reconstruction which simultaneously considers the information provided by gravitational lensing and photometric redshift estimation . |
consider a random vector ( rv ) whose distribution function ( df ) is a copula , i.e. , each follows the uniform distribution on . the copula is said to be in the max - domain of attraction of an extreme value df ( evd ) on , denoted by , if the characteristic property of the df is its _ max - stability _ , precisely , see , e.g. , ( * ? ? ?* section 4 ) .let be independent copies of .equation is equivalent with where and .all operations on vectors such as are meant componentwise . from ( * ? ? ?* corollary 2.2 ) we know that if and only if ( iff ) there exists a norm on such that the copula satisfies the expansion uniformly for ^d ] is in a -neighborhood of a generalized pareto copula process ( gpcp ) . while section [ sec : test_gpcp ] deals with copula processes or general processes as a whole , respectively , section [ sec : test_processes_grid ] considers the case that the underlying processes are observed at a finite grid of points only . in order to demonstrate the performance of our test , section [ sec : simulations ] states the results of a simulation study .since the results from the previous sections highly depend on a proper choice of some threshold , we also present a graphical tool that makes the decision , whether or not to reject the hypothesis , more comfortable .the following result provides a one parametric family of bivariate rv , which are easy to simulate .each member of this family has the property that its corresponding copula does not satisfy the extreme value condition .however , as the parameter tends to zero , the copulas of interest come arbitrarily close to a gpc , which , in general , is in the domain of attraction of an evd .[ lem : copula_not_in_domain_of_attraction ] let the rv have df where ] .we show that does not exist for \setminus{{\mathopen{}\mathclose\bgroup\originalleft}\{0{\aftergroup\egroup\originalright}\}} ] ; consider , e.g. , the sequences and as . on the other hand , elementary computations show for ^ 2\setminus{{\mathopen{}\mathclose\bgroup\originalleft}\{{\bm{0}}{\aftergroup\egroup\originalright}\}} ] .however , if , then we obtain and respectively . note that both terms have no limit for ; consider , e.g. , the sequences and as in the proof of lemma [ lem : copula_not_in_domain_of_attraction ] .the significance of the -neighborhood of a gpc can be seen as follows .denote by ^d:\,{{\mathopen{}\mathclose\bgroup\originalleft}\vert{\bm{t}}{\aftergroup\egroup\originalright}\vert}_1=\sum_{i=1}^d t_i=1{\aftergroup\egroup\originalright}\}} ] and the copula is obviously determined by the family of univariate _spectral df _ .this family is the _ spectral decomposition _ of * section 5.4 ) .a copula is , consequently , in with corresponding -norm iff its spectral decomposition satisfies as .the copula is in the -neighborhood of the gpc with -norm iff uniformly for as . in this casewe know from ( * ? ? 
?* theorem 5.5.5 ) that ^d}{{\mathopen{}\mathclose\bgroup\originalleft}\vertc^n{\mathopen{}\mathclose\bgroup\originalleft}({\bm{1}}+ \frac 1n{\bm{x}}{\aftergroup\egroup\originalright } ) - \exp{\mathopen{}\mathclose\bgroup\originalleft}(-{{\mathopen{}\mathclose\bgroup\originalleft}\vert{\bm{x}}{\aftergroup\egroup\originalright}\vert}_d{\aftergroup\egroup\originalright}){\aftergroup\egroup\originalright}\vert } = o{\mathopen{}\mathclose\bgroup\originalleft}(n^{-\delta}{\aftergroup\egroup\originalright}).\ ] ] under additional differentiability conditions on with respect to , also the reverse implication holds ; cf .* theorem 5.5.5 ) .thus the -neighborhood of a gpc , roughly , collects those copula with a polynomial rate of convergence of maxima .let }{\mathopen{}\mathclose\bgroup\originalleft}(\sum_{k=1}^d\varphi(u_i){\aftergroup\egroup\originalright}),\qquad { \bm{u}}=(u_1,\dots , u_d)^{{\mathpalette{\mspace{-1mu}\raisebox{0.25ex}{}}}\in[0,1]^d,\ ] ] be an archimedean copula with generator function \to[0,\infty] ] , .the function is in particular strictly decreasing , continuous and satisfies ; for a complete characterization of the function we refer to .suppose that is differentiable on ] , where is an arbitrary function on the set of directions in ^d ] , , , and obtain analogously } \bigl({\bm{u}}^{(i)}\bigr)\end{aligned}\ ] ] with probability one .note that where and denote the ordered values of for each .thus and , , almost surely .since are iid with df , the distribution of does not depend on the marginal df but only on the copula of the continuous df .the following auxiliary result assures that we may actually consider instead of .[ lem : crucial_approximation_of_n_j ] suppose that , as .let satisfy , as .then we obtain for we have almost surely } \bigl({\bm{u}}^{(i)}\bigr ) - 1_{{\mathopen{}\mathclose\bgroup\originalleft}[{\bm{0 } } , { \mathopen{}\mathclose\bgroup\originalleft}(1-\frac{c_n}{j}{\aftergroup\egroup\originalright}){\bm{1}}{\aftergroup\egroup\originalright}]}\bigl({\bm{u}}^{(i)}\bigr ) { \aftergroup\egroup\originalright } ) \\ & = \sum_{i\in m(n ) } 1_{\times_{r=1}^d{\mathopen{}\mathclose\bgroup\originalleft}[\vphantom{\frac{c_n}{j}}\smash{0,u_{\langle n(1-\frac{c_n}{j})\rangle : n , r}}{\aftergroup\egroup\originalright } ] } \bigl({\bm{u}}^{(i)}\bigr ) { \mathopen{}\mathclose\bgroup\originalleft}(1 - 1_{{\mathopen{}\mathclose\bgroup\originalleft}[{\bm{0 } } , { \mathopen{}\mathclose\bgroup\originalleft}(1-\frac{c_n}{j}{\aftergroup\egroup\originalright}){\bm{1}}{\aftergroup\egroup\originalright}]}\bigl({\bm{u}}^{(i)}\bigr ) { \aftergroup\egroup\originalright } ) \\ & \mathrel{\hphantom{= } } { } - \sum_{i\in m(n ) } 1_{{\mathopen{}\mathclose\bgroup\originalleft}[{\bm{0 } } , { \mathopen{}\mathclose\bgroup\originalleft}(1-\frac{c_n}{j}{\aftergroup\egroup\originalright}){\bm{1}}{\aftergroup\egroup\originalright}]}\bigl({\bm{u}}^{(i)}\bigr ) { \mathopen{}\mathclose\bgroup\originalleft}(1 - 1_{\times_{r=1}^d[0,u_{\langle n(1-\frac{c_n}{j})\rangle : n , r } ] } \bigl({\bm{u}}^{(i)}\bigr){\aftergroup\egroup\originalright } ) \\ & = : r_n - t_n.\end{aligned}\ ] ] in what follows we show }\bigl(u_r^{(i)}\bigr){\aftergroup\egroup\originalright } ) = o(1)\ ] ] and thus ; note that }\bigl(u_r^{(i)}\bigr).\end{aligned}\ ] ] put with .then we have with where the first term is of order ; recall that is uniformly distributed on and as .furthermore we deduce from ( * ? ? 
?* lemma 3.1.1 ) the exponential bound where and as well as as .this implies repeating the above arguments shows that as well , which completes the proof of lemma [ lem : crucial_approximation_of_n_j ] .the previous result suggests a modification of our test statistic in which does not depend on the margins but only on the copula of the underlying df . the following result is a consequence of theorem [ thm : limit_distribution_of_t_n ] and lemma [ lem : crucial_approximation_of_n_j ] .[ thm : limit_distribution_of_test_statistic_general ] suppose that the df is continuous and that its copula satisfies expansion for some .let satisfy , as , and let satisfy , , as .then we obtain with and as in theorem [ thm : limit_distribution_of_t_n ] .the condition can again be dropped if the copula is a gpc .in this section we carry the results of section [ sec : test_multivariate ] over to function space , namely the space ] . a stochastic process } ]is called a _ standard max - stable process _ ( smsp ) , if , , for each ] is an smsp iff there exists a _ generator process _}\in c[0,1] ] , for some , and , such that }{\mathopen{}\mathclose\bgroup\originalleft}({{\mathopen{}\mathclose\bgroup\originalleft}\vertf(t){\aftergroup\egroup\originalright}\vert}z_t{\aftergroup\egroup\originalright}){\aftergroup\egroup\originalright}){\aftergroup\egroup\originalright}),\qquad f\in e^-[0,1].\ ] ] by ] , which are bounded and have only a finite number of discontinuities ; ] that attain only non positive values .note that }{\mathopen{}\mathclose\bgroup\originalleft}({{\mathopen{}\mathclose\bgroup\originalleft}\vertf(t){\aftergroup\egroup\originalright}\vert}z_t{\aftergroup\egroup\originalright}){\aftergroup\egroup\originalright}),\qquad f\in e[0,1],\ ] ] defines a norm on ] be a _ copula process _, i.e. , each component is uniformly distributed on .a copula process ] denotes the indicator function of the interval ] is the uniquely determined _ generator constant _ pertaining to , and the remainder term satisfies as .a copula process ] , consequently , satisfies iff there exists a gpcp such that }+cf{\aftergroup\egroup\originalright})=p{\mathopen{}\mathclose\bgroup\originalleft}({\bm{v}}\le 1_{[0,1]}+cf{\aftergroup\egroup\originalright})+o(c ) , \qquad f\in e^-[0,1],\ ] ] as .if the remainder term in expansion is in fact of order for some , then the copula process ] , which enables us to check whether a given copula process } ] is in the -neighborhood of a gpcp for some . in this casethe remainder term in expansion is of order as .let satisfy , and as .then we obtain with and as in theorem [ thm : limit_distribution_of_t_n ] . in what followswe will extend theorem [ thm : limit_distribution_of_t_n_for_functional_data ] to the case when observing the underlying copula process is subject to a certain kind of nuisance .let }\in c[0,1] ] , is a continuous function in . is said to be in the functional max - domain of attraction of a max - stable process } ] satisfies , where is a smsp , and the df satisfies the univariate extreme value condition ; for the univariate case we refer to ( * ? ? ?* section 2.1 ) , among others .let be independent copies of the process and denote the sample df pertaining to the univariate iid observations by }\bigl(x_0^{(i)}\bigr) ] among , which exceed the random threshold for some ] corresponding to is in the -neighborhood of a gpcp for some . 
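The exact definition of the modified, margin-free statistic is not fully legible in the extraction above, so the following sketch only implements the counting step it is built from: for each level j, the number of observations exceeding, in at least one component, the componentwise empirical quantile of order <n(1 - c/j)>, which by the lemma above may replace the deterministic threshold (1 - c/j). The subsample selection M(n) and any further standardization are ignored here, and the example data are simply independent uniforms.

    import numpy as np

    def exceedance_counts(X, c, js):
        """For each j in js, count observations that exceed, in at least one
        component, the componentwise empirical quantile of level 1 - c/j.
        Using empirical quantiles (order statistics) makes the counts depend
        only on the copula of X, not on its margins."""
        n, d = X.shape
        counts = []
        for j in js:
            k = int(np.ceil(n * (1.0 - c / j)))       # rank <n(1 - c/j)>
            thresh = np.sort(X, axis=0)[k - 1]        # componentwise order statistic
            counts.append(int(np.sum(np.any(X > thresh, axis=1))))
        return np.array(counts)

    rng = np.random.default_rng(2)
    U = rng.uniform(size=(5000, 2))                   # independence copula as an example
    print(exceedance_counts(U, c=0.1, js=[1, 2, 3, 4]))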
in this casethe remainder term in expansion is of order as .choose and such that , , , and as .then we obtain for we have with probability one }{\aftergroup\egroup\originalright}\ } } } - 1_{\bigl\{{\bm{u}}^{(i)}\nleq u_{\langle n(1-\frac{c_n}{j})\rangle : n}1_{[0,1]}\bigr\ } } { \aftergroup\egroup\originalright } ) \\ & = \sum_{i\in m(n ) } 1_{\bigl\{{\bm{u}}^{(i)}\nleq ( 1-\frac{c_n}{j})1_{[0,1]},\ { \bm{u}}^{(i)}\leu_{\langle n(1-\frac{c_n}{j})\rangle : n}1_{[0,1]}\bigr\ } } \\ & \mathrel{\hphantom{= } } { } - \sum_{i\in m(n ) } 1_{\bigl\{{\bm{u}}^{(i)}\nleq u_{\langle n(1-\frac{c_n}{j})\rangle : n}1_{[0,1]},\ { \bm{u}}^{(i)}\le ( 1-\frac{c_n}{j})1_{[0,1]}\bigr\ } } \\ & = : r_n - t_n.\end{aligned}\ ] ] we show in what follows that proceeding as in the proof of lemma [ lem : crucial_approximation_of_n_j ] : put with . note that satisfies for large values of as well as now we obtain by expansion , if is sufficiently large , },\ { \bm{u}}^{(1)}\le u_{\langle n(1-\frac{c_n}{j})\rangle : n}1_{[0,1 ] } { \aftergroup\egroup\originalright } ) \\ & = p{\mathopen{}\mathclose\bgroup\originalleft } ( { \bm{u}}^{(1)}\le u_{\langle n(1-\frac{c_n}{j})\rangle : n}1_{[0,1 ] } { \aftergroup\egroup\originalright } ) - p\bigl ( { \bm{u}}^{(1)}\le \min\bigl\{1-\frac{c_n}{j } , u_{\langle n(1-\frac{c_n}{j})\rangle : n}\bigr\}1_{[0,1 ] } \bigr ) \\ & \le p{\mathopen{}\mathclose\bgroup\originalleft } ( { \bm{u}}^{(1)}\le ( \mu_n + { \varepsilon}_n)1_{[0,1 ] } { \aftergroup\egroup\originalright } ) + p{\mathopen{}\mathclose\bgroup\originalleft } ( u_{\langle n(1-\frac{c_n}{j})\rangle : n } \ge \mu_n+{\varepsilon}_n{\aftergroup\egroup\originalright } ) \\ & \mathrel{\hphantom{\le } } { } - p\bigl ( { \bm{u}}^{(1)}\le \min\bigl\{1-\frac{c_n}{j } , \mu_n-{\varepsilon}_n\bigr\}1_{[0,1 ] } \bigr ) + p{\mathopen{}\mathclose\bgroup\originalleft}(u_{\langle n(1-\frac{c_n}{j})\rangle : n } \le \mu_n-{\varepsilon}_n{\aftergroup\egroup\originalright } ) \\ & = p{\mathopen{}\mathclose\bgroup\originalleft}({{\mathopen{}\mathclose\bgroup\originalleft}\vertu_{\langle n(1-\frac{c_n}{j})\rangle : n } - \mu_n{\aftergroup\egroup\originalright}\vert}\ge{\varepsilon}_n{\aftergroup\egroup\originalright } ) - ( 1-\mu_n-{\varepsilon}_n){\mathopen{}\mathclose\bgroup\originalleft}(m_d+r(\mu_n+{\varepsilon}_n-1){\aftergroup\egroup\originalright } ) \\ & \mathrel{\hphantom{= } } { } + \max\bigl\{\frac{c_n}{j } , 1-\mu_n+{\varepsilon}_n\bigr\ } { \mathopen{}\mathclose\bgroup\originalleft}(m_d + r{\mathopen{}\mathclose\bgroup\originalleft}(-\max\bigl\{\frac{c_n}{j } , 1-\mu_n+{\varepsilon}_n\bigr\}{\aftergroup\egroup\originalright}){\aftergroup\egroup\originalright } ) \\ & \le p{\mathopen{}\mathclose\bgroup\originalleft}({{\mathopen{}\mathclose\bgroup\originalleft}\vertu_{\langle n(1-\frac{c_n}{j})\rangle : n } - \mu_n{\aftergroup\egroup\originalright}\vert}\ge{\varepsilon}_n{\aftergroup\egroup\originalright } ) - { \mathopen{}\mathclose\bgroup\originalleft}(\frac{n}{n+1 } \frac{c_n}{j } - { \varepsilon}_n{\aftergroup\egroup\originalright}){\mathopen{}\mathclose\bgroup\originalleft}(m_d + o{\mathopen{}\mathclose\bgroup\originalleft}(c_n^\delta{\aftergroup\egroup\originalright}){\aftergroup\egroup\originalright } ) \\ & \mathrel{\hphantom{= } } { } + { \mathopen{}\mathclose\bgroup\originalleft}(\frac1{n+1 } + \frac{n}{n+1}\frac{c_n}{j } + { \varepsilon}_n{\aftergroup\egroup\originalright } ) { \mathopen{}\mathclose\bgroup\originalleft}(m_d + 
o{\mathopen{}\mathclose\bgroup\originalleft}(c_n^\delta{\aftergroup\egroup\originalright}){\aftergroup\egroup\originalright } ) \\ & = p{\mathopen{}\mathclose\bgroup\originalleft}({{\mathopen{}\mathclose\bgroup\originalleft}\vertu_{\langle n(1-\frac{c_n}{j})\rangle : n } - \mu_n{\aftergroup\egroup\originalright}\vert}\ge{\varepsilon}_n{\aftergroup\egroup\originalright } ) + o{\mathopen{}\mathclose\bgroup\originalleft}(c_n^{1+\delta}{\aftergroup\egroup\originalright } ) + o{\mathopen{}\mathclose\bgroup\originalleft}(\frac1n + { \varepsilon}_n{\aftergroup\egroup\originalright}).\end{aligned}\ ] ] the arguments in the proof of lemma [ lem : crucial_approximation_of_n_j ] show as and , thus , ; recall and note that repeating the above arguments one shows that as as well , which completes the proof of lemma [ lem : crucial_approximation_function_space ] .analogously to section [ sec : test_multivariate ] we now choose and replace in with and obtain by this statistic we can in particular check , whether the copula process } ] where the rv is independent of and follows the df defined in lemma [ lem : copula_not_in_domain_of_attraction ] with ] has identical continuous marginal df . for ,the process is a _ generalized pareto process _ , whose pertaining copula process is in the max - domain of attraction of a smsp , see . for is not true : just consider the bivariate rv , where , and repeat the arguments in lemma [ lem : copula_not_in_domain_of_attraction ] .observing a complete process on ] only through an increasing grid of points in ] be a gpcp with pertaining -norm}({{\mathopen{}\mathclose\bgroup\originalleft}\vertf(t){\aftergroup\egroup\originalright}\vert}z_t){\aftergroup\egroup\originalright}),\qquad f\in e[0,1].\ ] ] choose a grid of points .then the rv follows a gpc , whose corresponding -norm is given by let now depend on .if we require that then , by the continuity of } ] satisfying . againa suitable version of the central limit theorem implies and thus yielding theorem [ thm : limit_distribution_of_t_n ] now carries over : [ thm : asymptotic_distribution_of_t_n_for_gpcp ] let be a copula process satisfying .choose a grid of points with and as .let as defined in be based on the projections of independent copies of onto this increasing grid of points .let satisfy , and as .then we obtain with and as in theorem [ thm : limit_distribution_of_t_n ] .now we will extend theorem [ thm : asymptotic_distribution_of_t_n_for_gpcp ] to a general process }\in c[0,1] ] .we want to test whether the copula process }\in c[0,1] ] , , yielding an estimator of : }{\mathopen{}\mathclose\bgroup\originalleft}({\bm{x}}_{d_n}^{(i)}{\aftergroup\egroup\originalright}).\end{aligned}\ ] ] we have where denote the ordered values of for each and is again the right integer neighbor of .since transforming each by its df does not alter the value of with probability one , we obtain } \bigl({\bm{u}}_{d_n}^{(i)}\bigr)\ ] ] almost surely where are the order statistics of . 
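a minimal sketch of the preprocessing just described : project n independent copies of a continuous process onto a grid of points , rank - transform each grid coordinate by its sample df ( so only the copula of the observations matters ) , and extract the per - coordinate order statistic used as a random threshold . the brownian - motion example , the grid , and the threshold constant below are hypothetical stand - ins , not the paper's choices .

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 500, 16                        # sample size and grid size (hypothetical)
grid = np.linspace(1.0 / d, 1.0, d)   # grid of points in (0, 1]

# n independent copies of a process observed on the grid; a Brownian-motion
# path stands in here for the C[0,1] process discussed above.
steps = rng.normal(size=(n, d)) * np.sqrt(np.diff(np.r_[0.0, grid]))
X = np.cumsum(steps, axis=1)

# Rank-transform each grid coordinate by its sample df, as in the text,
# so the statistic depends only on the (empirical) copula.
U = (np.argsort(np.argsort(X, axis=0), axis=0) + 1) / (n + 1.0)

# Per-coordinate random threshold: the <n(1-c/n)>-th order statistic.
c = 5.0                               # hypothetical threshold constant
level = int(np.ceil(n * (1.0 - c / n)))
thresholds = np.sort(U, axis=0)[level - 1, :]

# An observation "exceeds" the threshold if it is NOT below it in every coordinate.
exceed = (U > thresholds).any(axis=1)
print("number of exceedances:", exceed.sum())
```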
since are independent copies of the rv , the distribution of does not depend on the marginal df .the following auxiliary result is crucial .[ newlem : crucial_approximation_of_n_j ] suppose that .let satisfy , and as .if satisfies as , then we obtain for we have almost surely } \bigl({\bm{u}}_{d_n}^{(i)}\bigr ) - 1_{{\mathopen{}\mathclose\bgroup\originalleft}[{\bm{0 } } , { \mathopen{}\mathclose\bgroup\originalleft}(1-\frac{c_n}{j}{\aftergroup\egroup\originalright}){\bm{1}}{\aftergroup\egroup\originalright}]}\bigl({\bm{u}}_{d_n}^{(i)}\bigr ) { \aftergroup\egroup\originalright } ) \\ & = \sum_{i\in m(n ) } 1_{\times_{r=1}^{d_n}{\mathopen{}\mathclose\bgroup\originalleft}[\vphantom{\frac{c_n}{j}}\smash{0,u_{\langle n(1-\frac{c_n}{j})\rangle : n , r}}{\aftergroup\egroup\originalright } ] } \bigl({\bm{u}}_{d_n}^{(i)}\bigr ) { \mathopen{}\mathclose\bgroup\originalleft}(1 - 1_{{\mathopen{}\mathclose\bgroup\originalleft}[{\bm{0 } } , { \mathopen{}\mathclose\bgroup\originalleft}(1-\frac{c_n}{j}{\aftergroup\egroup\originalright}){\bm{1}}{\aftergroup\egroup\originalright}]}\bigl({\bm{u}}_{d_n}^{(i)}\bigr ) { \aftergroup\egroup\originalright } ) \\ & \mathrel{\hphantom{= } } { } - \sum_{i\in m(n ) } 1_{{\mathopen{}\mathclose\bgroup\originalleft}[{\bm{0 } } , { \mathopen{}\mathclose\bgroup\originalleft}(1-\frac{c_n}{j}{\aftergroup\egroup\originalright}){\bm{1}}{\aftergroup\egroup\originalright}]}\bigl({\bm{u}}_{d_n}^{(i)}\bigr ) { \mathopen{}\mathclose\bgroup\originalleft}(1 - 1_{\times_{r=1}^d[0,u_{\langle n(1-\frac{c_n}{j})\rangle : n , r } ] } \bigl({\bm{u}}_{d_n}^{(i)}\bigr){\aftergroup\egroup\originalright } ) \\ & = : r_n - t_n.\end{aligned}\ ] ] in what follows we show }{\mathopen{}\mathclose\bgroup\originalleft}(u_{t_r^{(d_n)}}^{(i)}{\aftergroup\egroup\originalright}){\aftergroup\egroup\originalright } ) = o(1)\ ] ] and thus ; note that }{\mathopen{}\mathclose\bgroup\originalleft}(u_{t_r^{(d_n)}}^{(i)}{\aftergroup\egroup\originalright}).\end{aligned}\ ] ] put with .we have with where the first term is of order ; recall that is uniformly distributed on .it , therefore , suffices to show that the second term is of order as well . as in the proof of lemma [ lem : crucial_approximation_of_n_j ]we obtain for and .we have , moreover , for large and , since as , by repeating the above arguments one shows that as well , which completes the proof of lemma [ newlem : crucial_approximation_of_n_j ] .now we consider the modified test statistic which does not depend on the marginal df , ] has continuous marginal df , ] satisfies .choose a grid of points with and as .let satisfy and let satisfy , , and as .then we obtain with and as in theorem [ thm : limit_distribution_of_t_n ] .in this section we provide some simulations , which indicate the performance of the test statistic from theorem [ thm : limit_distribution_of_t_n ] .all computations were performed using the package written by pierre lafaye de micheaux and pierre duchesne .we chose method for computing the -values of our test statistics ; cf . for an overview of simulation techniques of quadratic forms in normal variables .therefore we chose , and .we generated independent realizations of that we denote by and we computed the asymptotic -values , , where is the df of in theorem [ thm : limit_distribution_of_t_n ] .the values are filed in increasing order , , and we plot the points this quantile plot is a discrete approximation of the quantile function of the -value of , which visualizes the performance of the test statistic . 
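the quantile plot just described can be assembled as follows ; the computation of the test statistic and of its asymptotic df ( a quadratic form in normal variables ) is not reproduced here , so the p - values fed in below are placeholders .

```python
import numpy as np
import matplotlib.pyplot as plt

def pvalue_quantile_plot(pvalues):
    """Plot the points (i/(N+1), p_{i:N}) described in the text."""
    p_sorted = np.sort(np.asarray(pvalues))
    N = len(p_sorted)
    u = np.arange(1, N + 1) / (N + 1.0)
    plt.plot(u, p_sorted, ".", label="ordered p-values")
    plt.plot([0, 1], [0, 1], "--", label="45-degree line")
    plt.xlabel("i / (N+1)")
    plt.ylabel("ordered asymptotic p-value")
    plt.legend()
    plt.show()

# Example with N = 1000 hypothetical p-values; under the hypothesis they should
# be roughly uniform, so the points should track the 45-degree line.
rng = np.random.default_rng(1)
pvalue_quantile_plot(rng.uniform(size=1000))
```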
if the underlying copula is in a -neighborhood of a gpc , then the points , , should approximately lie along the line , $ ] , whereas otherwise should be significantly smaller than for many .2[gpc ] .,title="fig:",width=6 ] .,width=6 ] 2[copula not in ] .,title="fig:",width=6 ] .,width=6 ] 2[normal copula with coefficient of correlation .,title="fig:",width=6 ] .,width=6 ] 2[clayton copula with parameter .,title="fig:",width=6 ] .,width=6 ] as can be seen in figures 18 , the test is quite reliable in detecting a gpc itself . however ,if the underlying copula is _ not _ a gpc , the corresponding -value is quite sensitive to the selection of .e.g. , if we decrease the value of from to , a copula that is not even in the domain of attraction of an extreme value distribution can not be detected anymore , cf .figure [ fig : copula_not_in_doa_0_2 ] and figure [ fig : copula_not_in_doa_0_01 ] . on the other hand , there are copulas satisfying the -neighborhood condition that perform well with , such as the normal copula in figure [ fig : normal_copula_0_2 ] , and those that do not , such as the clayton copula in figure [ fig : clayton_copula_0_2 ] .the aforementioned disadvantages can , however , be overcome by considering the -value as a function of the threshold .therefore we simulated a single data set of sample size and plotted the -value for each of a some grid , see figures 912 .it turns out that the -value curve of the considered gpc is above the -line for , roughly .in contrast , the copula in figure [ fig : copula_not_in_doa ] has a peak for intermediate values of but , for shrinking , decreases again below the -line .finally , copulas in the -neighborhood of a gpc behave similar to the gpc in figure [ fig : gpc ] except that the point of intersection with the -line is notably smaller than . 2 , title="fig:",width=6 ] , width=6 ] 2 the shapes of the -value plots in figures 912 seem to be a reliable tool for the decision whether or not to reject the hypothesis .a great advantage of this approach is that a practitioner does not need to specify a suitable value of the threshold , which is a rather complicated task , but can make the decision based on a highly intelligible graphical tool .a further analysis of these kind of -value plots is part of future work .the authors are grateful to kilani ghoudi for his hint to compute the asymptotic distribution of the above test statistics using s method . | a multivariate distribution function is in the max - domain of attraction of an extreme value distribution if and only if this is true for the copula corresponding to and its univariate margins . have shown that a copula satisfies the extreme value condition if and only if the copula is tail equivalent to a generalized pareto copula ( gpc ) . in this paper we propose a -goodness - of - fit test in arbitrary dimension for testing whether a copula is in a certain neighborhood of a gpc . the test can be applied to stochastic processes as well to check whether the corresponding copula process is close to a generalized pareto process . since the -value of the proposed test is highly sensitive to a proper selection of a certain threshold , we also present a graphical tool that makes the decision , whether or not to reject the hypothesis , more comfortable . |
entropic uncertainty relations for position and momentum , or other canonically conjugated variables have been derived a long time ago .initial investigations were devoted to continuous shannon entropies .further generalizations took into account , in a spirit of deutsch and maassen - uffink results , the accuracy of measuring devices and impurity of a quantum state .recent 12th icssur / feynfest conference showed that the topic of entropic uncertainty relations including experimental accuracies is important in the task of entanglement detection in quantum optics .this paper is organized as follows . in a further part of the present sectioni point out some aspects related to entropic uncertainty relations , following the rephrased heisenberg sentence _ , , the more information we have about the position , the less information we can acquire about the momentum and vice versa _ and an observation made by peres _ , , the uncertainty relation such as is not a statement about the accuracy of our measuring instruments_. it the second section i generalize an approach presented in to the case of finite number of measuring devices ( detectors ) .i would like to start with the famous bialynicki - birula - mycielski entropic uncertainty relation of the form where and are probability distributions in position and momentum spaces respectively .wave functions in both spaces are related to each other by the fourier transform the introduction of starts as follows : _ , , the purpose of this paper is to derive a new stronger version of the heisenberg uncertainty relation in wave mechanics .this new uncertainty relation has a simple interpretation in terms of information theory .it is also closely related to newly discovered logarithmic sobolev inequalities . _ information theory enters to the inequality ( [ bbm ] ) by the notion of continuous shannon entropies and .connection with the logarithmic sobolev inequality can be recognized with the help of the reversed logarithmic sobolev inequality which , for the probability distribution function ( pdf ) defined on , reads .\label{logsobol}\ ] ] the variance is defined as usual , .we shall use the inequality ( [ logsobol ] ) independently for the position and momentum variables and obtain the stronger version of the heisenberg uncertainty relation , announced in it seems to be widely accepted that the continuous shannon entropy is also a good measure of information since it is a relative of shannon information entropy .however , this statement is not completely true . to prove that let me recall the definition of the shannon entropy of a set of probabilities we have two important properties of the shannon entropy ( [ shannon ] ) : 1 .since the probabilities are dimensionless the shannon entropy is also dimensionless .2 . from the property know that .these two properties are essential for the information - like interpretation of the shannon entropy because information can be neither negative nor expressed in any physical units .unfortunately the continuous shannon entropy does not possess these two properties .for instance the unit of the entropy ( in si units ) is the logarithm of meter .this makes impossible to check if the continuous shannon entropy is positive or negative . 
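a short numerical check of this point : for a gaussian pdf of width sigma the continuous entropy equals (1/2) ln( 2 pi e sigma^2 ) , which carries the units of the variable through the logarithm and turns negative once sigma is small .

```python
import numpy as np

def gaussian_differential_entropy(sigma):
    """Continuous Shannon entropy -integral(rho ln rho) of a normal pdf of width sigma."""
    return 0.5 * np.log(2.0 * np.pi * np.e * sigma**2)

for sigma in (1.0, 0.5, 0.1):
    print(sigma, gaussian_differential_entropy(sigma))
# sigma = 1.0 ->  1.419  (positive)
# sigma = 0.1 -> -0.884  (negative: not interpretable as information)
```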
on the other handif we introduce some length ( momentum ) scale in order to measure ( ) then we still can not fulfill the second property since ( ) can be greater than and make the continuous entropy negative .an idea that helps to overcome the difficulties which i have mentioned above ( related to the continuous entropies ) introduces a partition of the real line into bins of equal width ( see fig .1 ) . [ fig1 ] as a result the continuous probability distribution is replaced by a discrete distribution , ( ) the bins are of equal width , thus .this idea was for the first time introduced by partovi who noticed that shall be interpreted as a _ , , resolution of the measuring device_. the requirement that all bins have to have the same width is a strong constraint on .in fact must be of the form so there is only one free parameter to be chosen .one might expect that different values of the parameter are equivalent to different possible choices of the central point ( ) of the used coordinate .in the first partovi s paper and later publications the easiest possible choice was made . in this choice the probability distributions and the shannon entropies are ( and are experimental accuracies for positions and momenta respectively ) : in this moment i would like to point out that the entropies defined in ( [ en1 ] , [ en2 ] ) are not correct measures of information ( because of the choice ) . in order to realize that one shall investigate the limit of large coarse graining ( large experimental accuracies ) or . in these limits performed measurements tell us nothing about the quantum state - our information is , thus , we shall expect .but in fact , we have : as a result ( ) have state - dependent values that vary between if the state is localized in or ( in positions or momenta ) and if the state is symmetric .in we captured this ambiguity and performed the redefinition of the probability distributions ( [ en1 ] ) in the following way : it is equivalent to the choice in the construction ( [ con ] ) and means that the center of the coordinate lays in the middle of the central bin . in the previous choice the center of the coordinate was the border point between two bins .similarly to ( [ bbm ] ) uncertainty relations for were found a long time .the stronger one reads since the shannon entropies ( [ en2 ] ) have been correctly defined ( are positive and dimensionless ) it is obvious that .thus , for the relation ( [ ibb ] ) becomes trivially satisfied and in fact does not give an optimal lower bound ( the bound is optimal , and saturated by a gaussian distribution , only in the limit and ) .some attempts to find a better lower bound failed completely ( for details see - the comment on ). recently we derived the bound ) .red / dashed curve represents while green curve represents ; .,title="fig:"][fig2 ] ,\label{bound new}\ ] ] which is always positive , but still not optimal ( cf .function } ] and obtain since we expect that the state fulfills we can , without any significant loss , simplify the right hand side of ( [ 19 ] ) after this step we obtain the inequality ,\quad\lambda\geq0,\ ] ] where the function depends only on the , , moment ( the norm ) and the , , moment ( ) of the function restricted to .since the moments of the function are independent we can find keeping constant .this step will reduce the state dependent input only to the one quantity . 
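before continuing with the derivation , the coarse - graining ambiguity discussed earlier in this section can be reproduced numerically : for a symmetric gaussian density , bins with an edge at the origin give a discrete entropy tending to ln 2 as the bin width grows , while the shifted construction ( central bin centered at the origin ) gives an entropy tending to 0 . the gaussian width and bin counts below are arbitrary ; only the two limiting values quoted in the text are being checked .

```python
import numpy as np
from scipy.stats import norm

def binned_entropy(delta, centered, sigma=1.0, nbins=2000):
    """Discrete Shannon entropy of a Gaussian pdf binned with width delta.

    centered=True  : central bin is [-delta/2, delta/2)  (shifted construction)
    centered=False : a bin edge sits at the origin        (original choice)
    """
    shift = -delta / 2.0 if centered else 0.0
    edges = shift + delta * np.arange(-nbins, nbins + 1)
    q = np.diff(norm.cdf(edges, scale=sigma))
    q = q[q > 0]
    return -np.sum(q * np.log(q))

for delta in (1.0, 5.0, 50.0):
    print(delta, binned_entropy(delta, centered=False),
          binned_entropy(delta, centered=True))
# As delta grows, the edge-at-origin entropy -> ln 2 ~ 0.693,
# while the centered-bin entropy -> 0, matching the discussion above.
```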
in order to find the minimum we shall calculate the following derivatives since for ] we shall modify and write using this result we obtain that : if we assume that the numbers ( ) are sufficiently large , so that and we find the following uncertainty relation this bound gives a nontrivial limitation when , what it this case is equivalent to . in a general casethe uncertainty relation for the sum of the entropies is , where : have briefly presented a current status in the topic of entropic uncertainty relations for position and momentum variables .i have pointed out ambiguities appearing in the scientific literature .finally i have investigated the case of finite number of detectors in an approach using discrete shannon entropies including experimental accuracies and derived nontrivial , state - dependent lower bound generalizing previous results .i am specially indebted to iwo bialynicki - birula who inspired and supported all my efforts in the topic of entropic uncertainty relations .i would like to thank stephen p. walborn , fabricio toscano , luiz davidovich and all other members of faculty of physics in federal university of rio de janeiro ( instituto de fsica in universidade federal do rio de janeiro ) for their hospitality after the 12th icssur / feynfest conference .this research was partly supported by a grant from the polish ministry of science and higher education for the years 20102012 .references i. bialynicki - birula and j. mycielski , _ commun .phys . _ * 44 * , 129 ( 1975 ) .d. deutsch , _ phys .lett . _ * 50 * , 631 ( 1983 ) .h. maassen and j.b.m .uffink , _ phys .lett_.*60 * , 1103 ( 1988 ) .m. h. partovi , _ phys .lett . _ * 50 * , 1883 ( 1983 ). i. bialynicki - birula , _ phys . lett . _ * 103 a * , 253 ( 1984 ) .i. bialynicki - birula , _ phys .a _ * 74 * , 052101 ( 2006 ) . m. d. srinivas , _ pramana _ * 25 * , 369 ( 1985 ) .i. bialynicki - birula and .rudnicki , entropic uncertainty relations in quantum physics , arxiv:1001.4668 ( 2010 ) . v. v. dodonov , _ j. opt .b : quantum semiclass . opt . _ * 4 * , 98 ( 2002 ) .b. coutinho dos santos , k. dechoum , and a. z. khoury , _ phys .lett . _ * 103 * , 230503 ( 2009 ) .a. saboia , f. toscano , and s. p. walborn , _ phys .a _ * 83 * , 032307 ( 2011 ) .a. peres , quantum theory : concepts and methods .kluwer , dordrecht ( 1995 ) .d. chafa , _ gaussian maximum of entropy and reversed log - sobolev inequality _ , sminaire de probabilitis , strasbourg * 36 * , 194 - 200 ( 2002 ) . g. wilk and z. wodarczyk , _ phys .a _ * 79 * , 062108 ( 2009 ) . i. bialynicki - birula and .rudnicki , _ phys .a _ * 81 * , 026101 ( 2010 ) .rudnicki , uncertainty related to position and momentum localization of a quantum state , arxiv:1010.3269v1 ( 2010 ) .rudnicki , s. p.walborn and f. toscano , in preparation .m. abramowitz and i.a .stegun , _ handbook of mathematical functions_. dover , new york , ( 1964 ) . | this paper is prepared as a contribution to the proceedings after the 12th icssur / feynfest conference held in foz do iguau ( brazil ) from 2 to 6 may 2011 . in the first part i briefly report the topic of entropic uncertainty relations for position and momentum variables . then i investigate the discrete shannon entropies related to the case of finite number of detectors set to measure probability distributions in position and momentum spaces . i derive an uncertainty relation for the sum of the shannon entropies which generalizes previous approaches [ _ phys . lett . 
_ * 103 a * , 253 ( 1984 ) ] based on an infinite number of detectors ( bins ) . * łukasz rudnicki * _ center for theoretical physics , polish academy of sciences _ + _ aleja lotników 32/46 , pl-02 - 668 warsaw , poland _ author e - mail : rudnicki.edu.pl + * keywords : * shannon entropy , entropic uncertainty relations , uncertainty of quantum measurements performed with finite accuracy and finite number of detectors |
due to the advancement of cloud computing technologies , there has been an increased interest for individuals and business entities to move their data from traditional private data center to cloud servers . indeed , even popular storage service providers such as dropbox use third party cloud storage providers such as amazon s simple storage service ( s3 ) for data storage . with the wide adoption of cloud computing and storage technologies ,it is important to consider data security and reliability issues that are strongly related to the underlying storage services .though it is interesting to consider general data security issues in cloud computing environments , this paper will concentrate on the basic question of reliable data storage in the cloud and specifically the coding techniques for data storage in the cloud .there has been extensive research in reliable data storage on disk drives .for example , redundant array of independent disks ( raid ) techniques have been proposed and widely adopted to combine multiple disk drive components into a logical unit for better resilience , performance , and capacity . the well known solutions to address the data storage reliability are to add data redundancy to multiple drivers .there are basically two ways to add the redundancy : data mirror ( e.g. , raid 1 ) and data stripping with erasure codes ( e.g. , raid 2 to raid 6 ) .though data mirror ( or data replication ) provides the straightforward way for simple data management and repair of data on corrupted drives , it is very expensive to implement and deploy due to its high demand for redundancy .in addition to data replication techniques , erasure codes can be used to achieve the required data reliability level with much less data redundancy .note that though error correcting codes ( e.g. , reed - solomon codes ) could also be used for reliable data storage and correcting errors from failed disk drives , it is normally not used for data storage since it needs expensive computation for both encoding and decoding processes .erasure codes that have been used for reliable data storage systems are mainly binary linear codes which are essentially xor - operation based codes . for example , flat xor codes are erasure codes in which parity disks are calculated as the xor of some subset of data disks . though it is desirable to have mds ( maximal distance separable ) flat xor codes , it may not be available for all scenarios .non - mds codes have also been used in storage systems ( e.g. , the replicated raid configurations such as raid 10 , raid 50 , and raid 60 ) .however , we have not seen any systematic research in designing non - mds codes with flat xor operations for storage systems . 
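a minimal sketch of the flat - xor idea mentioned above : a parity block is the xor of a subset of data blocks ( as in raid - style parity ) , and any single erased block in that subset can be rebuilt by xor - ing the surviving blocks . the block names and contents are hypothetical .

```python
from functools import reduce

def xor_blocks(blocks):
    """Bytewise XOR of equal-length data blocks."""
    return bytes(reduce(lambda a, b: a ^ b, byte_tuple) for byte_tuple in zip(*blocks))

# Three hypothetical data blocks plus one flat-XOR parity block.
data = [b"disk-0..", b"disk-1..", b"disk-2.."]
parity = xor_blocks(data)

# Suppose block 1 is erased: rebuild it from the surviving blocks and the parity.
survivors = [data[0], data[2], parity]
recovered = xor_blocks(survivors)
assert recovered == data[1]
print("recovered:", recovered)
```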
in order to achieve better fault tolerance with minimal redundancy in data storage systems, there has also been active research in xor based codes which are not necessarily flat xor codes .for example , blaum , brady , bruck , and menon proposed the array code evenodd for tolerating two disk faults and correcting one disk errors .blaum , bruck , and vardy and huang have extended the construction of evenodd code to general codes for tolerating three disk faults .other non - flat xor based codes include ( but are not limited to ) ] flat bp - xor code is a binary linear code determined by a zero - one valued generator matrix such that for a given message vector , the corresponding code is computed as where the addition of two strings in is defined as the xor on bits .furthermore , a flat ] bp - xor code .the fact could be easily proved by the following observation : let ] are linearly independent , then , where is the hamming weight .thus for , there is neither binary linear ] bp - xor code .fact [ singluarlemma ] shows the impossibility of designing flat ] code with , we can tolerate erasure faults .the question that we are interested in is : for given , what is best distance we could achieve for a flat ] corresponds to the mds flat ] , ] bp - xor codes for tolerating two erasure faults .\left [ \begin{array}{c|c } i_3 & \begin{array}{ccc } 0&1&1\\ 1&0&1\\ 1&1&1 \end{array } \end{array}\right ] \left [ \begin{array}{c|c } i_4 & \begin{array}{ccc } 0&1&1\\ 1&0&1\\ 1&1&0\\ 1&1&1 \end{array } \end{array}\right]\ ] ] indeed , the above three codes are the only flat ] bp - xor code if and only if . _proof_. let ] bp - xor code if and only if for every , we have where is the hamming weight . by the fact that it follows that there exists a flat ] bp - xor code for .table [ table1 ] lists the required redundancy for tolerating two erasure faults when the value of changes ..redundancy for flat bp - xor $ ] codes [ cols="^,^,^",options="header " , ] the values in table [ necessn ] show that for degree one and two encoding symbols based array bp - xor codes , if we want to recover the information symbols from more than three columns ( i.e. , ) of encoding symbols , then we could only have one column redundancy for . combining theorem [ necessaryarraybpxor ] and values in table [ necessn ], we get the following results for edge - colored graphs .[ edgetheorem ] for a color set with , if we want to design an edge - colored graph ( or a network with more than kinds of homogeneous devices ) with minimum cost , then the edge - colored graph is robust against at most one color failures ( or one brand of homogeneous devices failures ) ._ based on the results in theorems [ graphtobpxorthm ] , [ necessaryarraybpxor ] and values in table [ necessn ] , we can have the following conclusion : given integers , and , an edge - colored graph with , , and , is -color connected only .thus the theorem follows .based on the bp ( belief propagation ) decoding process and the edge - colored graph model , we introduced flat bp - xor codes and array bp - xor codes .we have established the equivalence between edge - colored graphs and degree one and two based array bp - xor codes . in particular , we used results in array bp - xor codes to get new results in edge - colored graphs . 
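returning to the explicit generator matrices displayed earlier in this section , the following check ( reading the second displayed matrix as g = [ i_3 | p ] with rows of p equal to 011 , 101 , 111 ) enumerates all nonzero messages and confirms that every codeword has hamming weight at least 3 , so any two erased coordinates can be tolerated .

```python
import itertools
import numpy as np

# Generator matrix [I_3 | P] read off the display above (rows of P: 011, 101, 111).
G = np.array([[1, 0, 0, 0, 1, 1],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 1, 1, 1]], dtype=int)

weights = []
for m in itertools.product([0, 1], repeat=3):
    if any(m):
        codeword = np.mod(np.array(m) @ G, 2)   # XOR-based encoding
        weights.append(int(codeword.sum()))

print("minimum Hamming weight:", min(weights))  # 3 => any two erasures are tolerated
```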
for array bp - xor codes with higher degree encoding symbols , we do not have general results yet .it would be interesting to have a compelete characterization of the existence and bounds for array bp - xor codes with higher degree encoding symbols .these characterizations may be used to design more efficient lt codes or digital fountain techniques .we have implemented an online software package for users to generate array bp - xor codes with their own specification and to verify the validity of their array bp - xor codes ( see ) .i would like to thank duan qi for some discussion on hamming code and theorem [ 2erasurefaults ] and thank prof .doug stinson and yvo desmedt , for some discussions on edge - colored graphs , hamiltonian circuit , and factorization of complete graphs . | lt codes and digital fountain techniques have received significant attention from both academics and industry in the past few years . there have also been extensive interests in applying lt code techniques to distributed storage systems such as cloud data storage in recent years . however , plank and thomason s experimental results show that ldpc code performs well only asymptotically when the number of data fragments increases and it has the worst performance for small number of data fragments ( e.g. , less than ) . in their infocom 2012 paper , cao , yu , yang , lou , and hou proposed to use exhaustive search approach to find a deterministic lt code that could be used to decode the original data content correctly in distributed storage systems . however , by plank and thomason s experimental results , it is not clear whether the exhaustive search approach will work efficiently or even correctly . this paper carries out the theoretical analysis on the feasibility and performance issues for applying lt codes to distributed storage systems . by employing the underlying ideas of efficient belief propagation ( bp ) decoding process in lt codes , this paper introduces two classes of codes called flat bp - xor codes and array bp - xor codes ( which can be considered as a deterministic version of lt codes ) . we will show the equivalence between the edge - colored graph model and degree - one - and - two encoding symbols based array bp - xor codes . using this equivalence result , we are able to design general array bp - xor codes using graph based results . similarly , based on this equivalence result , we are able to get new results for edge - colored graph models using results from array bp - xor codes . |
a fundamental statistical problem is shrinkage estimation of a multivariate normal mean .see , for example , the february 2012 issue of _ statistical science _ for a broad range of theory , methods , and applications .let be multivariate normal with _ unknown _ mean vector and _ known _ variance matrix .consider the problem of estimating by an estimator under the loss , where is a _ known _ positive definite , symmetric matrix .the risk of is .the general problem can be transformed into a canonical form such that is diagonal and , the identity matrix ( e.g. , lehmann and casella , problem 5.5.11 ) . for simplicity , assume except in section [ sec3.2 ] that is and , where for a column vector .the letter is substituted for to emphasize that it is diagonal . for this problem , we aim to develop shrinkage estimators that are both minimax and capable of effective risk reduction over the usual estimator even in the heteroscedastic case ( i.e. , are not equal ) .an estimator of is minimax if and only if , _ regardless of _ , its risk is always no greater than , the risk of . for ,minimax estimators different from and hence dominating are first discovered in the homoscedastic case where ( i.e. , ) . james and stein showed that is minimax provided .stein suggested the positive - part estimator , which dominates .throughout , .shrinkage estimation has since been developed into a general methodology with various approaches , including empirical bayes ( efron and morris ; morris ) and hierarchical bayes ( strawderman ; berger and robert ) . while these approaches are prescriptive for constructing shrinkage estimators , minimaxity is not automatically achieved but needs to be checked separately . for the heteroscedastic case ,there remain challenging issues on how much observations with different variances should be shrunk relatively to each other ( e.g. , casella , morris ) . for the empirical bayes approach ( efron and morris ) ,the coordinates of are shrunk directly in proportion to their variances .but the existing estimators are , in general , non - minimax ( i.e. , may have a greater risk than the usual estimator ) . on the other hand , berger proposed minimax estimators , including admissible minimax estimators , such that the coordinates of are shrunk inversely in proportion to their variances .but the risk reduction achieved over is insubstantial unless all the observations have similar variances . to address the foregoing issues , we develop novel minimax estimators for multivariate normal means under heteroscedasticity .there are two central ideas in our approach .the first is to develop a class of minimax estimators by generalizing a geometric argument essentially in stein ( see also brandwein and strawderman ) . for the homoscedastic case , the argument shows that can be derived as an approximation to the best linear estimator of the form , where is a scalar .in fact , the optimal choice of in minimizing the risk is .replacing by leads to with .this derivation is highly informative , even though it does not yield the optimal value .our class of minimax estimators are of the linear form , where is a nonnegative definite , diagonal matrix indicating the direction of shrinkage and is a scalar indicating the magnitude of shrinkage .the matrix is open to specification , depending on the variance matrix but _ not _ on the data . for a fixed , the scalar determined to achieve minimaxity , depending on both and . 
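as a quick numerical aside , the classical dominance property recalled earlier in this introduction can be checked by simulation : in the homoscedastic case with p >= 3 , the james - stein estimator ( 1 - ( p - 2 )/||x||^2 ) x has risk no larger than that of x itself . the dimension and the mean vectors below are arbitrary .

```python
import numpy as np

rng = np.random.default_rng(2024)
p, reps = 10, 20000

def risks(theta):
    X = theta + rng.normal(size=(reps, p))            # X ~ N(theta, I_p)
    shrink = 1.0 - (p - 2) / np.sum(X**2, axis=1)     # James-Stein factor
    js = shrink[:, None] * X
    return (np.mean(np.sum((X - theta)**2, axis=1)),
            np.mean(np.sum((js - theta)**2, axis=1)))

for scale in (0.0, 1.0, 5.0):
    theta = scale * np.ones(p)
    r_mle, r_js = risks(theta)
    print(f"||theta||={scale*np.sqrt(p):5.2f}  risk(X)={r_mle:5.2f}  risk(JS)={r_js:5.2f}")
# risk(X) stays near p = 10, while risk(JS) never exceeds it (much smaller near the origin).
```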
minimax estimator corresponds to the special choice , thereby leading to the unusual pattern of shrinkage discussed above .the second idea of our approach is to choose by approximately minimizing the bayes risk with a normal prior in our class of minimax estimators .the bayes risk is used to measure average risk reduction for in an elliptical region as in berger .it turns out that the solution of obtained by our approximation strategy has an interesting simple form .in fact , the coordinates of are automatically segmented into two groups , based on their bayes `` importance '' ( berger ) , which is of the same order as the coordinate variances when the specified prior is homoscedastic .the coordinates of high bayes `` importance '' are shrunk inversely in proportion to their variances , whereas the remaining coordinates are shrunk in the direction of the bayes rule .this shrinkage pattern may appear paradoxical : it may be expected that the coordinates of high bayes `` importance '' are to be shrunk in the direction of the bayes rule .but that scheme is inherently aimed at reducing the bayes risk under the specified prior and , in general , fails to achieve minimaxity ( i.e. , it may lead to even a greater risk than the usual estimator ) .in addition to simplicity and minimaxity , we further show that the proposed estimator is scale adaptive in reducing the bayes risk : it achieves close to the minimum bayes risk , with the difference no greater than the sum of the 4 highest bayes `` importance '' of the coordinates of , simultaneously over a scale class of normal priors ( including the specified prior ) . to our knowledge, the proposed estimator seems to be the first one with such a property in the general heteroscedastic case .previously , in the homoscedastic case , is known to achieve the minimum bayes risk up to the sum of 2 ( equal - valued ) bayes `` importance '' of the coordinates over the scale class of homoscedastic normal priors ( efron and morris ) .the rest of this article is organized as follows .section [ sec2 ] gives a review of existing estimators .section [ sec3 ] develops the new approach and studies risk properties of the proposed estimator .section [ sec4 ] presents a simulation study .section [ sec5 ] provides concluding remarks .all proofs are collected in the .we describe a number of existing shrinkage estimators . see lehmann and casella for a textbook account and strawderman and morris and lysy for recent reviews . throughout, denotes the trace and denotes the largest eigenvalue. then and . for a bayes approach ,assume the prior distribution : , where is the prior variance .the bayes rule is given componentwise by .then the greater is , the more is shrunk whether is fixed or estimated from the data . for the empirical bayes approach of efron and morris , estimated by the maximum likelihood estimator such that morris suggested the modified estimator in our implementation , the right - hand side of ( [ eb - iter ] ) is computed to update from the initial guess , , for up to 100 iterations until the successive absolute difference in is , or is set to so that otherwise . alternatively , xie _ et al . _ proposed empirical bayes - type estimators based on minimizing stein s unbiased risk estimate ( sure ) under heteroscedasticity .their basic estimator is defined componentwise by where is obtained by minimizing the sure of , that is , . in general , the two types of empirical bayes estimators , and , are non - minimax , as shown in section [ sec4 ] . 
for a direct extension of ,consider the estimator and , more generally , , where is a scalar constant and a scalar function .see lehmann and casella , theorem 5.7 , although there are some typos .both and are spherically symmetric .the estimator is minimax provided and is minimax provided .no such exists unless , which restricts how much can differ from each other .for example , condition ( [ s - cond ] ) fails when and because and .berger proposed estimators of the form and , where is a scalar constant and a scalar function .then is minimax provided , and is minimax provided , regardless of differences between .however , a striking feature of and , compared with and , is that the smaller is , the more is shrunk . for example ( [ example ] ) , under ,the coordinates are shrunk only slightly , whereas are shrunk as if they were shrunk as a 7-dimensional vector under .the associated risk reduction is insubstantial , because the risk of estimating is a small fraction of the overall risk of estimating .define the positive - part version of componentwise as the estimator dominates by baranchik , section 2.5 .berger , equation ( 5.32 ) , stated a different positive - part estimator , with , but the component may not be of the same sign as . given a prior , berger suggested an approximation of berger s robust generalized bayes estimator as x.\end{aligned}\ ] ] the estimator is expected to provide significant risk reduction over if the prior is correct and be robust to misspecification of the prior , but it is , in general , non - minimax . in the case of , becomes , in the form of spherically symmetric estimators , where is a scalar function ( bock , brown ) .the estimator is minimax provided and is nondecreasing .moreover , if , then is non - minimax unless . to overcome the non - minimaxity of ,berger developed a minimax estimator by combining , , and a minimax estimator of bhattacharya .suppose that and the indices are sorted such that , where .define componentwise as \frac{d_j}{d_j+\gamma_j } x_j,\end{aligned}\ ] ] where . in the case of , reduces to the original estimator of bhattacharya .the factor is replaced by in berger s original definition of , corresponding to replacing by in . in our simulations ,the two versions of somehow yield rather different risk curves , and so do the corresponding versions of other estimators .but there has been limited theory supporting one version over the other .therefore , we focus on comparisons of only the corresponding versions of and other estimators .we develop a useful approach for shrinkage estimation under heteroscedasticity , by making explicit how different coordinates are shrunk differently .the approach not only sheds new light on existing results , but also lead to new minimax estimators .assume that ( diagonal ) and .consider estimators of the linear form where is a nonnegative definite , diagonal matrix indicating the _ direction _ of shrinkage and is a scalar indicating the _ magnitude _ of shrinkage .both and are to be determined .a sketch of our approach is as follows .a. for a fixed , the optimal choice of in minimizing the risk is b. for a fixed and a scalar constant , consider the estimator by theorem [ th1 ] , an upper bound on the risk function of is ,\end{aligned}\ ] ] where .requiring the second term to be no greater than 0 shows that if , then is minimax provided if , then the upper bound ( [ upper - bound ] ) has a minimum at . c. 
by taking in ,consider the estimator subject to , so that is minimax by step ( ii ) .a positive - part estimator dominating is defined componentwise by where are the diagonal elements of .the upper bound ( [ upper - bound ] ) on the risk functions of and , subject to , gives we propose to choose based on some optimality criterion , such as minimizing the bayes risk with a normal prior centered at 0 ( berger ) .further discussions of steps ( i)(iii ) are provided in sections [ sec3.2][sec3.3 ] .we first develop steps ( i)(ii ) for the general problem where neither nor may be diagonal .the results can be as concisely stated as those just presented for the canonical problem where is diagonal and .such a unification adds to the attractiveness of the proposed approach .consider estimators of the form ( [ delta - form ] ) , where is not necessarily diagonal , but condition ( [ a - cond ] ) is invariant under a linear transformation . to see this ,let be a nonsingular matrix and and .for the transformed problem of estimating based on with variance matrix , the transformed estimator from ( [ delta - form ] ) is .the application of condition ( [ a - cond ] ) to says that is nonnegative definite and therefore is equivalent to ( [ a - cond ] ) itself . for the canonical problem where ( diagonal ) , condition ( [ a - cond ] ) only requires that is nonnegative definite , allowing to be non - diagonal . on the other hand, it seems intuitively appropriate to restrict to be diagonal. then condition ( [ a - cond ] ) is equivalent to saying that is nonnegative definite ( and diagonal ) , which is the condition introduced on in the sketch in section [ sec3.1 ] .the risk of an estimator of the form ( [ delta - form ] ) is for a fixed , the optimal in minimizing the risk is replacing by and by a scalar constant leads to the estimator for a generalization , replacing by with a scalar function leads to the estimator we provide in theorem [ th1 ] an upper bound on the risk function of .[ th1 ] assume that almost differentiable ( stein ) . if ( [ a - cond ] ) holds and is nondecreasing , then for each , ,\end{aligned}\ ] ] where and . taking in ( [ r - upper - bound ] )gives an upper bound on . requiring the second term in the risk upper bound ( [ r - upper - bound ] ) to be no greater than 0 leads to a sufficient condition for to be minimax .[ cor1 ] if ( [ a - cond ] ) holds and , then is minimax provided particularly , is minimax provided . for the canonical problem , inequality ( [ r - upper - bound ] ) and condition ( [ tan - cond2 ] ) for give respectively ( [ upper - bound ] ) and ( [ tan - cond ] ) .these results generalize the corresponding ones for and in section [ sec2 ] , by the specific choices or .the generalization also holds if is replaced by a scalar function .in fact , condition ( [ tan - cond2 ] ) reduces to baranchik s condition in the homoscedastic case . if , then the risk upper bound ( [ r - upper - bound ] ) has a minimum at . as a result , consider the estimator which is minimax provided . 
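the displayed formulas above are partly garbled in this extraction , so the following should be read only as one plausible instance of the linear form delta = ( i - lambda a ) x sketched in this section : a is a nonnegative diagonal direction matrix , and the magnitude is taken , by analogy with the james - stein construction , as lambda( x ) = c/( x ' a x ) . the constant c , the two example choices of a , and the variance configuration are hypothetical and do not reproduce the paper's tuned values or its minimaxity conditions ; the comparison at theta = 0 only illustrates the role of the direction matrix a .

```python
import numpy as np

rng = np.random.default_rng(7)

def delta_Ac(x, A_diag, c):
    """Linear-form shrinkage X - lambda(X) * A X with lambda(X) = c / (X'AX)."""
    lam = c / np.sum(A_diag * x**2)
    return x - lam * A_diag * x

d = np.array([4.0, 2.0, 1.0, 1.0, 0.5, 0.5, 0.25, 0.25])   # hypothetical variances
theta = np.zeros_like(d)
reps, p = 20000, len(d)

X = theta + rng.normal(size=(reps, p)) * np.sqrt(d)
for name, A in [("A = I", np.ones(p)), ("A = D^{-1}", 1.0 / d)]:
    est = np.array([delta_Ac(x, A, c=p - 2) for x in X])
    risk = np.mean(np.sum((est - theta)**2, axis=1))
    print(name, " Monte Carlo risk:", round(risk, 2), " vs tr(D) =", d.sum())
# At theta = 0, both directions reduce the Monte Carlo risk below tr(D) in this example.
```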
if ( berger ) , then and , by the proof of theorem [ th1 ] in the , the risk upper bound ( [ r - upper - bound ] ) becomes exact for .therefore , for , the estimator is uniformly best in the class , in agreement with the result that is uniformly best among in the homoscedastic case .the estimator has desirable properties of invariance .first , is easily shown to be invariant under a multiplicative transformation for a scalar .second , is invariant under a linear transformation of the inference problem .similarly as discussed below ( [ a - cond ] ) , let be a nonsingular matrix and , , and .for the transformed problem of estimating based on , the transformed estimator from is , whereas the application of is .the two estimators are identical because , , and hence . finally , we present a positive - part estimator dominating in the case where both and are symmetric , that is , similarly to ( [ a - cond ] ) , it is easy to see that this condition is invariant under a linear transformation . condition ( [ a - cond2 ] ) is trivially true if , , and are diagonal . in the, we show that ( [ a - cond2 ] ) holds if and only if there exists a nonsingular matrix such that , , and , where and are diagonal and the diagonal elements of or are , respectively , the eigenvalues of or . in the foregoing notation , and . for the problem of estimating based on , consider the estimator and the positive - part estimator with the component , where are the diagonal elements of .the estimator dominates by a simple extension of baranchik , section 2.5 . by a transformation back to the original problem , yields , whereas yields b x.\end{aligned}\ ] ] then dominates .therefore , ( [ r - upper - bound ] ) also gives an upper bound on the risk of , with , even though is not of the form .in practice , a matrix satisfying ( [ a - cond2 ] ) can be specified in two steps .first , find a nonsingular matrix such that and , where is diagonal .second , pick a diagonal matrix and define .the first step is always feasible by taking , where is a nonsingular matrix such that and is an orthogonal matrix such that is diagonal . given and , it can be shown that and depend on the choice of , but not on that of , provided that if for any . in the canonical casewhere and , this condition amounts to saying that any coordinates of with the same variances should be shrunk in the same way .different choices of lead to different estimators and .we study how to choose , depending on but _ not _ on , to approximately optimize risk reduction while preserving minimaxity for .the estimator provides even greater risk reduction than .we focus on the canonical problem where ( diagonal ) and .further , we restrict to be diagonal and nonnegative definite . as discussed in berger , any estimator can have significantly smaller risk than only for in a specific region .berger considered the situation where significant risk reduction is desired for an elliptical region with and the prior mean and prior variance matrix .see and reviewed in section [ sec2 ] .to measure average risk reduction for in region ( [ region ] ) , berger used the bayes risk with the normal prior . 
for simplicity , assume throughout that and is diagonal .we adopt berger s ideas of specifying an elliptical region and using the bayes risk to quantify average risk reduction in this region .we aim to find , subject to , minimizing the bayes risk of with the prior , , where denotes the expectation with respect to the prior .given , the risk can be numerically evaluated .a simple monte carlo method is to repeatedly draw and and then take the average of .but it seems difficult to literally implement the foregoing optimization .alternatively , we develop a simple method for choosing by two approximations .first , if , then taking the expectation of both sides of ( [ point - bound ] ) with respect to the prior gives an upper bound on the bayes risk of : where denotes the expectation with respect to the marginal distribution of in the bayes model , that is , .an approximation strategy for choosing is to minimize the upper bound ( [ bayes - bound ] ) on the bayes risk or to maximize the second term .the expectation can be evaluated as a 1-dimensional integral by results on inverse moments of quadratic forms in normal variables ( e.g. , jones ) .but the required optimization problem remains difficult .second , approximations can be made to the distribution of the quadratic form .suppose that is approximated with the same mean by , where is a chi - squared variable with degrees of freedom .then is approximated by .we show in the that this approximation gives a valid lower bound : a direct application of jensen s inequality shows that .but the lower bound ( [ bayes - bound2 ] ) is strictly tighter and becomes exact when .no simple bounds such as ( [ bayes - bound2 ] ) seem to hold if more complicated approximations ( e.g. , satterthwaite ) are used .combining ( [ bayes - bound ] ) and ( [ bayes - bound2 ] ) shows that if , then notice that is invariant under a multiplicative transformation for a scalar , and so is the upper bound ( [ bayes - bound3 ] ) . our strategy for choosing is to minimize the upper bound ( [ bayes - bound3 ] ) subject to or , equivalently , to solve the constrained optimization problem : \\[-8pt ] & & \quad \mbox{subject to } \quad \sum_{j=1}^p ( d_j+ \gamma_j ) a_j^2 = \mbox{fixed}. \nonumber\end{aligned}\ ] ] the condition is dropped , because for , the achieved maximum is at least for some scalar . in spite of the approximations used in our approach ,theorem [ th2 ] shows that not only the problem ( [ opt ] ) admits a non - iterative solution , but also the solution has a very interesting interpretation . for convenience ,assume thereafter that the indices are sorted such that .[ th2 ] assume that , with and with ( ) .for problem ( [ opt ] ) , assume that with ( ) and , satisfied by . then the following results hold .a. there exists a _unique _ solution , , to problem ( [ opt ] ) .b. let be the largest index such that .then , for , and where and the achieved maximum value , , is . c. the resulting estimator is minimax .we emphasize that , although can be considered a tuning parameter , the solution is _ data independent _ , so that is automatically minimax .if a data - dependent choice of were used , minimaxity would not necessarily hold .this result is achieved both because each estimator with is minimax and because a global criterion ( such as the bayes risk ) is used , instead of a pointwise criterion ( such as the frequentist risk at the unknown ) , to select . 
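the simple monte carlo method mentioned above for evaluating the bayes risk can be sketched directly : repeatedly draw theta from the normal prior and x given theta , and average the loss . the james - stein rule is plugged in purely as a placeholder estimator , and the variances and prior variances are hypothetical .

```python
import numpy as np

rng = np.random.default_rng(11)

def bayes_risk(delta, d, gamma, reps=20000):
    """Monte Carlo Bayes risk: draw theta ~ N(0, Gamma), then X ~ N(theta, D)."""
    theta = rng.normal(size=(reps, len(d))) * np.sqrt(gamma)
    X = theta + rng.normal(size=theta.shape) * np.sqrt(d)
    est = np.apply_along_axis(delta, 1, X)
    return np.mean(np.sum((est - theta)**2, axis=1))

d = np.array([4.0, 2.0, 1.0, 0.5, 0.25])        # hypothetical variances
gamma = np.ones_like(d)                          # hypothetical prior variances
js = lambda x: (1.0 - (len(x) - 2) / np.sum(x**2)) * x   # placeholder estimator
print("MC Bayes risk:", bayes_risk(js, d, gamma), " vs tr(D) =", d.sum())
```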
by these considerations ,our approach differs from the usual exercise of selecting a tuning parameter in a data - dependent manner for a class of candidate estimators .there is a remarkable property of monotonicity for the sequence , which underlies the uniqueness of and .[ cor2 ] the sequence is nonincreasing : for , , where the equality holds if and only if the condition is equivalent to saying that the left side is greater than the right - hand side in the above expression for . therefore , is the smallest index with this property , and .the estimator is invariant under scale transformations of .therefore , the constant can be dropped from the expression of in theorem [ th1 ] .[ cor3 ] the solution can be rescaled such that then .moreover , it holds that the estimator can be expressed as the foregoing results lead to a simple algorithm for solving problem ( [ opt ] ) : a. sort the indices such that .b. take to be the smallest index ( corresponding to the largest ) such that and or take if there exists no such . c. compute by ( [ sol1])([sol2 ] ) .this algorithm is guaranteed to find the ( unique ) solution to problem ( [ opt ] ) by a fixed number of numerical operations .no iteration or convergence diagnosis is required .therefore , the algorithm is exact and non - iterative , in contrast with usual iterative algorithms for nonlinear , constrained optimization .the estimator has an interesting interpretation . by ( [ sol1])([sol2 ] ), there is a dichotomous segmentation in the shrinkage direction of the coordinates of based on .this quantity is said to reflect the bayes `` importance '' of , that is , the amount of reduction in bayes risk obtainable in estimating in berger .the coordinates with high are shrunk inversely in proportion to their variances as in berger s estimator , whereas the coordinates with low are shrunk in the direction of the bayes rule .therefore , mimics the bayes rule to reduce the bayes risk , except that mimics for some coordinates of highest bayes `` importance '' in order to achieve minimaxity .in fact , by inequality ( [ sol - ineq ] ) , the relative shrinkage , , of each ( ) in versus the bayes rule is always no greater than that of ( ) .the expression ( [ delta - a ] ) suggests that there is a close relationship in beyond the shrinkage direction between and the bayes rule under the bayes model , . in this case , , and hence behaves similarly to .therefore , _ on average _ under the bayes model , the coordinates of are shrunk in the same as in the bayes rule , except that some coordinates of highest bayes `` importance '' are shrunk no greater than in the bayes rule .while this discussion seems heuristic , we provide in section [ sec3.4 ] a rigorous analysis of the bayes risk of , compared with that of the bayes rule .we now examine for two types of priors : and ( ) , referred to as the homoscedastic and heteroscedastic priors . for both types , are of the same order as the variances .recall that is invariant under a multiplicative transformation of .for both the homoscedastic prior with and the heteroscedastic prior _ regardless _ of , the solution can be rescaled such that denote by this rescaled matrix , corresponding to .then coordinates with high variances are shrunk inversely in proportion to their variances , whereas coordinates with low variances are shrunk symmetrically . for ,the proposed method has a purely frequentist interpretation : it seeks to minimize the upper bound ( [ bayes - bound3 ] ) on the pointwise risk of at . 
for the homoscedastic prior with ,the proposed method is then to minimize the upper bound ( [ bayes - bound3 ] ) on the bayes risk of with an extremely flat , homoscedastic prior . as , the solution can be rescaled such that denote by this rescaled matrix .then coordinates with low ( or high ) variances are shrunk directly ( or inversely ) in proportion to their variances . the direction can also be obtained by using a fixed prior in the form ( ) for arbitrary , where .finally , in the homoscedastic case ( ) , if the prior is also homoscedastic ( ) , then , , and reduces to the james stein estimator , _ regardless _ of and . the estimator is constructed by minimizing the upper bound ( [ bayes - bound3 ] ) on the bayes risk subject to minimaxity .in addition to simplicity , interpretability , and minimaxity demonstrated for , it remains important to further study risk properties of and show that can provide effective risk reduction over .write whenever needed to make explicit the dependency of on .first , we study how close the bayes risk of can be to that of the bayes rule , which is the smallest possible among _ all _ estimators including non - minimax ones , under the prior , .the bayes rule is given componentwise by , with the bayes risk where , indicating the bayes `` importance '' of ( berger ) .the upper bound ( [ bayes - bound3 ] ) on the bayes risk of gives because and hence by corollary [ cor3 ] .it appears that the difference between and tends to be large if is large .but can not differ too much from each other because by corollary [ cor1 ] , then the difference between and should be limited even if is large .a careful analysis using these ideas leads to the following result .[ th3 ] suppose that the prior is .if , then if , then throughout , an empty summation is 0 .there are interesting implications of theorem 3 . by ( [ loose - bound1 ] ) and ( [ loose - bound2 ] ) , then achieves almost the minimum bayes risk if . in terms of bayesrisk reduction , the bound ( [ bayes - close ] ) shows that therefore , achieves bayes risk reduction within a negligible factor of that achieved by the bayes rule if . in the homoscedastic casewhere both and , reduces to , regardless of ( section [ sec3.3 ] ) .then the bounds ( [ tight - bound1 ] ) and ( [ tight - bound2 ] ) become exact and give efron and morris s result that or equivalently .it is interesting to compare the bayes risk bound of with that of the following simpler version of berger s estimator : by berger , is minimax and there seems to be no definite comparison between the bounds ( [ tight - bound1 ] ) and ( [ tight - bound2 ] ) on and the exact expression ( [ mb - bayes1 ] ) for , although the simple bounds ( [ loose - bound1 ] ) and ( [ loose - bound2 ] ) is slightly higher , by at most , than the bound ( [ mb - bayes2 ] ) .of course , each risk upper bound gives a conservative estimate of the actual performance , and comparison of two upper bounds should be interpreted with caution .in fact , the positive - part estimator yields lower risks than those of the non - simplified estimator in our simulation study ( section [ sec4 ] ) .the simplicity of and makes it easy to further study them in other ways than using the bayes ( or average ) risk .no similar result to the following theorem [ th4 ] has been established for or .corresponding to the prior , consider the worst - case ( or maximum ) risk over the hyper - rectangle ( e.g. , donoho _ et al . _ ) . 
applying jensen s inequality to ( [ point - bound ] ) shows that if , then which immediately leads to by the discussion after ( [ bayes - bound2 ] ) , a direct application of jensen s inequality to ( [ bayes - bound ] ) shows that the bayes risk is also no greater than the right - hand side of ( [ minimax - bound ] ) , whereas inequality ( [ bayes - bound2 ] ) leads to a strictly tighter bound ( [ bayes - bound3 ] ) .nevertheless , the upper bound ( [ minimax - bound ] ) on the worst - case risk of gives similarly as how ( [ bayes - bound3 ] ) leads to ( [ bayes - bound4 ] ) on the bayes risk of .therefore , the following result holds by the same proof of theorem 3 .[ th4 ] suppose that .if , then if , then there are similar implications of theorem [ th4 ] to those of theorem [ th3 ] . by donoho , the minimax linear risk over , , coincides with the minimum bayes risk , and is no greater than times the minimax risk over , .these results are originally obtained in the homoscedastic case ( ) , but they remain valid in the heteroscedastic case by the independence of the observations and the separate constraints on . therefore , a similar result to ( [ bayes - close ] ) holds : if , then achieves almost the minimax linear risk ( or the minimax risk up to a factor of ) over the hyper - rectangle , in addition to being globally minimax with unrestricted .the foregoing results might be considered non - adaptive in that is evaluated with respect to the prior or the parameter set with the same used to construct .but , by the invariance of under scale transformations of , is identical to the estimator , , that would be obtained if is replaced by for any scalar such that the diagonal matrix is nonnegative definite . by theorems 34 , this observation leads directly to the following adaptive result .in contrast , no adaptive result seems possible for .[ cor4 ] let and .then for each , & \le & r\bigl ( \delta^{\mathrm{bayes}}_{\gamma_\alpha } , \pi_{\gamma_\alpha}\bigr ) + \alpha^{-1 } \bigl(d_1^*+d_2^*+d_3^*+d_4^ * \bigr ) \\ & = & r^l(\mathcal h_{\gamma_\alpha})+ \alpha^{-1 } \bigl ( d_1^*+d_2^*+d_3^*+d_4^*\bigr),\end{aligned}\ ] ] where . for fixed , can achieve close to the minimum bayes risk or the minimax linear risk with respect to each prior in the class or each parameter set in the class under mild conditions . for illustration , consider the case of a heteroscedastic prior with .then can be reparameterized as . by corollary [ cor4 ] , for each , where and . therefore , if , then achieves the minimum bayes risk , within a negligible factor , under the prior for each .this can be seen as an extension of the result that in the homoscedastic case , asymptotically achieves the minimum bayes risk under the prior for each as .finally , we compare the estimator with a block shrinkage estimator , suggested by the differentiation in the shrinkage of low- and high - variance coordinates by .consider the estimator where is a cutoff index , and if is of dimension 1 or 2 .the index can be selected such that the coordinate variances are relatively homogeneous in each block .alternatively , a specific strategy for selecting is to minimize an upper bound on the bayes risk of , similarly as in the development of . applying ( [ bayes - bound3 ] ) with to in the two blocks shows that , where the first ( or second ) term in is set to 0 if ( or ). 
then can be defined as the smallest index such that .but the upper bound ( [ bayes - bound4 ] ) on is likely to be smaller than the corresponding bound on , because for each by the cauchy schwarz inequality .therefore , tends to yield greater risk reduction than .this analysis also indicates that can be advantageous over extended to multiple blocks .the rationale of forming blocks in and differs from that in existing block shrinkage estimators ( e.g. , brown and zhao ) . as discussed in cai ,block shrinkage has been developed mainly in the homoscedastic case as a technique for pooling information : the coordinate means are likely to be similar to each other within a block .nevertheless , it is possible to both deal with heterogeneity among coordinate variances and exploit homogeneity among coordinate means within individual blocks in our approach using a block - homoscedastic prior ( i.e. , the prior variances are equal within each block ) .this topic can be pursued in future work .we conduct a simulation study to compare the following 8 estimators , a. non - minimax estimators : by ( [ eb ] ) , by ( [ xkb ] ) , by ( [ rb ] ) with ; b. minimax estimators : by ( [ b+ ] ) , by ( [ mb ] ) with or for some large , by ( [ a+ ] ) with and .recall that corresponds to or and corresponds to with .in contrast , letting the diagonal elements of tend to in any direction in and leads to . setting to 0 or used here to specify the relevant estimators , rather than to restrict the prior on . for completeness, we also study the following estimators : by ( [ b+ ] ) , with replaced by in ( [ rb ] ) , with replaced by in ( [ mb ] ) , and with replaced by in ( [ a+ ] ) , referred to as the alternative versions of , , , and respectively .the usual choices of the factors , , , and , are motivated to minimize the risks of the non - positive - part estimators , but may not be the most desirable for the positive - part estimators . as seen below , the alternative choices , , and can lead to risk curves for the positive - part estimators rather different from those based on the usual choices , , and .therefore , we compare the estimators , , , and and , separately , their alternative versions .each estimator is evaluated by the pointwise risk function as moves in a certain direction or the bayes risk function as varies in a set of priors on .consider the homoscedastic prior or the heteroscedastic prior for .as discussed in section [ sec3.3 ] , the bayes risk with the first or second prior is meant to measure average risk reduction over the region or .corresponding to the two priors , consider the direction along or , where gives the euclidean distance from 0 to the point indexed by .the two directions are referred to as the homoscedastic and heteroscedastic directions .we investigate several configurations for , including ( [ example ] ) and where is a chi - squared variable with degrees of freedom . in the last case , can be considered a typical sample from a scaled inverse chi - squared distribution , which is the conjugate distribution for normal variances . in the case ( [ example - group3 ] ) , the coordinates may be segmented intuitively into three groups with relatively homogeneous variances . in the case ( [ example - group22 ] ) , there is no clear intuition about how the coordinates should be segmented into groups . 
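As an illustration of how such configurations and risk curves can be produced, the sketch below draws coordinate variances as a typical sample from a scaled inverse chi-squared distribution and estimates a pointwise risk by Monte Carlo; the dimension, degrees of freedom and replication counts are illustrative placeholders, not the values used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# One heteroscedastic configuration: variances as a typical sample from a
# scaled inverse chi-squared distribution (dimension and degrees of freedom
# below are illustrative only).
n, df = 20, 4
d = df / rng.chisquare(df, size=n)

def pointwise_risk(estimator, theta, d, n_rep=10_000):
    """Monte Carlo estimate of E||delta(X) - theta||^2 with X ~ N(theta, diag(d))."""
    x = theta + np.sqrt(d) * rng.standard_normal((n_rep, len(theta)))
    err = np.apply_along_axis(estimator, 1, x) - theta
    return np.mean(np.sum(err ** 2, axis=1))

# Sanity check: the unshrunken estimator delta_0(x) = x has constant risk sum(d).
theta = np.zeros(n)
print(pointwise_risk(lambda x: x, theta, d), d.sum())
```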
for fixed , the pointwise risk is computed by repeatedly drawing and then taking the average of . the bayes risk is computed by repeatedly drawing and and then taking the average of . each monte carlo sample size is set to .

[ figure [ fig1 ] : pointwise risks along the axis ( third row ) in the case ( [ example - group3 ] ) . left : non - minimax estimators . right : minimax estimators . ]

[ figure [ fig2 ] : as figure [ fig1 ] , with the same legend , in the case ( [ example - group3 ] ) ; the alternative versions of the estimators are used . ]

the relative performances of the estimators are found to be consistent across different configurations of studied . moreover , the bayes risk curves under the homoscedastic prior are similar to the pointwise risk curves along the homoscedastic direction . the bayes risk curves under the heteroscedastic prior are similar to the pointwise risk curves along the heteroscedastic direction . figure [ fig1 ] shows the pointwise risks of the estimators with the usual versions of , , , and and figure [ fig2 ] shows those of the estimators with the alternative versions of , , , and for the case ( [ example - group3 ] ) , with roughly three groups of coordinate variances , which might be considered unfavorable to our approach . for both and , the cutoff index is found to be 3 . see the supplementary material ( tan ) for the bayes risk curves of all these estimators for the case ( [ example - group3 ] ) and the results for other configurations of .

a number of observations can be drawn from figures [ fig1][fig2 ] . first , , , and have among the lowest risk curves along the homoscedastic direction . but along the heteroscedastic direction , the risk curves of and rise quickly above the constant risk of as increases . moreover , all the risk curves of , , and along the axis exceed the constant risk of as increases . therefore , , , and fail to be minimax , as mentioned in section [ sec2 ] . second , or has among the highest risk curve , except where the risk curves of and exceed the constant risk of along the heteroscedastic direction . the poor performance is expected for or , because there are considerable differences between the coordinate variances in ( [ example - group3 ] ) . third , among the minimax estimators , with or has the lowest risk curve along various directions , whether the usual versions of , , and are compared ( figure [ fig1 ] ) or the alternative versions are compared ( figure [ fig2 ] ) . fourth , the risk curve of with is similar to that of with along the heteroscedastic direction . but the former is noticeably higher than the latter along the homoscedastic direction as increases , whereas is noticeably lower than the latter along the axis as increases . these results agree with the construction of using a heteroscedastic prior and using a flat , homoscedastic prior . their relative performances depend on the direction in which the risks are evaluated . fifth , with has risk curves below that of or , but either above or crossing those of with and . moreover , with has elevated , almost flat risk curves for from 0 to 16 . this seems to indicate an undesirable consequence of using a non - degenerate prior for in that the risk tends to increase for near 0 , and remains high for far away from 0 .

the foregoing discussion involves the comparison of the risk curves as moves away from 0 between and specified with fixed priors . alternatively , we compare the pointwise risks at or and the bayes risks under the prior or between and specified with the prior for a
range of .the homoscedastic prior used in the specification of and can be considered correctly specified or misspecified , when the bayes risks are evaluated under , respectively , the homoscedastic or heteroscedastic prior or when the pointwise risks are evaluated along the homoscedastic or heteroscedastic direction .for each situation , has lower pointwise or bayes risks than . see figure a2 in the supplementary material ( tan ) .the estimator and its positive - part version are not only minimax and but also have desirable properties including simplicity , interpretability , and effectiveness in risk reduction .in fact , is defined by taking in a class of minimax estimators .the simplicity of holds because is of the linear form , with and indicating the direction and magnitude of shrinkage .the interpretability of holds because the form of indicates that one group of coordinates are shrunk in the direction of berger s minimax estimator whereas the remaining coordinates are shrunk in the direction of the bayes rule .the effectiveness of in risk reduction is supported , in theory , by showing that can achieve close to the minimum bayes risk simultaneously over a scale class of normal priors ( corollary [ cor4 ] ) . for various scenarios in our numerical study ,the estimators with extreme priors yield more substantial risk reduction than existing minimax estimators .it is interesting to discuss a special feature of and hence of and among linear , shrinkage estimators of the form where and are nonnegative definite matrices and is a scalar function .the estimator corresponds to the choice , which is motivated by the form of the optimal in minimizing the risk of for fixed .on the other hand , berger and srinivasan showed that under certain regularity conditions on , an estimator ( [ general ] ) can be generalized bayes or admissible only if .this condition is incompatible with , unless as in berger s estimator .therefore , including is , in general , not generalized bayes or admissible .this conclusion , however , does not apply directly to the positive - part estimator , which is no longer of the linear form .there are various topics that can be further studied .first , the prior on is fixed , independently of data in the current paper .a useful extension is to allow the prior to be estimated within a certain class , for example , homoscedastic priors , from the data , in the spirit of empirical bayes estimation ( e.g. , efron and morris ) .second , the bayes risk with a normal prior is used to measure average risk reduction in an elliptical region ( section [ sec3.3 ] ) .it is interesting to study how our approach can be extended when using a non - normal prior on , corresponding to a non - elliptical region in which risk reduction is desired .the following extends stein s lemma for computing the expectation of the inner product of and a vector of functions of .[ lem1 ] let be multivariate normal with mean and variance matrix .assume that is almost differentiable stein with for , where . 
then ,\ ] ] where is the matrix with element .a direct generalization of lemma 2 in stein to a normal random vector with non - identity variance matrix gives where is the row vector with element .taking the element of both sides of the equation gives where is the element of .summing both sides of the preceding equation over gives the desired result .proof of theorem [ th1 ] by direct calculation , the risk of is by lemma [ lem1 ] and the fact that , the third term after the minus sign in is by condition ( [ a - cond ] ) , is nonnegative definite . by section 21.14 andexercise 21.32 in harville , for .then the preceding expression is bounded from below by which leads immediately to the upper bound on .proof for condition ( [ a - cond2 ] ) we show that if condition ( [ a - cond2 ] ) holds , then there exists a nonsingular matrix with the claimed properties .the converse is trivially true .let be the unique symmetric , positive definite matrix such that .then is symmetric , that is , , because .moreover , and commute , that is , , because and is symmetric .therefore , and are simultaneously diagonalizable ( harville , section 21.13 ) .there exists an orthogonal matrix such that and for some diagonal matrices and .then satisfies the claimed properties .proof of inequality ( [ bayes - bound2 ] ) we show that if are independent standard normal variables , then .. then and are independent , , and .the claimed inequality follows because , , and by jensen s inequality .proofs of theorem [ th2 ] and corollary [ cor2 ] consider the transformation and , so that and .problem ( [ opt ] ) is then transformed to , subject to ( ) and , which is of the form of the special case of ( [ opt ] ) with ( ) .but it is easy to verify that if the claimed results hold for the transformed problem , then the results hold for original problem ( [ opt ] ) .therefore , assume in the rest of proof that ( ). there exists at least a solution , , to problem ( [ opt ] ) by boundedness of the constraint set .let and .a key of the proof is to exploit the fact that , by the setup of problem ( [ opt ] ) , is automatically a solution to the problem the karush tucker condition for this problem gives where , ( ) , and satisfying ( ) are lagrange multipliers .first , we show that and hence for .if , then either for , or .the latter case is infeasible by the constraint .suppose . by ( [ a2 ] ) , for each .then for each because .second , we show that . if , then .. then by ( [ a2 ] ) . summing ( [ a3 ] ) over and ( [ a4 ] ) shows that .therefore , or equivalently .third , we show that and . for each and , by ( [ a2])([a3 ] ) and then because . the inequalities also hold for , by application of the argument to problem ( [ a1 ] ) with replaced by some .then because for each , , and is the largest element in .fourth , we show the expressions for and the achieved maximum value . by the definition of , for . by ( [ a2 ] ) , for .let and .then is a solution to the problem by the definition of , and hence lies off the boundary in the constraint set .then is a solution to the foregoing problem with the constraint removed .the problem is of the form of maximizing a linear function of subject to an elliptical constraint .straightforward calculation shows that and the achieved maximum value is , where .finally , we show that the sequence is nonincreasing : , where the equality holds if and only if . because or , this result implies that and hence is a unique solution to ( [ opt ] ) .let so that . 
by the identity and simple calculation ,
$$\cdots\,\frac{d_{k+1}^{-1}}{\sum_{j=1}^{k+1} d_j^{-1}} = d_{k+1}\,\frac{\{r_k-(k-2)\}^2}{r_k(r_k+1)},$$
where . therefore , . moreover , if and only if , that is , .

proof of corollary [ cor3 ] it suffices to show ( [ sol - ineq ] ) . by corollary [ cor2 ] , and hence . then for . because for .

proof of theorem [ th3 ] let so that , similarly as in the proof of theorem [ th2 ] . by equation ( [ a5 ] ) with and replaced by , by the relationship and simple calculation , if , combining the two preceding equations gives the first inequality follows because for and is increasing for with a maximum at . the second inequality follows because . therefore , if then if , then and hence this completes the proof .

the author thanks bill strawderman and cunhui zhang for helpful discussions . | consider the problem of estimating a multivariate normal mean with a known variance matrix , which is not necessarily proportional to the identity matrix . the coordinates are shrunk directly in proportion to their variances in efron and morris ( _ j . amer . statist . assoc . _ * 68 * ( 1973 ) 117 - 130 ) empirical bayes approach , whereas inversely in proportion to their variances in berger s ( _ ann . statist . _ * 4 * ( 1976 ) 223 - 226 ) minimax estimators . we propose a new minimax estimator , by approximately minimizing the bayes risk with a normal prior among a class of minimax estimators where the shrinkage direction is open to specification and the shrinkage magnitude is determined to achieve minimaxity . the proposed estimator has an interesting simple form such that one group of coordinates are shrunk in the direction of berger s estimator and the remaining coordinates are shrunk in the direction of the bayes rule . moreover , the proposed estimator is scale adaptive : it can achieve close to the minimum bayes risk simultaneously over a scale class of normal priors ( including the specified prior ) and achieve close to the minimax linear risk over a corresponding scale class of hyper - rectangles . for various scenarios in our numerical study , the proposed estimators with extreme priors yield more substantial risk reduction than existing minimax estimators .
there has been renewed interest in directional bayesian analysis in view of its fundamental applications to molecular biology . due to chemical constraints on the bonds of biomolecules , the geometry of these molecules can be described by a set of angles .other applications include locating and tracking an electric signal and the analysis of forensic fingerprint evidence .all these applications involve circular data which is naturally modelled by the von mises distribution .the probability density function of the von mises distribution with mean on the unit circle and concentration parameter is given by where is the modified bessel function of the first kind and order .the circular variance can be described by where .let be a vector of observations from a von mises distribution .when a conjugate prior is used , the posterior distribution of the mean is itself von mises , and can be easily sampled via .let be the conjugate prior for the concentration .the posterior is where and are observed constants ; in this case and . for the case and the normalization constant is , however in general the normalization constant is intractable . in this paper we shall call the _ bessel exponential distribution_. existing algorithms to sample tend to generate from approximate distributions or have a large overhead of sampled auxiliary variables .we present a new , extremely fast algorithm to sample from the bessel exponential distribution . for large , is approximately ( * ? ? ?9.7.1 ) . plugging this approximation into yields a gamma density with shape and rate .this insight motivates us to use a gamma - based acceptance - rejection sampler for .however , the above approximation for breaks down for small , and thus great care is needed to ensure our rejection sampler is efficient for all values of .we derive the optimal gamma - based proposal distribution and show that the resulting sampler has an acceptance probability of at least 0.7 for all and .the minimum acceptance probability of occurs when the distribution is concentrated around .the algorithm is described in section [ sect : alg ] and derived in section [ sect : derivation ] .enhancements are considered in section [ sect : speedup ] and the algorithm s efficiency is explored in section [ sect : efficiency ] .as discussed above , we can approximate the bessel exponential distribution with a gamma distribution .however , because the ratio of these densities diverges as , we can not directly use a gamma proposal for our rejection sampler .instead we propose values where and has a gamma distribution .this is an application of marsaglia s exact approximation procedure , see for more details . using a shifted gamma proposal with shape andscale leads to the envelope function for , where the amplitude is chosen to ensure the ratio is bounded below one .we can generate a sample from the bessel exponential distribution by generating a sample from and accepting it with probability where and .the samples generated via this procedure have the correct bessel exponential distribution for any choice of the proposal parameters , though the values of these parameters will affect the algorithm s efficiency .in section [ sect : derivation ] we show that the approximate optimal choices for the proposal parameters are where the terms and are and is the principal branch of the lambert w function defined as for . 
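For orientation, the target kernel and its large-concentration gamma approximation can be written down numerically as follows, assuming the kernel exp(-eta*beta0*kappa)/I0(kappa)^eta that can be read off the acceptance-probability expression below; the exponentially scaled Bessel function i0e is used purely for numerical stability, and this is a sketch of the kernel only, not of the full sampler.

```python
import numpy as np
from scipy.special import i0e   # exponentially scaled Bessel: i0e(k) = exp(-k) * I0(k)

def log_bessel_exponential(kappa, eta, beta0):
    """Unnormalized log-density of the kernel exp(-eta*beta0*kappa) / I0(kappa)**eta,
    using log I0(kappa) = kappa + log(i0e(kappa)) for numerical stability."""
    return -eta * beta0 * kappa - eta * (kappa + np.log(i0e(kappa)))

def log_gamma_approximation(kappa, eta, beta0):
    """Same kernel after substituting the large-kappa approximation
    I0(kappa) ~ exp(kappa) / sqrt(2*pi*kappa), which yields a gamma-type kernel."""
    return 0.5 * eta * np.log(2 * np.pi * kappa) - eta * (beta0 + 1.0) * kappa

kappa = np.linspace(0.5, 20.0, 5)
print(log_bessel_exponential(kappa, eta=10.0, beta0=0.3))
print(log_gamma_approximation(kappa, eta=10.0, beta0=0.3))
```

The two curves agree for moderate to large values of the concentration and separate near zero, which is exactly where the careful choice of proposal parameters described below matters.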
the acceptance - rejection algorithm to generate a sample from the bessel exponential distribution proceeds as follows : 1 .find efficient proposal parameters , which will depend on .2 . draw from a gamma distribution with shape and rate .3 . draw from a uniform distribution on $ ] .4 . accept if , else go to 2 .the detailed procedure is described in algorithm [ alg : kappa ] . when implementing the algorithm , both the bessel functions and the lambert w functioncan be computed using software such as the general scientific library or its r wrapper , the cran package * gsl*. in practice it is often possible to avoid computing these functions , as we show in section [ sect : speedup ] . [ algline : startsetup ] [algline : kappastar ] [ algline : betadefn ] [algline : lambert ] [ algline : endsetup ] [ algline : startloop ] sample from a left - truncated at [ algline : xsampling ] sample from a [ algline : acceptreject ] + [ algline : returnline ]we will now derive the optimal parameters for the proposal distribution of which follows a gamma distribution with shape and rate .we do so by maximizing the expected probability of acceptance \nonumber \\&=\frac{(\eta \beta)^{\eta \alpha+1}}{\gamma(\eta \alpha+1)}\exp\{-\eta \beta\varepsilon-\eta g(\kappa_0;\alpha,\beta,\varepsilon)\}\int_{0}^\infty\frac{\exp(-\eta \beta_0\kappa)}{i_0(\kappa)^\eta } { \mathrm{d}}\kappa \label{eq : acceptprob}\end{aligned}\ ] ] over subject to the constraint . in order for the maximum to be finite as require . by taking logs we see that maximizing with respect to is equivalent to maximizing the constraint implies either or .the lagrangians for constrained optimization corresponding to these conditions are and , neither of which have interior critical points over because the and derivatives have no common root .thus the optimal parameters must lie on the boundary of the parameter space .an examination of the boundaries show that the maximum satisfies and .intuitively , this says that for any we should pick as small as possible while still having be the maximizer of .thus our lagrangian is our optimal parameters are either a critical point of or lie on one or more of the boundaries or .if either or then direct differentiation shows the maximum occurs when and is the unique positive root of .we shall see this is a limiting case of the critical point solution .the only other boundary is ; we shall see this is the solution when is close to . to find the critical points of ,we start by setting the derivatives with respect to and to zero and rearranging yields the optimal parameters as functions of and . this yields where is the digamma function .we must have so that and are positive .notice that the limit corresponds to the boundary case discussed above .the above equations give all optimal parameters in terms of and .note that since constraint at is satisfied whenever , we are free to choose alternative , sub - optimal values for the other parameters if the true optimal values are too difficult to compute. we shall explore this in section [ sect : speedup ] when we use an approximation for the lambert w function . finally , we find the optimal as follows. this value must either lie on the boundary or else satisfy unfortunately plugging into and solving for as a function of alone is analytically intractable .however , one can check that decreases from positive infinity at to negative infinity as . 
since all admissible lie in the finite interval we can easily find the optimal through any standard one - dimensional root - finding algorithm . if the root lies to the right of , the optimal value is .we plug the optimal into to find all of our optimal parameters in terms of .doing this for each and plugging the resulting parameters into yields a function which we numerically maximize over .let be the optimal value of and let be the optimal parameters corresponding to .these are the desired parameters that maximize the expected acceptance probability .the above numeric maximizations for and may be acceptable when and are known a priori .however , they are computationally prohibitive in the standard monte carlo case where we wish to generate many samples from the bessel exponential distribution with different values of and for each sample .thus our next task is to approximate the optimal parameters with easily computable functions of and . for all and , well approximated by , the positive root of ; indeed is the exact optimum in the boundary case . to approximate we use the bounds (11 ) , rearranging these bounds shows that where these bounds are relatively tight , we found that the convex combination with provides a good approximation to and hence to .the parameter is exactly equal to when is close to its lower bound of negative one . for sufficiently large , drops from its upper limit towards its lower limit .the transition between the two limits is very rapid for .we achieve good accuracy with the approximation where .this approximation is very good when is large or when .a slower , more precise approximation for may lead to parameters which provide better efficiency for small and ; we address this in section [ sect : efficiency ] . given these approximations of and , the parameters and given by .the truncated gamma on line [ algline : xsampling ] can be sampled using dagpunar s algorithm . alternatively , one can use a standard gamma sampling algorithm such as marsaglia tsang and reject when .indeed , the marsaglia tsang algorithm is itself a rejection sampler with a gaussian proposal , and its rejection step can be combined with the rejection step on line [ algline : acceptreject ] for an additional speed - up .the function on line [ algline : lambert ] can be approximated by ( winitzki , ) with no noticeable drop in the expected probability of acceptance .finally , we can implement the simple squeezes to avoid computing the bessel function within the rejection loop on line [ algline : acceptreject ] .specifically , the loop on lines [ algline : startloop ] to [ algline : returnline ] can be replaced with algorithm [ alg : squeezed ] . sample from a left - truncated at sample from a * if * * or * * then * + * if * * or * * then * +we now analyze the efficiency of algorithm [ alg : kappa ] . when using the winitzki approximation , the initial setup ( lines [ algline : startsetup][algline : endsetup ] ) involve arithmetic operations , four square roots , two bessel function evaluations , one logarithm and one exponentiation . in totalthis setup requires approximately 70 microseconds on a 2.4ghz intel i5 computer when using the r package * gsl*. 
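Putting the pieces together, the accept/reject loop of algorithm [ alg : kappa ] has roughly the following shape. The helpers `setup_parameters` and `log_accept_ratio` stand in for the setup lines and the acceptance probability derived above, and the mapping from the accepted proposal draw back to the concentration is likewise left to those helpers; this is a structural sketch, not a drop-in implementation of the paper's formulas.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_bessel_exponential(eta, beta0, setup_parameters, log_accept_ratio,
                              max_iter=100_000):
    """Structural sketch of the rejection sampler (algorithm [alg:kappa]).

    `setup_parameters(eta, beta0)` is assumed to return the gamma proposal
    shape and rate together with the shift/truncation point eps, and
    `log_accept_ratio(x, eta, beta0, shape, rate, eps)` the log acceptance
    probability; both are placeholders for the paper's expressions.
    """
    shape, rate, eps = setup_parameters(eta, beta0)
    for _ in range(max_iter):
        x = rng.gamma(shape, 1.0 / rate)      # step 2: gamma proposal ...
        if x < eps:                           # ... left-truncated at eps by rejection
            continue
        u = rng.uniform()                     # step 3: uniform draw on [0, 1]
        if np.log(u) <= log_accept_ratio(x, eta, beta0, shape, rate, eps):
            return x                          # step 4: accept (map to kappa as in the paper)
    raise RuntimeError("no acceptance within max_iter proposals")
```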
implementation in a lower - level language would increase the speed significantly . each iteration of the rejection loop requires a gamma sample , a uniform sample , between three and five logarithms , and in the worst case a bessel function evaluation . the squeeze in algorithm [ alg : squeezed ] does a good job of avoiding the bessel computation most of the time , and each iteration of the loop requires approximately microseconds . most of these iterations are accepted , and the algorithm , implemented in r , yields approximately 80,000 von mises samples per second when and is drawn uniformly over . when using a compiled language such as c++ , the algorithm yields over one million samples per second .

in figure [ fig : efficiency ] we plot the expected probabilities of acceptance as functions of and . the figures were generated by numerically integrating the expected probability of acceptance for each and for each of 2000 equally spaced values of . there is a noticeable dip in efficiency near . recalling that , we see that this region corresponds to diffused , i.e. the true is near zero . this is precisely the region where our bessel function approximation fails , so this drop is to be expected . fortunately the drop in efficiency is not severe and our efficiency remains above for all and . from figure [ fig : efficiency ] we see that our algorithm with the approximate optimal parameters does noticeably worse than the numerically computed true optimal parameters when . this corresponds to the transition region where the optimal rapidly drops from its upper limit of to its lower limit of . our approximation of the optimal is inaccurate in this transition region . a more sophisticated approximation of the optimal would increase the algorithm 's efficiency ; however , the region is not usually an area of primary interest and we prefer to use the faster approximation .

[ figure [ fig : efficiency ] : expected probabilities of acceptance , plotted from the eff1.csv , eff10.csv and eff100.csv data files . ]

we have described a highly efficient algorithm to sample from the bessel exponential distribution . it is suitable for any application where one wishes to generate samples from the posterior distribution for the concentration parameter of the von mises distribution . | motivated by molecular biology , there has been an upsurge of research activities in directional statistics in general and its bayesian aspect in particular . the central distribution for the circular case is the von mises distribution which has two parameters ( mean and concentration ) akin to the univariate normal distribution . however , there has been a challenge to sample efficiently from the posterior distribution of the concentration parameter . we describe a novel , highly efficient algorithm to sample from the posterior distribution and fill this long - standing gap .
kolmogorov complexity ( also known as kolmogorov - chaitin or program - size complexity ) is recognized as a fundamental concept , but it is also often thought of as having little or no applicability because it is not possible to provide stable numerical approximations for finite particularly short strings by using the traditional approach , namely lossless compression algorithms .we advance a method that can overcome this limitation , and which , though itself limited in ways both theoretical and numerical , nonetheless offers a means of providing sensible values for the complexity of short strings , complementing the traditional lossless compression method that works well for long strings .this is done at the cost of massive numerical calculations and through the application of the coding theorem from algorithmic probability theory that relates the frequency of production of a string to its kolmogorov complexity .bennett s logical depth , on the other hand , is a measure of the complexity of strings that , unlike kolmogorov complexity , measures the _ organized _ information content of a string . in an application inspired by the notion of logical depth was reported in the context of the problem of image classification .however , the results in this paper represent the first attempt to provide direct numerical approximations of logical depth . the independence of the two measures kolmogorov complexity and logical depth which has been established theoretically , is also numerically tested and confirmed in this paper .our work is in agreement with what the theory predicts , even for short strings despite the limitations of our approach .our attempt to apply these concepts to practical problems ( detailed in a series of articles ( see e.g. ) ) is novel , and they are indeed proving to have interesting applications where evaluations of the complexity of finite short strings are needed . in sections[ kolmochap ] , [ ld ] , [ codingchap ] and [ formal ] , we introduce the measures , tools and formalism used for the method described in section [ dist ] . in section [ comparison ] ,we report the numerical results of the evaluation and analysis of the comparisons among the various measures , particularly the connection between number of instructions , integer valued program - size complexity , kolmogorov complexity approximated by means of the coding theorem method , and logical depth .when researchers have chosen to apply the theory of algorithmic information ( ait ) , it has proven to be of great value despite initial reservations .it has been successfully applied , for example , to dna false positive repeat sequence detection in genetic sequence analysis , in distance measures and classification methods , and in numerous other applications .this effort has , however , been hamstrung by the limitations of compression algorithms currently the only method used to approximate the kolmogorov complexity of a string given that this measure is not computable .central to ait is the basic definition of plain algorithmic ( kolmogorov - chaitin or program - size ) complexity : where is a universal turing machine and the program that , running on , produces . 
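In the standard notation, the definition just described reads
$$K_U(x) = \min\{\,|p| : U(p) = x\,\},$$
with $|p|$ the length in bits of the program $p$; the subscript records that the value is relative to the chosen universal machine $U$, which is exactly the dependence that the invariance theorem discussed below controls.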
traditionally , the way to approach the algorithmic complexity of a string has been by using lossless compression algorithms .the result of a lossless compression algorithm applied to is an upper bound of the kolmogorov complexity of .short strings , however , are difficult to compress in practice , and the theory does not provide a satisfactory solution to the problem of the instability of the measure for short strings . the invariance theorem , however , guarantees that complexity values will only diverge by a constant ( e.g. the length of a compiler or a translation program ) . +* invariance theorem * ( ) : if and are two universal turing machines , and and the algorithmic complexity of for and respectively , there exists a constant such that : latexmath:[\[\label{invariance } hence the longer the string , the less important is ( i.e. the choice of programming language or universal turing machine ) .however , in practice can be arbitrarily large , thus having a very great impact on the stability of kolmogorov complexity approximations for short strings .a measure of the structural complexity ( i.e. richness of structure and organization ) of a string can be arrived at by combining the notions of algorithmic information and time complexity . according to the concept of logical depth , the complexity of a stringis best defined by the time that an unfolding process takes to reproduce the string from its shortest description .while kolmogorov complexity is related to compression length , bennett s logical depth is related to decompression time . a typical example that illustrates the concept of logical depth , underscoring its potential as a measure of complexity , is a sequence of fair coin tosses .such a sequence would have a high information content ( kolmogorov complexity ) because the outcomes are random , but it would have no structure because it is easily generated . the string 1111 1111 would be equally shallow as measured by logical depth . its compressed version , while very small , requires little time to decompress into the original string . in contrast , the binary expansion of the mathematical constant is not shallow , because though highly compressible and hence having a low kolmogorov complexity , it requires non - negligible computational time to produce arbitrary numbers of digits from its shortest program ( or indeed from any short program computing the digits of ) .a detailed explanation pointing out the convenience of the concept of logical depth as a measure of organized complexity as compared to plain algorithmic complexity , which is what is usually used , is provided in . for finite strings , one of bennett s formal approaches to the logical depth of a stringis defined as follows : + let be a string and a significance parameter . a string sdepth at significance is given by with the length of the shortest program for , ( therefore ) .in other words , is the least time required to compute from a -incompressible program on a universal turing machine .each of the three linked definitions of logical depth provided in comes closer to a definition in which near - shortest programs are taken into consideration . in this experimental approachwe make no such distinction among significance parameters , so we will denote the logical depth of a string simply by . 
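Purely as a loose illustration of the compression-length versus decompression-time contrast (and not of the formal definition, which quantifies over near-shortest programs on a universal machine), one can compare a general-purpose compressor's output length with the time it takes to undo the compression; zlib is used here only as an example.

```python
import time
import zlib

def compression_proxies(s: bytes, repeats: int = 1000):
    """Loose, compressor-based stand-ins for the two notions discussed above:
    the compressed length (in the spirit of program-size complexity) and the
    average time to decompress it (in the spirit of, but not identical to,
    logical depth)."""
    comp = zlib.compress(s, 9)
    start = time.perf_counter()
    for _ in range(repeats):
        zlib.decompress(comp)
    elapsed = (time.perf_counter() - start) / repeats
    return len(comp), elapsed

# A trivially regular string and a more structured one of the same length.
for s in [b"1" * 4096, bytes(range(256)) * 16]:
    print(compression_proxies(s))
```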
like , as a function of is uncomputable .a novel feature of this research is that we provide exact numerical approximations for both measures and for specific short , allowing a direct comparison .this was achieved by running a large set of random turing machines and finding the smallest and fastest machines generating each output string .hence these approximations are deeply related to another important measure of algorithmic information theory .the algorithmic probability ( also known as levin s semi - measure ) of a string , is a measure that describes the expected probability of a random program running on a universal ( prefix - free . for details see ) . ] ) turing machine producing .formally , i.e. the sum over all the programs for which with outputs and halts .levin s semi - measure defines a distribution known as the _ universal distribution _ .it is important to notice that the value of is dominated by the length of the smallest program ( when the denominator of reaches its largest value ) .the length of the smallest program that produces the string is .the semi - measure is therefore also uncomputable , because for every , requires the calculation of , involving , which is itself uncomputable . an extension of to non - binary alphabetsis natural .more formally , can be associated with the original definition for binary strings .however , one may want to extend to , in which case for every , the function is semi - computable ( for the same reason that is uncomputable ) .an alternative to the traditional use of compression algorithms to approximate can be derived from a fundamental theorem that establishes the exact connection between and . +* coding theorem * ( levin ) : where we will use to indicate that has been approximated by means of through the coding theorem .an important property of as a semi - measure is that it dominates any other effective semi - measure , because there is a constant such that for all , .for this reason is often called a _ universal distribution _ .the ability of a universal turing machine to simulate any algorithmic process has motivated and justified the use of universal turing machines as the language framework within which definitions and properties of mathematical objects are given and studied . however , it is important to describe the formalism of a turing machine , because exact values of algorithmic probability for short strings will be approximated under this model , both for through ( denoted by ) , and for terms of the number of instructions used by the smallest turing machine producing .+ consider a turing machine with alphabet symbols , states and an additional halting state denoted by ( as defined by rado in his original busy beaver paper ) . at the outsetthe turing machine is in its initial state .the machine runs on a -way unbounded tape .its behavior is determined by the transition function .so , at each step : the machine s current `` state '' ( instruction ) ; and the tape symbol the machine s head is scanning define the transition with a unique symbol to write ( the machine can overwrite a on a , a on a , a on a , and a on a ) ; a direction to move in : ( left ) , ( right ) or ( none , when halting ) ; and a state to transition into ( which may be the same as the one it was in ) .the machine halts if and when it reaches the special halt state . 
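A minimal simulator for machines in this formalism might look as follows; the encoding of the transition table as a dictionary and the convention that state `n_states` denotes the halting state are choices made for this sketch, and the output is read off as the contiguous tape cells the head has gone through, as described above.

```python
def run_tm(transitions, n_states, max_steps):
    """Run a 2-symbol Turing machine in the formalism described above.

    `transitions` maps (state, symbol) -> (write, move, next_state), where
    move is -1 (left), +1 (right) or 0 (none, when halting) and
    next_state == n_states is the halting state.  Returns the output string
    (the contiguous cells visited by the head) if the machine halts within
    `max_steps`, otherwise None.
    """
    tape = {}                        # two-way unbounded tape, blank symbol 0
    pos, state = 0, 0                # head position and initial state
    lo = hi = 0                      # extent of the cells the head has gone through
    for _ in range(max_steps):
        write, move, next_state = transitions[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        lo, hi = min(lo, pos), max(hi, pos)
        state = next_state
        if state == n_states:        # halting state reached
            return "".join(str(tape.get(i, 0)) for i in range(lo, hi + 1))
    return None                      # did not halt within max_steps
```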
there are turing machines with states and symbols according to the formalism described above , as there are entries in the transition table and any of them may have possible instructions : there are halting instructions ( writing ` 0 ' and ` 1 ' ) and non - halting instructions ( movements , possible symbols to write and states ) .the output string is taken from the number of contiguous cells on the tape the head of the halting -state machine has gone through .a turing machine is considered to produce an output string only if it halts .the output is what the machine has written on the tape .in order to arrive at an approximation of , a method based on the coding theorem was advanced in .it is captured in the following function .let be a turing machine in with empty input . then : where denotes the number of elements of .let be fixed .it has been proved that the function is non - computable ( due to the denominator ) .however , for fixed and small values and is computable for values of the busy beaver problem that are known . for , for example , the busy beaver function tells us that , so given a turing machine with 4 states running on a blank tape that hasnt halted after 107 steps , we know it will never stop . more generally , for every string ( with alphabet ) one can compute a sequence which converges to when . for compute for steps all -turing machines with states ( there is a finite number of them ) and compute the quotient for for machines that halted before steps . since converges for every to ( fixed , fixed , fixed ) , the value of converges for fixed and . in this specific sense approachable , even if as a function of increasing time may increase when a machine produces , or decrease when a machine halts without producing . by the invariance theorem ( eq . [ invariance ] ) and the coding theorem ( eq . [ coding ] ) , is guaranteed to converge to .exact values of were previously calculated for symbols and states for which the busy beaver values are known .that is , a total of 36 , 10000 , 7529536 and 11019960576 turing machines respectively .the distributions were very stable and are proving to be capable of delivering applications that are also in agreement with results from lossless compression algorithms for boundary cases ( where both methods can be applied ) , hence validating the utility of both methods ( compression being largely validated by its plethora of applications and by the fact that it achieves an approximation of , see e.g. ) .the chief advantage of the coding theorem method , however , is that it is capable of dealing with short entities ( unlike compression algorithms , which are designed for large entities ) .there are 26559922791424 turing machines with 5 states and 2 symbols , and the values of busy beaver functions for these machines are unknown . 
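Given such a simulator and an enumeration of transition tables, the output frequency distribution and the corresponding complexity estimates can be assembled along the following lines; the function assumes the `run_tm` sketch above and takes the complexity of a string to be the negative base-2 logarithm of its output frequency among halting machines, in the spirit of the coding theorem.

```python
import math
from collections import Counter

def output_distribution(machines, n_states, max_steps):
    """Frequency D(s) of each output string s among halting machines, together
    with the coding-theorem style estimate -log2 D(s); `machines` is any
    iterable of transition tables accepted by the run_tm sketch above."""
    counts, halting = Counter(), 0
    for transitions in machines:
        out = run_tm(transitions, n_states, max_steps)
        if out is not None:
            counts[out] += 1
            halting += 1
    if halting == 0:
        return {}, {}
    freq = {s: c / halting for s, c in counts.items()}
    complexity = {s: -math.log2(p) for s, p in freq.items()}
    return freq, complexity
```

Exhausting the enumeration within a known step bound is what the busy beaver values make possible for up to 4 states; for 5 states the reductions and runtime cut-off described next are needed.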
in what followswe describe how we proceeded .calculating is an improvement on our previous numerical evaluations and provides a larger data set to work with , allowing us to draw more significant statistical conclusions vis - - vis the relation between these calculations and strict integer value program - size complexity , as well as to make a direct comparison to bennett s logical depth .we did not run all the turing machines with 5 states to produce , because one can take advantage of symmetries and anticipate some of the behavior of the turing machines directly from their transition tables without actually running them ( this is impossible generally due to the halting problem , but some reductions are possible ) .if is the set of turing machines with states and symbols , as defined above , we reduce it to : where is the transition function of .so is a subset of , with machines with the transition corresponding to initial state and symbol ( this is the initial transition in a ` 0'-filled blank tape ) moving to the right and changing to a state different from the initial and halting ones . for machines with two symbols , as there are different initial transitions ( the machine can write ` 0 ' or ` 1 ' and move to one of the states in ) , and for the other transitions there are possibilities , as in .after running on a ` 0'-filled tape , the procedure for completing the output strings so that they reach the frequency they have in is : * for every , * * if halts and produces the output string , add one occurrence of , the reverse of .* * if does not halt , count another non - halting machine .+ these two completions add the output ( or number of non - halting machines ) of new machines , one for each machine in .these new machines are left - right symmetric to the machines in .formally , this is the set when the original machine halts , its symmetric counterpart halts too and produces the reversed strings , and if the original machine does not halt , neither does the symmetric machine . this way we consider the output of all machines with the initial transition moving to the left and to a state not in . *include occurrences of string `` 1 '' .this corresponds to the machines writing ` 1 ' and halting at the initial transition .there is just one possible initial transition for these machines ( move to the halting state , write ` 1 ' and remain in the initial cell ) .the other transitions can have any of the possible instructions .* include occurrences of string `` 0 '' .this is justified as above , for machines writing ` 0 ' and halting at the initial transition . 
*include additional non - halting machines , corresponding to machines remaining in the initial state in the initial transition ( these machines never halt , as they remain forever in the initial state ) .there are initial transitions of this kind , as the machine can write different symbols and move in possible directions .if we sum and the machines considered above , having completed them in the manner described , we get the output corresponding to the machines in .moreover , we need the output of those machines starting with a ` 0'-filled tape and with a ` 1'-filled tape .but we do not run any machine twice , as for every machine producing the binary string starting with a ` 1'-filled tape , there is also a 0 - 1 symmetric machine ( where the role of 1 ( of 0 ) in the transition table of is the role of 0 ( of 1 ) in the transition table of ) that when starting with a ` 0'-filled tape produces the complement to one of , that is , the result of replacing all 0s in s with 1s and all 1s with 0s .so we add the complement to every one of the strings found and count the non - halting machines twice to obtain the output of all machines in starting both with a ` 0'-filled tape and with a ` 1'-filled tape .to construct , we ran the machines in , which is of .the output strings found in , together with their frequencies , were completed prior to constructing , following the procedure explained above .it is useful to avoid running machines that we can easily determine will not stop .these machines will consume the runtime without yielding an output . as we have shown above, we can avoid generating many non - halting machines .in other cases , we can detect them at runtime , by setting appropriate filters .the theoretical limit of the filters is the halting problem , which means that they can not be exhaustive . but a practical limit is imposed by the difficulty of checking some filters , which takes up more time than the runtime that is saved .we have employed some filters that have proven useful .briefly , these are : * * machines without transitions to the halting state*. while the transition table is being filled , the simulator checks to ascertain whether there is some transition to the halting state .if not , it avoids running it .* * escapees*. these are machines that at some stage begin running forever in the same direction . as they are always reading new blank symbols , as soon as the number of non - previously visited positions is greater than the number of states , we know that they will not stop , because the machines have necessarily entered an infinite loop . given that , while visiting the last new cells , some of the states have been repeated , and will repeat forever , as the machine s behavior is deterministic .* * cycles of period two*. these cycles are easy to detect .they are produced when in steps and the tape is identical and the machine is in the same state and the same position .when this is the case , the cycle will be repeated infinitely .these filters were implemented in our c++ simulator , which also uses the reduced enumeration of section [ sec : some - reductions ] . to test them we calculated with the simulator and compared the output to the list that was computed in , arriving at exactly the same results , and thereby validating our reduction techniques .running without reducing the enumeration or detecting non - halting machines took 952 minutes . 
running the reduced enumeration with non - halting detectors took 226 minutes .the busy beaver for turing machines with 4 states is known to be 107 steps , that is , any turing machine with 2 symbols and 4 states running longer than 107 steps will never halt .however , the exact number is not known for turing machines with 2 symbols and 5 states , although it is believed to be 47176870 , as there is a candidate machine that runs for this length of time and halts and no machine with a greater runtime has yet been found .so we decided to let the machines with 5 states run for 4.6 times the busy beaver value for 4-state turing machines ( for 107 steps ) , knowing that this would constitute a sample significant enough to capture the behavior of turing machines with 5 states .the chosen runtime was rounded to 500 steps , which was used to construct the output frequency distribution for .not all 5-state turing machines have been used to build , since only the output of machines that halted at or before 500 steps was taken into consideration . as an experiment to ascertain how many machines we were leaving out , we ran random turing machines for up to 5000 steps . among these ,only 50 machines halted after 500 steps and before 5000 ( that is , a fraction less than , because in the reduced enumeration we do nt include those machines that halt in one step or that we know wo nt halt before we generate them , so it s a smaller fraction ) , with the remaining 1496491379 machines not halting at 5000 steps . as far as these are concerned and given that the busy beaver values for 5 states are unknown we do not know after how many steps they would eventually halt , if they ever do . according to the following analysis , our election of a runtime of 500 steps therefore provides a good estimation of .the frequency of runtimes of ( halting ) turing machines has theoretically been proven to drop exponentially , and our experiments are closer to the theoretically predicted behavior . to estimate the fraction of halting machines that were missed because turing machines with 5 states were stopped after 500 steps , we hypothesize that the number of steps a random halting machine needs before halting is an exponential random variable , defined by we do not have direct access to an evaluation of , since we only have data for those machines for which .but we may compute an approximation of , , which is proportional to the desired distribution . 
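One way to carry out such a fit is sketched below, using scipy's `curve_fit` as the least-squares routine; the placeholder runtime sample, the starting values and the resulting numbers are illustrative only and are not the data or estimates reported in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_model(t, a, lam):
    """Exponential model for the number of halting machines with runtime t."""
    return a * np.exp(-lam * t)

# Placeholder halting-time data standing in for the observed runtimes (<= 500 steps).
rng = np.random.default_rng(2)
runtimes = rng.geometric(0.05, size=5000)
steps, counts = np.unique(runtimes, return_counts=True)

(a_hat, lam_hat), _ = curve_fit(exp_model, steps, counts, p0=(float(counts.max()), 0.1))

# The fitted rate gives the relative mass of the tail beyond the cut-off, e.g.
# exp(-lam_hat * 500) compared with the mass already observed.
print(a_hat, lam_hat, np.exp(-lam_hat * 500))
```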
a non - linear regression using ordinary least - squares gives the approximation with and . the residual sum - of - squares is ; the number of iterations with starting values and is nine . the model 's is the same appearing in the general law , and may be used to estimate the number of machines we lose by using a 500 step cut - off point for running time : . this estimate is far below the point where it could seriously impair our results : the less probable ( non - impossible ) string according to has an observed probability of . although this is only an estimate , it suggests that missed machines are few enough to be considered negligible .

we now study the relation of to the minimal number of instructions used by a turing machine producing a given string , and to bennett s logical depth . as expected , shows a correlation with the number of instructions used but not with logical depth .

first , we are interested in the relation of to the minimal number of instructions that a turing machine producing a string uses . machines in have a transition table with 10 entries , corresponding to the different pairs , with one of the five states and either `` 0 '' or `` 1 '' . these are the 10 instructions that the machine can use . but for a fixed input not all instructions are necessarily used . then , for a blank tape , not all machines that halt use the same number of instructions . the simplest cases are machines halting in just one step , that is , machines whose transition for goes to the halting state , producing a string `` 0 '' or `` 1 '' . so the simplest strings produced in are computed by machines using just one instruction . we expected a correlation between the -complexity of the strings and the number of instructions used . as we show , the following experiment confirmed this .

we used a sample of random machines in the reduced enumeration for , that is , the total number of machines . the output of the sample returns the strings produced by halting machines together with the number of instructions used , the runtime and the instructions for the turing machine ( see fig . [ fig : distribins ] ) . in order to save space , we only saved the smallest number of instructions found for each string produced , and the smallest runtime corresponding to that particular number of instructions .

[ figure [ fig : distribins ] : values according to the minimum number of instructions required . each `` drop''-like distribution is the set of strings that are minimally produced with the same number of instructions ( horizontal axis ) ; the more instructions needed to produce the strings , the more complex they are ( vertical axis , in units ) . ]

after doing the appropriate symmetry completions we have 99584 different strings , which is to say almost all the 99608 strings found in . the number of instructions used goes from 1 to 10 . when 1 instruction is used only `` 0 '' and `` 1 '' are generated , with a value of . with 2 instructions , all 2-bit strings are generated , with a value of .
for 3 or more instructions , fig . [ fig : distribins ] shows the distribution of values of . table [ tab : meankl ] shows the mean values for the different numbers of instructions used .

[ table [ tab : meankl ] : mean and string length for different numbers of instructions used . ]

we now provide some examples of the discordance between and . `` 0011110001011 '' is a string with high and low . fig . [ fig : tablelowtime ] shows the transition table of the smallest machine found producing this string . the runtime is low just 29 steps ( of the 99584 different strings found in our sample , only 3360 are produced in fewer steps ) , but it uses 10 instructions and produces a string with complexity . it is the greatest complexity we have calculated for . fig . [ fig : execlowtime ] shows the execution of the machine . on the other hand , `` '' is a string with high but a low value . fig . [ tmtablehightime ] shows the transition table of the machine found producing this string , and fig . [ tmrunhightime ] depicts the execution . the machine uses 9 instructions and runs for 441 steps ( only 710 strings out of the 99584 strings in our sample require more time ) but its value is . this is a low complexity if we consider that in there are 99608 strings and that 90842 are more complex than this one .

[ figure [ tmrunhightime ] : with a high runtime , the machine produces a string with low complexity . ]

we may rate the overall strength of the relation between and by the correlation , corresponding to a medium positive link . as we previously mentioned , however , the fact that the length of the strings is linked with both variables may bias our interpretation . a more relevant measure is thus , a slight negative but non - significant value between and once is controlled .

the results in this paper are important because these measures can be better studied and understood under a specific but widely known general formalism . what we have found is very interesting because it is what one would wish for in the best case scenario : stable and reasonable distributions rather than chaotic and unstable ones . the results also suggest that these measures can be applied even if numerically approximated using a specific model of computation . for example , as we expected , the kolmogorov - chaitin complexity evaluated by means of levin s coding theorem from the output distribution of small turing machines correlates with the number of instructions used but not with logical depth . logical depth also yields a measure that is different from the measure obtained by considering algorithmic complexity ( ) alone , and this investigation proves that all three measures ( kolmogorov - chaitin complexity , solomonoff - levin algorithmic probability and bennett s logical depth ) are consistent with theoretical expectations . as a measure of program - size complexity is traditionally expected to be an integer ( the length of a program in bits ) , but when evaluated through algorithmic probability using the coding theorem it retrieves non - integer values ( still bits ) . these results confirm the utility of non - integer values in the approximation of the algorithmic complexity of short strings , as they provide finer values with which one can tell apart small differences among short strings , which also means one can avoid the longer calculations that would be necessary in order to tell apart the complexity of very small objects if only integer values were allowed . thus it also constitutes a complementary and alternative method to compression algorithms .
an _ on - line algorithmic complexity calculator _ ( or oacc ) is now available at http://www.complexitycalculator.com .it represents a long - term project to develop an encompassing universal tool implementing some of the measures and techniques described in this paper .it is expected to be expanded in the future as it currently only implements numerical approximations of kolmogorov complexity and levin s semi - measure for short binary strings .more measures , more data and better approximations will be gradually incorporated in the future , covering a wider range of objects , such as longer binary strings , non - binary strings and multidimensional arrays ( such as images ) .bennett , logical depth and physical complexity in rolf herken ( ed ) _ the universal turing machine a half - century survey , _ oxford university press 227257 , 1988 .bennett , how to define complexity in physics and why . in _ complexity , entropy and the physics of information ._ zurek , w. h. , addison - wesley , eds .sfi studies in the sciences of complexity , p 137 - 148 , 1990 .brady , the determination of the value of rado s noncomputable function for four - state turing machines , _ mathematics of computation 40 _ ( 162 ) : 647665 , 1983 .calude , _ information and randomness _ , springer , 2002 .calude and m.a .stay , most programs stop quickly or never halt , _ advances in applied mathematics _, 40 , 295 - 308 , 2008 .chaitin , on the length of programs for computing finite binary sequences : statistical considerations , _ journal of the acm _ , 16(1):145159 , 1969 ._ from philosophy to program size , _ 8th .estonian winter school in computer science , institute of cybernetics , tallinn , 2003 .r. cilibrasi , p. vitanyi , clustering by compression , _ ieee transactions on information theory , _ 51 , 4 , 15231545 , 2005 .t.m . cover and j.a .thomas , _ information theory , _j. wiley and sons , 2006 .delahaye , _ complexit alatoire et complexit organise , _ editions quae , 2009 .delahaye , h. zenil , towards a stable definition of kolmogorov - chaitin complexity , arxiv:0804.3459 , 2007 .delahaye and h. zenil , on the kolmogorov - chaitin complexity for short sequences . in c.calude ( ed . ) , _ randomness and complexity : from leibniz to chaitin _ , world scientific , 2007 .delahaye & h. zenil , numerical evaluation of the complexity of short strings : a glance into the innermost structure of algorithmic randomness , _ applied math . and comp .w. kircher , m. li , and p. vitanyi , the miraculous universal distribution , _ the mathematical intelligencer , _ 19:4 , 715 , 1997 .kolmogorov , three approaches to the quantitative definition of information , _ problems of information and transmission _ , 1(1):17 , 1965 . l. levin , laws of information conservation ( non - growth ) and aspects of the foundation of probability theory ., _ problems in form . transmission _ 10 . 206210 , 1974 . m. li , p. vitnyi , _ an introduction to kolmogorov complexity and its applications , _ springer , 2008 . . rivals , m. dauchet , j .-delahaye , o. delgrange , compression and genetic sequence analysis . , _ biochimie _ , 78 , pp 315 - 322 , 1996 . t. rad , on non - computable functions , _ bell system technical journal , _ vol .3 , pp . 877884 , 1962 .solomonoff , a formal theory of inductive inference : parts 1 and 2 . _ information and control _ , 7:122 and 224254 , 1964 .h. 
zenil , une approche exprimentale la thorie algorithmique de la complexit , dissertation in fulfilment of the degree of doctor in computer science ( jury members : j .- p .delahaye and c.s .calude , g. chaitin , s. grigorieff , p. mathieu and h. zwirn ) , universit de lille 1 , 2011 .h. zenil , f. soler - toscano , j .-delahaye and n. gauvrit , two - dimensional kolmogorov complexity and validation of the coding theorem method by compressibility , arxiv:1212.6745 [ cs.cc ] .h. zenil , j .-delahaye and c. gaucherel , image information content characterization and classification by physical complexity , _ complexity _ , vol .173 , pages 2642 , 2012 . | we show that real - value approximations of kolmogorov - chaitin ( ) using the algorithmic coding theorem as calculated from the output frequency of a large set of small deterministic turing machines with up to 5 states ( and 2 symbols ) , is in agreement with the number of instructions used by the turing machines producing , which is consistent with strict integer - value program - size complexity . nevertheless , proves to be a finer - grained measure and a potential alternative approach to lossless compression algorithms for small entities , where compression fails . we also show that neither nor the number of instructions used shows any correlation with bennett s logical depth other than what s predicted by the theory . the agreement between theory and numerical calculations shows that despite the undecidability of these theoretical measures , approximations are stable and meaningful , even for small programs and for short strings . we also announce a first beta version of an online algorithmic complexity calculator ( oacc ) , based on a combination of theoretical concepts , as a numerical implementation of the _ coding theorem method_. + * keywords : * coding theorem method ; kolmogorov complexity ; solomonoff - levin algorithmic probability ; program - size complexity ; bennett s logical depth ; small turing machines . |
this paper deals with the justification of the two - scale asymptotic expansions method applied to a thermo - diffusion problem arising in the context of transport of densities of hot colloids in media made of periodically - distributed microstructures .following , we study a system of two coupled semi - linear parabolic equations , where the diffusivity for the concentration is of order and for the temperature it is of order . here denotes the characteristic length scale of the underlying microstructure .we rigorously justify the expansions and and prove an estimate of the type where denotes the macroscopic domain and is the perforated reference cell .estimate basically gives a quantitative indication of the speed of the ( two - scale ) convergence between the unknowns of our problem and their limits , which is detailed in the forthcoming sections .this work follows up previous successful attempts of deriving quantitative corrector estimates using periodic unfolding ; see e.g. .the unfolding technique allows for homogenization results under minimal regularity assumptions on the data and on the choice of allowed microstructures .the novelty we bring in here is the combination of three aspects : ( i ) the asymptotic procedure refers to a suitably perforated domain , ( ii ) presence of a cross coupling in gradient terms , and ( iii ) lack of compactness for .our working techniques combines -independent a priori estimates for the solutions and periodic unfolding - based estimates such as the _ periodicity defect _ in and the _ folding mismatch _ in .estimate improves existing convergence rates for semi - linear parabolic equations with possibly non - linear boundary conditions in or small diffusivity in from to .this improvement is obtained by studying all equations in the two - scale space and by suitably rearranging and controlling occurring error terms .it is worth noting that the availability of corrector estimates for the thermo - diffusion system allows in principle the construction of rigorously convergent multiscale numerical methods ( for instance based on msfem like in ) to capture thermo - diffusion effects in porous media .interestingly , for the thermo - diffusion system posed in perforated domains such convergent multiscale numerical methods are yet unavailable .the paper is structured as follows : in section [ sec : model ] , we introduce the thermo - diffusion model and prove existence as well as a priori estimates for the solutions of the microscopic problem respective the two - scale limit problem . the periodic unfolding method and auxiliary corrector estimates are presented in section [ subsec : two - scale ] and [ subsec : auxiliary ] , respectively .finally , the corrector estimates in are proved in section [ subsec : proof ] .we conclude our paper with a discussion in section [ sec : discuss ] .we investigate a system of reaction - diffusion equations which includes mollified cross - diffusion terms and different diffusion length scales .the cross - diffusion terms are motivated by the incorporation of soret and dufour effects as outlined in . for more information on phenomenological descriptions of thermo - diffusion ,we refer the reader to .the concentrations of the transported species through the perforated domain are denoted by , while is the temperature . 
the overall interplay between transport and reaction is modeled here by the following system of partial differential equations : supplemented with the neumann boundary conditions and the initial conditions first of all , it is important to note that the -scaling for some of the terms in the system is variable with .we refer to the suitably scaled heat conduction - diffusion interaction terms and as `` weak thermal couplings '' , while the `` high - contrast '' is thought here particularly in terms of the heat conduction properties of the composite material that can be seen in . in this context , denotes the normal outer unit vector of . the matrix is the diffusivity associated to the concentration of the ( diffusive ) species , is the heat conductivity , while and are the soret and dufour coefficients .note that , , , and are either positive definite matrices , or they are positive real numbers .furthermore , the reaction term models the smoluchovski interaction production . in the original model from ,the function is an additional unknown modeling the mass of deposited species on the pore surface , and it is shown to possess the regularity .here we assume as given data .we point out that the linear boundary terms are relevant for the regularity of solutions , but that they are not required to prove the convergence rate of order of in . to deal with perforated domains we employ the method of periodic unfolding as presented in .let denote the standard unit - cell .we fix here and for all the following assumptions on the domain and the microstructure .[ assump : domain ] our geometry is designed as follows : the domain is a -polytope with length for all .the reference hole is an open lipschitz domain and the perforated cell satisfies .moreover is a connected lipschitz domain and is identical on all faces of .the set of all nodal points is given via .with this we define the pore part and the perforated domain , which is connected , via where denotes the interior of the set .both sets are open and form together the original domain .[ fig : cell ] the assumptions on the domain guarantee the existence of suitable extensions from to ( cf .theorem [ thm : extension ] ) . also traces exist and are well - defined on the boundaries and . with this ,perforated domains with isolated holes as well as the prominent `` pipe - model '' for porous media are included in our considerations , see figure [ fig : cell ] .the boundary of the perforated domain is given by .indeed , intersected pore structures at the boundary as in figure [ fig : cell](ii ) are not excluded .[ rem : boundary ] in the following we denote by a sequence of numbers satisfying .this implies that all microscopic cells , for , are contained in and no intersected cells occur at the boundary .this assumption ( tremendously ) simplifies the presentation in this paper , however , we believe that the same results can be obtained for lipschitz domains by considering a bigger -polytope with . then , all relevant coefficients , functions , and solutions are suitably extended from to .[ assump : data ] we impose the following restrictions on the data : the diffusion matrices and are given via where are symmetric and uniformly elliptic , i.e. the constants are non - negative .the reaction term is globally lipschitz continuous , i.e. 
for all .the sink / source term is given via for any data ; ( \mathrm{w}^{1,\infty}({\omega } ; { \mathrm{l}}^2(\partial t))) ] , and uniformly bounded where the constants are independent of .the existence of solutions , non - negativity , and uniform boundedness follow from the lemmata 3.2 3.6 and theorem 3.8 in by replacing and with and , respectively .note that the proof can be generalized from diffusion coefficients to symmetric matrices as in assumption [ assump : data](i ) . in equation ( 35 )respective ( 57 ) in it holds for and : this argumentation also requires linear boundary terms .otherwise one has to argue as in ( * ? ? ?3.2 ) or ( * ? ? ?* prop . 1 ) anddifferentiate the whole equation with respect to time and then use a second grnwall argument .since our solutions are uniformly bounded in , we may consider reaction terms with arbitrary growth as in .also note that estimate remains valid for all and . for the parameters , we obtain in the limit the following two - scale system supplemented with the boundary conditions and the initial conditions here and denote the normal outer unit vector of and , respectively . to capture the oscillations in the limitwe define the space of -periodic functions via where and are opposite faces of the unit cube with . with thisthe effective coefficients are given via the standard unit - cell problem \cdot [ { \nabla_{\!y}}\phi { + } \xi ] { \,\mathrm{d}}y .\end{aligned}\ ] ] note that the integral is taken over and not the average .in full , formula reads with here .for the boundary data , we obtain in the limit the usual average \times{\omega}:\quad v_0(t , x ) = \dashint_{\partial t } \mathbb{v}(t , x , y ) { \,\mathrm{d}}y .\end{aligned}\ ] ] finally , we state the existence and uniqueness of solutions for the limit system .[ thm : exist - limit ] let the assumptions [ assump : domain ] and [ assump : data ] hold and let the initial value satisfy and . there exists a unique solution of with the existence and boundedness of unique solutions follows by galerkin approximation as in . in particular , the higher -regularity of follows by ( * ? ? ?* thm . 5 ) .by slightly modifying the proof of ( * ? ? ?4 ) after equations ( 40)(42 ) , the assumptions on the initial values can be relaxed from and to and . to prove the -estimates for the gradients and the -estimates for the time derivative we can argue as in by exploiting the symmetry of and as in as well as the fact that the boundary terms are linear .the usual two - scale decomposition is given via the mappings : { \mathbb{r}}^d \to \mathbb{z}^d ] denotes the component - wise application of the standard gauss bracket and ] for all such that is well - defined in . in the same mannerwe define the _ boundary unfolding operator _ by ( ( * ? ? ?* def . 5.1 ) ) + { \varepsilon}y \right ) .\end{aligned}\ ] ] following we define the _ folding ( averaging ) operator _ via + y _ * \right ) } u\left ( z , \lbrace\tfrac{x}{{\varepsilon}}\rbrace \right ) { \,\mathrm{d}}z \;\bigg\vert_{{\omega}_{\varepsilon}},\end{aligned}\ ] ] where denotes the usual average and is the restriction of to . 
to derive quantitative estimates for the differences and , we need to test the weak formulation of the original system with -functions which are one - scale pendants of the limiting solution .there are two options to naively fold a two - scale function , namely is only well - defined in , if at least belongs to , and our limit ( respective the corrector for ) does not satisfy strong differentiability in general .the second option is neither a suitable test function , since it is not -regular . to overcome this regularity issue, we define the _ gradient folding operator _ following and adapt its definition to perforated domains .the gradient folding operator is defined as follows : for every , the function is given in as the solution of the elliptic problem note that is uniquely determined by the lax milgram lemma implying the well - definedness of . for simplicity, we define for the norm where the second identity follows from lemma [ lemma : prop - t ] .both folding operators , and , are linear and bounded operators satisfying where the first estimate is due to jensen s inequality , while the second one is due to hlder s inequality .we are now collecting several results which are essential ingredients in the proof of our error estimates .note that also belongs to the space since and we can apply the unfolding operator via , where denotes the characteristic function of the set .for the sake of brevity is omitted in the following .[ lemma : unfold - err ] for all and we have respectively , where only depends on the domains and .the proof for the first estimate is based on the application of the poincar wirtinger inequality on each cell , see ( * ? ? ?3.1 ) or ( * ? ? ?2.3.4 ) with .the second estimate follows from the first one with cf .also ( * ? ? ?* eq . ( 3.4 ) ) .note that is indeed well - defined for one - scale functions . to control the mollified gradient we prove : [ lemma : prop - t-2 ] for and all and we have where depends on the mollifier and .according to ( * ? ? ?* thm . 6 ) we obtain for every (x , y ) & = \left[\operatorname{\mathcal{t}_{\varepsilon}}\left ( \int_{b(x,\delta ) } \nabla j_\delta(x - \xi)u(\xi ) { \,\mathrm{d}}\xi \right ) \right ] ( x , y ) \\ & = \int_{b \left ( { \varepsilon}\left[\tfrac{x}{{\varepsilon } } \right ] + { \varepsilon}y,\delta \right ) } \nabla j_\delta({\varepsilon}[\tfrac{x}{{\varepsilon } } ] { + } { \varepsilon}y - \xi)u(\xi ) { \,\mathrm{d}}\xi . \end{aligned}\ ] ] for , we define the following -dimensional annulus of thickness and with volume .we arrive at + { \varepsilon}y,\delta \right ) } \nabla j_\delta({\varepsilon}[\tfrac{x}{{\varepsilon } } ] { + } { \varepsilon}y - \xi)u(\xi ) { \,\mathrm{d}}\xi - \int_{b(x,\delta ) } \nabla j_\delta(x - \xi)u(\xi ) { \,\mathrm{d}}\xi \right\vert \\ & \leq \left\vert \int_{b_\mathrm{diff } } \nabla j_\delta(x - \xi)u(\xi ) { \,\mathrm{d}}\xi \right\vert \leq \vert \nabla j_\delta \vert_{{\mathrm{l}}^2(b_\mathrm{diff } ) } \vert u\vert_{{\mathrm{l}}^2(b_\mathrm{diff } ) } \leq \sqrt{{\varepsilon } } c \vert j_\delta\vert_{\mathrm{c}^\infty({\mathbb{r}}^d ) } \vert u\vert_{{\mathrm{l}}^2({\omega } ) } , \end{aligned}\ ] ] which proves the assertion .having defined two folding operators , being dual to and assuring -regularity , we call their difference _ folding mismatch _ and control it as follows . [ thm : fold - mismatch ] for only depending on and it holds the proof is based on ( * ? ? 
?3.2 ) and adapted to perforated domains in appendix [ app : appendixb ] .since unfolded sobolev functions are in general not -periodic , we need to control the so - called _ periodicity defect _ ,cf . . in the case of slow diffusionit reads : [ thm : per - defect ] for every , there exists a -periodic function such that where the constant only depends in the domains and .the proof relies on ( * ? ? ?* thm.2.2 ) which we can apply after suitably extending from the perforated domain to the whole domain .let denote the extension of as in theorem [ thm : extension ] . according to ( * ? ? ?* thm.2.2 ) , there exists a two - scale function satisfying with only depending on , , and defined on the whole unit - cell as in , cf .also .note that it holds .recalling the definition of in with , we define via , which gives and the proof is finished .for the case of classical diffusion , we consider instead of .this is related to the fact that , in the limit system , the -equation is given in the macroscopic domain , whereas the -equation as posed in the two - scale space , and hence , it can not be reduced to only . [ thm : per - defect-2 ] for every , there exists a -periodic function such that where the constant only depends in the domains and .for the desired estimates hold with according to ( * ? ? ? * thm.2.3 ) . choosing as in the proof of theorem [ thm :per - defect ] yields the assertion .having collected all preliminaries , we can now state and prove the corrector estimates for our thermo - diffusion model .[ thm : main ] let and denote the unique solution of and , respectively , according to theorem [ thm : exist - eps ] and theorem [ thm : exist - limit ] . if the initial values satisfy then we have where the constant depends on the given data and the norms in and .note that the domain is convex , bounded , and has a lipschitz boundary .since and belong to the space , we can apply ( * ? ? ?3.2.1.3 ) and obtain that the limit belongs to the better space . if not stated otherwise , the following notion of weak formulation is to be understood pointwise in ] with , as well as recalling and yields * part b : classical diffusion .* we point out that the higher regularity of the limit implies the higher -regularity of the corrector which is the unique minimizer of the unit - cell problem with ._ step 1 : reformulation of -equation . 
_the weak formulation of the -equation is given via for all test functions .first of all note that the cross - diffusion term is of order thanks to hlder s inequality and the boundedness in and applying the unfolding operators and , in particular , rewriting and {+}y),y) ] holds for all , we obtain the pointwise estimate thanks to the lipschitz continuity of .together with embedding we obtain for the approximation error we choose the test function in such that + r(\operatorname{\mathcal{t}_{\varepsilon}}u_{\varepsilon } ) ( \operatorname{\mathcal{t}_{\varepsilon}}u_{\varepsilon}{- } u ) { \,\mathrm{d}}x{\,\mathrm{d}}y \nonumber\\ & \quad + \int_{{\omega}\times \partial t } \left ( a \operatorname{\mathcal{t}_{\varepsilon}^\mathrm{b}}u_{\varepsilon}+ b \mathbb{v } \right ) ( \operatorname{\mathcal{t}_{\varepsilon}^\mathrm{b}}u_{\varepsilon}{- } u ) { \,\mathrm{d}}x{\,\mathrm{d}}\sigma ( y ) + \delta^{u_{\varepsilon}}_\mathrm{cross , app , fold } , \end{aligned}\ ] ] where we added and ] , arguing as in for the boundary term , and using the boundedness of in via now , we choose , where denotes the extension from to according to theorem [ thm : extension ] .note that the test function belongs to the space which differs from step 1 wherein it belonged to .indeed it holds almost everywhere in .inserting into and rearranging gives \cdot [ \operatorname{\mathcal{t}_{\varepsilon}}(\nabla u_{\varepsilon } ) -( \nabla u { + } { \nabla_{\!y}}u ) ] \nonumber \\ & \hspace{140pt } + r(u)(\operatorname{\mathcal{t}_{\varepsilon}}u_{\varepsilon}{- } u ) \big\rbrace { \,\mathrm{d}}x { \,\mathrm{d}}y \nonumber \\ & \quad + \int_{{\omega}\times\partial t } ( a u + b \mathbb{v } ) ( \operatorname{\mathcal{t}_{\varepsilon}^\mathrm{b}}u_{\varepsilon}{- }u ) { \,\mathrm{d}}x{\,\mathrm{d}}\sigma ( y ) + \delta^u_\mathrm{per , fold } \end{aligned}\ ] ] and another folding mismatch ( u { - } \operatorname{\mathcal{t}_{\varepsilon}}u + { \varepsilon}\operatorname{\mathcal{g}_{\varepsilon}}u ) \nonumber \\ & \hspace{60pt } { - } \mathbb{d } [ \nabla u { + } { \nabla_{\!y}}u ] \cdot [ \nabla u { + } { \nabla_{\!y}}u - \operatorname{\mathcal{t}_{\varepsilon}}(\nabla u { + } { \varepsilon}\nabla \operatorname{\mathcal{g}_{\varepsilon}}u ) ] \big\rbrace { \,\mathrm{d}}x{\,\mathrm{d}}y \nonumber \\ & \quad + \int_{{\omega}\times\partial t } ( a u + b \mathbb{v } ) ( u { - } \operatorname{\mathcal{t}_{\varepsilon}}u + { \varepsilon}\operatorname{\mathcal{g}_{\varepsilon}}u ) { \,\mathrm{d}}x{\,\mathrm{d}}\sigma ( y ) .\end{aligned}\ ] ] the folding mismatch has the same form as in when replacing with .finally , we control the norm by using once more lemma [ lemma : unfold - err ] and theorem [ thm : fold - mismatch ] . applying young s inequality with in yields _ step 3 : derivation of grnwall - type estimates . 
_ subtracting equation from yields \cdot [ \operatorname{\mathcal{t}_{\varepsilon}}(\nabla u_{\varepsilon } ) - ( \nabla u { + } { \nabla_{\!y}}u ) ] \\ & \hspace{50pt } + [ r(\operatorname{\mathcal{t}_{\varepsilon}}u_{\varepsilon } ) - r(u ) ] ( \operatorname{\mathcal{t}_{\varepsilon}}u_{\varepsilon}{- } u ) \big\rbrace { \,\mathrm{d}}x{\,\mathrm{d}}y \\ & \quad + \int_{{\omega}\times\partial t } a |\operatorname{\mathcal{t}_{\varepsilon}^\mathrm{b}}u_{\varepsilon}{- } u|^2 { \,\mathrm{d}}x{\,\mathrm{d}}\sigma ( y ) + \delta^{u_{\varepsilon}}_\mathrm{cross , app , fold } - \delta^u_\mathrm{per , fold}.\end{aligned}\ ] ] using the uniform ellipticity of , the lipschitz continuity of , and the estimations of the periodicity defect in gives choosing and integrating over with we get * final step .* we add and and finally obtain the application of grnwall s lemma and the convergence of the initial values in complete the proof of .our corrector estimates generalize the qualitative homogenization result obtained in in two ways : on the one hand we prove quantitative estimates . on the other hand ,we consider slow thermal diffusion as well as different scalings and of the cross - diffusion terms . under slightly more general assumptions on the data with respect to the -dependence, our estimates imply in particular the rigorous but qualitative homogenization limit for this system . * what is the limit for arbitrary ?* for all the limiting -equation remains as it is and the cross - diffusion disappears in the limit .for we have a priori that weakly in and we expect the additional term in the limit .the choice is not meaningful , since the cross - diffusion term is unbounded with . for cross - diffusion term vanishes in the limiting -equation and for it diverges with .indeed only the choice is meaningful , since it corresponds to the scaling of .* possible generalizations concerning the data . *our analysis allows for not - exactly periodic coefficients such as with as in .the coefficients and as well as the reaction term may also be not - exactly periodic in the same manner .moreover all coefficients may additionally depend lipschitz continuously on time .the sink / source term may be less regular by choosing (x) ] . on the boundary we may consider globally lipschitz continuous reaction terms . in this case ,the boundary term in is controlled by , where denotes the global lipschitz constant .non - linear boundary terms may require better initial values to derive the -regularity of the time derivatives as in , however the error estimates hold as they are .* on the choice of the initial values .* for given the obvious choice is such that the assumption is satisfied .perturbations of the form , which preserve non - negativity , are possible as well . in the case of slow diffusionsuch a direct choice is not possible mainly because and live in spaces of dimension and , respectively .let be given .one possible choice is , however we are not able to prove in this case , since is not a hilbert space . hence , we assume strong differentiability , such as or , so that is well - defined in .we recall elementary properties for the periodic unfolding operator and the boundary unfolding operator as well as extensions operators .[ lemma : prop - t ] let with .the operators and are linear and bounded .the product rule holds for all , and , , respectively .the norms are preserved via for all and , respectively .one has the integration formulas for all and , respectively . if , then it is with . 
for all it holds = r(\operatorname{\mathcal{t}_{\varepsilon}}u) ] for all .[ thm : extension ] under the assumptions [ assump : domain ] on the domain there exists a family of linear operators such that for every it holds where only depends on the domains , , and .the proof of proposition [ thm : fold - mismatch ] for the folding mismatch follows and is adapted to perforated domains .we define the scale - splitting operator by -lagrangrian interpolants , as customary in finite element methods ( fem ) , following ( * ? ? ?* sec . 3 ) .by the sobolev extension theorem , there exists for every and a function and , respectively , such that it holds where only depends on the domain .then is given via : for every node we define ( note that this definition is slightly different than in .therein the average is taken over balls centered at and not touching the pores .the present definition has the advantage that the equality holds for all nodes . )we define on the whole by interpolating the nodal values with -lagrangian interpolants yielding polynomials of degree , for more details see ( * ? ? ?4.1 ) or ( * ? ? ?2.3.6 ) . for given two - scale functions of product form , we can now construct approximating sequences in via and require only the minimal regularity and . according to (3.1 ) the macroscopic interpolants satisfy where only depends on and .furthermore , we can control the difference between and by : the proof follows along the lines of ( * ? ? ?3.4 ) by replacing with . in a first step estimateis derived for using the `` folded function '' and the estimates . in a second step this result is generalized to arbitrary two - scale functions by exploiting the tensor product structure of the space and expressing in terms of an orthonormal basis with . | the present work deals with the derivation of corrector estimates for the two - scale homogenization of a thermo - diffusion model with weak thermal coupling posed in a heterogeneous medium endowed with periodically arranged high - contrast microstructures . the terminology `` weak thermal coupling '' refers here to the variable scaling in terms of the small homogenization parameter of the heat conduction - diffusion interaction terms , while the `` high - contrast '' is thought particularly in terms of the heat conduction properties of the composite material . as main target , we justify the first - order terms of the multiscale asymptotic expansions in the presence of coupled fluxes , induced by the joint contribution of sorret and dufour - like effects . the contrasting heat conduction combined with cross coupling lead to the main mathematical difficulty in the system . our approach relies on the method of periodic unfolding combined with -independent estimates for the thermal and concentration fields and for their coupled fluxes . * msc 2010 : * 35b27 , 35q79 , 74a15 , 78a48 . * keywords : * homogenization , corrector estimates , periodic unfolding , gradient folding operator , perforated domain , thermo - diffusion , composite media . |
the main goal in output regulation is to find a controller such that the output of a given plant asymptotically follows a given reference signal generated by an exosystem .it is known that a regulating feedback controller contains a built - in copy of the exosystem .robustness of regulation is needed in order to make the controller work despite some perturbations of the plant , e.g. , parameter uncertainties and modelling errors .if the controller is required to tolerate all arbitrarily small perturbations of the plant , then the internal model principle due to francis and wonham and davison states that if the plant has -dimensional output space , then every robustly regulating controller must contain a -fold copy or in short -copy of the exosystem . in this paperwe study the robust regulation problem in a situation where the controller is only required to tolerate uncertainties from a restricted class of perturbations .such a situation can arise due to several different reasons . in the simplest situation, contains only a finite number of plants , for example , if our controller is required to function after specific component failures . in the case of only one possible failure , the original plant changes to a new plant and . on the other hand , the class becomes infinite in a situation where the values of some specific parameters of the plant are not known accurately .our example in section [ sec : example ] illustrates the latter case . in the situation where robustness is only required with respect to a given class of perturbations , it is natural to ask if the controller must contain full -copy internal model of the exosystem .this problem was studied by paunonen in using state space methods .it was shown in that the -copy internal model guaranteeing robustness with respect to all small perturbations can be relaxed in many situations , and this observation leads to design of controllers with so - called _ reduced order internal models_. in this paper , we introduce frequency domain conditions for a controller to achieve output regulation and robustness with respect to a given class of perturbations .our results give a precise meaning for reduced order internal models in the frequency domain .in addition , we present methods for constructing controllers with reduced order internal models .our constructions result in minimal complexity requirements for the number of copies built into the robustly regulating controllers .we choose the reference signal to be of the form the class of signals that can be presented in this form is fairly large as it contains reference signals that are linear combinations of sinusoids , and in particular finite approximations of uniformly continuous periodic signals . in the first part of this paper we present our theoretical results .our first main result shows that a stabilizing controller where is analytic at , is robustly regulating for the class of perturbed plants if and only if for all .this is the frequency domain analogue of the time domain condition that was presented in .the condition leads to our second main results that gives a lower bound for the ranks of the matrices in the controller .in particular , if the plants have the same number of inputs and outputs and are invertible at , then the lower bounds for the ranks of in the controller that is robust with respect to the class are the controller constructions presented shows that the bounds are optimal , as our method results in controllers satisfying for . 
in the frequency domainthe controller containing a full -copy internal model of the exosystem satisfies for all . in light of results ,the frequency domain interpretation is that the controller contains a _ reduced order internal model _ if for some . in a situationwhere for some , e.g. , when and contains only two plants , robust output regulation can then be achieved without the full internal model of the exosystem . in the second part of the paperwe construct a controller to solve the robust output regulation problem for a given class of perturbations .the design procedure is a two step process where we first stabilize the system and then design a robustly regulating controller for the stabilized plant .the robust regulation of the stabilized plant is carried out using the controller where are chosen in such a way that they satisfy the regulation condition and have ranks defined in .in addition , we choose such that the eigenvalues of have nonpositive real parts . we show that for such choices of parameters there exists such that for any controller stabilizes the closed - loop system .controllers of this type have been used in robust output regulation with full internal models in . in the final part of the paperwe illustrate the results by designing a robustly regulating controller for a laboratory process with five water tanks . in the studied experimental setupthe restricted class of perturbations arises naturally from considering the unknown valve positions of the water tank system as parameters with uncertainty . by design the controller with a reduced order internal model achieves output regulation irregardless of the valve positions .robust output regulation with a restricted class of perturbations has been studied previously using frequency domain techniques for stable systems in by the authors . in this paperwe extend the results of most notably by introducing a controller design procedure for unstable plants , by establishing the optimality of the presented lower bounds for , and by giving a new and simplified proof for the characterization of robust controllers .our results also establish a new sufficient condition for the solvability of the robust output regulation problem for a class of perturbations . finally , we present a new real world example on construction of controllers with reduced order internal models .locatelli and schiavoni studied a similar control problem in .however , in the controller was required to be robustly regulating in a small neighborhood of a given finite set of plants , and consequently the controller required a full -copy of the exosystem .in this section we introduce the notation used in this paper and state the robust output regulation problem .we denote the class of functions that are bounded and analytic in the right half plane by .the set of all matrices of arbitrary size and of all -matrices over a set are denoted by and , respectively .we denote the rank , the range , the kernel , and the moore - penrose pseudoinverse of a matrix by , , , and , respectively .we assume that the class of perturbations has the following properties throughout the rest of the paper . *the nominal plant is in the class , i.e. 
, .* every is analytic at the points .in this paper we assume that the error feedback controller is of the form , where and where is analytic at for all .this means in particular that the poles of the controller are located at the frequencies the reference signal and that their orders are at most one .the plant and the controller form the closed loop depicted in fig .[ fig : closedloop ] . here is an external disturbance .the closed loop transfer function from to is fig1.eps ( 1,13) ( 17,12.5) ( 32.5,9) ( 59,12.5) ( 53,23) ( 74,9) ( 91,13) given a class of perturbations , choose the parameters and of the controller in such a way that * the controller stabilizes the plant , i.e. . * if is such that stabilizes , then where with distinct real numbers and . if condition is satisfied , we say that _ regulates _ . in the time - domain , this corresponds to the output converging to of asymptotically with respect to time .in this section we present a characterization for controllers that are robust with respect to a given class of perturbations .the following theorem is the main result of this section .[ thm : robchar2 ] assume the controller is of the form .* if is such that stabilizes , then regulates if and only if for all . *if stabilizes , then it solves the robust output regulation problem for the class of perturbations if and only if is satisfied for all that are stabilized by the controller .the second part of the theorem [ thm : robchar2 ] is a direct consequence of the first part and the statement of the robust output regulation problem . in order to prove the first part ,let be such that stabilizes and let be arbitrary . since is analytic at it has the taylor series expansion and since has a simple pole at it has the laurent series since is stable , it has the taylor series since , the coefficient of the term of the laurent series expansion on the left hand side is zero , i.e. similarly , since , comparing the constant terms we see that in order to show the sufficiency of , observe that implies that if the condition holds .thus does not have a pole at .consequently , if holds for all then is regulating .next we show the necessity of .if is regulating then is stable .this implies that .using , we see that . thus , the condition holds .if the controller solves the robust output regulation problem for a class of perturbations , then theorem [ thm : robchar2 ] gives us the following optimal lower bounds for the ranks of the matrices .we call the bound of the theorem _ the minimal order of the internal model related . _ [ thm : minimumrank ] let be the minimum dimension over all the subspaces of satisfying here is the preimage of for all .a. if of the form stabilizes all the plants in and solves the robust output regulation problem , then b. if for all and is stabilizable , then there exists a robustly regulating controller satisfying for all .the second item is justified by section [ sec : contdesign ] . in order to prove the first item ,let be the rank of , and let be linearly independent columns of .we set .since is robustly regulating , theorem [ thm : robchar2 ] implies that for all there exists a vector such that thus , .it follows that .we want to point out that if is invertible for all , then since is a unique element .the second item implies the following sufficient solvability condition .[ cor : solvability ] the robust regulation problem is solvable if the plant is stabilizable , for all , and for all . 
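When the class consists of finitely many plants that are all invertible at the frequency in question, the bound of Theorem [thm:minimumrank] can be evaluated directly: the preimage of the reference direction is then a single vector for each plant, and the minimal order is the dimension of the span of these vectors. The sketch below is an illustration with made-up numbers written for this text (it is not the water tank model considered later); the vector a_k stands for the coefficient of the reference signal at the frequency i*w_k, and the three matrices play the role of the frequency responses of the plants in the class.

```python
import numpy as np

def minimal_order(plants_at_wk, a_k, tol=1e-9):
    """Minimal order of the internal model at the frequency i*w_k for a finite
    class of plants that are invertible at i*w_k: the rank of the matrix whose
    columns are the preimages P'(i*w_k)^{-1} a_k."""
    pre = np.column_stack([np.linalg.solve(P, a_k) for P in plants_at_wk])
    return np.linalg.matrix_rank(pre, tol)

# Illustrative 3x3 frequency responses: the uncertainty enters a single entry,
# mimicking a structured perturbation such as an unknown valve position.
rng = np.random.default_rng(1)
P0 = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
plants = []
for delta in (0.0, 0.5, 1.0):
    P = P0.copy()
    P[0, 0] += delta
    plants.append(P)

a_k = np.array([1.0, 0.0, 0.0])
q_k = minimal_order(plants, a_k)
print("output dimension p =", 3, "  minimal internal model order q_k =", q_k)
# For this particular perturbation structure the preimage vectors are collinear,
# so q_k = 1, whereas a full p-copy internal model would use rank K_k = p = 3.
```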
the input to output effect of the internal model is characterized by transmission zeros . in was shown that the regulation condition implies that this simply means that there exists a transmission zero at blocking the pole of the reference signal .the transmission zero must be in the appropriate direction , which is determined by the plant and the reference signal .thus , the aim of the robust regulation is to find a controller that aligns the direction of the transmission zero with the reference signal for every plant in .this is achieved by building an internal model of large enough order into the controller . a way of doing this is proposed in section [ sec : contdesign ] .an internal model of order basically introduces overlapping transmission zeros . in the extreme case is a blocking zero , meaning that the whole transfer function vanishes at .this is what is required in the classical robust regulation .if we consider more than one reference signals , then the condition is required for all of them .for example , if there is an additional reference signal then is replaced by the condition this naturally increases the size of the internal model .in this section , we propose a two stage robust controller design .first stage is to find a stabilizing controller for the given nominal plant , and in the second stage a robust controller for the stabilized plant is found .combining the two controllers results in a robustly regulating controller for the nominal plant .we begin by showing that for stable plants there exists a simple robust controller , and then proceed to show how the design procedure is carried out for unstable plants . the next theorem is the main result of this subsection and it gives a design procedure of a robustly regulating controller .the simple structure suffices because the nominal plant is stable .additional structure is needed for example if the plant is unstable ( see section [ sec : unstable ] ) or there are some other design goals , e.g , .optimization , which together with the minimal internal model leads to the controller .[ thm : contdesignstab ] assume that for all and that is stable .then the contoller solves the robust output regulation problem for a class of perturbations if the design parameters and are chosen in the following way : 1 .find a subspace such that and .2 . choose a basis of .3 . define .\end{aligned}\ ] ] 4 .choose an invertible matrix so that the eigenvalues of are zero or have negative real parts , and that the jordan blocks related to the zero eigenvalue are trivial .5 . set 6 .choose suitably small to guarantee closed - loop stability .the proof of theorem [ thm : contdesignstab ] is divided into two theorems .theorem [ thm : designparameterck ] shows that the proposed controller is regulating and theorem [ thm : designparametere ] shows that with the choices made there exists a sufficiently small guaranteeing stability of the closed loop . before proceedingfurther we discuss the choice of in the design procedure .the condition is not automatically satisfied if the rank of is less than .this can happen if the plant has transmission zeros at or there are less inputs than outputs . in the classical robust regulationit is well known that in order to stabilize the nominal plant with a controller containing a full internal model the plant must not have transmission zeros at the poles of the reference signal .indeed , the condition is not satisfied if and . 
in our case, may have transmission zeros or less inputs than outputs , since need not in general be .the choice achieving the minimal order internal model exists if has full rank since the condition is then trivially satisfied .a particular choice for would be with .the condition or equivalently is then needed . in general , choosing is not optimal , since can be strictly greater than the minimal order of the internal model related to the pole . on the other hand ,if are invertible for all , then item ( iii ) of theorem [ thm : minimumrank ] shows that is an optimal choice .[ thm : designparameterck ] let be stable and assume that for all .if of are as in , then the condition holds for every .we show that holds for arbitrarily chosen .since and there exists such that it follows that .[ thm : designparametere ] let be stable and assume that for all .if of are as in , then there exists such that of stabilizes for every ] where . by the stability of and the definition of , is bounded in .thus , there exists small enough such that is bounded in whenever .next we show the existence of suitable .we decompose where by lemma [ lem : localbound ] , is bounded in by a bound independent of .in addition , is bounded in since and are analytic in .the decomposition implies that we can choose such that is bounded in for all $ ] .this completes the proof of the stability of .since is stable , it remains to show the stability of . by the stability of and the decomposition , we only need to show that is stable . by the above discussion only have poles of order one .thus , it has the representation where is the projection to along and is an analytic function .since and we have that .consequently , , and is analytic .this completes the proof . for unstable plants ,the design procedure is given in the following theorem .it is based on the two stage approach proposed in ( * ? ? ?* section 5.3 ) .[ thm : contdesignunstab ] if the steps of items 1 and 2 below can be carried out , then the controller of step 3 is robustly regulating . 1 .stabilize the nominal plant using a stabilizing controller that does not possess poles at for .2 . find a controller of form that stabilizes the plant and satisfies the condition for every .a robustly regulating controller of is given by [ rem : contdesign ] in step 2 we may use the approach proposed for stable plants if we choose the matrices implied by as earlier , but in we replace by when choosing . in particular ,if the plant is invertible at , then so is the stabilized plant , because does not posses poles at .it follows that step 2 of the above design procedure can be carried out ._ proof of theorem [ thm : contdesignunstab ] . _theorem 5.3.6 of ( the generalization to the current case follows by ( * ? ? ?* section 8) ) shows that stabilizes since stabilizes and stabilizes .thus we only need to show that is regulating for .since does not possess poles at and is of the form , it is obvious that is of the form as well with the same matrices .the matrices satisfy the condition by assumption .let us consider the laboratory process of fig .[ fig : tanks ] with five water tanks .there is a hole in the bottom of each water tank and the water from the tanks four and five flows to the tanks below them .the three pumps with operating voltages , , induce a flow where is a constant . denote the deviation from the initial water levels of tank by , .the aim of our control problem is to choose the inputs so that the outputs follow the reference signal i.e. 
the water level of tank 1 is changing in a periodic manner while kept one unit above the initial level in the other two bottom tanks . the parameters correspond to how the three valves are set prior to the experiment .the flow induced by the first pump to tank 1 is and to tank 5 , and similarly for the other two valves .the changes to the valve positions can be considered as perturbations to the system .the transfer function of the system linearized at the initial water levels is where the parameters and depend on the tank cross - sections , the outlet hole cross - sections , constants of proportionality , and the initial water levels . for more details ,see where a similar system with four tanks was considered .here we choose the initial setup so that and for simplicity , i.e. we have let the initial positions of the valves be , i.e. the nominal plant is the frequency domain representation of is in order to find a robustly regulating controller , we define a basis of for and .let be the natural basis vector of .because of the upper triangular block structure of , it is easy to deduce that and . following the design procedure of theorem [ thm : designparameterck ], we choose and .now the controller satisfies the regulation property .it remains to choose invertible and small enough to guarantee stability .we can choose since all the eigenvalues of have positive real parts for every .if we choose , then we have in order to show that the controller is stabilizing for we note that the proof of theorem [ thm : designparametere ] shows that it is sufficient that is stable .the stability follows now by observing that the zeros of have negative real parts , i.e. can not have poles in the closed right half plane .we end this section by comparing the proposed design procedure to the classical one . herewe have used the knowledge that the first two inputs do not affect the third output , i.e. , we have structured perturbations , whereas the classical design procedure ignores this fact and the perturbations are taken to be totally arbitrary .one should also note that small perturbations in the other parameters such as the hole or tank diameters do not affect the zero components of the plant transfer function , so actually we gain robustness for all small parameter changes . in our controllerthe internal model is minimal in the sense that the ranks of the matrices for are minimal . since and have rank two instead of rank three , which would be the caseif classical approach is used , the order of the controller s realization is reduced by two .k. kim , s. skogestad , m. morari , and r. d. braatz .necessary and sufficient conditions for robust reliable control in thepresence of model uncertainties and system component failures ., 70:6777 , 2014 .p. laakkonen and l. paunonen . a simple controller with a reduced order internal model in the frequency domain . in _ proceedings of the 15th european control conferenceaalborg , denmark , june 29 july 1 2016 . | the internal model principle states that all robustly regulating controllers must contain a suitably reduplicated internal model of the signal to be regulated . using frequency domain methods , we show that the number of the copies may be reduced if the class of perturbations in the problem is restricted . we present a two step design procedure for a simple controller containing a reduced order internal model achieving robust regulation . 
the results are illustrated with an example of a five tank laboratory process where a restricted class of perturbations arises naturally . |
the double - helix structure of dna is the source of many complications in its _ in - vivo _ functioning during condensation / decondensation , replication , or transcription .for example the necessary unzipping of the molecule during transcription induces torsional strain along dna .the development of magnetic tweezers or the use of quartz cylinders in optical tweezers allow researchers to investigate the _ in - vitro _ response of dna molecules to torsional stress .studies of the behaviour of this twist storing polymer are not just a game for physicists as it has been clearly established that , e.g. the assembly of reca could be stalled by torsional constraints , or that the rate of formation and the stability of the complex formed by promoter dna and rna polymerase depends on the torque present in the dna molecule . on the theoretical side ,matters are made difficult by the nonlocality of the topological property that is associated with the torsional constraint : the link .the two sugar - phosphate backbones of dna have opposite orientation and the ends of a double - stranded dna molecule can only be chemically bound in such a way that each strand joins itself , thereby yielding two interwound closed curves ( no mbius - like configuration can exist ) . for a circularly closed dna molecule ,the link is the number of times one of the sugar - phosphate backbone winds around the other .once the three dimensional shape of the molecule is projected onto a plane , the link is given by half the number of signed crossings between the two backbones .this quantity is best seen as the number of turns put in an initially planar plasmid ( a piece of circularly closed double - stranded dna ) before closing it .link has been shown to consist of two parts : the two sugar - phosphate backbones of a plasmid can be linked because the plasmid lies in a plane but the base - pairs are twisted around their centre line ( the curve joining the centroids of the base - pairs ) and/or the centre line itself follows a writhed path in space . in general the two possibilities coexist and the link of a dna molecule is the addition of the two quantities : the twist is a local quantity in the sense that it can be computed as the single integral of the twist rate : , where is the arclength along the centre line and the total contour length of the molecule . in elastic dna models , the twist rate normally coupled to mechanical quantities ( e.g. the torque ) characterizing the molecule .contrary to the twist , wr is a _ global _ property of the centre line of the molecule .we first define the _ directional writhe _ of a closed curve .consider a closed curve in 3d and project this curve , along a certain direction , on a plane .the number of signed crossings seen in the plane is the directional writhe _ for that direction_. one could then reiterate the procedure with different directions of projection and compute the directional writhe for each direction .the average value obtained for the directional writhes , when all directions are considered , is the writhe of the 3d curve .the following double integral has been introduced by clugreanu and white : where is the position of points on and is the unit tangent to .as soon as a 3d curve does not self - intersect , the writhe of the curve is given by eq .( [ equa : writhe_def ] ) : if with . 
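Once the centre line is discretized into segments, the double integral can be approximated by a brute-force sum over all pairs of segments; the cost grows quadratically with the number of points, which is one reason why single-integral shortcuts are attractive. The sketch below is an illustration written for this text (it is not the code used for the computations reported later). The planar circle serves as a sanity check, since a planar, non-self-intersecting curve has zero writhe; the closed coil is simply a non-planar test curve.

```python
import numpy as np

def writhe(points):
    """Midpoint-rule approximation of the Gauss double integral
    (eq. [equa:writhe_def]) for a closed polygonal curve of shape (N, 3)."""
    p = np.asarray(points, dtype=float)
    seg = np.roll(p, -1, axis=0) - p          # segment vectors
    mid = p + 0.5 * seg                       # segment midpoints
    wr = 0.0
    n = len(p)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = mid[i] - mid[j]
            wr += np.dot(np.cross(seg[i], seg[j]), r) / np.linalg.norm(r) ** 3
    return wr / (4.0 * np.pi)

t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)

# A planar circle: its writhe must vanish.
circle = np.c_[np.cos(t), np.sin(t), np.zeros_like(t)]
print("circle:", round(writhe(circle), 4))

# A closed coil wound four times around a circle: a chiral, non-planar test curve.
R, r0, m = 3.0, 1.0, 4
coil = np.c_[(R + r0 * np.cos(m * t)) * np.cos(t),
             (R + r0 * np.cos(m * t)) * np.sin(t),
             r0 * np.sin(m * t)]
print("coil  :", round(writhe(coil), 4))      # generically nonzero for a chiral coil
```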
the computation of the double integral of eq .( [ equa : writhe_def ] ) is analytically hard and numerically time consuming to perform .an important result though enables one to reduce the double integral to a single integral , provided several hypotheses are fulfilled .it is the main purpose of this paper to show that these hypotheses are not met in the case of plectonemic dna .fuller s theorem states that the writhe of the curve can be computed by considering the writhe of a reference curve ( which should be known or easy to compute ) and the continuous deformation ( a homotopy ) morphing to : where is the unit tangent to and where ] and .the unit tangent to is .the first hypothesis is that none of the curves self - intersects .the second hypothesis is that , and this is the point we want to emphasize , there should be no point along any of the curves where .for each value of , the ( unit ) tangent with ] and all ] , the computed value could be as far as from the correct value , but the fractional part will be accurate .this discrepancy between and the actual writhe has already been pinpointed in the case where dna is treated as a fluctuating chain under low twist ( i.e. without plectonemes ) .nevertheless we have found a certain number of references where fuller s formula is used . in most cases the hypotheses of fuller stheorem were not checked , if only mentioned . insome works the formula is used in a scheme that provides an estimate for the torsional stiffness of the dna molecule and this has been shown to lead to incorrect results .nevertheless , in some few papers , the formula was used to asses the writhe of dna configurations under high stretching force and low twist , in which cases antipodal points are absent and the formula is correct .during a force - extension experiment on a single dna molecule thermal agitation deforms the molecule whose shape locally adopts random directions ( fig . [fig : antipodal_wlc ] , left ) . in the absence of ( or under low ) twist ,dna is modeled as a worm - like chain ; the path followed by its centre line in space looks like a ( directed ) random walk . in such configurations ,the writhe is usually evaluated using fuller s formula ( [ equa : fuller_2nd_theo ] ) with the reference curve shown in fig .[ fig : antipodal_wlc ] , left .since the writhe is classically defined for a closed curve , the reference and actual curves are all closed by imaginary -shaped curves ( dashed in fig .[ fig : antipodal_wlc ] ) that connect the top to the base of the configurations .( other choices of closures and interferences due to the closure in the computation of the writhe are discussed in . ) strictly speaking the curve for which we compute the writhe consists of two parts : the closure and the part corresponding to the molecule . as the closure remains unchanged in the continuous deformation ] , and . as stated above , eq .( [ equa : fuller_euler_angle ] ) is only valid if the curve can be deformed into the reference curve without passing through configurations having their tangent vector facing - : for all , .we remark that in the present case where is the axis an antipodal point is a point where the curve passes through the south pole of the unit sphere : . , , , , and ( see fig . [fig : diag_z_de_n ] ) obtained by numerical continuation . 
configurations and each have an antipodal point to the reference curve of fig .[ fig : antipodal_wlc ] .these antipodal points are located at the middle point of the end - loop of the plectonemic structure.,title="fig : " ] , , , , and ( see fig . [fig : diag_z_de_n ] ) obtained by numerical continuation .configurations and each have an antipodal point to the reference curve of fig .[ fig : antipodal_wlc ] .these antipodal points are located at the middle point of the end - loop of the plectonemic structure.,title="fig : " ] in the case where the actual configuration exhibits a single or multiple antipodal points it has been proposed in to evaluate the writhe from eq .( [ equa : fuller_euler_angle ] ) with euler angles defined on a truncated unit sphere : would not be allowed to reach and hence antipodal points would be avoided ( the curve so defined would be very near the real curve , hence the writhes would almost be the same ) .we stress that this process is not sufficient as even for a given curve with no antipodal points , it is not clear whether eq .( [ equa : fuller_euler_angle ] ) is valid or not . in order for eq .( [ equa : fuller_euler_angle ] ) to be valid , one has to exhibit a continuous deformation from the axis to the curve that is entirely free of antipodal points .there are many cases where the actual curve is free of antipodal points , but where a deformation free of antipodal points does not exist . consequently in these cases eq .( [ equa : fuller_euler_angle ] ) yields an incorrect result ( unless antipodal points of opposite signs cancel out ) .nevertheless , it has been noted that under high stretching force ( e.g. pn ) and low torque the dna molecule is almost straight and no such antipodal points exist .in such a case fuller s formula is correct provided no plectonemes are present , as we will see now . .writhe ( first line ) and link ( third line ) of the configurations of fig .[ fig : antipodal_plectonems ] , computed by continuation or use of the double integral ( eq .( [ equa : writhe_def ] ) ) .the second line is computed from eq .( [ equa : fuller_euler_angle ] ) and the last line is computed from eq .( [ equa : link_phi_psi ] ) . formulas ( [ equa : fuller_euler_angle ] ) and ( [ equa : link_phi_psi ] ) are not applicable on configurations and that each have an antipodal point , and yield incorrect results for configurations and . [ cols="<,^,^,^,^,^",options="header " , ] , , , , and of fig . [fig : antipodal_plectonems ] .an antipodal point arises when the curve passes through the south pole of the sphere , i.e. for configurations and .,title="fig : " ] + , , , , and of fig .[ fig : antipodal_plectonems ] .an antipodal point arises when the curve passes through the south pole of the sphere , i.e. for configurations and .,title="fig : " ] in magnetic tweezer experiments , when a large amount of turns are put in ( by rotation of the magnetic bead around the axis ) , the dna molecule reacts by forming plectonemes .the number of turns imposed on the magnetic bead is given by the link of the molecule .we now show that the presence of plectonemes in the supercoiled configuration prevents the existence of a deformation , from ( i.e. the axis ) to , that is free of antipodal points , and consequently forbid the use of eq.([equa : fuller_euler_angle ] ) . in terms of the euler anglesthe twist of the molecule can be computed as where is the third euler angle ( see e.g. 
) .using eqs .( [ equa : fuller_euler_angle ] ) and ( [ equa : twist ] ) we obtain : this formula usually is the starting point for computations of the link of supercoiled configurations , see e.g. or .we show here that it yields incorrect results when plectonemes are present .we have performed computations to model the elastic response of a twist storing filament subject to tensile and torsional constraints and we quantitatively reproduced the plectonemic regime characterised by the linear decrease of the end - to - end distance of the filament as a function of the number of turns put in .the plectonemic configurations were computed numerically using a continuation algorithm and was computed by continuity so that no integer number of turns is missed ( we also performed numeric integration of the double integral of eq .( [ equa : writhe_def ] ) for the writhe of the configurations , with closures , and always obtained consistent results ) .these shapes serve as an illustration for the computation of the writhe and consequently are used for their geometry only .the fact that they are mechanical equilibria is not relevant here .the continuous dark curve in fig . [ fig : diag_z_de_n ] shows an output of the numerics , drawn in the plane , where is the vertical extension of the molecule .the curve starts at point which corresponds to a straight and twisted configuration .the path is then monotonically decreasing in .we have selected five configurations , , , , and which are drawn in fig .[ fig : antipodal_plectonems ] , together with their corresponding tangent indicatrices ( see definition in section [ section : global_and_local ] ) drawn in fig .[ fig : unit_spheres ] . configurations and each comprise an antipodal point located at the middle point of the end - loop of the plectonemic structure . this can be verified in fig .[ fig : unit_spheres]- and where the tangent indicatrices pass through the south pole of the unit sphere . ) .five configurations , , , , and are selected and plotted in fig .[ fig : antipodal_plectonems ] together with their tangent indicatrices in fig .[ fig : unit_spheres ] .the inset shows a zoom around point , the first antipodal point , where the gray curve jumps by an amount of -2 units . from the two curves coincide . ]using the same geometric configurations , we compute from eq .( [ equa : link_phi_psi ] ) and we plot the corresponding curve , in gray , on the same diagram . we see that each antipodal event introduces a shift of two units in the gray curve , which is consequently broken .as expected this confirms that eq .( [ equa : link_phi_psi ] ) is only valid _ modulo _ 2 and hence should not be used to estimate the link of plectonemic configurations .the writhe and link of the five example configurations are given in table [ table : wr_lk_wrf_lkf ] where they are compared with and given by eqs .( [ equa : fuller_euler_angle ] ) and ( [ equa : link_phi_psi ] ) . for configuration ,which is separated from the reference curve by two antipodal events , we see that ( resp . ) is 4 units away from the correct ( resp . ) value .this configuration is an illustration of the fact that fuller s formulas ( eq . 
( [ equa : fuller_euler_angle ] ) or ( [ equa : link_phi_psi ] ) ) can be wrong even for a configuration that does not comprise any antipodal point , which is clearly illustrated in fig .[ fig : unit_spheres]- where we see that the tangent indicatrix is nowhere near the south pole .we first comment on the size of the gap between the two curves .a plectonemic dna configuration that shows ( positive ) crossings on a lateral projection has . if we continuously deform this configuration to the reference curve of fig .[ fig : antipodal_wlc ] left by unwinding the plectonemic region , an antipodal point arises each time the tangent at the apex of the terminal loop ( at the end of the plectonemic region ) is facing downward .this happen times .as proved in the presence of antipodal points leads to a discrepancy in fuller s formula of : .since , we have that : , which corresponds to an error of up to 100% .this is apparent in fig .[ fig : diag_z_de_n ] where the broken gray curve stays near the vertical axis while the ( continuous ) dark curve monotonically increases in link .second we note that the discrepancy between the two curves occurs shortly after self - contact has started in the filament , for , see inset of fig .[ fig : diag_z_de_n ] .plectonemic structure may exist for small number of turns , provided the pulling force is not too large .on the other hand , a large pulling force does not rule out the occurence of plectonemes , provided that is large enough .this leaves a small parameter regime ( large pulling force , low number of turns ) where plectonemes are absent . when the two sources of discrepancies are considered ( random walk antipodal points , and plectonemic antipodal points ) one sees that the use of eq .( [ equa : fuller_euler_angle ] ) ( resp .( [ equa : link_phi_psi ] ) ) to compute the writhe ( resp .the link ) in a model for dna under tensile and/or torsional stress is to be avoided unless and the tensile force is large .we summarize here few properties of the quantities , which do not always give the writhe of a curve .the quantity is a function of the curve only , whereas the quantity also depends on the reference curve .the quantity yields the correct value for the writhe of a closed curve as soon as the curve is not self - intersecting . along a continuous deformation with ] , the quantity jumps by two units when the curve has an antipodal point ( say at ) with regard to the reference curve . on the other handthe writhe stays continuous .this means that the quantity no longer yields the correct value for the writhe as soon as an antipodal event happens , , but , .this is an important point and many authors seem to believe that the quantity only has problems for configurations actually comprising an antipodal point ( in the above example ). therefore the quantity may not be equal to the writhe even for curves that _ do not _ comprise any antipodal point .this make the use of uneasy , as one has to first verify the absence of antipodal events in the _ entire _ continuous deformation . on the contrarythe use of is much easier in the sense that one just has to check that the actual curve does not self - intersect . 
in this sensethe quantities and do not suffer from the same pathologies in the computation of the writhe , contrarily to what is claimed in .another consequence is that , when sampling dna configurations to construct a statistical ensemble and compute writhe averages and fluctuations , it is not enough to introduce , as was done in , a small forbidden region around the south pole of the unit sphere to ensure that yields a correct value .in fact many of these sampled configurations , even free of antipodal points , are configurations that suffer the same problems as the configurations with above : , as numerically verified in . finally we want to point out the following property .we saw that if a continuous deformation with ] and .we note .we have for all , and consequently is the arc length in part .the end loop connects the two helices and : with ] and .we have for all , and consequently is the arclength in part . the end loop closes the curve : with ] . the reference curve of fig .[ fig : two_reference_curves_plasmid ] top comes to mind naturally and makes calculations easiest , but yields antipodal events and hence an incorrect result , as we shall see now . for the supercoiled plasmid of fig .[ fig : clamped_ply ] . ]the writhe of the reference curve is zero .consequently we only have to compute the integral of eq .( [ equa : fuller_2nd_theo ] ) over the four different parts , , , and .clearly when the plasmid is long enough , i.e. when is large , the contribution to of the helical parts and become dominant. indeed their contribution scales with while the contribution of the end loops and remains bounded .consequently we focus on the contributions of the helical parts and , for large values of , that is we look at the limit when . for the helical part ,the corresponding tangent of the reference curve is .fuller integral for part is then : for the helical part , the corresponding tangent of the reference curve is . the fuller integral for part is then : neglecting the end loop contributions we arrive at : we first remark that although ( see eq .( [ equa : double_integral_formula ] ) ) .the discrepancy is due to the fact that in the continuous deformation ] in fig .[ fig : closed_circuit ] . onewould then have a closed circuit with one antipodal point and no self - crossing , in contradiction with the result established in .hence no such ` good ' deformation exists. this means that as soon as one ` bad ' deformation exists between a reference curve and a curve the formula does not yield the correct result and no ` good ' deformation can exist .a consequence of this is that rotating the reference curve of appendix a ( fig .[ fig : two_reference_curves_plasmid ] , top ) to introduce a new , and ` good ' , reference curve ( e.g. fig .[ fig : two_reference_curves_plasmid ] , bottom ) is hopeless . to show this we first introduce a ` bad ' deformation between the rotated reference curve ( e.g. fig .[ fig : two_reference_curves_plasmid ] , bottom ) and the supercoiled plasmid of fig .[ fig : clamped_ply ] : we untangle the plasmid using a self - crossing , as in fig .[ fig : closed_circuit]-c , to obtain a ` stadium ' shaped curve which we subsequently rotate to the ( rotated ) reference curve .the existence of this ` bad ' deformation means that no ` good ' deformation can exist . | the linking and writhing numbers are key quantities when characterizing the structure of a piece of supercoiled dna . 
defined as double integrals over the shape of the double - helix , these numbers are not always straightforward to compute , though a simplified formula exists . we examine the range of applicability of this widely - used simplified formula , and show that it can not be employed for plectonemic dna . we show that inapplicability is due to a hypothesis of fuller theorem that is not met . the hypothesis seems to have been overlooked in many works . * pacs numbers * : 87.10.-e , 87.14.gk , 02.40.-k , 87.15.-v + |
many social , technological , and biological networks belong to a common scale - free ( sf ) structure which consists of many low degree nodes and a few high degree nodes called as hubs .the degree distribution follows a power - law , therefore an sf network has an extreme vulnerability against hub attacks .in addition , these real networks are classified into assotative and disassortative networks . for examples , typical social networks , e.g. coauthorships and actor collaborations ,are assotative , while typical technological and biological networks , e.g. internet , world - wide - web , protein - interaction , and food webs , are disassortative . in assotative networks ,nodes with similar degrees tend to be connected , and thus positive degree - degree correlations appear . in disassotative networks, nodes with different degrees : low and high degrees tend to be connected , and thus negative degree - degree correlations appear . through the above findings in network science , superior network theories and efficient algorithms have been developed for analyzing network topology and dynamics .however , studies for the cases with degree - degree correlations are not clear enough for successful analysises of topological structures and epidemics on networks except some percolation analysises . recently, it has been numerically and theoretically found that an onion - like structure with positive degree - degree correlations gives the nearly optimal robustness against hub attacks in an sf network .on the other hand , the average behavior of stochastically generated network models or empirical data samples of real networks is discussed in many applications .usually , some characteristic quantities such as degree distribution or clustering coefficient are investigated for a network , then their quantities are averaged over the samples of networks in which the existence of a generation rule ( mechanism ) of the networks is assumed . in this paper , we focus on the beforehand averaged network structure over samples , and calculate the degree distribution for several models of growing network with or without degree - degree correlations . this representation will give a general framework for numerically investigating the characteristic topological quantities in growing networks . since our framework is supported by the interesting property of ordering that older nodes tend to have higher degrees in a randomly growing network , a wide range of application to growing networks can be expected .we consider a set of growing networks in which a new node is added with probabilistic links to existing nodes in a network at each time step .to study the average behavior of the stochastic processes in many samples , we use the ensemble average of adjacency matrix defined as follows . here , without loss of generality , we set connected two nodes as the minimum initial configuration : , , and the degree . note that at each time step the matrix is expanding with the inverse shape of elements , , at the right - bottom corner .the diagonal element is always due to no self - loop at each node .other elements are , , as the average number of links from to over the samples of networks .the value of each element is defined in order according to the passage of time .we assume that there are no adding links between existing nodes at any time .only links between selected old nodes and new node are added in a growing network . 
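as a concrete illustration of this ensemble average , the sketch below grows many independent sample networks starting from the minimal two - node configuration and averages their adjacency matrices . the growth rule used here ( each new node links to a single uniformly chosen existing node ) is only an illustrative choice ; any of the rules discussed below fits the same averaging scheme .

import numpy as np

rng = np.random.default_rng(0)

def sample_adjacency(t_max):
    """one sample network grown node by node: at each step the new node links
    to a single uniformly chosen existing node (illustrative rule only)."""
    a = np.zeros((t_max, t_max))
    a[0, 1] = a[1, 0] = 1.0                 # minimal initial configuration
    for n in range(2, t_max):
        i = rng.integers(n)                 # uniformly chosen existing node
        a[n, i] = a[i, n] = 1.0
    return a

t_max, n_samples = 200, 500
a_bar = sum(sample_adjacency(t_max) for _ in range(n_samples)) / n_samples
k_bar = a_bar.sum(axis=1)                   # ensemble-averaged degrees
print(k_bar[:5])   # older nodes have larger averaged degree (invariant ordering)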
in the sample - based description ,the ensemble average of adjacency matrix is where denotes an adjacency matrix of the -th sample whose elements are or corresponded to the connection or disconnection from to , but or undefined for because of the size at time . denotes the number of samples .we remark that an adjacency matrix is fundamental and important because it includes the necessary and sufficient information about a network structure and is useful for a mathematical treatment . in this explicit representation of general framework , for each -th node , , the ( out-)degree is updated from time to , the ( out-)degree of -th node added at time is defined by the sum of links from node to nodes , we should remark that the iterative calculations of eqs .( [ eq_de_general_ki])([eq_de_general_kn ] ) are equivalent to the averaged values over the samples after calculating the degrees for each sample of the networks at time . with this equivalence in mind , we investigate the asymptotic behavior of for a large .we note that is a monotone non - decreasing function of time because of from eq .( [ eq_de_general_ki ] ) in growing networks .as examples , we apply the ensemble average of adjacency matrix to some network models . however , our approach is applicable to other growing networks especially in a wide class , e.g. with approximately power - law or exponential degree distribution . in the following , we assume that each link is undirected : . since the continuous - time approximation of eq.([eq_de_general_ki ] )is generally our approach is applicable to the babarsi and albert ( ba ) model as follow .preferential attachment : uniform attachment : where denotes the initial number of nodes , and denotes the number of adding links at each time step .for the two cases of preferential and uniform attachments , and have been derived with the corresponding degree distributions and , respectively . the analysis in ba modelis based on the invariant ordering property of degrees in which older nodes get more links averagely at any time in the growth of network . under the invariant ordering property ,our approach can be regarded as an extension of mathematical treatment in the ba model through the representation by the ensemble average of adjacency matrix over samples of growing networks .we preliminary introduce a duplication - divergence ( d - d ) model without mutations of random links between existing nodes , whose generation mechanism is known as fundamental in biological protein - protein interaction networks . in the d - d model , at each time step , a new node is added and links to neighbor nodes of a uniformly randomly chosen node ( see figure [ fig_basic_process](a ) ) .some duplication links are deleted with probability . 
here ,no mutations are to simplify the discussion and to connect to the next subsection .although the degree distribution in the d - d model can be approximately analyzed by the approach of mean - field - rate equation , we show the applicability of our approach to the d - d model in order to extend it to more general networks .moreover , in the next section , we reveal that older nodes tend to have higher degrees in the d - d model , whose ordering of degree for node index was not found from the above approach .\(a ) d - d model \(b ) copying model since the -th new node links to the neighbor node of a chosen node from existing nodes in a network of d - d model , we have where we use the uniform selection probability of each node and the no - deletion rate for linking to the neighbor nodes . from eqs .( [ eq_de_general_ki])([eq_de_general_kn ] ) ( [ eq_a_ni_d - d ] ) , we obtain by applying eq .( [ eq_de_d - d_ki ] ) recursively , we derive \(a ) \(b ) \(c ) in addition , the continuous - time approximation of eq .( [ eq_de_d - d_ki ] ) is from the separation of variables method , we obtain the solution when we denote the initial degree at the inserted time for a node , the above solution is rewritten as from the existence of parallel curves shown in fig .[ fig_para_d - d ] , the ordering of degrees is not changed . in other word, older nodes get more links averagely .thus , we obtain where is the number of nodes at time , and denotes the initial number of nodes .in the tail of degree distribution , the exponent of power - law is asymptotically .note that the slightly different exponent to the conventional approximation is not strange , since the mutations are necessary for their d - d models .figure [ fig_d - d](a)-(c ) shows the time - course of in the case of , , and , respectively , averaged over samples .the black , orange , and magenta lines are the numerical results of eqs .( [ eq_de_d - d_ki])([eq_de_d - d_kn ] ) for the node , , and .the cyan line guides the estimated slope of in log - log plot . in fig .[ fig_d - d](d ) , the red , green , and blue lines show the degree distributions for , , and , respectively , at the size .the magenta , cyan , and gray dashed lines guides the corresponding slopes of for these .\(a ) \(b ) \(c ) \(d ) a modification of the d - d model by adding a mutual link between a new node and a randomly chosen node has been proposed .the mutual link contributes to avoid the singularity called as non - self - averaging even for without mutations .the growing network is constructed as shown in fig . [ fig_basic_process](b ) .it is referred to as copying model . at each time step , a new node is added .the new node links to a uniformly randomly chosen node , and to its neighbor nodes with probability . in the copying model ,we have since the -th new node links to a uniformly randomly chosen node and to the neighbor node when other node is chosen from existing nodes in the network .these effects are in the first term and the second term in eq .( [ eq_a_ni_copying ] ) . from eqs .( [ eq_de_general_ki])([eq_de_general_kn ] ) ( [ eq_a_ni_copying ] ) , we obtain by applying eq .( [ eq_de_mut_ki ] ) recursively , we derive where we use and the staring formula . 
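the averaged degree curves of the d - d and copying models can also be obtained directly from the recursions , without storing the full matrix , by iterating k_i(t+1) = k_i(t) + a_{t+1,i} and k_{t+1}(t+1) = sum_i a_{t+1,i} . the sketch below does this for both models ; the attachment kernels follow the verbal descriptions above ( the survival factor of a duplicated link , written explicitly here as one minus the deletion probability , and the copying probability ) , since the equations themselves are not reproduced in the text .

import numpy as np

def averaged_degrees(kernel, t_max):
    """averaged degree curves from the recursion k_i(t+1) = k_i(t) + a_{t+1,i},
    with the new node receiving the sum of the expected links; the growth
    starts from the minimal configuration of two connected nodes."""
    k = np.zeros(t_max)
    k[0] = k[1] = 1.0
    for n in range(2, t_max):
        add = kernel(k[:n], n)
        k[:n] += add                  # existing nodes gain the expected new links
        k[n] = add.sum()              # degree of the newly added node
    return k

survival = 0.5   # probability that a duplicated link is kept (d-d model)
delta = 0.6      # probability of copying a link to a neighbour (copying model)
dd_kernel = lambda k, n: survival * k / n
copy_kernel = lambda k, n: (1.0 + delta * k) / n

for name, kern in [("d-d", dd_kernel), ("copying", copy_kernel)]:
    k = averaged_degrees(kern, t_max=5000)
    # empirical distribution of the averaged degrees; because the ordering of
    # the k_i is invariant, the tail behaviour can be read off directly
    bins = np.logspace(np.log10(k.min()), np.log10(k.max()), 30)
    hist, edges = np.histogram(k, bins=bins)
    print(name, k[:3], hist[:5])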
in particular , by the mathematical induction , we confirm that the case of generates a sequence of the complete graphs with links at every node of .first , and are obvious .next , we assume , from eqs.([eq_de_mut_ki])([eq_de_mut_kn ] ) we derive \(a ) \(b ) \(c ) on the other hand , the continuous - time approximation of eq.([eq_de_mut_ki ] ) is since this form is a 1st order linear differential equation , by applying the solution , we obtain where and are constants of integration , and we use and .note that the solution is only different by to eq.([eq_de_sol ] ) , and can be ignored for a large . as similar to the d - d model in subsection 3.2 , from eq .( [ eq_approx_pk ] ) under the invariant ordering ( [ eq_ordering ] ) in the parallel curves shown as fig .[ fig_para_copying ] , the degree distribution asymptotically follows a power - law with the exponent .figure [ fig_copying](a)-(c ) shows the time - course of in the cases of , , and , respectively , averaged over samples .the black , orange , and magenta lines are the numerical results of eqs .( [ eq_de_mut_ki])([eq_de_mut_kn ] ) for the node , , and .the cyan line guides the estimated slope of in log - log plot .it fits to the lines of for a large .moreover , as shown in fig .[ fig_copying](d ) , eq .( [ eq_approx_pk ] ) gives a good approximation at the size .the red , green , and blue lines show the degree distributions for , , and , respectively .the magenta , cyan , and gray dashed lines guides the corresponding slopes of for these in the fitting to the tails .\(a ) \(b ) \(c ) \(d ) in this subsection , we emphasize that our approach is effective through the numerical estimation , even when an analytic derivation is intractable .we consider a copying model with positive degree - degree correlations based on a cooperative generation mechanism by linking homophily , in which densely connected cores among high degree nodes emerge . in more detail ,the difference to the previously mentioned copying model is that the -th new node links to the neighbor nodes of a randomly chosen node with a probability from existing nodes in the network .such a function is necessary to enhance the degree - degree correlations , and is a parameter . since the degree of new node is unknown in advance due to the stochastic process , it is temporary set as .thus , instead of eq .( [ eq_a_ni_copying ] ) , we substitute for eqs .( [ eq_de_general_ki])([eq_de_general_kn ] ) .\(a ) \(b ) \(c ) although the theoretical analysis of eqs .( [ eq_de_general_ki])([eq_de_general_kn ] ) for eq .( [ eq_a_ni_correl ] ) is difficult , the iterative calculations are possible numerically .when we assume , we derive an exponential distribution as follows . 
from , we obtain under the invariant ordering ( [ eq_ordering ] ) in the parallel curves shown as fig .[ fig_para_correl ] .figure [ fig_correl](a)-(c ) shows that denoted by black , orange , and magenta lines for , , and is approximated by in the copying model with degree - degree correlations .the cyan lines guide the estimated slopes , , and for , , and , respectively , in the numerical fittings for the iterative calculations of eqs .( [ eq_de_general_ki])([eq_de_general_kn])([eq_a_ni_correl ] ) .thus , as shown in fig .[ fig_correl](d ) , the tails in denoted by red , green , and blue lines for , , and are approximated by shown as magenta , cyan , and gray dashed lines at the size .note that is only slightly deviated but the exponential part is remained by adding shortcut links in order to self - organize a robust onion - like structure in the incrementally growing network .\(a ) \(b ) \(c ) \(d ) we consider the asymptotic behavior of the node degrees for a general case in growing networks .when the time - course of degree follows a monotone increasing function of time , there exists the inverse function .it is possible that the time - course is an average of observed real data for node , e.g. born at time .from we have then , on the assumption of the invariant ordering ( [ eq_ordering ] ) in parallel curves of , we derive where denote the derivative of by the variable . we should remark that various degree distributions of non - power - law may appear depending on the shape of monotone increasing function according to what type of generation in growing networks .we derive the invariant ordering ( [ eq_ordering ] ) at any time within parallel curves of monotone increasing functions for the d - d and copying models discussed in subsections 3.2 and 3.3 . in the following ,we use double mathematical induction for node index and time .once is satisfied for or at , is obtained from eqs .( [ eq_de_general_ki])([eq_de_d - d_ki])([eq_de_mut_ki ] ) . here , from eqs .( [ eq_de_d - d_ki])([eq_de_d - d_kn ] ) at with the initial condition , we obtain from eqs .( [ eq_de_mut_ki])([eq_de_mut_kn ] ) at with the same initial condition , we also obtain since the assumption in eq .( [ eq_assump ] ) is satisfied for or and at , we have by applying eq . ( [ eq_assump ] ) recursively for . on the other hand , from eqs .( [ eq_de_d - d_ki])([eq_de_d - d_kn ] ) , we rewrite at , and for .then , we have similarly , from eqs .( [ eq_de_mut_ki])([eq_de_mut_kn ] ) , we rewrite at , and for . then , we also have by applying ( [ eq_assump ] ) recursively for after substituting eq .( [ eq_delta_kn1 ] ) or ( [ eq_delta_kn2 ] ) to the right - hand side of eq .( [ eq_assump ] ) , we obtain from eqs .( [ eq_induct1])([eq_induct2 ] ) , we obtain the ordering ( [ eq_ordering ] ) after all .next , we consider the existing condition of the ordering ( [ eq_ordering ] ) within parallel curves of for a general case of growing networks discussed in subsection 3.5 . from eq .( [ eq_de_general_ki ] ) for or at , we have if older nodes tend to get more links , hold in the ensemble average of adjacency matrix . then , we remark from eq .( [ eq_cond_general ] ) and or with the initial values and . 
on the other hand , from eqs .( [ eq_de_general_ki])([eq_de_general_kn ] ) , we derive if eq .( [ eq_cond_general ] ) hold with , for we have therefore , on the condition ( [ eq_cond_general ] ) , we obtain the ordering ( [ eq_ordering ] ) from eqs .( [ eq_general_ij])([eq_general_n ] ) .we have proposed the explicit representation by the ensemble average of adjacency matrix over samples of growing networks .the important point is that the adjacency matrix is averaged in advance before calculating a characteristic quantity about the topological structure for each sample .the ensemble average has been applied to some network models : ba , d - d , and copying models for investigating the degree distributions in the asymptotic behavior by using the theoretical and numerical analysises for difference equations and the corresponding continuous - time approximation of differential equations with variables and .we have derived and for the d - d and copying models under the invariant ordering of degrees which is supported in randomly grown networks . moreover , for the copying model with positive degree - degree correlations , we have shown that the numerical calculations of the difference equation give a good approximation of an estimated exponential distribution , even when an analytic derivation is intractable . the copying model with positive degree - degree correlationsis related to the self - organization of robust onion - like networks . our approach may be also applicable to data analysis for social networks , when the observed time - course of degree is a monotone increasing function like power - law or logarithm in the average over samples by ignoring short - time fluctuations , and a node index or represents an ordering of its birth time .this expectation is supported as follows .it is helpful for grasping a trend to study the average behavior of many users ( nodes ) added at a same ( sampling interval ) time into a network community .random growth in a social network probably corresponds to encountered chances among people .moreover , as time goes by , the number of his / her friends for a member of social networks is usually increasing . it is natural that the connections to friends are maintained .however , we must consider the effect of rewirings between old members in the definition of adjacency matrix . also from a practical viewpoint , space to store an adjacency matrix may cause a problem for dig data . from the conventional analysis for special network models to a general framework , the representation by the ensemble average will open a door for investigating the characteristic quantities , e.g. node degrees in growing networks .in particular , the time - course of a quantity depending on the birth time of node is considered as a key point .the discussion about other quantities such as clustering coefficient or the number of paths of a given length requires further studies of how to analyze the average behavior related to the transitivity .the author would like to thank anonymous reviewers for their valuable comments .this research is supported in part by a grant - in - aid for scientific research in japan , no .25330100 .62 albert , r. , jeong , h. and babarsi , a .-l . error and attack tolerance of complex networks ._ nature _ 406 : pp.3644 , ( 2000 ) .babarsi , a .-albert , r. emergence of scaling in random networks ._ science _ 286 : pp.509512 , ( 1999 ) .babarsi , a .-l . , albert , r. and jeong , h. mean - field theory for scale - free random networks . 
_ physica a _ 272 : pp.173187 , ( 1999 ) .callaway , d.s . , hopcroft , j.e . , kleinberg , j.m . , newman , m.e.j . and strogatz , s.h .are randomly grown graphs really random ? ._ physical review e _ 64 : pp.041902 , ( 2001 ) .hayashi , y. growing self - organized design of efficient and robust complex networks ._ ieee xplore digital library _ http://dx.doi.org/10.1109/saso2014.17 , proc . of 2014 ieee 8th inton saso : self - adaptive and self - organizing systems 2014 , pp.5059 .arxiv : physics/1411.7719 , ( 2014 ) .herrmann , h.j . ,schneider , c.m . , moreira , a.a ., andrade jr .j.s . , and havlin , s. onion - like network topology enhances robustness against malicious attacks ._ journal of statistical mechanics _p01027 , ( 2011 ) .kim , j. , krapivsky , p.l . and redner , s. infinite - order percolation and giant fluctuations in a protein interaction networks ._ physical review e _ 66 : pp.055101(r ) , ( 2002 ) .newman , m.e.j .assortative mixing in networks ._ physical review letters _ 89(20 ) : pp.208701 , ( 2003 ) .newman , m.e.j .networks -an introduction .oxford university press , ( 2010 ) .pastor - satorras , r. , smith , e. and sole , r.v . evolving protein interaction networks through gene duplication . _journal of theoretical biology _222(2 ) : pp.199210 , ( 2003 ) .schneider , c.m . , moreira , a.a ., andrade jr .havlin , s. and herrmann , h.j . mitigation of malicious attacks on networks ._ proceedings of the national academy of sciences of the united states of america _810(10 ) : pp.38383841 , ( 2011 ) .sole , r.v ., pastor - satorras , r. , smith , e. and kepler , t.b . a model of large - scale proteome evolution ._ advances in complex systems _5(1 ) : pp.4354 , ( 2002 ) .tanizawa , t. , havlin , s. and stanley , h.e .robustness of onionlike correlated networks against targeted attacks ._ physical review e _ 85 : pp.046109 , ( 2012 ) .wu , z .- x . and holme , p. onion structure and network robustness ._ physical review e _ 81 : pp.026116 , ( 2011 ) . | various important and useful quantities or measures that characterize the topological network structure are usually investigated for a network , then they are averaged over the samples . in this paper , we propose an explicit representation by the beforehand averaged adjacency matrix over samples of growing networks as a new general framework for investigating the characteristic quantities . it is applied to some network models , and shows a good approximation of degree distribution asymptotically . in particular , our approach will be applicable through the numerical calculations instead of intractable theoretical analysises , when the time - course of degree is a monotone increasing function like power - law or logarithm . |
in order to fix ideas , it is useful to give an explicit `` neural network '' interpretation to the theory that will be developed .the model will consist of 2 layers of nodes .the input layer has a `` pattern of activity '' that represents the components of the input vector , and the output layer has a pattern of activity that is the collection of activities of each output node .the activities in the output layer depend only on the activities in the input layer .if an input vector is presented to this network , then each output node `` fires '' discretely at a rate that corresponds to its activity .after nodes have fired the probabilistic description of the relationship between the input and output of the network is given by , where is the location in the output layer ( assumed to be on a rectangular lattice of size ) of the node that fires .in this paper it will be assumed that the order in which the nodes fire is not observed , in which case is a sum of probabilities over all permutations of , which is a symmetric function of the , by construction .the theory that is introduced in section [ subsect : probencodedecode ] concerns the special case . in the casethe probabilistic description is proportional to the firing rate of node in response to input .when there is an indirect relationship between the probabilistic description and the firing rate of node , which is given by the marginal probability it is important to maintain this distinction between events that are observed ( i.e. given ) and the probabilistic description of the events that are observed ( i.e. ) .the only possible exception is in the limit , where has all of its probability concentrated in the vicinity of those that are consistent with the observed long - term average firing rate of each node .it is essential to consider the case to obtain the results that are described in this paper .a theory of self - organising networks based on an analysis of a probabilistic encoder / decoder was presented in .it deals with the case referred to in section [ subsect : nninterp ] .the objective function that needs to be minimised in order to optimise a network in this theory is the euclidean distortion defined as where is an input vector , is a coded version of ( a vector index on a -dimensional rectangular lattice of size ) , is a reconstructed version of from , is the probability density of input vectors , is a probabilistic encoder , and is a probabilistic decoder which is specified by bayes theorem as can be rearranged into the form where the reference vectors are defined as although equation [ eq : objective ] is symmetric with respect to interchanging the encoder and decoder , equation [ eq : objectivesimplified ] is not .this is because bayes theorem has made explicit the dependence of on . from a neural network viewpoint the feed - forward transformation from the input layer to the output layer , and describes the feed - back transformation that is implied from the output layer to the input layer .the feed - back transformation is necessary to implement the objective function that has been chosen here .minimisation of with respect to all free parameters leads to an optimal encoder / decoder . 
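a monte carlo sketch of the distortion and of the reference vectors may help to fix ideas . it takes the reference vector of each output node to be the posterior - weighted centroid of the training density ( the centroid interpretation used later in the text ) , builds the posterior from sigmoid node activities , and estimates the distortion by averaging over samples ; the sigmoid sign convention and the omission of overall constant factors in front of the objective are assumptions .

import numpy as np

rng = np.random.default_rng(0)

def posterior(x, w, b):
    """posterior pr(y|x) built from sigmoid node activities q(y|x) and
    normalised over the output nodes (the simple, non-scaling model)."""
    q = 1.0 / (1.0 + np.exp(-(x @ w.T + b)))     # activities, shape (samples, nodes)
    return q / q.sum(axis=1, keepdims=True)

# toy training set and a small array of output nodes
x = rng.normal(size=(10000, 2))
n_nodes = 8
w = rng.normal(size=(n_nodes, 2))
b = rng.normal(size=n_nodes)
pr = posterior(x, w, b)

# reference vectors as posterior-weighted centroids of the training density
ref = (pr.T @ x) / pr.sum(axis=0)[:, None]

# monte carlo estimate of the euclidean distortion (constant prefactors ignored)
sq_err = ((x[:, None, :] - ref[None, :, :]) ** 2).sum(axis=2)
d = (pr * sq_err).sum(axis=1).mean()
print(ref.shape, d)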
in equation[ eq : objectivesimplified ] the are the only free parameters , because is fixed by equation [ eq : refvect ] .however , in practice , both and may be treated as free parameters , because satisfy equation [ eq : refvect ] at stationary points of with respect to variation of .the probabilistic encoder / decoder requires an explicit functional form for the posterior probability .a convenient expression is where can be regarded as a node `` activity '' , and .any non - negative function can be used for , such as a sigmoid ( which satisfies ) where and are a weight vector and bias , respectively .a drawback to the use of equation [ eq : postprob ] is that it does not permit it to scale well to input vectors that have a large dimensionality .this problem arises from the restricted functional form allowed for .a solution was presented in where , and is a set of lattice points that are deemed to be `` in the neighbourhood of '' the lattice point , and is the inverse neighbourhood defined as the set of lattice points that have lattice point in their neighbourhood .this expression for satisfies ( see appendix [ app : postprobpmdnorm ] ) .it is convenient to define which is another posterior probability , by construction .it includes the effect of the output nodes that are in the neighbourhood of node only . is thus a localised posterior probability derived from a localised subset of the node activities .this allows equation [ eq : postprobpmd ] to be written as , so is the average of the posterior probabilities at node arising from each of the localised subsets that happens to include node .the model may be extended to the case where output nodes fire . is then replaced by , which is the probability that are the first nodes to fire ( in that order ) . with this modification, becomes where the reference vectors are defined as the dependence of and on output node locations complicates this result .assume that is a symmetric function of its arguments , which corresponds to ignoring the order in which the first nodes choose to fire ( i.e. is a sum over all permutations of ) .for simplicity , assume that the nodes fire independently so that ( see appendix [ app : upperbound ] for the general case where does not factorise ) . may be shown to satisfy the inequality ( see appendix [ app : upperbound ] ) , where and are both non - negative . as , and when , so the term is the sole contribution to the upper bound when , and the term provides the dominant contribution as .the difference between the and the terms is the location of the average : in the term it averages a vector quantity , whereas in the term it averages a euclidean distance .the term will therefore exhibit interference effects , whereas the term will not .the model may be further extended to the case where the probability that a node fires is a weighted average of the underlying probabilities that the nodes in its vicinity fire .thus becomes where is the conditional probability that node fires given that node would have liked to fire . in a sense , describes a `` leakage '' of probability from node that onto node . 
then plays the role of a soft `` neighbourhood function '' for node .this expression for can be used wherever a plain has been used before .the main purpose of introducing leakage is to encourage neighbouring nodes to perform a similar function .this occurs because the effect of leakage is to soften the posterior probability , and thus reduce the ability to reconstruct accurately from knowledge of , which thus increases the average euclidean distortion . to reduce the damage that leakage causes, the optimisation must ensure that nodes that leak probability onto each other have similar properties , so that it does not matter much that they leak .the focus of this paper is on minimisation of the upper bound ( see equation [ eq : upperboundpieces ] ) to in the multiple firing model , using a scalable posterior probability ( see equation [ eq : postprobpmd ] ) , with the effect of activity leakage taken into account ( see equation [ eq : leakage ] ) .gathering all of these pieces together yields where . in order to ensure that the model is truly scalable, it is necessary to restrict the dimensionality of the reference vectors . in equation [ eq : objectivemodel ] , which is not acceptable in a scalable network . in practice, it will be assumed any properties of node that are vectors in input space will be limited to occupy an `` input window '' of restricted size that is centred on node .this restriction applies to the node reference vector , which prevents from being fully minimised , because is allowed to move only in a subspace of the full - dimensional input space .however , useful results can nevertheless be obtained , so this restriction is acceptable .optimisation is achieved by minimising with respect to its free parameters .thus the derivatives with respect to are given by and the variations with respect to are given by the functions , , , and are derived in appendix [ app : derivatives ] . inserting a sigmoidal function then yields the derivatives with respect to and as because all of the properties of node that are vectors in input space ( i.e. and ) are assumed to be restricted to an input window centred on node , the eventual result of evaluating the right hand sides of the above equations must be similarly restricted to the same input window .the expressions for and , and especially their derivatives , are fairly complicated , so an intuitive interpretation will now be presented .when is stationary with respect to variations of it may be written as ( see appendix [ app : d1d2refvect ] ) . and factors do not appear in this expression because is normalised to sum to unity . the first term ( which derives from )is an incoherent sum ( i.e. a sum of euclidean distances ) , whereas the second term ( which derives from ) is a coherent sum ( i.e. a sum of vectors ) .the first term contributes for all values of , whereas the second term contributes only for , and dominates for . in order to minimise the first term the like to be as large as possible for those nodes that have a large .since is the centroid of the probability density , this implies that node prefers to encode a region of input space that is as far as possible from the origin .this is a consequence of using a euclidean distortion measure , which has the dimensions of , in the original definition of the distortion in equation [ eq : objective ] . 
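to make the incoherent versus coherent distinction above concrete , the sketch below evaluates , for a toy posterior , a first term that averages squared euclidean distances over the posterior and a second term that measures the squared distance to the posterior - averaged reconstruction vector . the 1/n and (1 - 1/n) weights attached to the two terms are assumptions chosen only to be consistent with the limits quoted earlier ( the second term vanishing for a single firing event , the first term vanishing as the number of firing events grows ) ; the exact prefactors of equation [ eq : upperboundpieces ] are not reproduced in the text .

import numpy as np

rng = np.random.default_rng(1)

# toy posterior and reference vectors (same construction as in the previous sketch)
x = rng.normal(size=(5000, 2))
w, b = rng.normal(size=(8, 2)), rng.normal(size=8)
q = 1.0 / (1.0 + np.exp(-(x @ w.T + b)))
pr = q / q.sum(axis=1, keepdims=True)
ref = (pr.T @ x) / pr.sum(axis=0)[:, None]

def upper_bound_terms(x, pr, ref, n_fire):
    """two contributions to the upper bound on the distortion: the first
    averages squared euclidean distances (incoherent sum), the second measures
    the distance to the posterior-averaged reconstruction (coherent sum).
    the 1/n and (1 - 1/n) weights are assumptions, not quoted from the text."""
    sq_err = ((x[:, None, :] - ref[None, :, :]) ** 2).sum(axis=2)
    d1 = (2.0 / n_fire) * (pr * sq_err).sum(axis=1).mean()
    x_rec = pr @ ref                       # coherent (vector) average over nodes
    d2 = 2.0 * (1.0 - 1.0 / n_fire) * ((x - x_rec) ** 2).sum(axis=1).mean()
    return d1, d2

for n_fire in (1, 2, 10, 100):
    print(n_fire, upper_bound_terms(x, pr, ref, n_fire))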
in order to minimise the second term the superposition of weighted bythe likes to have as large a euclidean norm as possible .thus the nodes co - operate amongst themselves to ensure that the nodes that have a large also have a large .the purpose of this section is to work through a case study in order to demonstrate the various properties that emerge when is minimised .it convenient to begin by ignoring the effects of leakage , and to concentrate on a simple ( non - scaling ) version of the posterior probability model ( as in equation [ eq : postprob ] ) , where the are threshold functions of it is also convenient to imagine that a hypothetical infinite - sized training set is available , so it may be described by a probability density .this is a `` frequentist '' , rather than a `` bayesian '' , use of the notation , but the distinction is not important in the context of this paper .assume that is drawn from a training set , that has 2 statistically independent subspaces , so that furthermore , assume that and each have the form i.e. is a loop ( parameterised by a phase angle ) of probability density that sits in -space . in order to make it easy to deduce the optimum reference vectors ,choose so that the following 2 conditions are satisfied for this type of training set can be visualised topologically .each training vector consists of 2 subvectors , each of which is parameterised by a phase angle , and which therefore lives in a subspace that has the topology of a circle , which is denoted as . because of the independence assumption in equation [ eq : independentpdf ] , the pair lives on the surface of a 2-torus , which is denoted as .the minimisation of thus reduces to finding the optimum way of designing an encoder / decoder for input vectors that live on a 2-torus , with the proviso that their probability density is uniform ( this follows from equation [ eq : parametricpdf ] and equation [ eq : parametricpdfconstraint ] ) . in order to derive the reference vectors , the solution(s ) of the stationarity condition must be computed .the stationarity condition reduces to ( see appendix [ app : d1d2refvect ] ) it is useful to use the simple diagrammatic notation shown in figure [ fig : s1s1 ] .[ fig : s1s1 ] topology with a threshold superimposed.,title="fig:",width=377 ] each circle in figure [ fig : s1s1 ] represents one of the subspaces , so the two circles together represent the product .the constraints in equation [ eq : parametricpdfconstraint ] are represented by each circle being centred on the origin of its subspace ( is constant ) , and the probability density around each circle being constant ( is constant ) .a single threshold function is represented by a chord cutting through each circle ( with 0 and 1 indicating on which side of the chord the threshold is triggered ) .the that lie above threshold in each subspace are highlighted . both and must lie above threshold in order to ensure , i.e. they must both lie within regions that are highlighted in figure [ fig : s1s1 ] . in this casenode will be said to be `` attached '' to both subspace 1 and subspace 2 .a special case arises when the chord in one of the subspaces ( say it is )does not intersect the circle at all , and the circle lies on the side of the chord where the threshold is triggered . 
in this case does not depend on , so that , in which case node will be said to be `` attached '' to subspace 1 but `` detached '' from subspace 2 .the typical ways in which a node becomes attached to the 2-torus are shown in figure [ fig : torus ] .[ fig : torus ] topology as a torus with the effect of 3 different types of threshold shown.,title="fig:",width=377 ] in figure [ fig : torus](a ) the node is attached to one of the subspaces and detached from the other . in figure[ fig : torus](b ) the attached and detached subspaces are interchanged with respect to figure [ fig : torus](a ) . in figure[ fig : torus](c ) the node is attached to both subspaces .consider the configuration of threshold functions shown in figure [ fig : attachone ] .this is equivalent to all of the nodes being attached to loops to cover the 2-torus , with a typical node being as shown in figure [ fig : torus](a ) ( or , equivalently , figure [ fig : torus](b ) ) .[ fig : attachone ] when is minimised , it is assumed that the 4 nodes are symmetrically disposed in subspace 1 , as shown .each is triggered if and only if lies within its quadrant , and one such quadrant is highlighted in figure [ fig : attachone ] .this implies that only 1 node is triggered at a time .the assumed form of the threshold functions implies , so equation [ eq : stationaryrefvect ] reduces to whence consider the configuration of threshold functions shown in figure [ fig : attachboth ] .this is equivalent to all of the nodes being attached to patches to cover the 2-torus , with a typical node being as shown in figure [ fig : torus](c ) .[ fig : attachboth ] in this case , when is minimised , it is assumed that each subspace is split into 2 halves .this requires a total of 4 nodes , each of which is triggered if , and only if , both and lie on the corresponding half - circles .this implies that only 1 node is triggered at a time .the assumed form of the threshold functions implies that the stationarity condition becomes whence consider the configuration of threshold functions shown in figure [ fig : attacheither ] .this is equivalent to half of the nodes being attached to loops to cover the 2-torus , with a typical node being as shown in figure [ fig : torus](a ) .the other half of the nodes would then be attached in an analogous way , but as shown in figure [ fig : torus](b ) .thus the 2-torus is covered twice over .[ fig : attacheither ] in this case , when is minimised , it is assumed that each subspace is split into 2 halves .this requires a total of 4 nodes , each of which is triggered if ( or ) lies on the half - circle in the subspace to which the node is attached .thus exactly 2 nodes and are triggered at a time , so that for simplicity , assume that node is attached to subspace 1 , then and the stationarity condition becomes this may be simplified to yield write the 2 subspaces separately ( remember that node is assumed to be attached to subspace 1 ) if this result is simultaneously solved with the analogous result for node attached to subspace 2 , then the terms vanish to yield consider the left hand side of figure [ fig : attachone ] for the case of nodes , when the threshold functions form a regular -ogon . 
then denotes the part of the circle that is associated with node , whose radius of gyration squared is given by ( assuming that the circle has unit radius ) gather the results for in equations [ eq : refvectone ] ( referred to as type 1 ) , [ eq : refvectboth ] ( referred to as type 2 ) , and [ eq : refvecteither ] ( referred to as type 3 ) together and insert them into in equation [ eq : d1d2refvect ] to obtain ( see appendix [ app : d1d2compare ] ) in figure [ fig : plotn1 ] the 3 solutions are plotted for the case .[ fig : plotn1 ] for for each of the 3 types of optimum.,title="fig:",width=377 ] for the type 3 solution is never optimal , the type 1 solution is optimal for , and the type 2 solution is optimal for this behaviour is intuitively sensible , because a larger number of nodes is required to cover a 2-torus as shown in figure [ fig : torus](c ) than as shown in figure [ fig : torus](a ) ( or figure [ fig : torus](b ) ) . in figure [fig : plotn2 ] the 3 solutions are plotted for the case . [fig : plotn2 ] for for each of the 3 types of optimum.,title="fig:",width=377 ] for the type 1 solution is optimal for , and the type 2 solution is optimal for large , but there is now an intermediate region ( type 1 and type 3 have an equal at ) where the -dependence of the type 3 solution has now made it optimal .again , this behaviour is intuitively reasonable , because the type 3 solution requires at least 2 observations in order to be able to yield a small euclidean resonstruction error in each of the 2 subspaces , i.e. for the 2 nodes that fire must be attached to different subspaces .note that in the type 3 solution the nodes that fire are not guaranteed to be attached to different subspaces . in the type 3 solutionthere is a probability that ( where ) nodes are attached to subspace , so the trend is for the type 3 solution to become more favoured as is increased . in figure[ fig : plotnasymptotic ] the 3 solutions are plotted for the case .[ fig : plotnasymptotic ] for for each of the 3 types of optimum.,title="fig:",width=377 ] for the type 2 solution is never optimal , the type 1 solution is optimal for , and the type 3 solution is optimal for .the type 2 solution approaches the type 3 solution from below asymptotically as . in figure[ fig : phasediagram ] a phase diagram is given which shows how the relative stability of the 3 types of solution for different and , where the type 3 solution is seen to be optimal over a large part of the plane .[ fig : phasediagram ] thus the most interesting , and commonly occurring , solution is the one in which half the nodes are attached to one subspace and half to the other subspace ( i.e. solution type 3 ) .although this result has been derived using the non - scaling version of the posterior probability model ( as in equation [ eq : postprob ] ) , it may also be used for scaling posterior probabilities ( as given in equation [ eq : postprobpmd ] ) in certain limiting cases , and also for cases where the effect of leakage is small .the effect of leakage will not be analysed in detail here . 
however , its effect may readily be discussed phenomenologically , because the optimisation acts to minimise the damaging effect of leakage on the posterior probability by ensuring that the properties of nodes that are connected by leakage are similar .this has the most dramatic effect on the type 3 solution , where the way in which the nodes are partitioned into 2 halves must be very carefully chosen in order to minimise the damage due to leakage .if the leakage is presumed to be a local function , so that , which is a localised `` blob''-shaped function , then the properties of adjacent node are similar ( after optimisation ) . since nodes that are attached to 2 different subspaces necessarily have very different properties , whereas nodes that are attached to the same subspace can have similar properties , it follows that the nodes must split into 2 continguous halves , where nodes are attached to subspace 1 and nodes are attached to subspace 2 , or vice versa . the effect of leakage is thereby minimised , with the worst effect occurring at the boundary between the 2 halves of nodes .the above analysis has focussed on the non - scaling version of the posterior probability , in which all nodes act together as a unit .the more general scaling case where the nodes are split up by the effect of the neighbourhood function will not be analysed in detail , because many of its properties are essentially the same as in the non - scaling case . for simplicityassume that the neighbourhood function is a `` top - hat '' with width ( an odd integer ) centred on .impose periodic boundary conditions so that the inverse neighbourhood function is also a top - hat , . in this casean optimum solution in the non - scaling case ( with ) can be directly related to a corresponding optimum solution in the scaling case by simply repeating the node properties periodically every nodes . strictly speaking, higher order periodicities can also occur in the scaling case ( and can be favoured under certain conditions ) , where the period is ( is an integer ) , but these will not be discussed here .the effect of the periodic replication of node properties is interesting . the type 3 solution ( with leakage and with splits the nodes into 2 halves , where nodes are attached to subspace 1 and nodes are attached to subspace 2 , or vice versa .when this is replicated periodically every nodes it produces an alternating structure of node properties , where nodes are attached to subspace 1 , then the next nodes are attached to subspace 2 , and thenthe next nodes are attached to subspace 1 , and so on .this behaviour is reminiscent of the so - called `` dominance stripes '' that are observed in the mammalian visual cortex .the purpose of this section is to demonstrate the emergence of the dominance stripes in numerical simulations .the main body of the software is concerned with evaluating the derivatives of , and the main difficulty is choosing an appropriate form for the leakage ( this has not yet been automated ) .the parameters that are required for a simulation are as follows : 1 . : size of 2d rectangular array of nodes . : size of 2d rectangular input window for each node ( odd integers ) .ensure that the input window is not too many input data `` correlation areas '' in size , otherwise dominance stripes may not emerge .dominance stripes require that the correlation _ within _ an input window are substantially stronger than the correlations _ between _ input windows that are attached to different subspaces . 
: size of 2d rectangular neighbourhood window for each node ( odd integers ) .the neighbourhood function is a rectangular top - hat centred on .the size of the neighbourhood window has to lie within a limited range to ensure that dominance stripes are produced .this corresponds to ensuring that lies in the type 3 region of the phase diagram in figure [ fig : phasediagram ] .it is also preferable for the size of the neighbourhood window to be substantially smaller than the input window , otherwise different parts of a neighbourhood window will see different parts of the input data , which will make the network behaviour more difficult to interpret .4 . : size of 2d rectangular leakage window for each node ( odd integers ) . for simplicity the leakage assumed to be given by , where is a `` top - hat '' function of which covers a rectangular region of size centred on **. * * the size of the leakage window must be large enough to correlate the parameters of adjacent nodes , but not so large that it enforces such strong correlations between the node parameters that it destroys dominance stripes . 5 . : additive noise level used to corrupt each member of the training set : wavenumber of sinusoids used in the training set . in describingthe training sets the index will be used to denote position in input space , thus position in input space lies directly `` under '' node of the network . in 1d simulationseach training vector is a sinusoid of the form , where is a random phase angle , and is a random number sampled uniformly from the interval ] .there are 3 parameter types to initialise .the weights were all initialised to random numbers sampled from a uniform distribution in the interval $ ] , whereas the biasses and the reference vector components were all initialised to 0 .because the 2d simulations took a very long time to run , they were periodically interrupted and the state of all the variables written to an output file .the simulation could then be continued by reading this output file in again and simply continuing where the simulation left off .alternatively , some of the variables might have their values changed before continuing .in particular , the random number generator could thus be manipulated to simulate the effect of a finite sized training set ( i.e. use the _ same _ random number seed at the start of each part of the simulation ) , or an infinite - sized training set ( i.e. use a _ different _ random number seed at the start of each part of the simulation ) .the size of the parameter could also thus be manipulated should a large value be required initially , and reduced to a small value later on , as required in order to guarantee that when the input subspaces are seen to be statistically independent , and dominance stripes may emerge .there are many ways to choose the boundary conditions . in the numerical simulationsperiodic boundary conditions will be avoided , because they can lead to artefacts in which the node parameters become topologically trapped .for instance , in a 2d simulation , periodic boundary conditions imply that the nodes sit on a 2-torus .leakage implies that the node parameter values are similar for adjacent nodes , which limits the freedom for the parameters to adjust their values on the surface of the 2-torus .for instance , any acceptable set of parameters that sits on the 2-torus can be converted into another acceptable set by mapping the 2-torus to itself , so that each of its `` coils up '' an integer number of times onto itself . 
such a multiply wrapped parameter configurationis topologically trapped , and can not be perturbed to its original form .this problem does not arise with non - periodic boundary conditions .there are several different problems that arise at the boundaries of the array of nodes : 1 .the neighbourhood function can not be assumed to be a rectangular top - hat centred on .instead , it will simply be truncated so that it does not fall off the edge of array of nodes , i.e. for those that lie outside the array .2 . the leakage function will be similarly truncated . however , in this case must normalise to unity when summed over , so the effect of the truncation must be compensated by scaling the remaining elements of .the input window for each node implies that the input array must be larger than the node array in order that the input windows never fall off the edge of the input array .the most important result is the emergence of dominance stripes . for are thus 2 numbers that need to be displayed for each node : the `` degree of attachment '' to subspace 1 , and similarly for subspace 2 .there are many ways to measure degree of attachment , for instance the probability density gives a direct measurement of how strongly node depends on the input vector , so its `` width '' or `` volume '' in each of the subspaces could be used to measure degree of attachment . however , in the simulations presented here ( i.e. sinusoidal training vectors ) the degree of attachment is measured as the average of the absolute values of the components of the reference vector in the subspace concerned .this measure tends to zero for complete detachment . for 1d simulations2 dominance plots can be overlaid to show the dominance of subspaces 1 and 2 for each node . for 2d simulations it is simplest to present only 1 of these plots as a 2d array of grey - scale pixels , where the grey level indicates the dominance of subspace 1 ( or , alternatively , subspace 2 ) .the parameter values used were : , , , , , , , , .this value of implies , so is approximately an integer multiple of , as required for an artefact - free simulation . in figure[ fig : dominance1d ] a plot of the 2 dominance curves obtained after 3200 training updates is shown .[ fig : dominance1d ] this dominance plot clearly shows alternating regions where subspace 1 dominates and subspace 2 dominates .the width of the neighbourhood function is 21 , which is the same the period of the variations in the dominance plots , i.e. 
within each set of adjacent 21 nodes half the nodes are attached to subspace 1 and half to subspace 2 .there are boundary effects , but these are unimportant .the normalisation of the expression for in equation [ eq : postprobpmd ] may be demonstrated as follows : in the first step the order of the and the summations is interchanged using , in the second step the numerator and denominator of the summand cancel out .it is possible to simplify equation [ eq : objectivemultifire ] by using the following identity note that this holds for all choices of .this allows the euclidean distance to be expanded thus each term of this expansion can be inserted into equation [ eq : objectivemultifire ] to yield has been assumed to be a symmetric function of in the first two results , and the definition of in equation [ eq : refvectmultifire ] has been used to obtain the third result .these results allow in equation [ eq : objectivemultifire ] to be expanded as , where by noting that , an upper bound for in the form follows immediately from these results .note that whereas can have either sign . in the special case where ( i.e. and are independent of each other given that is known ) reduces to which is manifestly positive .this is the form of that is used throughout this paper . and are as given in equation [ eq : upperboundpieces ] , i.e. it is assumed that and has the scalable form given in equation [ eq : postprobpmd ] . define a compact matrix notation as follows using this matrix notation , the functions , , , and may be defined as the variation of is then given by in order to rearrange the expressions to ensure that only a single dummy index is required at every stage of evaluation of the sums it will be necessary to use the result the derivative is given by use matrix notation to write this as finally remove the explicit summations to obtain the required result the derivative is given by use matrix notation to write this as finally remove the explicit summations to obtain the required result the differential is given by use matrix notation to write this as reorder the summations to obtain relabel the indices and evaluate the sum over the kronecker delta to obtain finally remove the explicit summations to obtain the required result the differential is given by use matrix notation to write this as reorder the summations to obtain relabel the indices and evaluate the sum over the kronecker delta to obtain finally remove the explicit summations to obtain the required result equation [ eq : upperboundpieces ] can be written as where the constant terms do not depend on . however , from equation [ eq : upperboundpieces ] the derivative can be written as using bayes theorem the stationarity condition yields a matrix equation for the which may then be used to replace all instances of in equation [ eq : upperboundrefvectpieces ] .this yields the result order to compare the value of that is obtained when different types of supposedly optimum configurations of the threshold functions are tried , the that solves ( see appendix [ app : d1d2refvect ] ) must be inserted into the expression for . in the following derivationsthe constant term is omitted , and the definition ( see equation [ eq : radiussquared ] ) has been used . 
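Returning briefly to the numerical simulation of the previous section, the training data and the "degree of attachment" measure can be sketched as follows. The exact sinusoid expression and array layout were lost in extraction, so the choices below (random phase, amplitude uniform on [0, 1], additive noise of level eps, and the first half of each reference vector attached to subspace 1) are plausible stand-ins rather than the paper's own code.

import numpy as np

rng = np.random.default_rng(1)

def training_vector(m, k, eps):
    # one training vector: two statistically independent sinusoids of
    # wavenumber k (independent random phases, amplitudes uniform on [0, 1]),
    # concatenated and corrupted by additive noise of level eps
    i = np.arange(m)
    parts = [rng.uniform() * np.sin(2 * np.pi * k * i / m + rng.uniform(0, 2 * np.pi))
             for _ in range(2)]
    return np.concatenate(parts) + eps * rng.standard_normal(2 * m)

def dominance(ref_vectors):
    # degree of attachment of each node to the two subspaces, measured as the
    # mean absolute reference-vector component in each half of the input window
    half = ref_vectors.shape[1] // 2
    return (np.abs(ref_vectors[:, :half]).mean(axis=1),
            np.abs(ref_vectors[:, half:]).mean(axis=1))

x = training_vector(m=32, k=2, eps=0.05)           # one 64-dimensional input
d1, d2 = dominance(rng.standard_normal((40, 64)))  # overlay d1, d2 for a 1d dominance plot
print(x.shape, d1.shape)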
type 3 optimum : half the nodes are attached to one subspace and half are attached to the other

 | this paper shows how a folded markov chain network can be applied to the problem of processing data from multiple sensors , with an emphasis on the special case of 2 sensors . it is necessary to design the network so that it can transform a high dimensional input vector into a posterior probability , for which purpose the partitioned mixture distribution network is ideally suited . the underlying theory is presented in detail , and a simple numerical simulation is given that shows the emergence of ocular dominance stripes . |
gallager s capacity theorem ( * ? ? ? * theorem 8.5.1 ) is literally reproduced on p. 1 of the poster along with the figure ( * ? ? ?* figure 8.5.1 ) plus some explanations taken from gallager s book . on p. 2 , the so - called gallager channel ( now called gaussian waveform channel ) , i.e. , a gaussian filter with additive white gaussian noise ( awgn ) , is treated as a special instance of gallager s theorem .on p. 3 , the heat channel , a linear time - varying ( ltv ) filter with awgn , is depicted followed by a characterization of its capacity by water - filling in the time - frequency plane ( * ? ? ?* theorem 2 ) . finally , on p. 4 , an example of a signal transmission is given aiming at underpinning the apparent contradiction to gallager s theorem in figs . 3 and 4 of the poster .the curves for the heat channel in the latter two figures are incorrect ; the correct curves are provided in the present publication. actually tends to infinity ( see the proof of the theorem in ) ]evaluation of the double integrals occurring in the water - filling theorem ( * ? ? ? * theorem 2 ) ( cf .p. 3 of ) and subsequent elimination of the parameter results in the closed - form representation of the capacity ( in nats per transmission ) of the heat channel , namely ^ 2+o(\alpha\beta)\,(\alpha\beta\rightarrow\infty ) , \label{eq1}\ ] ] where is the inverse function of , and is the standard landau little - o symbol .( [ eq1 ] ) coincides with the capacity formula ( * ? ? ?* eq . ( 9 ) ) ( as it should ) .by reason of the gaussian time window ( see fig .[ figure_1 ] ) at the input and output side of the ltv filter ( see ( * ? ? ?* fig . 5 ) and ( * ? ? ?( 4 ) ) , resp . ) , capacity achieving input / output signals for the heat channel will have the approximate duration ( at least when the input energy is sufficiently high [ but not too high ! ] ) . therefore ,transition to capacity as a rate should be done as follows . put for the average input energy , where is average power .then , form the time average . because of eq .( [ eq1 ] ) it asssumes the value ^ 2\log_2{\mathrm{e}}\;\;\mbox{(bit / s ) } , \label{eq2}\ ] ] where , is the approximate bandwidth in positive frequencies measured in hertz ( cf .2 ) ) , and is the one - sided noise power spectral density of the awgn .similarly , for as a function of we obtain the parametric representation ^ 2 \log_2{\mathrm{e}}}\label{eq3a}\\ \frac{\bar{c}(\mathrm{snr})}{w}&=&\frac{1}{2\pi}[w_0(4\pi\mathrm{snr})]^2 \log_2{\mathrm{e}}\;\;(\mbox{bit / s / hz})\label{eq3b}.\end{aligned}\ ] ] in fig .[ figure_2 ] , is plotted against snr ; here , eq .( [ eq2 ] ) is used . in fig .[ figure_3 ] , the curve for as a function of is given parametrically by eqs .( [ eq3a ] ) , ( [ eq3b ] ) . in both figures , in case of the heat channel the label stands for .figs . 3 and 4 of the poster should be replaced by figs . 2 and 3 , resp . , of the present publication .moreover , the caption in ( * ? ? ?7 ) is to be replaced by example : , , bits per transmission ( exact value ) ." the exact value , by the way , has been computed numerically. 1 r. g. gallager , _ information theory and reliable communication_. new york , ny : wiley , 1968 .e. hammerich , see version 1 of this publication ( arxiv:1207.4707v1 ) .e. hammerich , `` on the heat channel and its capacity , '' _ proc .information theory _ ,seoul , korea , 2009 , pp .e. hammerich , `` on the heat channel and its capacity , '' available online at arxiv:1101.0287v3 . 
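Eq. ([eq3b]) gives the spectral efficiency of the heat channel in closed form through the Lambert W function W_0; a short script evaluating it with scipy is given below. Only eq. ([eq3b]) is taken from the text; the AWGN comparison curve log2(1+SNR) is added merely for orientation.

import numpy as np
from scipy.special import lambertw

def heat_channel_bits_per_s_per_hz(snr):
    # eq. (3b): C/W = (1/(2*pi)) * [W_0(4*pi*snr)]^2 * log2(e)
    w0 = lambertw(4.0 * np.pi * snr, k=0).real
    return (w0 ** 2) / (2.0 * np.pi) * np.log2(np.e)

for snr_db in (0.0, 10.0, 20.0, 30.0):
    snr = 10.0 ** (snr_db / 10.0)
    print(snr_db, heat_channel_bits_per_s_per_hz(snr), np.log2(1.0 + snr))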
| we correct an alleged contradiction to gallager s capacity theorem for waveform channels as presented in a poster at the 2012 ieee international symposium on information theory . |
when the skin is pinched , wrinkles appear quite early on its surface .the same phenomenon occurs when the skin is sheared , i.e. pinched with one finger moving in one direction and the other fixed or moving in the opposite direction . in fact , pinching is one of the tests performed by dermatologists and surgeons when trying to assess the direction of greatest tension in the neighborhood of a site of interest .sometimes called _ lines of cleavage _ , the orientations of the lines of greatest tension are crucial to the way a scar heals . for a cut across the lines ,the lips of a wound will be pulled away from one another during the healing process , while they will be drawn together if the cut has occurred parallel to the lines . in one casethe resulting scar can be quite unsightly , in the other it is almost invisible . in this paperwe investigate the mechanical stability of two toy models for the human skin under shear in its plane , and view the onset of small - amplitude , unstable solutions as a prototype for skin wrinkling . of course , skin is a complex , multi - faced organ , and it is not easily , nor perhaps realistically , modeled . in section [ section2 ] , we first view it as an initially isotropic , neo - hookean layer of finite thickness .although a two - layered model of skin would be more realistic , it would greatly complicate the theoretical analysis of the shear instability properties .therefore , we prefer to consider an epidermis of vanishing thickness on top of a hyperelastic dermis , by defining a proper surface elastic energy . in other wordswe let one of the layer s faces be a material curve endowed with intrinsic elastic properties associated with extensibility , but no bending stiffness ( see for a rigorous exposition of such elastic coatings ) .we account for the lines of greatest tension by imposing a finite plane pre - stretch in a given direction ; in other words , we simulate those lines through _ strain - induced anisotropy_. then we investigate whether surface energy and pre - stretch promote or attenuate the appearance of wrinkles when the layer is subject to simple shear in the direction of the cleavage lines . in section [ shear instability of a fibre - reinforced skin tissue ] , we then view skin as being intrinsically _ anisotropic _ that is , we switch to the point of view that lines of greatest tension are due to the presence of families of parallel bundles of stiff collagen fibres imbedded in a softer elastin matrix . the introduction of even the simplest anisotropy transverse isotropy due to a single privileged direction complicates the equations of incremental instability greatly , and we thus restrict attention to a homogeneous solid without surface tension .we also omit finite - size effects by considering a half - space instead of a layer , and thus by focusing on the biot surface instability phenomenon . here , the anisotropic contribution to the stored energy is that recently proposed by ciarletta et al. .its polyconvexity ensures good properties from the physical point of view , such as strong ellipticity in compression , in contrast to the standard reinforcing model used recently by destrade et al. for the same stability study .first we consider an isotropic elastic material of finite thickness , undergoing an homogeneous shear . 
in order to mimic the response of human skin, we incorporate the presence of a residual stretch along the main cleavage lines , so that the base deformation field reads where is the current position of a material point which was at in the reference configuration , and is a constant .hence , we see that the deformation can be decomposed as a plane stretch of amount followed by a simple shear of amount , with deformation gradient written as we note that the layer s thickness remains unchanged through the deformation . for simplicity , we take the layer to be made of an isotropic neo - hookean incompressible material with a surface energy at the free boundary , so that its total strain energy reads where is the shear modulus , is the left cauchy - green deformation tensor , is the surface tension coefficient , is the elastic shear modulus per surface unit , and the comma denotes partial derivative .this is akin to endowing one of the material boundary of the layer with a surface energy with a term proportional to changes in area ( as is often done in fluid mechanics , see e.g. ) , and another contribution depending on the elastic deformation of the surface .this last term is proportional to a stretch measure of the surface deformation tensor , which is chosen for invariance requirements because we consider this elastic layer as a hemitropic film .now from the constitutive assumptions in eq.([const ] ) , we find that , the cauchy stress tensor corresponding to the large deformation in eq . is given by where is a lagrange multiplier due to the constraint of incompressibility .writing that the boundary at is traction - free fixes the value of as .straightforward calculations reveal that principal stretches of the deformation field in eq.([def ] ) are ( ) given by and that the eulerian principal axes are obtained after an anti - clockwise rotation of angle of the in - plane coordinate axes about the axis , where from eq ., we note that we now look for a perturbation solution in the neighbourhood of the large deformation , using the theory of incremental deformations . hence we call the incremental displacement field , for which the incremental incompressibility condition imposes that the constitutive equation for the components of the incremental nominal stress reads in general as , where is the increment in the lagrange multiplier , and is the fourth - order tensor of instantaneous moduli , i.e. the push - forward of the fixed reference elasticity tensor . in the absence of body forces, we can therefore write the equilibrium equation of the incremental nominal stress as in the case of a neo - hookean material , it is easy to check that the components of are simply so that eq.([divs ] ) in the coordinate system aligned with the eulerian principal axes takes the following simplified form differentiating these incremental equilibrium equations with respect to , , and , respectively , and using the incremental incompressibility condition in eq.([inc1 ] ) , we find that that is , the incremental lagrange multiplier is a laplacian field . 
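The component form of the base deformation gradient in eq. ([def]) is lost in this extraction; the sympy sketch below assumes the composition stated in words above, namely an in-plane pre-stretch diag(lambda, 1/lambda, 1) that leaves the thickness unchanged, followed by a simple shear of amount k along x1. It then reproduces the left Cauchy-Green tensor, the squared principal stretches, and the neo-Hookean Cauchy stress with the Lagrange multiplier fixed by the traction-free face.

import sympy as sp

lam, k, mu = sp.symbols('lambda k mu', positive=True)

Lam = sp.diag(lam, 1 / lam, 1)                     # plane pre-stretch, thickness unchanged
S = sp.Matrix([[1, k, 0], [0, 1, 0], [0, 0, 1]])   # simple shear of amount k along x1
F = S * Lam                                        # assumed order: stretch first, then shear
assert sp.simplify(F.det()) == 1                   # incompressibility

B = F * F.T                                        # left Cauchy-Green tensor
p = mu * B[2, 2]                                   # traction-free face: sigma_33 = 0
sigma = mu * B - p * sp.eye(3)                     # neo-Hookean Cauchy stress

stretches_sq = list(B.eigenvals().keys())          # squared principal stretches
print(sp.simplify(sigma))
print([sp.simplify(s) for s in stretches_sq])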
now , for the incremental _ boundary conditions _ , we consider that the bottom of the layer is fixed ( clamped condition ) : while the top face remains free of incremental traction : we search for solutions to eqs.([inc1],[pi],[plapl ] ) in the following form : corresponding to the occurrence of plane wrinkles with wavenumber , forming an angle with the direction of maximum extension .it is easy to show that a solution in the form of eq.([sep ] ) is given by where are yet arbitrary constants , and is fixed by imposing eqs.([pi ] ) , as using eqs ., it can be checked that only four independent boundary conditions result from eqs .. setting ^t ] , defining the structural tensor so that represents the fibre stretch , where is the right cauchy - green deformation tensor . in order to build a strain measure for the fibres ,we introduce the structural invariant , defined as follows : :{\bf \widehat m}=(\lambda_\alpha-\lambda_\alpha^{-1})^2\ ] ] as discussed in , this choice provides a physically consistent deformation measure when _ and _ when , thereby allowing to account both for compression and extension of the fibres . accordingly , the strain energy density of the skin tissue is defined as : where is the anisotropic elastic modulus for the fibre reinforcement . the constitutive relation eq.([enaniso ] ) ensures strong - ellipticity of the tissue in planar deformations , a characteristic which is not met for example for the so - called standard model of fibre reinforcement chosen by .it is a simple exercise to show that for a small tensile strain along the direction of the fibres , we have , where for the tensile stretch and for the lateral stretches ; then , the resulting infinitesimal stress is , showing that ( at least in the linear regime ) the ratio is a measure of the stiffness of the fibres compared to the stiffness of the matrix .we set in this section for the sake of simplicity , so that the half - space is subject to _ simple shear _ only , with deformation gradient and principal stretches respectively .it is then easy to show that , so that the total strain energy does not depend on fibre orientation , only on the amount of shear .the corresponding cauchy stress tensor does depend on fibre orientation , as follows let us look for a perturbed surface wave in the form of eq.([sep ] ) ; to do so we need the components of the instantaneous moduli tensor in eq.([s1 ] ) .for the anisotropic strain energy density defined in eq.([enaniso ] ) and given in eq.([fsimple ] ) we find the components in the coordinate system aligned with the directions of simple shear , , ( see appendix for explicit expressions ) .then we take the incremental quantities to be of the form where the amplitudes are functions of only . by a well established procedure we can use eqs.([inc1]-[divs ] ) to eliminate andwrite the incremental equations as a first - order differential system known as the _ stroh formulation_ , ^{\mathop{t } } , \label{stroh}\ ] ] where the prime denotes differentiation with respect to the function argument . hereit turns out that the blocks , and are symmetric ( explicit expressions are given in the appendix ) .all is in place now for a complete resolution of the surface instability problem .there exist many strategies for this resolution , see a partial list and references in destrade et al. . 
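The structural invariant above prints only its value (lambda_alpha - lambda_alpha^{-1})^2; one definition consistent with that value and with the statement that the energy does not depend on fibre orientation under simple shear is K = (C + C^{-1} - 2I) : (m tensor m). This reconstructed form, not a quotation of the paper's formula, is what the numerical check below assumes.

import numpy as np

def structural_invariant(F, theta):
    # assumed form of the fibre invariant: (C + C^{-1} - 2I) : (m tensor m)
    m = np.array([np.cos(theta), np.sin(theta), 0.0])
    C = F.T @ F
    return m @ (C + np.linalg.inv(C) - 2.0 * np.eye(3)) @ m

k = 0.7
F_shear = np.array([[1.0, k, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
# independent of the fibre angle in simple shear, and equal to k^2
print([round(structural_invariant(F_shear, t), 6) for t in (0.0, 0.4, 1.0, 1.5)])

la = 1.3   # uniaxial stretch along the fibre (fibre at theta = 0)
F_fib = np.diag([la, 1.0 / np.sqrt(la), 1.0 / np.sqrt(la)])
print(structural_invariant(F_fib, 0.0), (la - 1.0 / la) ** 2)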
herewe adopted a straightforward approach , because it turned to be tractable numerically .noticing that the stroh matrix is constant , a solution of the system eq.([stroh ] ) has the form ^t , \label{sol1}\ ] ] where is a constant vector and are the eigenvalues of . in order for the wrinkles amplitude to decay with depth, we retain the three roots , , with positive imaginary part ( i.e. ) .this gives the following general solution , where , , are constants , and , are square ( ) matrices built from the eigenvectors , taken proportional to any column vector of the matrix adjoint to .now , the traction - free boundary condition at can be written as : where is the surface impedance matrix .the condition for the onset of a surface instability is thus as we do not know _ a priori _ in which directions the wrinkles are to appear for a given angle of the fibres with respect to the direction of shear , we need to span the entire plane and find the angle for which the corresponding amount of shear is minimal , indicating the earliest onset of wrinkling .this is the main difference of instability behaviour between an isotropic material ( such as the material in the previous section ) , where the wrinkles appear _ aligned _ with a principal direction of pre - deformation , and an anisotropic material , where the wrinkles may be _ oblique _ with respect to the direction of least stretch . for our simulations ,we chose material constants such that ( matrix alone ) , ( matrix stiffer than fibres ) , ( matrix as stiff as fibres ) , and ( matrix softer than fibres ) . for each choice of we found and as functions of . varying the angle can be interpreted as either varying orientation of the fibres for a shear that occurs along a fixed axis , such as the -axis in eq .( [ simple ] ) , or as varying the axis along which shear is taking place for a fixed fibre direction .for illustrative purposes , a typical surface buckling solution is depicted in figure [ fig : buckling3d ] , where we chose ( fibres are twice stiffer than matrix in linear regime ) and ( fibres are originally almost at right angle to the direction of shear ) : there , according to figure [ fig : criticalalpha ] , we have and .a ) as a function of the reference fibre angle : the presence of fibres clearly leads to earlier surface instability in shear .( b ) the critical instability angle as a function of the reference fibre angle : these results are harder to interpret because is defined in the current configuration and in the reference configuration .a remapping of the variations of and with the current fibre angle is shown in figure [ fig : criticalalpha*].,title="fig:",height=207 ] b ) as a function of the reference fibre angle : the presence of fibres clearly leads to earlier surface instability in shear .( b ) the critical instability angle as a function of the reference fibre angle : these results are harder to interpret because is defined in the current configuration and in the reference configuration .a remapping of the variations of and with the current fibre angle is shown in figure [ fig : criticalalpha*].,title="fig:",height=207 ] ( fibres are twice stiffer than matrix in linear regime ) and ( fibres are almost at right angle to the direction of shear ) , the first wrinkles appear when the amount of shear reaches , and the corresponding angle of the wrinkles with respect to the direction of shear is .note the decay of the wrinkles amplitude with depth.,height=283 ] \a ) .along the wavefront the material is alternatively elongated 
and compressed , orthogonal to it the material is neither elongated nor compressed .b ) the fibres are shown by bold black lines.,title="fig:",height=132 ] b ) .along the wavefront the material is alternatively elongated and compressed , orthogonal to it the material is neither elongated nor compressed .b ) the fibres are shown by bold black lines.,title="fig:",height=136 ] to set the stage for analyzing the results in figure [ fig : criticalalpha ] , we note that the first wrinkles to appear will occupy the least energy configuration possible while satisfying the zero traction boundary condition .when the half - space is sheared , line elements are compressed in certain directions and elongated in others .the effect of superposing a small - amplitude wrinkle is to alternatively elongate and compress the material along the direction of the wrinkle front , i.e. in the direction given by eq .( [ sji ] ) , see figure [ fig : wrinklefront](a ) .this alternating behaviour , along with the zero traction boundary condition , makes it difficult to informally comprehend the influence of the wrinkle orientation , however we do notice a pattern . in the neo - hookean isotropic case ( ) ,the wavefront is along the direction of greatest compression : hence here , wrinkling the material in a direction under compression , due to the shear , allows the material to release some potential energy . in the anisotropic case ( ) ,the presence of fibres makes the wavefront of the first wrinkle tend towards being orthogonal to the fibres .for instance , figure [ fig : wrinklefront](b ) depicts the fibre orientation for the solution in the previous figures . clearlythe _ current _ direction of fibres ( in the deformed state of finite simple shear ) , is closely linked to the value to the wavefront orientation . to study this relationship we re - examine the data in figure [ fig : criticalalpha ] by mapping to the fibre orientation in the deformed body , that is the angle between the spatial vector of the current fibre orientation and the -axis , from which is also measured .the results of this remapping are shown in figure [ fig : criticalalpha * ] .we first turn our attention to the plots of against , see figure [ fig : criticalalpha*](a ) . on the dashed lines , the fibres are neither compressed or stretched . the ( almost straight ) continuous black lines and indicate when the fibres are aligned with the directions of greatest stretch and greatest compression , respectively .they are given by the equations respectively , where the are given in eq . and evaluated at .the curve helps us elucidate why there exists a point ( denoted ) where all anisotropic materials become unstable in shear at the same threshold shear as in an isotropic neo - hookean material ( where ) : clearly , this phenomenon occurs when the shear is such that the fibres are aligned with the direction of greatest stretch .then , it turns out that great simplifications occur in the stroh formulation of the instability problem , and that the buckling criterion coincides with that of the neo - hookean model , see proof in the appendix .this is an artifact of our specific choice of strain energy density in eq .. 
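The remapping from the reference fibre angle to the current fibre angle used in figure [fig:criticalalpha*] amounts to pushing the unit fibre vector forward with the simple-shear deformation gradient; a two-line sketch, with the shear assumed to act along the x1-axis as before:

import numpy as np

def current_fibre_angle(theta_ref, k):
    # push the unit fibre vector (cos, sin, 0) forward with simple shear of amount k
    mx, my = np.cos(theta_ref) + k * np.sin(theta_ref), np.sin(theta_ref)
    return np.arctan2(my, mx)

print(np.degrees(current_fibre_angle(np.radians(80.0), 1.0)))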
in figure [ fig : criticalalpha*](b ) , displaying the plots of against , we drew the line .clearly , in a region close to , the wavefront is almost aligned with the fibres , as is the case in an isotropic neo - hookean material .as increases , the neighborhood of this alignment widens , indicating that the stiffer the fibres are , the closer the instability curves in figure [ fig : criticalalpha*](b ) will be to the line and the less the wrinkles will alter the extended fibres .the overall general conclusion is that stiffer fibres lead to earlier onset of instability ( notwithstanding the punctual fixing of all curves at point , due to the very special case where fibres end up being aligned with the direction of greatest stretch in the deformed configuration . )this result is in agreement with the casual observation that old skin ( presumably with stiffer collagen bundles ) wrinkles earlier than young skin when pinched .a ) as a function of the current angle the fibres with respect to the direction of shear .( b ) the critical instability angle as a function of .the point indicates a surface instability state common to all materials ( independent of the material parameters).,title="fig : " ] b ) as a function of the current angle the fibres with respect to the direction of shear .( b ) the critical instability angle as a function of .the point indicates a surface instability state common to all materials ( independent of the material parameters).,title="fig : " ]in this work we have investigated the occurrence of shear instability in skin tissue within the framework of nonlinear elastic theories . in section [ section2 ] , we have considered the skin tissue as a neo - hookean layer of finite thickness , whilst the epidermis is modeled _ as a hemitropic film with given surface energy_. moreover , we have taken into account the presence of the cleavage lines of skin as preferred direction of residual stretches inside the tissue . under these assumptions , a linear stability analysis has been performed using the method of incremental elastic deformations , and an analytical form of the dispersion relation has been reported in eq .( [ disp ] ) .the results demonstrate that the presence of surface energy makes the layer more stable , in the sense that it needs to be sheared more for wrinkles to develop than when surface energy is absent ( figure [ slab](a ) ) .furthermore , the surface energy fixes the surface instability wavelength at threshold at a finite value , as depicted in figure [ slab](b ) . we have also found that wrinkles appear earlier when the shear takes place perpendicular to the direction of pre - stretch than when it occurs along that direction , as confirmed by the anectodal evidences shown in figures [ skin1 ] and [ skin2 ] . in section [ shear instability of a fibre - reinforced skin tissue ] , we have investigated the effect of fibre reinforcement in the dermis layer on the shear instability characteristics . for this purpose, we have used the polyconvex strain energy function in eq.([enaniso ] ) for modeling the transverse isotropic reinforcement along a preferential fibre direction .a stroh formulation of the incremental elastic equations has been derived in eq .( [ stroh ] ) , and solved numerically using an iterative technique . 
as shown in figures [ fig : criticalalpha ] and [ fig : criticalalpha * ] , we have found that the presence of fibres always lowers the shear threshold at which geometrical instability happens : the stiffer the fibres , the earlier the wrinkles appear in shear . considering that anisotropic stiffness of skin greatlyincrease with ageing , our results are in agreement with the fact that older skin wrinkles earlier when pinched .the presence of a universal point of instability at shear threshold when the fibres are aligned with the direction of greatest stretch , irrespective of the value of , can be observed on figures [ fig : criticalalpha ] and [ fig : criticalalpha * ] .this anchor point is present for no matter how stiff the fibres are compared to the matrix ( in the appendix we identify its origin . )however , it represents a very special case of shear , and when we move away from the region of influence of this point , we notice that all bifurcation curves indicate a significant lowering of the shear threshold of instability ( as soon as the fibres become at least as stiff as the matrix , ) . in experimental tests ( see e.g. n annaidh et al. ) , collagen fibres in human skin are determined to be at least 500 times stiffer than the elastin matrix .we may thus deduce that our model , away from the anchor point , predicts that surface instability will occur early , at low levels of shear , in line with the visual observations of figures [ skin1 ] and [ skin2 ] . in conclusion, this mathematical study of wrinkle formation in sheared skin confirms that pinching experiments in dermatology are useful tools to evaluate the local mechanical properties of the tissue . 99 j.c .waldorf , g. perdikis , and s.p .terkonda , planning incisions , _ oper ._ * 4 * ( 2002 ) 199 - 206 .c.j.kraissl , the selection of appropriate lines for elective surgical incisions ._ plastic reconstr .* 8 * ( 1951 ) 1 - 28 .cox , the cleavage lines of the skin . _j. surg . _* 29 * ( 1941 ) 234 - 240 .steigmann , and r.w .ogden , plane deformations of elastic solids with intrinsic boundary elasticity , _ proc .* a453 * ( 1997 ) 853 - 877 .biot , surface instability of rubber in compression .sci . research _ * a12 * ( 1963 ) 168 - 182 .p. ciarletta , i. izzo , s. micera , and f. tendick , stiffening by fibre reinforcement in soft materials : a hyperelastic theory at large strains and its application , _ j. biomech .behavior biomed ._ * 4 * ( 2011 ) 1359 - 1368 .m. destrade , m.d .gilchrist , d.a .prikazchikov , and g. saccomandi , surface instability of sheared soft tissues ._ j. biomech ._ * 130 * ( 2008 ) 061007 , 1 - 6 .b. lautrup , _ physics of continuous matter _( 2nd ed . , crc press , boca raton 2011 ) .steigmann , and r.w .ogden , elastic surface - substrate interactions , _ proc .lond . _ * a455 * ( 1999 ) 437 - 474 .m. destrade , and r.w .ogden , surface waves in a stretched and sheared incompressible elastic material , _ int .j. non - linear mech ._ * 40 * ( 2005 ) 241 - 253 .ogden , _ nonlinear elastic deformations _ ( dover , new york 1997 ) .flavin , surface waves in pre - stressed mooney material , _q. j. mech ._ * 16 * ( 1963 ) 441 - 449 .s. mora , m. abkarian , h. tabuteau , and y. pomeau , surface instability of soft solids under strain ._ soft matter _ * 7 * ( 2011 ) 10612 - 10619 .p. ciarletta , and m. ben amar , papillary networks in the dermal - epidermal junction of skin : a biomechanical model .* 42 * ( 2012 ) 68 - 76 .agache , c. monneur , j.l .leveque , and j. 
de rigal , mechanical properties and young s modulus of human skin in vivo . _ arch .* 269 * ( 1980 ) 221 - 232 .for an incompressible anisotropic material with strain energy density given in eq . , there are 31 non - zero instantaneous moduli in the coordinate system aligned with the directions of simple shear , , in eq .. they are found from eq . as follows . the symmetric blocks , , and of the corresponding stroh matrix are given by respectively , with , \notag \\& \eta= ( 3\mu + l_{1111})\cos^2\theta + 2l_{1121}\cos\theta\sin\theta + l_{2121 } \sin^2\theta , \notag \\ & \kappa= l_{1112 } + ( 3\mu + l_{1221})\cos\theta\sin\theta,\notag \\ & \nu= l_{1212}\cos^2\theta + 2l_{1121}\cos\theta\sin\theta + ( 3\mu + l_{2222})\sin^2\theta , \notag \\ & \chi= \left[\mu \sin 2\alpha + 2 \beta ( \sin 2 \alpha + \sin 2\theta)\right ] k + \left[\mu \cos^2\theta + 2 \beta ( \cos^2\theta - \cos^2 \alpha)\right ] k^2 . \notag\end{aligned}\ ] ] tremendous simplifications occur when and are expressed in the coordinate system aligned with the lagrangian principal axes and the fibres are aligned with the direction of greatest stretch .then , for wrinkles aligned with that direction , we find that the stroh matrix reads it clearly shows that the incremental deformation in the plane of shear is uncoupled from the out - of - plane component .further , the in - plane components do not involve and are identical to the components of the stroh matrix for an isotropic neo - hookean material .it follows that these wrinkles appear at the critical amount of shear found by destrade et al. , independently of the value of . the corresponding value for the largest stretch is and the angle of the fibres in the reference configuration is rad . | we propose two toy - models to describe , predict , and interpret the wrinkles appearing on the surface of skin when it is sheared . with the first model , we account for the lines of greatest tension present in human skin by subjecting a layer of soft tissue to a pre - stretch , and for the epidermis by endowing one of the layer s faces with a surface tension . for the second model , we consider an anisotropic model for the skin , to reflect the presence of stiff collagen fibres in a softer elastic matrix . in both cases , we find an explicit bifurcation criterion , linking geometrical and material parameters to a critical shear deformation accompanied by small static wrinkles , with decaying amplitudes normal to the free surface of skin . |
one major obstacle to the fulfillment of the promise of quantum computing is the current scarcity of quantum algorithms .quantum computing researchers simply have not yet found enough quantum algorithms to determine whether or not future quantum computers will be general purpose or special purpose computing devices . as a result , much more researchis crucially needed to determine the algorithmic limits of quantum computing .one of the most promising and versatile approaches to creating new quantum algorithms is based on the quantum hidden subgroup ( qhs ) paradigm , originally suggested by alexei kitaev .this class of quantum algorithms encompasses the deutsch - jozsa , simon , shor algorithms , and many more . in this paper ,our strategy for finding new quantum algorithms is to decompose shor s quantum factoring algorithm into its basic primitives , then to generalize these primitives , and finally to show how to reassemble them into new qhs algorithms .taking an `` alphabetic building blocks approach , '' we will use these primitives to form an `` algorithmic toolkit '' for the creation of new quantum algorithms , such as wandering shor algorithms , continuous shor algorithms , the quantum circle algorithm , the dual shor algorithm , a qhs algorithm for feynman integrals , free qhs algorithms , and more . toward the end of this paper ,we show how grover s algorithm is most surprisingly almost a qhs algorithm , and how this suggests the possibility of an even more complete `` algorithmic tookit '' beyond the qhs algorithms .before discussing how shor s algorithm can be decomposed into its primitive components , let s take a quick look at an example of the execution of shor s factoring algorithm . as we discuss this example , we suggest that the reader , as an exercise , try to find the basic qhs primitives that make up this algorithm .can you see them ?shor s quantum factoring algorithm reduces the task of factoring a positive integer to first finding a random integer relatively prime to , and then next to determining the period of the following function {ccl}\mathbb{z } & \overset{\varphi}{\longrightarrow } & \mathbb{z}\operatorname{mod}n\\ x & \longmapsto & a^{x}\operatorname{mod}n\text { , } \end{array}\ ] ] where denotes the additive group of integers , and where denotes the integers under multiplication with is found by selecting a random integer , and then applying the euclidean algorithm to determine whether or not it is relatively prime to .if not , then the is a non - trivial factor of , and there is no need to proceed futher .however , this possibility is highly unlikely if is large . ] .since is an infinite group , shor chooses to work instead with the finite additive cyclic group of order , where and with the `` approximating '' map {ccll}\mathbb{z}_{q } & \overset{\widetilde{\varphi}}{\longrightarrow } & \mathbb{z}\operatorname{mod}n & \\ x & \longmapsto & a^{x}\operatorname{mod}n\text { , } & 0\leq x < q \end{array}\ ] ] shor begins by constructing a quantum system with two quantum registers the left intended for holding the arguments of , the right for holding the corresponding values of .this quantum system has been constructed with a unitary transformation implementing the approximating map . 
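As a purely classical stand-in for the quantum subroutine, the reduction just described can be checked on small numbers: pick a coprime to n, find the period of a^x mod n (here by brute force, which is precisely the step the quantum algorithm accelerates), and, when the period is even, read a factor off gcd(a^{p/2} - 1, n). The concrete numbers of the worked example that follows were lost in extraction, so the script uses its own small inputs.

from math import gcd

def classical_period(a, n):
    # brute-force order of a modulo n (the step Shor's algorithm speeds up)
    x, v = 1, a % n
    while v != 1:
        x, v = x + 1, (v * a) % n
    return x

def try_factor(n, a):
    if gcd(a, n) != 1:
        return gcd(a, n)              # lucky: a already shares a factor with n
    p = classical_period(a, n)
    if p % 2 == 1 or pow(a, p // 2, n) == n - 1:
        return None                   # this choice of a yields no factor
    return gcd(pow(a, p // 2) - 1, n)

print(try_factor(91, 3))              # e.g. 91 = 7 * 13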
as an example , let us use shor s algorithm to factor the integer , assuming that has been randomly chosen .thus , .unknown to us , the period is , and hence , .we proceed by executing the following steps : * initialize * apply the inverse fourier transform to the left register , where is a primitive -th root of unity , to obtain * apply the unitary transformation to obtain * apply the fourier transform to the left register to obtain where * measure the left register .then with probability the state will `` collapse '' to with the value measured being the integer , where . a plot of is shown in figure 1 .( see and for details .) fig1.ps + * figure 1 . a plot of * **. ** the peaks in the above plot of occur at the integers the probability that at least one of these six integers will occur is quite high .it is actually .indeed , the probability distribution has been intentionally engineered to make the probability of these particular integers as high as possible .and there is a good reason for doing so .the above six integers are those for which the corresponding rational is closest to a rational of the form . by closest we mean that in particular , are rationals respectively closest to the rationals the six rational numbers are `` closest '' in the sense that they are convergents of the continued fraction expansions of , respectively . hence , each of the six rationals can be found using the standard continued fraction recursion formulas .but ... , we are not searching for rationals of the form . instead, we seek only the denominator .unfortunately , the denominator can only be obtained from the continued fraction recursion when the numerator and denominator of are relatively prime .given that the algorithm has selected one of the random integers , the probability that the corresponding rational has relatively prime numerator and denominator is , where denotes the euler phi ( totient ) function .so the probability of finding is actually not , but is instead .as it turns out , if he repeats the algorithm times , we will obtain the desired period with probability bounded below by approximately . however , this is not the end of the story .once we have in our possession a candidate for the actual period , the only way we can be sure we have the correct period is to test by computing .if the result is , we are certain we have found the correct period .this last part of the computation is done by the repeated squaring algorithm via the expression where is the radix 2 expansion of . ] .now that we have taken a quick look at shor s algorithm , let s see how it can be decomposed into its primitive algorithmic components .we will first need to answer the following question : _ what is a quantum hidden subgroup algorithm ? _ but before we can answer the this question , we need to provide an answer to an even more fundamental question : _ what is a hidden subgroup problem ?_ a map from a group into a set is said to have * hidden subgroup structure * if there exists a subgroup of , called a * hidden subgroup * , and an injection , called a * hidden injection * , such that the diagram{ccc}g & \overset{\varphi}{\longrightarrow } & s\\ \nu\searrow & & \nearrow\iota_{\mathbf{\varphi}}\\ & g / k_{\mathbf{\varphi } } & \end{array}\ ] ] is commutative .the notion generalizes in an obvious way to more complicated diagrams . ] , where denotes the collection of right cosets of in , and where is the natural surjection of onto .we refer to the group as the * ambient group * and to the set as the * target set*. 
if is a normal subgroup of , then is a group , called the * hidden quotient group * , and is an epimorphism , called the * hidden epimorphism*. we will call the above diagram the * hidden subgroup structure * of the map .( see , . )the underlying intuition motivating this formal definition is as follows : given a natural surjection ( or epimorphism ) , an `` archvillain with malice of forethought '' hides the algebraic structure of by intentionally renaming all the elements of , and `` maliciously tossing in for good measure '' some extra elements to form a set and a map .the hidden subgroup problem can be stated as follows : * hidden subgroup problem ( hsp ) . * _ be a map with hidden subgroup structure .the problem of determining a hidden subgroup _ _ of _ _ is called a * hidden subgroup problem ( hsp)*. an algorithm solving this problem is called a * hidden subgroup algorithm*. _ the corresponding quantum form of this hsp is stated as follows : * hidden subgroup problem ( quantum version ) . * _ be a map with hidden subgroup structure . construct a quantum implementation of the map _ _ as follows : let _ _ and _ _ be hilbert spaces defined respectively by the orthonormal bases _ _ _ _ and let _ _ _ , where _ _ _ denotes the identity .] of the ambient group _. finally , let _ _ _ _ be a unitary transformation such that__{ccc}\mathcal{h}_{g}\otimes\mathcal{h}_{s } & \longrightarrow & \mathcal{h}_{g}\otimes\mathcal{h}_{s}\\ \left\vert g\right\rangle \left\vert s_{0}\right\rangle & \mapsto & \left\vert g\right\rangle \left\vert \varphi(g)\right\rangle \end{array}\ ] ] _ determine the hidden subgroup _ _ with bounded probability of error by making as few queries as possible to the blackbox _. a quantum algorithm solving this problem is called a * quantum hidden subgroup ( qhs ) algorithm*. _ _we are now in a position to construct one of the fundamental algorithmic primitives found in shor s algorithm .let be a map from a group to a set with hidden subgroup structure .we assume that all representations of are equivalent to unitary representations .let denote a * complete set of distinct irreducible unitary representations * of .using multiplicative notation for ,we let denote the * identity * of , and let denote its image in .finally , let denote the * trivial representation * of .if is abelian , then becomes the * dual group * of characters .the generic qhs algorithm is given below : quantum subroutine qrand * initialization * application of the inverse fourier transform to the left register where denotes the cardinality of the group .* application of the unitary transformation * application of the fourier transform of to the left register where denotes the degree of the representation , where denotes the contragradient representation ( i.e. , ) , where , and where . * measurement of the left quantum register with respect to the orthonormal basis thus , with probability the resulting measured value is the entry , and the quantum system `` collapses '' to the state * step 5 .output , and stop .but shor s algorithm consists of more than the primitive qrand . for many ( but not all ) hidden subgroup problems ( hsps ) , the corresponding generic qhs algorithm qrand either is not physically implementable or is too expensive to implement physically .for example , the hsp is usually not physically implementable if the ambient group is infinite ( e.g. , is the infinite cyclic group ) , and is too expensive to implement if the ambient group is too large ( e.g. , is the symmetric group ) . 
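For a finite ambient group the blackbox just defined is simply a permutation of the computational basis of H_G tensor H_S; the sketch below builds it explicitly for a small cyclic example and checks unitarity. The text only fixes the action on |s0>, so extending it to all of H_S by multiplication in the target set, as well as the particular group and map chosen, are assumptions made purely for illustration.

import numpy as np

def hsp_blackbox(G, S, phi, s_mult):
    # permutation unitary on H_G tensor H_S sending |g>|s> to |g>|s_mult(s, phi(g))>
    # (the definition only specifies the image of |g>|s0>; this extension is assumed)
    s_index = {s: i for i, s in enumerate(S)}
    dim = len(G) * len(S)
    U = np.zeros((dim, dim))
    for gi, g in enumerate(G):
        for si, s in enumerate(S):
            col = gi * len(S) + si
            row = gi * len(S) + s_index[s_mult(s, phi(g))]
            U[row, col] = 1.0
    return U

# illustrative example: G = Z_6, S = units mod 7, phi(x) = 3^x mod 7
G = list(range(6))
S = [1, 2, 3, 4, 5, 6]
U = hsp_blackbox(G, S, lambda x: pow(3, x, 7), lambda s, t: (s * t) % 7)
print(np.allclose(U @ U.T, np.eye(U.shape[0])))   # True: the blackbox is unitary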
in this case , there is a standard generic way of `` tweaking '' the hsp to get around this problem , which we will call * pushing*. let be a map from a group to a set .a map from a group to the set is said to be a * push * of , written provided there exists an epimorphism from onto , and a transversal be an epimorphism from a group to a group .then a transversal of is is a map such that is the identity map .( it immediately follows that is an injection . ) in othere words , a transversal of an epimorphism is a map which maps eacl element of to an element of contained in the coset , i.e. , to a coset representative of . ] of such that , i.e. , such that the following diagram is commutative{ccc}g & \overset{\varphi}{\longrightarrow}\quad & s\\ \quad\uparrow\tau & \quad\nearrow\widetilde{\varphi}\quad & \\ \widetilde{g } & & \end{array}\ ] ] if the epimorphism and the transversal are chosen in an appropriate way , then execution of the generic qhs subroutine with input , i.e. , execution of will with high probability produce an irreducible representation of the group which is sufficiently close to an irreducible representation of the group .if this is the case , then there is a polynomial time classical algorithm which upon input produces the representation .obviously , much more can be said about pushing .but unfortunately that would take us far afield from the objectives of this paper .for more information on pushing , we refer the reader to . it would be amiss not to mention that the above algorithmic primitive of pushing suggests the definition of a second primitive which we will call * lifting*. let be a map from a group to a set .a map from a group to the set is said to be a * lift * of , written provided there exists an morphism from to such that , i.e. , such that the following diagram is commutative{cl}\underline{g } & \\ \ \\eta\downarrow\quad & \quad\searrow\underline{\varphi}\quad\\ g & \quad\overset{\varphi}{\longrightarrow}\quad s \end{array}\ ] ] fig2.eps + * figure 2 . pushing and lifting hsps . *we are now in position to describe shor s algorithm in terms of its primitive components . in particular , we are now in a position to see that shor s factoring algorithm is a classic example of a qhs algorithm created from the push of an hsp .let be the integer to be factored .let denote the additive group of integers , and denote the integers under multiplication .shor s algorithm is a qhs algorithm that solves the following hsp {rrc}\varphi:\mathbb{z } & \longrightarrow & \mathbb{z}_{n}^{\times}\\ m & \longmapsto & a^{m}\operatorname{mod}n \end{array}\ ] ] with unknown hidden subgroup structure given by the following commutative diagram {ccc}\mathbb{z } & \overset{\varphi}{\longrightarrow } & \mathbb{z}_{n}^{\times}\\ \nu\searrow & & \nearrow\iota\\ & \mathbb{z}/p\mathbb{z } & \end{array } \text { \ , } \ ] ] where is an integer relatively prime to , where is the hidden integer period of the map , where is the additive subgroup of all integer multiples of ( i.e. , the hidden subgroup ) , where is the natural epimorpism of the integers onto the quotient group ( i.e. 
, the hidden epimorphism ) , and where is the hidden monomorphism .an obstacle to creating a physically implementable algorithm for this hsp is that the domain of is infinite .as observed by shor , a way to work around this difficulty is to push the hsp .in particular , as illustrated by the following commutative diagram{ccl}\mathbb{z\qquad } & \overset{\varphi}{\longrightarrow } & \qquad\mathbb{z}_{n}^{\times}\\ \mu\searrow\nwarrow\tau & & \nearrow\varphi = push\left ( \varphi\right ) = \varphi\circ\tau\\ & \mathbb{z}_{q } & \end{array } \text { \ \ , } \ ] ] a push is constructed by selecting the epimorphism of onto the finite cyclic group of order , where the integer is the unique power of such that , and then choosing the transversal is an injection such that is the identity map on , i.e. , a map that takes each element of onto a coset representative of the element in . ]{rrc}\tau:\mathbb{z}_{q } & \longrightarrow & \mathbb{z}\\ m\operatorname{mod}q & \longmapsto & m \end{array } \text { \ , } \ ] ] where ._ this push _ _ is called _ * shor s oracle*. shor s algorithm consists in first executing the quantum subroutine qrand , thereby producing a random character of the finite cyclic group .the transversal used in pushing has been engineered in such a way as to assure that the character is sufficiently close to a character of the hidden quotient group . in this case `` sufficiently close '' means that which means that is a continued fraction convergent of , and thus can be found found by the classical polynomial time continued fraction algorithm and can in the obvious way be identified with points of in the unit circle in the complex plane . with this identification, we can see that this inequalty is equivalent to saying the the chordal distance bewteen these two rational points on the unit circle is less than or equan to .hence , shor s algorithm is using the topology of the unit circle . ] .now let s use the primitives described in sections 3 , 4 , and 5 to create other new qhs algorithms , called wandering shor algorithms .wandering shor algorithms are essentially qhs algorithms on free abelian finite rank groups which , with each iteration , first select a random cyclic direct summand of the group , and then apply one iteration of the standard shor algorithm to produce a random character of the approximating finite group , called a * * group probe * * , we mean an epimorphic image of the ambient group . ] .three different wandering shor algorithms are created in .the first two wandering shor algorithms given in are quantum algorithms which find the order of a maximal cyclic subgroup of the hidden quotient group .the third computes the entire hidden quotient group .the first step in creating a wandering shor algorithm is to find the right generalization one of the primitives found in shor s algorithm , namely , the transversal of shor s factoring algorithm . in other words ,we need to construct the `` correct '' generalization of the transversal from to a free abelian group of rank . for this reason ,we have created the following definition : let be the free abelian group of rank , let onto the cyclic group of order with selected generator .a transversal be an epimorphism from a group to a group .then a transversal of is is a map such that is the identity map .( it immediately follows that is an injection . ) in othere words , a transversal of an epimorphism is a map which maps eacl element of to an element of contained in the coset , i.e. , to a coset representative of . 
] of is said to be a * shor transversal * provided that : * for all * for each ( free abelian ) basis of , the coefficients of satisfy .later , when we construct a generalization of shor transversals to free groups of finite rank , we will see that the first condition simply states that a shor transversal is nothing more than a 2-sided schreier transversal .the second condition of the above definition simply says that maps the generator of onto a generator of a free direct summand of .( for more details , please refer to section 12 of this paper . ) in , we show how to use the extended euclidean algorithm to construct the epimorphism and the transversal . fig3.eps + * figure 3 .flowchart for the first wandering shor algorithm ( a.k.a . , a vintage shor algorithm ) .this algorithm finds the order * * * of a maximal cyclic subgroup of the hidden quotient group * * **. * * flow charts for the three wandering shor algorithms created in are given in figures 3 through 5 . in ,these were also called * vintage shor algorithms*. fig4.eps + * figure 4 .flowchart for the second wandering shor algorithm ( a.k.a . , a vintage shor algorithm ) .this algorithm finds the order * * * of a maximal cyclic subgroup of the hidden quotient group * * **. * * fig5.eps + * * figure 5 .flowchart for the third wandering shor algorithm , a.k.a . , a vintage shor algorithm .this algorithm finds the entire hidden quotient group * * *. * the algorithmic complexities of the above wandering shor algorithms is given in .for example , the first wandering shor algorithm is of time complexity where is the rank of the free abelian group .this can be readily deduced from the abbreviated flowchart given in figure 6 .fig6.ps + * figure 6 .abbreviated flowchart for the first wandering shor algorithm . *in in and in , the algorithmic primitives found in above sections of this paper were used to create a class of algorithms called continuous shor algorithms . by a * continuous variable shor algorithm * ,we mean a quantum hidden subgroup algorithm that finds the hidden period of an admissible function from the reals to itself . by an admissible function , we mean a function belonging to any sufficiently well behaved class of functions .for example , the class of functions which are lebesgue integrable on every closed interval of . there , are many other classes of functions that work equally as well .actually , the papers , give in succession three such continuous shor algorithms , each successively more general than the previous .for the first algorithm , we assume that the unknown hidden period is an integer .the algorithm is then constructed by using rigged hilbert spaces , , linear combinations of dirac delta functions , and a subtle extension of the fourier transform found in the generic qhs subroutine qrand , which has been described previously in section 4 of this paper . in step 5 of qrand ,the observable is measured , where is an integer chosen so that .it then follows that the output of this algorithm is a rational which is a convergent of the continued fraction expansion of a rational of the form .the above quantum algorithm is then extended to a second quantum algorithm that finds the hidden period of functions , where the unknown period is a rational .finally , the second algorithm is extended to a third algorithm which finds the hidden period of functions , when is an _ arbitrary real number_. we point out that for the third and last algorithm to work , we must impose a very restrictive condition on the map , i.e. 
, the condition that the map is continuous .we have shown in previous sections how the mathematical primitives of pushing and lifting can be used to create new quantum algorithms . in particular , we have described how pushing and lifting can be used to derive new hsps from an hsp on an arbitrary group . we now see how group duality can be exploited by these two primitives to create even more quantum algorithms. fig7.eps + * figure 7 . using duality to create new qhs algorithms . * to this end, we assume that is an _ abelian _ group .hence , its dual group of characters exists is non - abelian , then its dual is not a group , but instead the representation algebra over the group ring . the methods described in this section can also be used to create new quantum algorithms for hsps on on the representation algebra . ] .it now follows that pushing and lifting can also be used to derive new hsps from an arbitrary hsp on the dual group . in , this method is used to create a number of new quantum algorithms derived from shor - like hsps .a roadmap is shown in figure 8 of the developmental steps taken to find and to create a new qhs algorithm on , which is ( in the sense described below ) dual to shor s original algorithm .we call the algorithm developed in the final step of figure 8 the * dual shor algorithm*. fig8.eps + * figure 8. roadmap for creating the dual shor algorithm . * as indicated in figure 5 , our first step is to create an intermediate qhs algorithm based on a shor - like hsp from the additive group of integers to a target set .the resulting algorithm `` lives '' in the infinite dimensional space defined by the orthonormal basis .this is a physically unemplementable quantum algorithm created as a first steping stone in our algorithmic development sequence .intuitively , this algorithm can be viewed as a `` distillation '' or a `` purification '' of shor s original algorithm . as a next step ,* duality * is used to create the * quantum circle algorithm*. this is accomplished by devising a qhs algorithm for an hsp on the dual group of the additive group of integers .( by , we mean the * additive group of reals * , which is isomorphic to the multiplicative group , i.e. , the * unit circle * in the complex plane . ) once again , this is probably a physcally unemplementable quantum algorithm .but its utility lies in the fact that it leads to the physically implementable quantum algorithm created in the last and final developmental step , as indicated in figure 8 . for in the final step ,a physically implementable qhs algorithm is created by * lifting * the hsp to an hsp . for the obvious reason , we call the resulting algorithm a * dual shor algorithm*. for detailed descriptions of each of these quantum algorithms , i.e. ,the `` distilled '' shor , the quantum circle , and the dual shor algorithms , the reader is referred to and .we give below brief descriptions of the quantum circle and the dual shor algorithms .for the * quantum circle algorithm * , we make use of the following spaces ( each of which is used in quantum optics ) : * the rigged hilbert space with orthonormal basis . by orthonormal we mean that , where denotes the dirac delta function .the elements of are * formal integrals * of the form .( the physicist dirac in his classic book on quantum mechanics refers to these integrals as infinite sums .see also and . ) * the complex vector space of formal sums with orthonormal basis . 
by orthonormal we mean that , where denotes the kronecker delta .we can now design an algorithm which solves the following hidden subgroup problem : * hidden subgroup problem for the circle . * _ let _ _ be an admissible function from the circle group _ _ to the complex numbers _ _ with hidden rational period _ _ _ , where _ _ _ denotes the rational circle , i.e. , the rationals _. _ _ by an admissible function , we mean a function belonging to any sufficiently well behaved class of functions .for example , the class of functions which are lebesgue integrable on .there , are many other classes of functions that work equally as well . if ( with ) is a rational period of a function , then also a period of .hence , the minimal rational period of is always a reciprocal integer .the following quantum algorithm finds the reciprocal integer period of the function .circle - algorithm * initialization * application of the inverse fourier transform * step 2 .application of the unitary transformation * application of the fourier transform remark . letting , we have where is the unknown reciprocal period .but{ll}a & \text{if \ }n=0\operatorname*{mod}a\\ 0 & \text{otherwise}\end{array } \right.\ ] ] hence, * measurement of with respect to the observable to produce a random eigenvalue .the above quantum circle algorithm can be extended to a quantum algorithm which finds the hidden period of a function , when is an arbitrary real number .but in creating this extended quantum algorithm , a very restrictive condition must be imposed on the map , namely , the condition that be continuous .we now give a brief description of the * dual shor algorithm*. the dual shor algorithm is a qhs algorithm created by making a discrete approximation of the quantum circle algorithm .more specifically , it is created by lifting the qhs circle algorithm for to the finite cyclic group , as illustrated in the commutative diagram given below:{ll}\;\,\mathbb{z}_{q } & \\ \mu\downarrow\;\ ; & \;\;\searrow\widetilde{\varphi}=push\left ( \varphi \right ) = \varphi\circ\mu\\ \mathbb{r}\mathbf{/}\mathbb{z } & \longrightarrow\;\;s \end{array}\ ] ] intuitively , just as in shor s algorithm , the circle group is `` approximated '' with the finite cyclic group , where the group is identified with the additive group and where the hidden subgroup is identified with the additive group with .this is a physically implementable quantum algorithm . in a certain sense, it is actually faster than shor s algorithm . for the last step of shor s algorithmuses the standard continued fraction algorithm to determine the unknown period . on the other hand ,the last step of the dual shor algorithm uses the much faster euclidean algorithm to compute the greatest common divisor of the integers , thereby determining the desired reciprocal integer period . for more details, please refer to and .we now discuss a qhs algorithm based on feynman path integrals .this quantum algorithm was developed at the mathematical sciences research institute ( msri ) in berkeley , california when one of the authors of this paper was challenged with an invitation to give a talk on the relation between feynmann path integrals and quantum computing at an msri conference on feynman path integrals . 
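before turning to that algorithm , the two classical post - processing steps contrasted above can be made concrete . the following sketch ( python , with purely illustrative numbers of our own choosing , not taken from the papers cited above ) shows the continued - fraction step used at the end of shor s algorithm and the gcd step used at the end of the dual shor algorithm .

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def convergents(num, den):
    """Continued-fraction convergents of num/den (Shor's last, classical step)."""
    h_prev, h, k_prev, k = 0, 1, 1, 0
    while den:
        a, (num, den) = num // den, (den, num % den)
        h_prev, h = h, a * h + h_prev
        k_prev, k = k, a * k + k_prev
        yield Fraction(h, k)

def shor_candidates(y, Q, N):
    """The measured y/Q is close to some d/r; the denominators of its convergents
    not exceeding N are the candidate periods r."""
    return sorted({c.denominator for c in convergents(y, Q) if c.denominator <= N})

def dual_shor_period(samples):
    """Dual Shor: each measured integer is a multiple of one integer a, recovered
    by a gcd; the hidden reciprocal-integer period is then 1/a."""
    return reduce(gcd, samples)

# purely illustrative numbers
print(shor_candidates(y=1365, Q=2048, N=21))   # candidate periods read off 1365/2048
print(dual_shor_period([12, 20, 28]))          # gcd = 4, i.e. the period is 1/4
```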
until recently , both authors of this paper thought that the quantum algorithm to be described below was a highly speculative quantum algorithm .for the existence of feynman path integrals is very difficult ( if not impossible ) to determine in a mathematically rigorous fashion .but surprisingly , jeremy becnel in his doctoral dissertation actually succeeded in creating a firm mathematical foundation for this algorithm .we should mention , however , that the physical implementability of this algorithm is still yet to be determined . definition .let paths be the real vector space of all continuous paths \longrightarrow\mathbb{r}^{n}$ ] which are with respect to the inner product with scalar multiplication and vector sum defined as * * we wish to create a qhs algorithm for the following hidden subgroup problem : * hidden subgroup problem for * paths . _ be a functional with a hidden subspace _ _ of _ paths _ _ such that__ our objective is to create a qhs algorithm which solves the above problem , i.e. , which finds the hidden subspace .let be the rigged hilbert space with orthonormal basis , and with bracket product . we will use the following observation to create the qhs algorithm : * observation .* paths _ , where _ _ denotes the orthogonal complement of the hidden vector subspace _. _ _ the qhs algorithm for feynman path integral is given below :feynman * initialize * apply * apply * apply {lll}\left\vert \psi_{3}\right\rangle & = & { \displaystyle\int\limits_{\text{\textsc{paths } } } } \mathcal{d}y{\displaystyle\int\limits_{\text{\textsc{paths } } } } \mathcal{d}x\ e^{-2\pi ix\cdot y}\left\vert y\right\rangle \left\vert \varphi\left ( x\right ) \right\rangle \\ & = & { \displaystyle\int\limits_{\text{\textsc{paths } } } } \mathcal{d}y\ \left\vert y\right\rangle { \displaystyle\int\limits_{\text{\textsc{paths } } } } \mathcal{d}x\ e^{-2\pi ix\cdot y}\left\vert \varphi\left ( x\right ) \right\rangle \end{array}\ ] ] but however, so , * measure with respect to the observable to produce a random element of the above algorithm suggests an intriguing question .can the above qhs feynman integral algorithm be modified in such a way as to create a quantum algorithm for the jones polynomial ?in other words , can it be modified by replacing paths with the space of gauge connections , and making suitable modifications ?this question is motivated by the fact that the integral over gauge transformations looks very much like a fourier transform , where denotes the * wilson loop * over the knot .in this and the following section of this paper , our objective is to show that a free group is the the most natural domain for qhs algorithms .in retrospect , this is not so surprising if one takes a discerning look at shor s factoring algorithm . for in section 6, we have seen that shor s algorithm is essentially a qhs algorithm on the free group which has been pushed onto the finite group . in particular ,let be a map with hidden subgroup structure from a finitely generated ( f.g . )group to a set .we assume that the hidden subgroup is a normal subgroup of of finite index. then the objectives of this section are to demonstrate the following : * every hidden subgroup problem ( hsp ) on an arbitray f.g .group can be lifted to an hsp on a free group of finite rank .* moreover , a solution for the lifted hsp is for all practical purposes the same as the solution for the original hsp . _ thus , one need only investigate qhs algorithms for free groups of finite rank ! 
_ before we can describe the above results , we need to review a number of definitions .we begin with the definition of a free group : [ universal definition]a group is said to be * free * of finite rank if there exists a finite set of generators such that , for every group and for every map of the set into the group , the map extends to a morphism .we call the set a * free basis * of the group , and frequently denote the group by , .it follows from this definition that the morphism is unique .the intuitive idea encapsulated by this definition is that a free group is an unconstrained group ( very much analogous to a physical system without boundary conditions . ) in other words , a group is free provided it has a set of generators such that the only relations among those generators are those required for to be a group . for example , * is an allowed relation * is not an allowed relation for * is not an allowed relation as an immediate consequence of the above definition , we have the following proposition : let be an arbitrary f.g .group with finite set of generators , and let be the free group of rank with free basis . then by the above definition , the map induces a uniques epimorphism from onto . with this epimorphism ,every hsp on the group uniquely lifts to the hsp on the free group .moreover , if and are the hidden subgroups of the hsps and , respectively , the corresponding hidden quotient groups and of these two hsps are isomorphic . hence , every solution of the hsp immediately produces a solution of the original hsp .we close this section with the defintion of a group resentation , a concept that will be needed in the next section for generalizing shor s algorithm to free groups .let be a group .a * * group presentation** for is a set of free generators of a free group and a set of words in , called * relators * , such that the group is isomorphic to the quotient group , where , called the * consequence * of , is the smallest normal subgroup of containing the relators .the intuition captured by the above definition is that are the generators of , and is a complete set of relations among these generators , i.e. , every relation among the generators of is a * consequence * of ( derivable from ) the relations . for example , * and are both presentations of the free group * and are both presentations of the cyclic group order , where and are integers such that . * is a presentation of the symmetric group on three symbols .the objective of this section is to generalize shor s algorithm to free groups of finite rank of rank 1 constructed by a push onto the cyclic group . in light of this and of the results outlined in the previous section ,it is a natural objective to generalize shor s algorithm to free groups of finite rank . ] .the chief obstacle to accomplishing this goal is finding a correct generalization of the shor transversal {cccc}\mathbb{z}_{q } & \overset{\tau}{\longrightarrow } & \mathbb{z } & \\ n\operatorname{mod}q & \longmapsto & n & \left ( \text{\ } 0\leq n < q\right ) \end{array}\ ] ] unfortunately , there appear to be few mathematical clues indicating how to go about making such a generalization .however , as we shall see , the generalization of the shor transversal to the transversal found in the wandering shor algorithm does provide a crucial clue , suggesting that a generalized shor transversal must be a 2-sided schreier transversal .( see section 7 . 
) we begin by formulating a constructive approach to free groups : let be a free group with free basis .then a * word * is a finite string of the symbols . a * reduced word * is a word in which there is no substring of the form or .two words are said to be * equivalent * if one can be transformed into the other by applying a finite number of substring insertions or deletions of the form or .we denote an * arbitrary word * by , where each .the * length * of a word is the number of symbols that appear in , i.e. , . for example, is a word of length which is equivalent to the reduced word of length .it easily follows that : a free group is simply the set of reduced words together with the obvious definition of product , i.e. , concatenation followed by full reduction .we can now use this constructive approach to create a special kind of transversal of an epimorphism , called a 2-sided schreier transversal : a set of reduced words in a free group is said to be a * 2-sided schreier system * provided * the empty word lies in .* , and * given an epimorphism of the free group onto a group , a * 2-sided schreier transversal * for is a transversal of for which there exists a 2-sided schreier system such that .a 2-sided schreier transversal is said to be * minimal * provided the length of each word is less than or equal to the length of each reduced word in the coset , where denotes the kernel of the epimorphism .the wandering shor algorithm found in section 7 suggests that a correct generalization of the shor transversal must at least have the property that it is a minimal 2-sided schreier transversal .whatever other additional properties this generalization must have is simply not clear . in , we construct and investigate a number of different qhs algorithms on free groups that arise from the application of various additional conditions imposed upon the minimal 2-sided schreier transversal requirement . in this section ,we only give a descriptive sketch of the simplest of these algorithms , i.e. , a qhs algorithm on free groups with only the minimal 2-sided schreier transversal requirement imposed .let be the free group of finite rank with free basis , and let be an hsp on the free group .we assume that the hidden subgroup is normal and of finite index in .( please note that . )* choose a finite group probe with presentation , where the subscript denotes the epimorphism induced by the map .* choose a minimal 2-sided schreier transversal of the epimorphism .* finally , construct the push our generalized shor algorithm for the free group consists of the following steps : * call qrand to produce a word in close to a word lying in . * with input , use a polytime classical algorithm to determine .( see . ) * repeat steps 1 and 2 until enough relators are found to produce a presentation of the hidden subgroup , then output the presentation , and stop .obviously , much more needs to be said . for example, we have not explained how one chooses the relators so that is a good group probe .moreover , we have not explained what classical algorithm is used to transform the words into the relators . for more details , we refer the reader to . in this section , our objective is to factor grover s algorithm into the qhs primitives developed in the previous sections of this paper .
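before doing so , we pause for a small computational sketch of the reduced - word arithmetic described above . it is only meant to illustrate the constructive definitions ( words , free reduction , product by concatenation followed by full reduction ) and plays no role in the quantum algorithms themselves ; the encoding of letters as signed integers is our own convention .

```python
def free_reduce(word):
    """Freely reduce a word, given as a list of nonzero integers:
    +i stands for the generator x_i and -i for its inverse."""
    out = []
    for letter in word:
        if out and out[-1] == -letter:   # cancel a substring x x^-1 or x^-1 x
            out.pop()
        else:
            out.append(letter)
    return out

def multiply(u, v):
    """Product in the free group: concatenation followed by full reduction."""
    return free_reduce(u + v)

def invert(u):
    return [-letter for letter in reversed(u)]

print(free_reduce([1, 2, -2, 1, -1]))    # x1 x2 x2^-1 x1 x1^-1 reduces to [1], i.e. x1
print(multiply([1, 2], invert([1, 2])))  # [], the empty word (identity element)
```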
as a result, we will show that grover s algorithm is more closely related to shor s algorithm than one might at first expect .in particular , we will show that grover s algorithm is a qhs algorithm in the sense that it solves an hsp , which we will refer to as the * grover hsp . * however, we will then show that the standard qhs algorithm for this hsp can not possibly find a solution .we begin with a question : _ does grover s algorithm have symmetries that we can exploit ? _ the problem solved by grover s algorithm , , , is that of finding an unknown integer label in an unstructured database with items labeled by the integers : given the oracle{ll}1 & \text{if \ } j = j_{0}\\ 0 & \text{otherwise}\end{array } \right.\ ] ] let be the hilbert space with orthonormal basis .grover s oracle is essentially given by the unitary transformation where is inversion in the hyperplane orthogonal to .let denote the hadamard transformation on the hilbert space .then grover s algorithm is as follows : * ( initialization) * loop until * measure with respect to the standard basis to obtain the unknown state with but where is the hidden symmetry in grover s algorithm ?let be the symmetric group on the symbols .then grover s algorithm is invariant under the * hidden subgroup * ,called the * stabilizer subgroup * for , i.e. , grover s algorithm is invariant under the group action moreover , if we know the hidden subgroup , then we know , and vice versa . in other words ,the problem of finding the unknown label is informationally the same as the problem of finding the hidden subgroup .let denote the permutation that interchanges integers and , and leaves all other integers fixed .thus , a transposition if , and the identity permutation if .the set is a complete set of distinct coset representatives for the hidden subgroup of , i.e. , the coset space given by the following complete set of distinct cosets: we can now see that grover s algorithm is a hidden subgroup algorithm in the sense that it is a quantum algorithm which solves the following hidden subgroup problem : * grover s hidden subgroup problem . _ be a map from the symmetric group _ _ to a set _ _ _ with hidden subgroup structure given by the commutative diagram__{ccc}s_{n } & \longrightarrow & \quad\;\;s\\ \nu_{j_{0}}\searrow & & \nearrow\iota\\ & s_{n}/stab_{j_{0 } } & \end{array } \text { \ , } \ ] ] _ where _ _ is the natural surjection of __ on to the coset space _ _ _ , and where _ _ {ccc}\iota:\;s_{n}/stab_{j_{0 } } & \longrightarrow & s\\ \quad\left ( jj_{0}\right ) stab_{j_{0 } } & \longmapsto & j \end{array}\ ] ] _ is the unknown relabeling ( bijection ) of the coset space _ the set _ _ . find the hidden subgroup _ _ bounded probability of error . _ _now let us compare shor s algorithm with grover s . 
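before making that comparison , it may help to see the grover iteration recalled above behave numerically . the sketch below simulates the state vector directly ; the database size and the marked label are arbitrary illustrative choices , and the loop length follows the usual estimate of roughly ( pi / 4 ) sqrt(n ) iterations .

```python
import numpy as np

def grover(N, j0):
    """Simulate the Grover iteration on a database of size N with marked label j0."""
    psi = np.full(N, 1.0 / np.sqrt(N))               # uniform superposition H|0>
    iterations = int(round(np.pi / 4 * np.sqrt(N)))  # usual estimate of the optimal loop length
    for _ in range(iterations):
        psi[j0] *= -1.0                  # oracle: inversion in the hyperplane orthogonal to |j0>
        psi = 2.0 * psi.mean() - psi     # inversion about the mean
    return iterations, abs(psi[j0]) ** 2

print(grover(N=64, j0=17))   # (number of iterations, probability of measuring the marked item)
```

with these ( arbitrary ) choices the loop runs 6 times and the probability of observing the marked item already exceeds 0.99 .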
from section 6 ,we know that shor s algorithm , , , solves the hidden subgroup problem with hidden subgroup structure{ccc}\mathbb{z } & \longrightarrow & \quad\;\;\mathbb{z}_{n}\\ \;\;\nu\searrow & & \nearrow\iota\\ & \mathbb{z}/p\mathbb{z } & \end{array}\ ] ] moreover , as stated in section 6 , shor has created his algorithm by pushing the above hidden subgroup problem to the hidden subgroup problem ( called shor s oracle ) , where the hidden subgroup structure of is given by the commutative diagram{ccc}z & \longrightarrow & \quad\;\;z_{n}\\ \qquad\;\;\alpha\searrow\nwarrow\tau & & \nearrow\widetilde{\varphi}=\varphi\circ\tau\\ & z_{q } & \end{array } \text { \ , } \ ] ] where is the natural epimorphism of onto , and where is shor s chosen transversal for the epimorphism .surprisingly , grover s algorithm , viewed as an algorithm that solves the grover hidden subgroup problem , is very similar to shor s algorithm . like shors algorithm , grover s algorithm solves a hidden subgroup problem , i.e. , the grover hidden subgroup problem with hidden subgroup structure{ccc}s_{n } & \longrightarrow & \quad\;\;s\\ \;\;\nu\searrow & & \nearrow\iota\\ & s_{n}/stab_{j_{0 } } & \end{array } \text { \ , } \ ] ] where denotes the set resulting from an unknown relabeling ( bijection ) of the coset space also , like shor s algorithm , we can think of grover s algorithm as one created by pushing the grover hidden subgroup problem to the hidden subgroup problem , where the pushing is defined by the following commutative diagram{ccc}s_{n } & \longrightarrow & \qquad\qquad\;\;s = s_{n}/stab_{j_{0}}\\ \qquad\;\;\alpha\searrow\nwarrow\tau & & \nearrow\widetilde{\varphi}=\varphi\circ\tau\\ & s_{n}/stab_{0 } & \end{array } \text { \ \ , } \ ] ] where denotes the natural surjection of onto the coset space , and where the transversal of given by{ccc}s_{n}/stab_{0 } & \longrightarrow & s_{n}\\ \left ( j0\right ) stab_{0 } & \longmapsto & \left ( j0\right ) \end{array } \text { \ \ .}\ ] ] again also like shor s algorithm , the map given by is ( if ) actually a disguised grover s oracle .for the map can easily be shown to simply to{ll}(j0)stab_{j_{0 } } & \text{if \ } j = j_{0}\\ stab_{j_{0 } } & \text{otherwise\ \ , } \end{array } \right.\ ] ] which is informationally the same as grover s oracle{ll}j & \text{if \ } j = j_{0}\\ 1 & \text{otherwise}\end{array } \right.\ ] ] hence , we can conclude that grover s algorithm is a quantum algorithm very much like shor s algorithm , in that it is a quantum algorithm that solves the grover hidden subgroup problem .however , , this appears to be where the similarity between grover s and shor s algorithms ends . for the standard non - abelian qhs algorithm for can not find the hidden subgroup for each of following two reasons : * since the subgroups are not normal subgroups of , it follows from the work of hallgren et al , that the standard non - abelian hidden subgroup algorithm will find the largest normal subgroup of lying in .but unfortunately , the largest normal subgroup of lying in is the trivial subgroup of . *the subgroups are mutually conjugate subgroups of .moreover , one can not hope to use this qhs approach to grover s algorithm to find a faster quantum algorithm . 
for zalka has shown that grover s algorithm is optimal .the arguments given above suggest that grover s and shor s algorithms are more closely related than one might at first expect .although the standard non - abelian qhs algorithm on can not solve the grover hidden subgroup problem , there does remain an intriguing question : * question .* _ is there some modification of ( or extension of ) the standard qhs algorithm on the symmetric group _ _ that actually solves grover s hidden subgroup problem ? _ for a more in - depth discussion of the results found in this section ,we refer the reader to . in this paper , we have decomposed shor s quantum factoring algorithm into primitives , generalized these primitives , and then reassembled them into a wealth of new qhs algorithms . but as the results found in the previous section suggest , this list of quantum algorithmic primitives is far from complete .this is expressed by the following question : _ where can we find more algorithmic primitives to create a more well rounded toolkit for quantum algorithmic development ? _the previous section suggests that indeed all quantum algorithms may well be hidden subgroup algorithms in the sense that they all find hidden symmetries , i.e. , hidden subgroups .this is suggestive of the following meta - procedure for quantum algorithm development : * explicitly state the problem to be solved . *rephrase the problem as a hidden symmetry problem . *create a quantum algorithm to find the hidden symmetry ._ can this meta - procedure be made more explicit ? _ perhaps some reader of this paper will be able to answer this question .this work is partially supported by the defense advanced research projects agency ( darpa ) and air force research laboratory , air force materiel command , usaf , under agreement number f30602 - 01 - 2 - 0522 .the government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation thereon . this work was also partially supported by the institute for scientific interchange ( isi ) , torino , the national institute of standards and technology ( nist ) , the mathematical sciences research institute ( msri ) , the isaac newton institute for mathematical sciences , and the l - o - o - p fund .hallgren , sean , alexander russell , amnon ta - shma , * the hidden subgroup problem and quantum computation using group representations , * proceedings of the thirty - second annual acm symposium on theory of computing , portland , oregon , may 2000 , 627 - 635 .lomonaco , samuel j. , jr . , ( ed . ) ,* `` quantum computation : a grand mathematical challenge for the twenty - first century and the millennium , '' * proceedings of the symposia of applied mathematics , vol .58 , american mathematical society , providence , rhode island , ( 2002 ) .( 358 pages)(http://www.ams.org / bookstore?fn=20&arg1=whatsnew&item = psapm-58)(http://www.csee.umbc.edu / lomonaco / ams / lecture_notes.html ) lomonaco , samuel j. , jr . , and howard e. brandt , ( eds . ) , * `` quantum computation and information , '' * ams contemporary mathematics , vol . 305 , american mathematical society , providence , ri , ( 2002 ) .( 310 pages)(http://www.csee.umbc.edu/ / ams / special.html ) lomonaco , samuel j. , jr . , and louis h. kauffman , * continuous quantum hidden subgroup algorithms * , spie proceedings on quantum information and computation , vol .5105 , 11 , ( 2003 ) , 80 - 89 .( http://arxiv.org/abs/quant-ph/0304084 ) lomonaco , samuel j. , jr . , and louis h.
kauffman , * quantum hidden subgroup algorithms : the devil is in the details , * 2004 proceedings of spie proceedings on quantum information and computation , ( 2004 ) , 137 - 141 .http://arxiv.org/abs/quant-ph/0403229 mosca , michelle , and artur ekert , * the hidden subgroup problem and eigenvalue estimation on a quantum computer , * proceedings of the 1st nasa international conference on quantum computing and quantum communication , springer - verlag , ( 2001 ) .( http://xxx.lanl.gov/abs/quant-ph/9903071 ) shor ,peter w. , * polynomial time algorithms for prime factorization and discrete logarithms on a quantum computer , * siam j. on computing , 26(5 ) ( 1997 ) , pp 1484 - 1509 .( http://xxx.lanl.gov/abs/quant-ph/9508027 ) | one of the most promising and versatile approaches to creating new quantum algorithms is based on the quantum hidden subgroup ( qhs ) paradigm , originally suggested by alexei kitaev . this class of quantum algorithms encompasses the deutsch - jozsa , simon , shor algorithms , and many more . in this paper , our strategy for finding new quantum algorithms is to decompose shor s quantum factoring algorithm into its basic primitives , then to generalize these primitives , and finally to show how to reassemble them into new qhs algorithms . taking an `` alphabetic building blocks approach , '' we use these primitives to form an `` algorithmic toolkit '' for the creation of new quantum algorithms , such as wandering shor algorithms , continuous shor algorithms , the quantum circle algorithm , the dual shor algorithm , a qhs algorithm for feynman integrals , free qhs algorithms , and more . toward the end of this paper , we show how grover s algorithm is most surprisingly almost a qhs algorithm , and how this result suggests the possibility of an even more complete `` algorithmic tookit '' beyond the qhs algorithms . |
secure network coding offers a method securely transmitting information from the authorized sender to the authorized receiver .cai and yeung discussed the secrecy for the malicious adversary , eve , wiretapping a subset of all channels in the network . using the universal hashing lemma , the papers showed the existence of a secrecy code that universally works for any types of eavesdroppers under the size constraint of . also , the paper discussed construction of such a code . as another attack to information transmission via the network ,the malicious adversary contaminates the communication by contaminating the information on a subset of all channels in the network . using the method of error correction , the papers proposed a method to protect the message from the contamination .the correctness of the recovered message by the authorized receiver is called robustness .now , for simplicity , we consider the unicast setting . when the transmission rate from the authorized sender , alice to the authorized receiver , bob is and the rate of noise injected by eve is , using the result of the papers the paper showed that there exists a sequence of asymptotically correctable code with the rate if the rate of information leakage to eve is less than . however , there is a possibility that the malicious adversary makes a combination of eavesdropping and the contamination .that is , contaminating a part of channels , the malicious adversary might improve the ability of eavesdropping . in this paper , we discuss the secrecy when eve eavesdrops the information on the channels in , and adds artificial information to the information on the channels in sequentially based on the obtained information .we call such an active operation a strategy . when , under this assumption eve is allowed to arbitrarily modify the information on the channels in sequentially based on the obtained information. the aim of this paper is the following .firstly , we show that any strategy can not improve eve s information when the any operations in the network are linear .then , to clarify the necessity of the linearity assumption , we give a counterexample for the non - linear network , in which , there exists a strategy to improve eve s information .this example shows the importance of the assumption of the linearity .also , when the transmission rate from alice to bob is , the rate of noise injected by eve is , and the rate of information leakage to eve is , we discuss a code satisfying the secrecy and the robustness . in the asymptoticsetting , we show the existence of such a secure protocol with the rate . 
the remaining part of this paper is organized as follows .section [ s2 ] formulates our problem and shows the impossibility of eve s eavesdropping under the linear network .section [ s4 ] gives a counterexample in the non - linear case .section [ s3 ] discusses the asymptotic setting , and show the achievability of the asymptotic rate .we consider the unicast setting .assume that the authorized sender , alice , and the authorized receiver , bob , are linked via a network with the set of edges , where the operations on all nodes are linear on the finite filed with prime power .alice inputs the input variable in and bob receives the output variable in .we also assume that the malicious adversary , eve , wiretaps the information in on the edges of a subset .now , we fix the topology and dynamics ( operations on the intermediate nodes ) of the network .when we assume all operations on the intermediate nodes are linear on , there exist matrices and such that the variables , , and satisfy their relations that is , the matrices and are decided from the network topology and dynamics .we call this attack the _ passive attack_. to address the active attack , we consider stronger eve , i.e. , we assume that eve adds an error in on the edges of a subset . using matrices and ,we rewrite the above relations as which is called the _ wiretap and addition model_. now , to consider the time ordering among the edges in , we assign the numbers to all elements of such that .we assume that the information transmission on each edge is done with this order . in this representation ,the elements of and are arranged in this order .hence , the elements of the subsets and are expressed as and by using two strictly increasing functions and .the causality yields that it is natural that eve can choose the information to be added in the edge based on the information obtained previously on the edges in the subset .that is , the added error is given as a function of , which can be regarded as eve s strategy .we call this attack the _ active attack _ with the strategy .now , we consider the -transmission setting , in which , alice uses the same network times to send the message to bob .alice s input variable ( eve s added variable ) is given as a matrix ( a matrix ) , and bob s ( eve s ) received variable is given as a matrix ( a matrix ) .we assume that the topology and dynamics of the network and the edge attacked by eve are not changed during transmissions .their relation is given as to discuss the secrecy , we formulate a code .let and be the message set and the set of values of the scramble random number .then , an encoder is given as a function from to , and the decoder is given as from to .our code is the pair , and is denoted by .then , we denote the message and the scramble random number by and .the cardinality of is called the size of the code and is denoted by . here, we treat as deterministic values , and denote the pairs and by and , respectively . in the following , we fix . as a measure of the leaked information, we adopt the mutual information between and eve s information and since the variable is given as a function of , we have .since the leaked information is given as a function of in this situation , we denote it by ] .this probabilistic setting expresses the situation that eve can not necessarily choose her position to attack by herself while she knows the position , and chooses her strategy dependently of the position .now , we have the following theorem . 
for any eve sstrategy , eve s information with strategy and that with strategy can be simulated by each other .hence , we have the equation = i(m;y_e^n)[\phi_n,\bm{k},\bm{h},\alpha].\end{aligned}\ ] ] this theorem shows that the information leakage of the active attack with the strategy is the same as the information leakage of the passive attack .hence , to guarantee the secrecy under an arbitrary active attack , it is sufficient to show the secrecy under the passive attack .we define two random variables and .eve can simulate her outcome under the strategy from , , and .so , we have .conversely , since is given as a function of , , and , we have the opposite inequality . to compare the passive attack with the active attack , we count the number of choices of both attacks . in the passive attack ,when we fix the rank of ( the dimension of leaked information ) , by taking account into the equivalent class , the number of the possible choices is upper bounded by . in the active attack case , this calculation is more complicated . for simplicity, we consider the case with . now, we do not count the choice for the inputs on the edge with because it does not effects eve s information . then , even when we fix the matrices , the number of choices of is where .notice that when . if we count the choice on the remaining edges , we need to multiply on . for general ,the number of choices of is the above analysis discusses the case when eve adds the error .however , eve might replace the information on the edges in by another elements , which is called the _ wiretap and replacement model_. this situation has different network structure . by using matrices , , and different from , , and ,the observations and are given as however , when the set is the same as the set , it is allowed to employ the original model and because any strategy with the model and can be written as a strategy with the original model and .theorem [ t1 ] discusses the unicast case .it can be trivially extended to the multicast case because we do not discuss the decoder .it also can be extended to the multiple unicast case , in which , there are several pairs of sender and receiver . when there are pairs in this setting , the random variables and have the forms and .so , we can apply the discussion of theorem [ t1 ] .to clarify the impossibility of theorem [ t1 ] under the non - linear network , we show a counter example composed of non - linear operations . we consider the network given in fig .[ f1 ] , whose edges are . each edge is assumed to send the binary information .the intermediate node makes the non - linear operation as to send the binary information , we prepare the binary uniform scramble random variable .we consider the following code .the encoder is given as the decoder is given as . since and are given as follows under this code ; the decoder can recover nevertheless the value of .now , we consider the leaked information for the passive attack .eve is allowed to attack two edges of except for the pairs and .the mutual information of these cases are calculated to in this section , we choose the base of the logarithm to be .now , we consider the active attack with of the above four cases .( i ) : : when , eve replaces by . then , because .( ii ) : : when , eve replaces by .then , because .( iii ) : : when or , eve has no good active attack . 
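both theorem [ t1 ] and counterexamples like the one above can be checked by exhaustive enumeration on small networks . the sketch below does this for a toy linear network over gf(2 ) ( our own illustrative choice , not the network of fig . [ f1 ] ) : the leaked information i(m;y_e ) is the same for the passive attack and for every deterministic strategy , as theorem [ t1 ] predicts , and replacing the xor at the intermediate node by a non - linear gate is exactly the kind of modification that can break this equality .

```python
from itertools import product
from collections import Counter, defaultdict
from math import log2

def mutual_information(joint):
    """I(M;Y_E) in bits, from a dict {(m, view): probability}."""
    pm, py = Counter(), Counter()
    for (m, y), p in joint.items():
        pm[m] += p
        py[y] += p
    return sum(p * log2(p / (pm[m] * py[y]))
               for (m, y), p in joint.items() if p > 0)

def eve_view(m, l1, l2, strategy):
    """Toy linear network over GF(2): one message bit m, two scramble bits l1, l2.
    Alice sends a1 = l1, a2 = l2, a3 = m ^ l1; a node forwards e4 = a2 ^ a3 to Bob
    (who decodes m = a1 ^ a2 ^ e4).  Eve wiretaps a1 and e4, and may inject a bit z
    on a2 after having seen a1 (a causal strategy)."""
    a1, a2, a3 = l1, l2, m ^ l1
    z = strategy(a1)
    e4 = (a2 ^ z) ^ a3
    return (a1, e4)

def leakage(strategy):
    joint = defaultdict(float)
    for m, l1, l2 in product((0, 1), repeat=3):      # uniform message and scramble
        joint[(m, eve_view(m, l1, l2, strategy))] += 1 / 8
    return mutual_information(joint)

passive = leakage(lambda y1: 0)
for strategy in (lambda y1: 0, lambda y1: 1, lambda y1: y1, lambda y1: 1 ^ y1):
    assert abs(leakage(strategy) - passive) < 1e-12  # no deterministic injection helps Eve
print(passive)                                        # 0.0 bits for this toy code
```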
to see the difference from the linear case , we adopt a weak secrecy criterion in this section .when is eve s information and for all of eve s possible attacks , we say that the code is secure . otherwise , it is called insecure . when eve is allowed to use the above passive attack , shows that the code is secure .when eve is allowed to use the above active attack , the above calculation shows that the code is insecure .therefore , this example is a counterexample showing that theorem [ t1 ] does not hold without the linearity assumption .our analysis covers the optimal codes . as another encoder , we can consider replacing by ; the analysis can be reduced to the presented analysis .other encoders clearly leak the message to or . in this model , eve can perfectly contaminate the message .when eve takes the choice ( i ) and replaces by , bob s decoded message is . under the choice ( ii ) , eve can perfectly contaminate the message in a similar way .next , under the same assumption as section [ s2 ] , we consider the asymptotic setting by taking into account robustness as well as secrecy .we have assumed that the topology and dynamics of the network and the edge attacked by eve are not changed during transmission .now , we assume that eve knows these matrices and that alice and bob know none of them , because alice and bob often do not know the topology and dynamics of the network or the edge attacked by eve .when eve adds the error , there is a possibility that bob can not recover the original information .this problem is called the _ robustness _ , and may be regarded as a kind of error correction . under the conventional error correction , the error is treated as a random variable subject to the uniform distribution . however , our problem is different from the conventional error correction : since the decoding error probability depends on the strategy , we denote it by ] instead of ] .we define the conditional rényi entropy for the joint distribution as , which is often denoted by in .when obeys the uniform distribution , we have ( * ? ? ?* theorem 1 ) for ] .
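for completeness , such a conditional rényi entropy can be evaluated directly on a small joint distribution . since several inequivalent definitions exist in the literature , the convention implemented below , h_{1+s}(a|b ) = -(1/s ) log sum_{a , b } p(a , b)^{1+s } p(b)^{-s } , is an assumption of ours and is not necessarily the exact expression elided above .

```python
from collections import Counter
from math import log2

def conditional_renyi(joint, s):
    """H_{1+s}(A|B) = -(1/s) * log2( sum_{a,b} P(a,b)^{1+s} * P(b)^{-s} ),
    for a joint pmf {(a, b): probability} and s > 0."""
    pb = Counter()
    for (a, b), p in joint.items():
        pb[b] += p
    total = sum(p ** (1 + s) * pb[b] ** (-s)
                for (a, b), p in joint.items() if p > 0)
    return -log2(total) / s

# sanity check: if A is a uniform bit independent of B, H_{1+s}(A|B) = 1 bit for every s
joint = {(a, b): 0.25 for a in (0, 1) for b in (0, 1)}
print(conditional_renyi(joint, 0.5), conditional_renyi(joint, 1.0))
```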
a modified form of the toeplitz matrices is also shown to be universal , which is given by a concatenation of the toeplitz matrix and the identity matrix , where is the random seed to decide the toeplitz matrix and belongs to .the ( modified ) toeplitz matrices are particularly useful in practice , because there exists an efficient multiplication algorithm using the fast fourier transform algorithm with complexity . when the random seed is fixed , the encoder for our code is given as follows . by using the scramble random variable , the encoder is given as because .( the multiplication of toeplitz matrix can be performed as a part of circulant matrix .for example , the reference ( * ? ? ?* appedix c ) gives a method to give a circulant matrix . ) .more efficient construction for univeral2 hash function is discussed in .so , the decoder is given as .if we do not apply theorem [ t1 ] in , we have to multiply the number of the choices of the strategy . as the generalization of ,this number is given in , which glows up double - exponentially .hence , our proof of theorem [ t3 ] does not work without use of theorem [ t1 ] .we have discussed the effect by sequential error injection to eve s obtained information . as the result, we have shown that there is no improvement when the network is composed of linear operations . however ,when the network contains non - linear operations , we have found a counterexample to improve eve s obtained information .further , we have shown the achievability of the asymptotic rate for a linear network under the secrecy and robustness conditions when the transmission rate from alice to bob is , the rate of noise injected by eve is , and the rate of information leakage to eve is .the converse part of this rate is an interesting open problem .the works reported here were supported in part by the jsps grant - in - aid for scientific research ( c ) no . 16k00014 and ( b ) no .16kt0017 , the okawa research grant and kayamori foundation of informational science advancement .j. kurihara , r. matsumoto , and t. uyematsu , `` relative generalized rank weight of linear codes and its applications to network coding , '' _ ieee trans .theory _ , vol .61 , no . 7 , pp . 39123936 ( 2013 ) .t. ho , b. leong , r. koetter , m. mdard , m. effros , and d. r. karger , `` byzantine modification detection for multicast networks using randomized network coding , '' _ proc .2004 ieee int .information theory ( isit 2004 ) _ , chicago , il , june / july 2004 , p. 144 .s. jaggi , m. langberg , t. ho , and m. effros , `` correction of adversarial errors in networks , '' _ proc .2005 ieee int .information theory ( isit 2005 ) _ , adelaide , australia , sept . 2005 , pp .1455 - 1459 .s. jaggi , m. langberg , s. katti , t. ho , d. katabi , m. medard , and m. effros , `` resilient network coding in the presence of byzantine adversaries , '' _ ieee transactions on information theory _ , vol .54 , no . 6 , pp .25962603 , ( 2008 ) .s. jaggi and m. langberg , `` resilient network coding in the presence of eavesdropping byzantine adversaries , '' _ proc .2007 ieee int .information theory ( isit 2007 ) _ , nice , france , june 2007 , p. 541m. hayashi and t. tsurumaru , `` more efficient privacy amplification with less random seeds via dual universal hash function , '' _ ieee transactions on information theory _62 , no . 4 , pp .2213 2232 , ( 2016 ) . | in the network coding , we discuss the effect by sequential error injection to information leakage . 
we show that there is no improvement when the network is composed of linear operations . however , when the network contains non - linear operations , we find a counterexample to improve eve s obtained information . further , we discuss the asymptotic rate in the linear network under the secrecy and robustness conditions . secrecy analysis , secure network coding , sequential injection , passive attack , active attack |
this paper focuses on the average complexity of solving random 3-sat instances using backtrack algorithms .being an np - complete problem , 3-sat is not thought to be solvable in an efficient way , _i.e. _ in time growing at most polynomially with .in practice , one therefore resorts to methods that need , _ a priori _ , exponentially large computational resources .one of these algorithms is the ubiquitous davis putnam loveland logemann ( dpll ) solving procedure .dpll is a complete search algorithm based on backtracking ; its operation is briefly recalled in figure 1 .the sequence of assignments of variables made by dpll in the course of instance solving can be represented as a search tree , whose size ( number of nodes ) is a convenient measure of the hardness of resolution .some examples of search trees are presented in figure [ trees ] . in the past few years, much experimental and theoretical progress has been made on the probabilistic analysis of 3-sat .distributions of random instances controlled by few parameters are particularly useful in shedding light on the onset of complexity .an example that has attracted a lot of attention over the past years is random 3-sat : all clauses are drawn randomly and each variable negated or left unchanged with equal probabilities .experiments and theory indicate that clauses can almost surely always ( respectively never ) be simultaneously satisfied if is smaller ( resp .larger ) than a critical threshold as soon as the numbers of clauses and of variables go to infinity at a fixed ratio .this phase transition is accompanied by a drastic peak in hardness at threshold .the emerging pattern of complexity is as follows . at small ratios , where depends on the heuristic used by dpll , instances are almost surely satisfiable ( sat ) ,see and for recent reviews .the size of the associated search tree scales , with high probability , linearly with the number of variables , and almost no backtracking is present ( figure [ trees]a ) . above the critical ratio , that is when , instances are a.s .unsatisfiable ( unsat ) and proofs of refutation are obtained through massive backtracking ( figure [ trees]b ) , leading to an exponential hardness : with . in the intermediate range , ,finding a solution a.s . requires exponential effort ( ) .the aim of this article is two - fold .first , we propose a simple and intuitive framework to unify the above findings .this framework is presented in section 2 .it is based on the statistical physics notions of dynamical trajectories and phase diagram , and was , to some extent , implicitly contained in the pioneering analysis of search heuristics by . secondly , we present in section 3 a quantitative study of the growth of the search tree in the unsat regime .such a study has been lacking so far due to the formidable difficulty in taking into account the effect of massive backtracking on the operation of dpll .we first establish an exact relationship between the average size of the search tree and the powers of the evolution operator encoding the elementary steps of the search heuristic .this equivalence is then used ( in a non rigorous way ) to accurately estimate the logarithm of the average complexity as a function of , \quad , \ ] ] where denotes the expectation value for given and .the approach emphasizes the relevance of partial differential equations to analyse algorithms in presence of massive backtracking , as opposed to ordinary differential equations in the absence of the latter . 
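figure 1 is not reproduced here , but the splitting - plus - unit - propagation procedure it summarizes can be sketched in a few lines . the version below is a minimal illustration : it uses a naive splitting rule of our own ( not one of the heuristics analysed later ) , stops at the first solution found , and returns the number of leaves of the search tree , i.e. the hardness measure used throughout this paper .

```python
import random

def simplify(clauses, literal):
    """Set a literal to true: drop satisfied clauses, shorten the others.
    Returns None as soon as an empty clause (a contradiction) appears."""
    out = []
    for c in clauses:
        if literal in c:
            continue
        reduced = [l for l in c if l != -literal]
        if not reduced:
            return None
        out.append(reduced)
    return out

def dpll(clauses):
    """Return (satisfiable?, number of leaves of the search tree actually built)."""
    while True:                                   # unit propagation
        units = [c[0] for c in clauses if len(c) == 1]
        if not units:
            break
        clauses = simplify(clauses, units[0])
        if clauses is None:
            return False, 1                       # contradiction leaf
    if not clauses:
        return True, 1                            # solution leaf
    var = abs(clauses[0][0])                      # naive splitting rule
    sat, leaves = False, 0
    for literal in (var, -var):
        reduced = simplify(clauses, literal)
        if reduced is None:
            leaves += 1
            continue
        s, n = dpll(reduced)
        sat, leaves = sat or s, leaves + n
        if s:
            break                                 # stop at the first solution found
    return sat, leaves

def random_3sat(n, alpha):
    return [[random.choice((-1, 1)) * v for v in random.sample(range(1, n + 1), 3)]
            for _ in range(int(alpha * n))]

print(dpll(random_3sat(n=50, alpha=4.3)))         # near threshold, trees are already sizeable
```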
in section 4 , we focus upon the upper sat regime , _ i.e. _ upon ratios . combining the framework of section 2 and the analysis of section 3 , we unveil the structure of the search tree ( figure [ trees]c ) and calculate as a function of the ratio of the 3-sat instance to be solved . for the sake of clarity , and since the style of our approach may look unusual to the computer scientist reader , the status of the different calculations and results ( experimental , exact , conjectured , approximate , ... ) is made explicit throughout the article . the action of dpll on an instance of 3-sat causes changes to the overall numbers of variables and clauses , and thus to the ratio .furthermore , dpll reduces some 3-clauses to 2-clauses . a mixed 2+p - sat distribution , where is the fraction of 3-clauses , can be used to model what remains of the input instance at a node of the search tree .using experiments and methods from statistical mechanics , the threshold line , separating sat from unsat phases , may be estimated with the results shown in figure [ diag ] . for , _ i.e. _ to the left of point t , the threshold line is given by , as rigorously confirmed by , and saturates the upper bound for the satisfaction of 2-clauses .above , no exact value for is known .note that corresponds to .the phase diagram of 2+p - sat is the natural space in which the dpll dynamics takes place .an input 3-sat instance with ratio shows up on the right vertical boundary of figure [ diag ] as a point of coordinates . under the action of dpll , the representative point moves aside from the 3-sat axis and follows a trajectory . this trajectory obviously depends on the heuristic of split followed by dpll ( figure 1 ) .possible simple heuristics are , * _ unit - clause ( uc ) : _ randomly pick up a literal among a unit clause if any , or any unset variable otherwise . * _ generalized unit - clause ( guc ) : _ randomly pick up a literal among the shortest available clauses . * _ short clause with majority ( sc ) : _ randomly pick up a literal among unit clauses if any ; otherwise randomly pick up an unset variable , count the numbers of occurrences of , in 3-clauses , and choose ( respectively ) if ( resp . ) . when , and are equally likely to be chosen .( a small computational sketch of these three rules is given below . ) rigorous mathematical analysis , undertaken to provide rigorous bounds to the critical threshold , has so far been restricted to the action of dpll prior to any backtracking , that is , to the first descent of the algorithm in the search tree .the corresponding search branch is drawn on figure [ trees]a .these studies rely on the two following facts : first , the representative point of the instance treated by dpll does not `` leave '' the 2+p - sat phase diagram .in other words , the instance is , at any stage of the search process , uniformly distributed according to the 2+p - sat distribution conditioned on its clause per variable ratio and fraction of 3-clauses .this assumption is not true for all heuristics of split , but holds for the above examples ( , , ) .analysis of more sophisticated heuristics requires handling more complex instance distributions .secondly , the trajectory followed by an instance in the course of resolution is a stochastic object , due to the randomness of the instance and of the assignments done by dpll .
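the three branching rules listed above can be written as literal - selection functions acting on the current list of clauses , as in the sketch below ; tie - breaking and other minor conventions are our own choices , and the functions are meant to be plugged into a dpll skeleton such as the one sketched above .

```python
import random
from collections import Counter

def pick_uc(clauses, unset_vars, rng=random):
    """unit-clause (uc): a literal of a unit clause if any, else a random unset variable."""
    units = [c for c in clauses if len(c) == 1]
    if units:
        return rng.choice(units)[0]
    v = rng.choice(sorted(unset_vars))
    return rng.choice((v, -v))

def pick_guc(clauses, unset_vars, rng=random):
    """generalized unit-clause (guc): a random literal of one of the shortest clauses."""
    if not clauses:
        return pick_uc(clauses, unset_vars, rng)
    shortest = min(len(c) for c in clauses)
    return rng.choice(rng.choice([c for c in clauses if len(c) == shortest]))

def pick_sc(clauses, unset_vars, rng=random):
    """short clause with majority (sc): unit clauses first; otherwise pick a random
    unset variable and give it the sign occurring more often in the 3-clauses."""
    units = [c for c in clauses if len(c) == 1]
    if units:
        return rng.choice(units)[0]
    v = rng.choice(sorted(unset_vars))
    counts = Counter(l for c in clauses if len(c) == 3 for l in c)
    if counts[v] != counts[-v]:
        return v if counts[v] > counts[-v] else -v
    return rng.choice((v, -v))
```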
in the large size limit ( ) , this trajectory gets concentrated around its average locus in the 2+p - sat phase diagram .this concentration phenomenon results from general properties of markov chains .let us briefly recall analysis of the average trajectory corresponding to the action of dpll prior to backtracking .the ratio of clauses per variable of the 3-sat instance to be solved will be denoted by .the numbers of 2 and 3-clauses are initially equal to respectively . under the action of dpll , and follow a markovian stochastic evolution process , as the depth along the branch ( number of assigned variables ) increases .both and are concentrated around their expectation values , the densities |u\rangle , |t\rangle , latexmath:[\[{\bf h } = \left ( \begin{array}{c c c } 0 & 0 & 0 \\ \frac 12 & 1 & 0 \\ \frac 12 & 0 & 1 \end{array }\right ) \ \hbox{\rm with } \quad entries can be interpreted as follows .starting from the state , variable will be set through unit - propagation to or with equal probabilities : . once the variable has reached this state , the instanceis violated : .all other entries are null . in particular ,state can never be reached from any state , so the first line of the matrix is filled in with zeroes : .function ( [ funcq ] ) is easily calculated therefore , .indeed , refutation is obtained without any split , and the search tree involves a unique branch of length 1 ( figure [ tree - exemple]a ) . our next example is a 2-sat instance whose refutation requires to split one variable .instance over variables , with a unique refutation tree . the evolution matrix is a matrix with 16 non zero entries , we now explain how these matrix elements were obtained . from the undetermined state ,any of the four clause can be chosen by the heuristic .thus , any of the two literals , has a probability to be chosen : .next , unit - propagation will set the unassigned variable to true , or false with equal probabilities ( [ m22 ] ) .finally , entries corresponding to violating states in eqn ( [ m23 ] ) are calculated according to rule ( [ defmu ] ) .the branch function equals 1 for , 2 for any ; thus , and , in agreement with the associated search tree symbolized in figure [ tree - exemple]b .we now introduce an instance with a non unique refutation tree .instance with variables , and two refutation trees . notice the presence of a ( trivial ) clause containing opposite literals , which allows us to obtain a variety in the search trees without considering more than three variables .the evolution matrix is a matrix with 56 non zero entries ( for the guc heuristic ) , the first split variable is if the last clause is chosen ( probability ) , or or otherwise ( with probability each ) , leading to expressions ( [ m31 ] ) and ( [ m32 ] ) .the remaining entries of are obtained in the same way as explained in example 10 .we obtain , and .therefore , and where the different contributions to and their probabilities are explicitely written down , see figures [ tree - exemple]b and [ tree - exemple]c . , and .grey and black nodes correspond to variables assigned through unit - propagation and split respectively , as in figure [ struct ] .* a*. example 9 : refutation of instance is obtained as a result of unit - propagation .the size ( number of leaves ) of the search tree is . * b*. example 10 : search tree generated by dpll on instance .the black and grey node correspond to the split of and unit - propagation over , or vice - versa .the size of the tree is . * c*. 
example 11 : search tree corresponding to the instance when dpll first splits variable .the size of the tree is .if the first split variable is or , the refutation search tree of instance corresponds to case * b*. , title="fig : " ] .5 cm let us denotes by the expectation value of a function of the instance over the random 3-sat distribution , at given numbers of variable , , and clauses , . from theorem 6 , the expectation value of the size of the refutation tree is calculation of the expectation value of the power of is a hard task that we were unable to perform for large sizes .we therefore turned to a simplifying approximation , hereafter called dynamical annealing .this approximation is not thought to be justified in general , but may be asymptotically exact in some limiting cases we will expose later on . a first temptation is to approximate the expectation of the power of with the power of the expectation of .this is however too a brutal approximation to be meaningful , and a more refined scheme is needed .clause projection operator consider an instance of the 3-sat problem .the clause vector of a partial state is a three dimensional vector where is the number of undetermined clauses of of type .the clause projection operator , , is the operator acting on and projecting onto the subspace of partial state vectors with clause vectors , \ function .the sum of all state vectors in the spanning basis with clause vector is denoted by .the sum of all state vectors in the spanning basis with clause vector and undetermined variables is denoted by .it is an easy check that is indeed a projection operator : .as the set of partial states can be partitioned according to their clause vectors , we now introduce the clause vector - dependent branch function summation of the s over all gives back function ( [ funcq ] ) from identity ( [ identriv ] ) .the evolution equation for is , where we have made use of identities ( [ closure ] ) and ( [ identriv ] ) .we are now ready to do the two following approximation steps : dynamical annealing ( step a ) substitute in equation ( [ ap2 ] ) the partial state vector and undetermined variables .following step a , equation ( [ ap2 ] ) becomes an approximated evolution equation for , \ ; b ( \vec c , t ) \quad , \ ] ] where the new evolution matrix , not to be confused with , is = \frac { \langle \sigma ( \vec c)| { \bf h } | \sigma _ { n - t}(\vec c ' ) \rangle } { \langle \sigma | \sigma _ { n - t}(\vec c ' ) \rangle } \quad .\ ] ] then , dynamical annealing ( step b ) substitute in equation ( [ ap3 ] ) the evolution matrix with = \frac { \overline{\langle \sigma ( \vec c)| { \bf h } | \sigma _ { n - t}(\vec c ' ) \rangle}}{\overline{\langle \sigma | \sigma _ { n - t}(\vec c ' ) \rangle } } \ ] ] that is , consider the instance is redrawn at each time step , keeping information about clause vectors at time only .let us interpret what we have done so far .the quantity we focus on is , the expectation number of branches at depth in the search tree ( figure [ struct ] ) carrying partial states with clause vector . within the dynamical annealing approximation , the evolution of the s is markovian , \ ; \bar b(\vec c';t ) \ .\ ] ] the entries of the evolution matrix $ ] can be interpreted as the average number of branches with clause vector that dpll will generate through the assignment of one variable from a partial assignment ( partial state ) of variables with clause vector . 
for the guc heuristic, we find , = { c_3 ' \choose c_3'-c_3 } \ ; \left ( \frac 3{n - t}\right)^{c_3'-c_3 } \ ; \left(1-\frac 3{n - t}\right)^{c_3 } \times \nonumber \\ & & \qquad \qquad \qquad \qquad \sum_{w_2=0}^{c_3'-c_3 } \left ( \frac 1 2 \right)^{c_3'-c_3 } { c_3'-c_3\choose w_2}\times \nonumber \\ & & \left \{(1 - \delta _ { c'_1 } ) \ ; \left ( 1 - \frac1{2(n - t ) } \right)^{c'_1 - 1 } \sum_{z_2=0}^{c_2 ' } { c_2 ' \choose z_2 } \left ( \frac 2{n - t}\right)^{z_2 } \right .\times \nonumber \\ & & \left ( 1- \frac 2{n - t}\right)^{c_2'-z_2 } \sum_{w_1=0}^{z_2 } \left ( \frac 1 2 \right)^{z_2 } { z_2\choose w_1}\ ; \delta_{c_2-c_2'-w_2+z_2 } \;\delta_{c_1-c_1'-w_1 + 1 } + \nonumber \\ & & \delta_{c_1 ' } \sum_{z_2=0}^{c_2'-1 } { c_2'-1 \choose z_2 } \left ( \frac 2{n - t}\right)^{z_2}\ , \left ( 1- \frac 2{n - t}\right)^{c_2'-1-z_2 } \times \nonumber \\ & & \left .\sum_{w_1=0}^{z_2 } \left(\frac 1 2 \right)^{z_2 } { z_2\choose w_1}\ ; \delta_{c_2-c_2'-w_2+z_2 + 1 } \ ; [ \delta_{c_1-w_1 } + \delta_{c_1 - 1-w_1 } ] \right\ } \ , \end{aligned}\ ] ] where denotes the kronecker delta function over integers : if , otherwise .expression ( [ bbra ] ) is easy to obtain from the interpretation following equation ( [ bradp ] ) .let us introduce the generating function of the average number of branches where , through evolution equation ( [ ap3 ] ) for the s can be rewritten in term of the generating function , where is a vectorial function of argument whose components read \quad , \nonumber\\ \gamma_2({\vec y})&=&y_2+\ln\left[1+\frac 2 { n - t } \left ( \frac{e^{-y_2}}{2 } \left(1 + e^{y_1}\right ) -1\right)\right ] \quad , \nonumber \\ \gamma_3 ( { \vec y})&=&y_3+\ln\left[1+\frac 3 { n - t } \left ( \frac{e^{-y_3}}{2 } \left(1 + e^{y_2}\right ) -1\right)\right ] \quad .\end{aligned}\ ] ] to solve equation ( [ eqev ] ) , we infer the large behaviour of from the following remarks : 1 . each time dpll assigns variables through splitting or unit - propagation , the numbers of clauses of length undergo changes .it is thus sensible to assume that , when the number of assigned variables increases from to with very large but e.g. , the densities and of 2- and 3-clauses have been modified by .2 . on the same time interval , we expect the number of unit - clauses to vary at each time step .but its distribution , conditioned to the densities , and the reduced time , should reach some well defined limit distribution .this claim is a generalization of the result obtained by for the analysis of the guc heuristic in the absence of backtracking .as long as a partial state does not violate the instance , very few unit - clauses are generated , and splitting frequently occurs . in other words , the probability that is strictly positive as gets large .the above arguments entice us to make the following asymptotic expression for the generating function for large at fixed ratio , the generating function ( [ gener ] ) of the average numbers of branches is expected to behave as \quad .\ ] ] hypothesis ( [ scalinghyp2 ] ) expresses in a concise way some important information on the distribution of clause populations during the search process that we now extract .call the legendre transform of , \qquad .\label{inversion}\ ] ] then , combining equations ( [ gener ] ) , ( [ scalinghyp2 ] ) and ( [ inversion ] ) , we obtain \quad , \ ] ] up to non exponential in corrections . 
in other words ,the expectation value of the number of branches carrying partial states with undetermined variables and -clauses ( ) scales exponentially with , with a growth function related to through identity ( [ inversion ] ) .moreover , is the logarithm of the number of branches ( divided by ) after a fraction of variables have been assigned .the most probable values of the densities of -clauses are then obtained from the partial derivatives of : for .let us emphasize that in equation ( [ scalinghyp2 ] ) does not depend on .this hypothesis simply expresses that , as far as non violating partial states are concerned , both terms on the right hand side of ( [ eqev ] ) are of the same order , and that the density of unit - clauses , , identically vanishes . similarly , function is related to the generating function of distribution , where ( ) on the left hand side of the above formula .inserting expression ( [ scalinghyp2 ] ) into the evolution equation ( [ eqev ] ) , we find \frac{\partial \varphi } { \partial y_2 } ( y_2,y_3;t ) \nonumber \\ & + & \frac 3{1-t } \left [ e^{-y_3 } \left ( \frac{1+e^{y_2}}2 \right ) -1 \right ] \frac{\partial \varphi } { \partial y_3 } ( y_2,y_3;t ) \nonumber \\ & + & \ln \left [ 1 + k(y_1,y_2 ) \ ; e^{\psi ( -\infty , y_2,y_3;t ) - \psi(y_1,y_2,y_3;t ) } \right]\end{aligned}\ ] ] where .as does not depend upon , the latter may be chosen at our convenience e.g. to cancel and the contribution from the last term in equation ( [ mdar ] ) , such a procedure , sometimes called kernel method and , to our knowledge , first proposed by , is correct in the major part of the space and , in particular , in the vicinity of we focus on in this paper space ; a complete analysis of this case was carried out by . ] .we end up with the following partial differential equation ( pde ) for , \quad , \ ] ] where incorporates the details of the splitting heuristic+ \frac{c_2}{1-t } \ ; \left ( \frac 32 e^{-y_2 } -2 \right ) \qquad .\ ] ] ] , & = & -y_1(y_2 ) + \frac { 3\ , c_3}{1-t}\ ; \left [ e^{-y_3}\;\left ( \frac{1+e^{y_2}}{2}\right ) -1 \right]\nonumber \\ & + & \frac{c_2}{1-t } \ ; \left ( e^ { -y_1(y_2 ) } -2 \right ) \qquad .\end{aligned}\ ] ] we must therefore solve the partial differential equation ( pde ) ( [ croi2 ] ) with the initial condition , obtained through inverse legendre transform ( [ inversion ] ) of the initial condition over , or equivalently over , .5 cm we can interpret the dynamical annealing approximation made in the previous paragraphs , and the resulting pde ( [ croi2 ] ) as a description of the growth process of the search tree resulting from dpll operation .using legendre transform ( [ inversion ] ) , pde ( [ croi2 ] ) can be written as an evolution equation for the logarithm of the average number of branches with parameters as the depth increases , \qquad .\label{croi}\ ] ] partial differential equation ( pde ) ( [ croi ] ) is analogous to growth processes encountered in statistical physics .the surface , growing with `` time '' above the plane , or equivalently from ( [ change ] ) , above the plane ( figure [ dome ] ) , describes the whole distribution of branches .the average number of branches at depth in the tree equals where is the maximum over of reached in . 
in other words ,the exponentially dominant contribution to comes from branches carrying 2+p - sat instances with parameters , that is clause densities , .parametric plot of as a function of defines the tree trajectories on figure [ diag ] .the hyperbolic line in figure [ diag ] indicates the halt points , where contradictions prevent dominant branches from further growing .each time dpll assigns a variable through unit - propagation , an average number of new 1-clauses is produced , resulting in a net rate of additional 1-clauses .as long as , 1-clauses are quickly eliminated and do not accumulate .conversely , if , 1-clauses tend to accumulate . opposite 1-clauses and are likely to appear , leading to a contradiction .the halt line is defined through , and reads , \;\frac 1{1-p } \qquad .\ ] ] it differs from the halt line corresponding to a single branch .as far as dominant branches are concerned , an alternative and simpler way of obtaining the halt criterion is through calculation of the probability that a split occurs when a variable is assigned by dpll , from equations ( [ psigen],[mdar ] ) .the probability of split vanishes , and unit - clauses accumulate till a contradiction is obtained , when the tree stops growing . along the tree trajectory, grows thus from 0 , on the right vertical axis , up to some final positive value , , on the halt line . is our theoretical prediction for the logarithm of the complexity ( divided by ) to match the definition used for numerical experiments ; this is done in table 1 ] .equation ( [ croi ] ) was solved using the method of characteristics .using eqn .( [ change ] ) , we have plotted the surface at different times , with the results shown in figure [ dome ] for .values of , obtained for by solving equation ( [ croi ] ) compare very well with numerical results ( table 1 ) .we stress that , though our calculation is not rigorous , it provides a very good quantitative estimate of the complexity .it is therefore expected that our dynamical annealing approximation be quantitavely accurate .it is a reasonable conjecture that it becomes exact at large ratios , where pde ( [ croi2 ] ) can be exactly solved : asymptotic equivalent of for large ratios resolution of pde ( [ croi ] ) in the large ratio limit gives ( for the guc heuristic ) , ^ 2 \ ; \frac 1{\alpha_0 } \quad .\ ] ] this result exhibits the scaling proven by , and is conjectured to be exact . as increases , search trees become smaller and smaller , and correlations between branches , weaker and weaker , making dynamical annealing more and more accurate . ) .dpll starts with a satisfiable 3-sat instance and transforms it into a sequence of 2+p - sat instances .the leftmost branch in the tree symbolizes the first descent made by dpll . above node , instances are satisfiable while below , instances have no solutions . 
a grey triangle accounts for the ( exponentially ) large refutation subtree that dpll has to go through before backtracking above and reaching .by definition , the highest node reached back by dpll is .further backtracking , below , will be necessary but a solution will be eventually found ( right subtree ) , see figure [ trees]c.,title="fig : " ] .3 cmthe interest of the trajectory framework proposed in this paper is best seen in the upper sat phase , that is , for ratios ranging from to .this intermediate region juxtaposes branch and tree behaviors , see search tree in figures [ trees]c and [ treeinter ] .the branch trajectory , started from the point corresponding to the initial 3-sat instance , hits the critical line at some point g with coordinates ( ) after variables have been assigned by dpll , see figure [ diag ] .the algorithm then enters the unsat phase and , with high probability , generates a 2+p - sat instance with no solution . a dense subtree that dpll has to go through entirely , forms beyond g till the halt line ( left subtree in figure [ treeinter ] ) .the size of this subtree can be analytically predicted from the theory exposed in section 3 .all calculations are identical , except initial condition ( [ initphi ] ) which has to be changed into as a result we obtain the size of the unsatisfiable subtree to be backtracked ( leftmost subtree in figure [ treeinter ] ) . denotes the number of undetermined variables at point . is the highest backtracking node in the tree ( figures [ trees]c and [ treeinter ] ) reached back by dpll , since nodes above g are located in the sat phase and carry 2+p - sat instances with solutions .dpll will eventually reach a solution .the corresponding branch ( rightmost path in figure [ trees]c ) is highly non typical and does not contribute to the complexity , since almost all branches in the search tree are described by the tree trajectory issued from g ( figure [ diag ] ) .we expect that the computational effort dpll requires to find a solution will , to exponential order in , be given by the size of the left unsatisfiable subtree of figure [ treeinter ] .in other words , massive backtracking will certainly be present in the right subtree ( the one leading to the solution ) , and no significant statistical difference is expected between both subtrees .we have experimentally checked this scenario for .the average coordinates of the highest backtracking node , ) , coincide with the computed intersection of the single branch trajectory ( section 2.2 ) and the estimated critical line . as for complexity ,experimental measures of from 3-sat instances at , and of from 2 + 0.78-sat instances at , obey the expected identity and are in very good agreement with theory ( table 1 ) .therefore , the structure of search trees corresponding to instances of 3-sat in the upper sat regime reflects the existence of a critical line for 2+p - sat instances .in this paper , we have exposed a procedure to understand the complexity pattern of the backtrack resolution of the random satisfiability problem ( figure [ sche ] ) .main steps are : 1 .identify the space of parameters in which the dynamical evolution takes place ; this space will be generally larger than the initial parameter space since the algorithm modifies the instance structure .while the distribution of 3-sat instances is characterized by the clause per variable ratio only , another parameter accounting for the emergence of 2-clauses has to be considered .2 . 
divide the parameter space into different regions ( phases ) depending on the output of the resolution e.g. sat / unsat phases for 2+p - sat .3 . represent the action of the algorithm as trajectories in this phase diagram .intersection of trajectories with the phase boundaries allow to distinguish hard from easy regimes ( figure [ sche ] ) .in addition , we have also presented a non rigorous study of the search tree growth , which allows us to accurately estimate the complexity of resolution in presence of massive backtracking . from a mathematical point of view , it is worth noticing that monitoring the growth of the search tree requires a pde , while odes are sufficient to account for the evolution of a single branch .an interesting question raised by this picture is the robustness of the polynomial / exponential crossover point t ( figure 3 ) .while the ratio separating easy ( polynomial ) from hard ( exponential ) resolutions depends on the heuristics used by dpll ( , ) , t appears to be located at the same coordinates for all three uc , guc , and sc heuristics . from a technical point of view, the robustness of t comes from the structure of the odes ( [ ode ] ) .the coordinates of t , and the time at which the branch trajectory issued from hits the critical line tangentially , obey the equations with .the set of odes ( [ ode ] ) , combined with the previous conditions , gives .this robustness explains why the polynomial / exponential crossover location of critically constrained 2+p - sat instances , which should _ a priori _depend on the algorithm used , was found by to coincide roughly with the algorithm independent , tricritical point on the line .our approach has already been extended to other decision problems , e.g. the vertex covering of random graphs or the coloring of random graphs ( see for recent rigorous results on backtracking in this case ) .it is important to stress that it is not limited to the determination of the average solving time , but may also be used to capture its distribution and to understand the efficiency of restarts techniques .finally , we emphasize that theorem 6 relates the computational effort to the evolution operator representing the elementary steps of the search heuristic for a _ given _ instance .it is expected that this approach will be useful to obtain results on the average - case complexity of dpll at fixed instance , where the average is performed over the random choices done by the algorithm only .we thank j. franco for his constant support during the completion of this work .r. monasson was in part supported by the aci jeunes chercheurs `` algorithmes doptimisation et systmes dsordonns quantiques '' from the french ministry of research .coarfa , c. , dernopoulos , d.d ., san miguel aguirre , a. , subramanian , d. and vardi , m.y .random 3-sat : the plot thickens . in r. dechter , editor , _ proc .principles and practice of constraint programming ( cp2000 ) _ , lecture notes in computer science 1894 , 143 - 159 ( 2000 ) .cocco , s. and monasson , r. trajectories in phase diagrams , growth processes and computational complexity : how search algorithms solve the 3-satisfiability problem , _ phys . rev .* 86 * , 1654 ( 2001 ) ; analysis of the computational complexity of solving random satisfiability problems using branch and bound search algorithms , _ eur .j. b _ * 22 * , 505 ( 2001 ) . cocco , s. and monasson r. 
exponentially hard problems are sometimes polynomial , a large deviation analysis of search algorithms for the random satisfiability problem , and its application to stop - and - restart resolutions , _ phys . rev .e _ * 66 * , 037101 ( 2002 ) .crawford , j. and auton , l. experimental results on the cross - over point in satisfiability problems , _ proc .11th natl .conference on artificial intelligence ( aaai-93 ) , _ 2127 , the aaai press / mit press , cambridge , ma ( 1993 ) ; _ artificial intelligence _ * 81 * ( 1996 ) .gent , i. , van maaren , h. and walsh , t. ( eds ) .sat2000 : highlights of satisfiability research in the year 2000 , _ frontiers in artificial intelligence and applications _63 , ios press , amsterdam ( 2000 ) .gu , j. , purdom , p.w . , franco , j. and wah , b.w .algorithms for satisfiability ( sat ) problem : a survey ._ dimacs series on discrete mathematics and theoretical computer science _ * 35 * , 19 - 151 , american mathematical society ( 1997 ) .mitchell , d. , selman , b. and levesque , h. hard and easy distributions of sat problems , _ proc . of the tenth natl .conf . on artificial intelligence ( aaai-92 )_ , 440 - 446 , the aaai press / mit press , cambridge , ma ( 1992 ) .monasson , r. , zecchina , r. , kirkpatrick , s. , selman , b. and troyansky , l. determining computational complexity from characteristic phase transitions. _ nature _ * 400 * , 133137 ( 1999 ) ; 2+p - sat : relation of typical - case complexity to the nature of the phase transition , _ random structure and algorithms _ * 15 * , 414 ( 1999 ) . | an analysis of the average - case complexity of solving random 3-satisfiability ( sat ) instances with backtrack algorithms is presented . we first interpret previous rigorous works in a unifying framework based on the statistical physics notions of dynamical trajectories , phase diagram and growth process . it is argued that , under the action of the davis putnam loveland logemann ( dpll ) algorithm , 3-sat instances are turned into -sat instances whose characteristic parameters ( ratio of clauses per variable , fraction of 3-clauses ) can be followed during the operation , and define resolution trajectories . depending on the location of trajectories in the phase diagram of the 2+p - sat model , easy ( polynomial ) or hard ( exponential ) resolutions are generated . three regimes are identified , depending on the ratio of the 3-sat instance to be solved . lower sat phase : for small ratios , dpll almost surely finds a solution in a time growing linearly with the number of variables . upper sat phase : for intermediate ratios , instances are almost surely satisfiable but finding a solution requires exponential time ( with ) with high probability . unsat phase : for large ratios , there is almost always no solution and proofs of refutation are exponential . an analysis of the growth of the search tree in both upper sat and unsat regimes is presented , and allows us to estimate as a function of . this analysis is based on an exact relationship between the average size of the search tree and the powers of the evolution operator encoding the elementary steps of the search heuristic . satisfiability , analysis of algorithms , backtrack . |
continued evolution in technology has made computing devices an integral part of one s life .the most common manifestation of this is the ` mobile phone ' .modern technology has transformed the mobile phone from a mere communication device to a versatile computing device .these hand - held devices have enabled us not only to access information , but also to provide information to others on the move .modern mobile phones , equipped with powerful sensors , have endowed capabilities to provide and create near real - time information .this real - time information is useful for oneself and for others .an established approach for sharing and provision of information and creating useful applications in a distributed environment is service oriented architecture ( soa ) . realizing soa over mobile devices has the potential to convert mobile phones owned by common people from mere _ information subscribers _ to _ information providers _ and beyond .the major advantage of this is that it can be used in scenarios where there is little or no preexisting infrastructure .examples of such scenarios include war - front , post - disaster relief management . in such scenarios ,mobile based soa has the potential to enable ground teams to provide runtime information to commanding units , help teams at disaster sites to exchange data , analyse damage and examine various statistics using mobile devices . in such systems of soa over mobile devicesall three elements of the soa triangle : service providers , service consumers , and service registries are realised over mobile devices .web services are the proven way towards implementation of a `` service oriented architecture '' .advancement in mobile device technology has motivated researchers to explore the possibilities of effectively hosting web services over mobile devices , and thereby trying to realize service oriented systems in mobile environments. there has been substantial work towards enabling mobile devices to host web services .an important aspect of service oriented systems , `` service discovery '' , however , remains a challenge in mobile environments .there is literature available on service discovery for distributed environments , but one catering specifically to mobile environments is still lacking .several challenges specific to hosting web services over mobile devices need to be taken into account in such service discovery mechanisms .these include , but are not limited to battery and network constraints , limited computational power of mobile devices .moreover , such dynamic mobile services are prone to uncertainty ( owing to network outage , battery issues , physical damage ) and frequent changes in functionality ( primarily owing to the change of context ) , and hence make frequent service updates a necessity to effectively function as web - services .the role of the _ service registry _ therefore , becomes one of prominence to properly manage such dynamism .traditional service registry solutions for web - services such as uddi , ebxml , can not be directly utilised in such environments that require frequent updates .what contributes to this is the exhaustive data model of such registry offerings that is hard to analyze and parse for mobile devices at run - time . to the best of our knowledge ,the current work is the first attempt to comprehensively investigate these issues and design a dynamic service registry that facilitates service discoveries in mobile environments . 
as mentioned earlier, the ultimate aim is to realize a service oriented architecture over mobile devices without involving high end servers .hence , the proposed architecture provides all registry related information and operations using mobile devices itself , without requiring high - end computers or high management costs .further , in order to support scalability , fault tolerance , and fault localization , we propose a distributed and category based service registry . to demonstrate the feasibility of the approach, we have engineered a prototype deployment .this includes heterogeneous and loosely coupled mobile devices deployed in a collaborative manner to manage the service registry along with native hosted services .we also compare the proposed approach with the traditional uddi system for managing service registry from the perspective of mobile devices .the evaluation shows propitious results in favour of our approach wherein the latter is shown to have acceptable battery requirements , low data communication costs , promising scalability , and little or no hindrance to the working of native applications of mobile devices . this work is a significant extension of our previous work . in our previous work , we had introduced an xmpp based model to maintain a service registry for mobile environments .the main focus of the work was to introduce the registry architecture and the communication mechanism followed during service discovery from the registry . in the presented work, we provide a holistic service registry framework that makes use of xmpp based service registry framework at the core .we defined the roles for mobile devices involved in the service registry framework to provide a scalable mobile registry solution .we have further extended the service registry operations to cater the specific needs of the mobile environment .we further provide detailed descriptions of the various registry operations that facilitate the realisation of a dynamic mobile service registry .we further evaluated the proposed approach by realizing it through a working prototype and deployed it over mobile devices of volunteers .we further present a detailed literature survey that covers various categories of service registries .the rest of the paper is organized as follows : section [ sec : motive ] presents a motivation and requirements for the novel approach .section [ sec : arch ] provides details of the proposed approach and design concepts .various registry operations are discussed in section [ subsec : operation ] .prototype implementation details and inline comparison with uddi are presented in section [ sec : implementation ] .section [ sec : evaluation ] includes notes on the experimental evaluation of the approach .this is followed by section [ sec : related ] that presents a survey of related work .finally , section [ sec : conclude ] concludes the paper with a brief discussion on future possibilities .kotler et al . suggested services as `` activities or benefits offered for sale that are essentially intangible and do not result in the ownership of anything '' .a mobile service defined in this work is a service that is offered from mobile phones of providers ; this may also include information provided by the mobile sensors , third party software , or human users .this allows different machines to exchange information with each other over a network , without necessarily requiring a user interface . 
in general ,the service may be a component or sub - part of the web application that is usually used by human users .for example , a chatting web application provides gui to human users to communicate with another human .while a presence service embedded in the web application detects the presence of other machines , this presence service does not require any human intervention ._ alice is a high risk cardiovascular patient .recently , she got an ecg sensor implanted in her body that monitors her cardiovascular health and provides statistics and information as a mobile service via her mobile phone .this service can be consumed by her cardiologist and she can be provided with proper prescriptions as per her current health .one day she had a sudden cardiac arrest on her way to another city .alarming variations in her ecg signals were observed by the service on her mobile device and the service discovered the nearest ambulance through the latter s exposed mobile service .further , her mobile service automatically provided access to her latest ecg signals to the ambulance support medical staff and enabled them to prepare well in advance for the patient .the ambulance was able to discover her current location through another service on her mobile device that provided gps coordinates .further , when the ambulance was on its way , the ambulance s mobile service provided the doctors at the nearest hospital with the latest information on the situation . simultaneously , the hospital was able to make use of alice s ecg mobile service to gather her ecg history and prior to her arrival the doctors at the hospital had a chance to study her medical profile and case in detail . on its way to the hospital, the ambulance was able to make use of the services exposed by other travelers on their respective mobile devices to avoid the busy route and opt for the path with less traffic .meanwhile , the insurance company was contacted by alice s mobile service and her hospital information was provided , so that the financial aspect could be taken care of even before her arrival .alice s cardiologist was also able to provide details of his / her prescriptions via his / her mobile service to the doctors in the hospital so that the latter could learn about her medications and allergies if any . _ with rapid advancements in mobile technologies and wireless networking ,mobile devices have become perhaps the most suitable and economical solutions for the provision of dynamic , transient , contextual , personalized services .these mobile provisioned services can make the service access handy and convenient for service consumers .further , the provisioning of mobile services is an economical solution that requires little or no pre - existing infrastructure . in the discussed scenario , alice , her cardiologist , the ambulance, hospital staff can make use of each other s mobile services in critical situations and can provide assistance to alice .this explains the importance of mobile services ; subsequently , however , the above scenario also raises a question : `` how is the mobile service consumer able to discover the appropriate service among such large number of services and that too in an uncertain environment as a mobile environment ? '' .web services hosted on mobile devices are mainly useful for sharing contextual , personal , proximal information . mobile devices in such environments are mostly distributed arbitrarily and make service discovery and management of service registry a cumbersome process . 
in this section ,we discuss the need for a novel service registry architecture for mobile environments . in order to provide an effective service registry for mobile environments , two approaches are possible .the first is a classical centralized service registry approach where all information on the available mobile services is maintained at one place and this is usually over a powerful computing device ; the second approach is a decentralized service registry approach . here, the registries are maintained by a system of distributed nodes in such a way that each node caters to a fraction of the services and there is a large degree of redundancy . between these two approaches ,the decentralised service registry approach appears to be more appropriate for mobile environments .there are several reasons for this such as the issue of a single - point - of - failure in the case of a centralised system , the lack of a definite guarantee of continuous reliable connections between mobile devices and the central server , the difficulties of rapid and regular updates in large centralised registries thus giving rise to obsolete information and so on .there are , of course , drawbacks in the decentralised system as well .it is these drawbacks that we will discuss and attempt to overcome in the rest of this paper .cloud offloading is another approach that is often used for facilitating services over mobile devices . in cloud offloading , the service logic ,usually , resides on the cloud and the mobile devices may work as the proxy for these services .the associated concerns of cloud offloading ( significant network delay and latency , rigid sla requirements etc .) however do not make it a potential candidate for dynamic mobile service registry .sanaei et.al . discuss these challenges in detail .a decentralised service registry may be realised using either traditional service registry approaches that are commonly used in legacy wired systems ( such as uddi , ebxml ) or a new approach especially catering to the vagaries of mobile environments may be adopted .though possible , adopting traditional registry approaches such as uddi from w3c is ill suited to dynamic mobile environments .the traditional registry architecture comprises uddi data entities ( businessentity , businessservice , bindingtemplate , tmodel , publisherassertion , subscription ) , various uddi services and api sets , uddi nodes for supporting node api set , uddi registries .such a base architecture is quite ` heavyweight ' and makes it difficult to host uddi over mobile and resource constrained devices .further , services offered over mobile devices tend to behave in an anarchic manner ; as the changes in the functional , non - functional , other aspect of the services may be quite frequent owing to regular change in context and networking environment of the device .this requires frequent updates to the service registry .uddi , on the other hand , is designed around concepts of soap / wsdl , heavyweight technologies that make frequent updates a cumbersome process.as a consequence , the information on the uddi registry quickly becomes obsolete . a new approach ,therefore , is imperative for maintaining an effective registry system for mobile environments . before we get into detailed discussions on the proposed approach , here is a quick point - wise summary of the requirements for effective service registries for mobile environments .this is along the lines of dustdar et al . 
who did something similar for articulating general requirements for web service registries ._ r1 : management of transient web services : _ the very nature of mobile devices makes hosted web services repeatedly and randomly enter and leave the network .a service registry should be such that it supports such dynamic and frequent arrival and departure of service providers ._ r2 : lightweight : _ a service registry designed for mobile environments should be lightweight . a lightweight service registry would complement the power ( i.e. battery ) and computational constraints of mobile devices . furthermore , a lightweight service registry is agile and is easier to integrate with diversified mobile environments ._ r3 : minimum communication overhead : _ given the battery and network constraints in mobile devices , emphasis should be towards a registry system with minimum communication overhead ._ r4 : distributed service registry : _ as the number of mobile devices ( and therefore potential web services over these ) are increasing exponentially , a centralized service registry system has limited utility and gets outdated very quickly . hence , a distributed service registry system is required to support scalability ._ r5 : enabling run time search : _ an important enabler of mobile based service oriented systems is support for run time search .this is necessary owing to the frequent arrival of new and often more competent services and/or failure of existing services .conforming to the above points could potentially ensure a service registry suitable for mobile environments .in mobile based soa environments , each mobile device can perform the functionalities of both a service provider and a service consumer . as a service consumer, a mobile device discovers web - services and invokes them after negotiating with the providers . as a service provider , a mobile device hosts services and publishes hosted services with service registries .however , as stated earlier , an effective mobile registry to publish and discover such mobile services in dynamic environments is still lacking .the mobile services are provided by mobile devices and may be consumed by another mobile device in a peer to peer manner .our approach suggests a service registry system that comprises a light - weight registry server at each participating mobile device .the registry at each mobile device contains minimal information that is _ just _ sufficient to uniquely identify the registered entity .a registry server ( at each mobile device ) manages either of two types of registries : 1 .service registry : registered mobile services are managed in the service registry . the service registry contains an entry for each service as : _ service name , service access point , service i d , service description , service groups , availability , service location , service provider , other service information_. 2 . group registry : registered services are categorized into service groups . these service groups are managed in a group registry . the group registry contains the following information for a service group : _ group name , group domain , group description , registrant , groupid , group access point , other group information_. 
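as a concrete illustration of these two registries , the sketch below models one entry of each as a plain data structure . the field names follow the lists above ; the concrete types and the boolean encoding of availability are our own assumptions rather than part of the architecture .

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ServiceEntry:
    """One row of the service registry kept by a registry node."""
    service_name: str
    service_access_point: str        # e.g. a URI reachable on the provider device
    service_id: str
    service_description: str
    service_groups: List[str]
    availability: bool               # kept up to date by the registry node
    service_location: str
    service_provider: str
    other_service_information: Dict[str, str] = field(default_factory=dict)

@dataclass
class GroupEntry:
    """One row of the group registry kept by a navigator node."""
    group_name: str
    group_domain: str
    group_description: str
    registrant: str
    group_id: str                    # also used to address the whole service group
    group_access_point: str
    other_group_information: Dict[str, str] = field(default_factory=dict)
```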
the organization of registries , as proposed , is shown in figure [ fig : regent ] .the service information that is just enough to identify a registered service and that is less likely to change is kept in the registries and service information that is more likely to change but does not affect the discovery process of the service is kept in the vicinity of the provider .the service binding description , and contextual descriptions are provider specific and are likely to change in the mobile environment .therefore , these descriptions of the service are kept in close vicinity of the mobile service provider .close vicinity here implies that the description is hosted on the same mobile device as the service or a third party repository , where these descriptions can be updated rapidly .we define several registry related operations in section [ subsec : operation ] that are performed using xml streams .these xml streams are inspired by xmpp ( extensible messaging and presence protocol ) , a well known and established communication protocol .xmpp is already in wide use in mobile environments in several instant messaging applications .the proposed approach provides service registry operations that facilitate effective discovery of a service .details like the non - functional descriptions and quality of service values of the services have deliberately been kept out of the proposed system to make it as lightweight as possible and hence suitable for mobile environments .there are four primitive steps in the proposed mobile service registry approach that are presented in figure [ fig : approach ] : \1 ) _ mobile service registry access point is retrieved : _ as shown in figure [ fig : approach].a , a registry requester ( represented by a mobile icon with * m * ) accesses the public registry to retrieve the access point details of mobile service registry .2 ) _ mobile service registry is accessed via navigator nodes : _ as shown in figure [ fig : approach].b , the navigator nodes ( represented by mobile icons with * n * ) are contacted for the `` group registry '' .this group registry contains the list of service groups .service group of the required service provider is discovered in the group registry .3 ) _ service group is contacted via registry nodes : _ as shown in figure [ fig : approach].c , the registry nodes ( represented by mobile icon with * r * ) are accessed via groupid for retrieving the service provider s information .`` service registry '' is traversed for the required service provider .4 ) _ the service provider is contacted : _ as shown in the figure 2.d , finally , the required service provider is discovered and it is contacted for service negotiation and service binding . 
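read as a client - side procedure , the four steps above amount to the small lookup sketched below . the in - memory tables and uris are hypothetical placeholders standing in for the public registry , the group registry held by navigator nodes , and the service registry held by registry nodes ; a real deployment performs these exchanges over the xml / xmpp streams described later .

```python
# hypothetical in-memory stand-ins for the three registries involved in a lookup
PUBLIC_REGISTRY = {"mobile-service-registry": "xmpp://navigator.access.channel"}
GROUP_REGISTRY = [{"group_name": "hospital", "group_domain": "healthcare",
                   "group_id": "grp-001", "group_access_point": "multicast://grp-001"}]
SERVICE_REGISTRY = {"grp-001": [{"service_name": "ecg-monitor", "availability": True,
                                 "service_access_point": "http://device-42/ecg"}]}

def discover(keyword):
    # step 1: retrieve the mobile registry access point from the public registry
    access_point = PUBLIC_REGISTRY["mobile-service-registry"]
    # (the sketch does not use it further; steps 2-3 would be addressed to it)
    # step 2: a navigator node matches the request against the group registry
    group = next((g for g in GROUP_REGISTRY
                  if keyword in g["group_name"] or keyword in g["group_domain"]), None)
    if group is None:
        return None
    # step 3: the registry node of the group answers with matching, available services
    matches = [s for s in SERVICE_REGISTRY.get(group["group_id"], [])
               if s["availability"]]
    # step 4: the consumer contacts the provider directly for negotiation and binding
    return matches[0]["service_access_point"] if matches else None

print(discover("hospital"))   # -> http://device-42/ecg
```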
these steps are discussed in detail in the next subsection .we first start with the roles performed by the mobile devices .a mobile device potentially performs the following roles ( as shown in figure [ fig : approach].a ) : navigator node and registry node .navigator nodes are the entry points for the mobile registry architecture as shown in figure [ fig : approach ] .these navigator nodes are accessible by service consumers via public access points .we have devised the mobile registry architecture as a service itself .the mobile registry and its public access points can be registered with any global public registries just to make them globally discoverable ( as shown in figure [ fig : approach].a ) .the motive to use global public registry is to provide the access point details of the mobile registry architecture ; mobile devices would need to use the global public registry just once to retrieve the access point details of the architecture .( these global public registries could be any existing service registries as discussed in section [ sec : related ] .these registries are assumed to be well in place , hence is not discussed in detail .the global public registry is not suitable for mobile services , the reasons are already discussed in section [ subsec : need ] . ) there can be multiple navigator nodes connected to the public access points via a common access channel , as shown in figure [ fig : approach].b .the common access channel can be viewed as a communication bus that enables various mobile devices to communicate .the common access channel gives mobile devices the liberty to join and leave the network at any time without disturbing other navigator nodes .whenever a new mobile device joins as a navigator node , the group registry is updated / downloaded via the access channel and the shared domain ontology becomes accessible .the idea of a common access channel can be realized using existing networking technologies as suggested in rfc1112 and rfc5771 .navigator nodes are the mobile devices that manage the group registry ( refer figure [ fig : regent ] ) .as discussed in the earlier section , the group registry manages service groups .these service groups are uniquely identified by group identifiers or groupid .the group registry comprises the list of service groups present in the network along with the respective group identifiers and other group details .the navigator nodes are further responsible for categorizing the registered services into the various service groups and to navigate the service providers to the assigned service groups .navigator nodes rely on existing ontological approaches to categorize services on the basis of their domains of offering ( or offered service type ) .all navigator nodes have access to the shared domain ontologies for categorization of the offered services . whenever a service provider needs to register itsoffered service , the group registry is referred first for matching the service with its service group . 
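the matching performed by a navigator node can be sketched as follows : the group registry is consulted first , and only when no group fits is the shared domain ontology used to derive the category of a new group . the keyword - overlap score below is a deliberately naive stand - in for the ontology - based categorization method referred to above , and the ontology contents are invented purely for illustration .

```python
# hypothetical shared domain ontology: domain -> descriptive terms
DOMAIN_ONTOLOGY = {
    "hospital": {"doctor", "patient", "ecg", "ward", "ambulance"},
    "food":     {"pizza", "restaurant", "menu", "delivery"},
}

def categorize(description, ontology=DOMAIN_ONTOLOGY):
    """Naive stand-in for the categorization score: pick the domain whose
    terms overlap most with the textual service description."""
    words = set(description.lower().split())
    scores = {dom: len(words & terms) for dom, terms in ontology.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

def match_or_create_group(description, group_registry):
    """Navigator-node sketch: reuse an existing group if one matches,
    otherwise create a new group entry from the ontology category."""
    domain = categorize(description)
    for group in group_registry:
        if group["group_domain"] == domain:
            return group, False                      # existing group reused
    new_group = {"group_name": domain or "misc",
                 "group_domain": domain or "misc",
                 "group_id": f"grp-{len(group_registry) + 1:03d}"}
    group_registry.append(new_group)                 # group registry shared among navigators
    return new_group, True                           # registrant becomes the registry node

groups = [{"group_name": "hospital", "group_domain": "hospital", "group_id": "grp-001"}]
print(match_or_create_group("doctor rating service for the ward", groups))
print(match_or_create_group("pizza outlet contact information", groups))
```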
in case the service offered by the provider does not belong to any of the existing service groups , a new service group is created by referring the domain ontology and updating the group registry with a new group entry .we have used an existing classification method for classifying services into groups .the motivation behind adopting this method is that the method does not require a training set for classification and dynamic changes to the classification parameters is possible without having to retrain the classifier .this is of particular importance in mobile environments as it provides run time updates to the domain ontology without disturbing the classifiers .the service group classification method follows generic steps .first , the mapping criteria are parsed from user supplied service description which is in the textual format . then, the domain ontology is mapped to the mapping criteria .the process calculates a matching categorization score of the service description with the ontological context as defined by allahyari et al . .for example , the mobile service providers hosting services for : a. ) doctor s rating and b. ) hospital building floor map , share the same service group _ hospital _ , hence they are identified by the same _groupid_. however a new mobile service provider offering contact information of pizza outlets would fall in a separate service group .we do not dwell upon the classification approach in this paper .the interested reader is referred to for more details on this .registry nodes are the mobile devices that manage the service registries ( refer figure [ fig : regent ] ) .as discussed in the previous section , a service registry manages the registered services that are uniquely identified by service identifiers .this enables a service provider to provide multiple services over a single mobile device .the service registry comprises the list of registered services in a service group , their availability information along with the service details that are just sufficient to manage and identify the registered services .of these details , the real time availability information of the registered services is what mainly contributes to overcoming the uncertainty of mobile environments .this availability information managed at the registry node gives the much needed reliability to the services hosted on mobile devices .registry nodes are responsible for managing the up - to - date service registry , responding to service registry related queries , and performing registry related operations .the service group can be seen as an overlay group of registered mobile service providers and registry nodes that are identified by the groupid .we have devised this group identifier as a multi - cast address for the service group members .the requests sent to the service group are received by all the member mobile service providers , however , only the group member acting as the registry node responds to the requests ( as shown in figure [ fig : approach].c ) .to improve query response time , a replica of the service registry that is retrieved from the registry node is managed at all the mobile devices hosting services in the service group .selective updates are performed to keep the local replica updated . during service discovery ,the local replica is first referred to , in case discovery fails at the local replica then the registry node is contacted . 
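a provider - side lookup against the local replica might then look like the sketch below . the freshness window and the per - entry " selective update " are our own illustrative choices ; the architecture only prescribes that the replica is searched first and that the registry node is contacted when the local search fails or availability needs refreshing . the registry - node stub is an in - memory stand - in for the real group member answering queries .

```python
import time

class RegistryNodeStub:
    """Hypothetical in-memory stand-in for the registry node of a service group."""
    def __init__(self, entries):
        self._entries = {e["service_id"]: e for e in entries}
    def dump(self):
        return dict(self._entries)
    def get(self, service_id):
        return self._entries.get(service_id)

class LocalReplica:
    """Replica of the group's service registry kept on every member provider."""
    def __init__(self, registry_node, max_age=30.0):
        self.registry_node = registry_node
        self.entries = {}
        self.fetched_at = 0.0
        self.max_age = max_age               # assumed freshness window, in seconds

    def _selective_update(self, service_id):
        entry = self.registry_node.get(service_id)
        if entry is not None:
            self.entries[service_id] = entry

    def _full_refresh(self):
        self.entries = self.registry_node.dump()
        self.fetched_at = time.time()

    def discover(self, predicate):
        # 1) search the local replica first
        hits = [e for e in self.entries.values() if predicate(e)]
        if hits and time.time() - self.fetched_at < self.max_age:
            for e in hits:                   # refresh availability of the hits only
                self._selective_update(e["service_id"])
            return [self.entries[e["service_id"]] for e in hits
                    if self.entries[e["service_id"]]["availability"]]
        # 2) local search failed or replica is stale: fall back to the registry node
        self._full_refresh()
        return [e for e in self.entries.values() if predicate(e) and e["availability"]]

node = RegistryNodeStub([{"service_id": "svc-1", "service_name": "ecg-monitor",
                          "availability": True}])
replica = LocalReplica(node)
print(replica.discover(lambda e: e["service_name"] == "ecg-monitor"))
```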
maintaining replicas of this kinddo add a little overhead to the architecture but in the larger context the local replicas reduce traffic meant for discovery over the network substantially .further , the local replicas ease the transition of service providers into full fledged registry nodes in the eventuality of a registry node failure ( details on this in the subsequent subsection ) .mobile devices acting as navigator nodes or registry nodes can also depict uncertain behavior and are prone to failure .hence , in the case of existing registry node failure , a new registry node can be elected from the member mobile service providers , without compromising on the consistency of other service groups or navigators .( the same approach is also applied to navigator nodes . )heartbeat operations are used to detect registry node failure .any mobile service provider can become a registry node by participating in an election and declaring its candidature .our method of registry node election is inspired by the leader election problem of distributed computing .the algorithm is discussed in more detail in section [ subsec : mobops ] ._ potential service consumers _ use the proposed mobile registry architecture for discovering their desired services .this service discovery is a three step process . in the first step as shown in figure [ fig : approach].b, the service consumer sends the discovery request to the mobile registry architecture via a well known access point or uri ( uniform resource identifier ) for the desired service .this request for service discovery is first handled by the navigator node .the navigator node searches its group registry and provides the matching service group details along with the groupid for the requested service .groupid _ acts as a multi - casting address for all the service providers that belong to the group . in the second step as shown in figure [ fig : approach].c , the service consumer contacts the service group using the groupid of the required service .the registry node of the service group responds with the available matching services and their corresponding service details . in the third stepas shown in figure [ fig : approach].d , service consumer contacts the mobile service provider offering the required service to retrieve the technical description and performs the service negotiation for service access . _ mobile service providers _ use the proposed mobile registry architecture for registering their offered services .the service registration process primarily comprises two steps .first , the mobile service provider sends a service registration request to the mobile registry architecture via the well known access points .the navigator nodes first handle the registration request and respond with the matching service group along with the groupid and other group details registered in the group registry .second , the mobile service provider contacts the service group and registers its service with the service registry of the registry node . alternatively ,if there is no matching service group in the group registry , the navigator node creates a new service group . 
in this case, the registrant mobile service provider becomes the registry node of the newly formed service group .in this section , we describe the operations and functionalities that provide registry operations such as registration , discovery , service updates , and service binding in the proposed architecture .an inline comparison with uddi is also presented .the following registry operations make use of several components discussed in an earlier work of ours . in the proposed approach, we have two types of registrations : 1 ) group registration ( at navigator node ) 2 ) service registration ( at the registry node ) .these registrations are shown in the figure [ fig : registerproc ] . _group registration : _ the service group is registered in the group registry at the navigator node . a mobile service provider ( registry client )contacts the mobile registry framework via the navigator node to register the service .however , if a matching service group is not yet registered in the group registry , a new service group registration request is initiated .the mobile service provider sends a group registration request to the navigator node along with the service details .hereafter , the navigator refers the domain ontology and based on the service details , a matching group is mapped .this matched service group is updated in the group registry along with its groupid .the registry is then shared among all the navigator nodes .subsequent to the successful registration process , a `` groupid '' is sent to the newly registered mobile device ( in a ` result ' type iq stanza ) . at this pointthe registrant is the registry node of the newly formed service group .figure [ fig : registerproc ] shows the group registration process ._ service registration : _ services are registered in the service registry at the registry node . the mobile service provider fetches the matching service group at the navigator node and contacts the registry node of the matching service group via the groupid .hereafter , the registry node receives service related information from the mobile service provider and generates a serviceid for the new service .the registration process is completed when the information of the new service is updated at the service registry of the registry node .this updated registry is made available to the service providers in the service group for provider initiated updates ( pull based updates ) . upon successful registrationa `` serviceid '' is sent to the newly registered mobile service provider ( in a ` result ' type iq stanza ) .figure [ fig : registerproc ] shows the service registration process .+ _ web service registration in uddi : _ herewe quickly discuss the registration process in the traditional uddi registry system so that one can appreciate the significance of the proposed approach . 
in uddi, registration is done using the _ publisher apis set _ exposed by the uddi , such as save_service , save_business , save_binding , save_t_model .these apis are used to save detailed information on the web service , which may not be necessary in case of the mobile based web services .moreover this information would tend to become heavy for mobile devices to process or transport .a typical uddi registry primarily consists of the following information : businessentity , businessservice , bindingtemplate and tmodels for a registered service .the information is passed on by the web - service provider to the uddi registry through publisher apis ( the information is transported as xml tags , we are not showing the xml for the uddi structure owing to space constraints here . for the benefit of interested readers, we have uploaded details on this at : http://goo.gl/cn8vap ) .deploying such a uddi registry over a mobile device would tend to become heavy owing to the limited computational power and network constraints in mobile devices .the uddi registry would also significantly lag behind in managing the dynamic nature of mobile devices .two possible cases are included in the prototype : 1 .discovery initiated by the registered service provider , 2 .discovery initiated by an external mobile service consumer . _ discovery by registered service provider : _ the registered service provider first matches the service group of the required service with the local replica of the service registry . if the group matches then it fetches the required service locally . in casethe service is found locally , a selective update is performed from the registry node to obtain information on the latest availability status of the service . in casethe service is not found locally , the query is propagated to the registry node and subsequently the local replica is updated with the latest service registry status ._ discovery by external service consumer : _ the service consumer first contacts the navigator node and retrieves the matching service group information .subsequent to this the service discovery is forwarded to the matching service group .afterwards , the registry node of the service group responds with the matching service information .the registered service provider from the other service group also follows this process . figure [ fig : registerdisc ] shows the service discovery process . _service discovery in uddi : _web service discovery in a uddi registry is done via public inquiry apis of the uddi , such as find_service , find_binding , find_business , find_tmodel .the service discovery is performed centrally by the uddi registry server , which requires high computational capability .this is because the consumer requests the uddi registry server which in turn does the query search centrally and responds to the consumer with the results .the complexity and structured nature of the uddi data structure would makes searching tedious were it adopted in a mobile environment .though traditional uddis enable consumers to query the registry and are effective in a centralized system , they are ill suited to the mobile environments that are mostly distributed . web service binding information is necessary to call a particular web service .it includes the technical information on a web service , such as the access endpoint , required parameter values , return type etc . 
in the proposed architecture , binding information is exchanged directly between the service consumer and the service provider ( as shown in the figure [ fig : approach].d ) .the service provider can provide the wsdl / wadl document or it s global url in the binding information as well .the functional description of the service is kept in the close vicinity of the service provider .the reason being that the mobile services tend to change frequently and this might result in a change in the technical description of the same .therefore , a proximal location of the functional description facilitates mobile service providers to readily change the service operations dynamically without violating the service registry information ( figure [ fig : servicebind ] shows the service binding process ) .furthermore , other types of descriptions viz .non - functional , contextual , business descriptions are usually present on the same device as the service provider to keep descriptions up - to - date without increasing traffic over the mobile registry architecture ..summary of operations performed in proposed approach and uddi . [ cols="<,<,<",options="header " , ] in our third experiment , we sent ( 50 * 4= ) 200 new service registration requests from other mobile devices and virtual instances to the registry architecture .figure [ fig : total ] depicts the total response time behavior for a service provider .also noteworthy here is the fact that the mobile devices were continuously moving with the volunteers , hence the devices were randomly joining and leaving the network .therefore , we observed a few outliers in the response time behavior .we concluded that the average service registration time ( including outliers ) is near 5 seconds which seems acceptable for practical purposes .the fourth experiment evaluated the effect of directory size on the discovery time .we registered multiple services in a service group on the registry node . in order to test the scalability of our prototype , we ran service discovery operations for various directory sizes : 500 , 1000 , 5000 , 10000 , 50000 , 100000 .we discovered that the discovery time increases as the size of the directory increases .however , in spite of this even with 100000 registered services the query response time was under 1 second which is acceptable for all practical purposes .we sent service discovery requests from four mobile devices and virtual instances to varied numbers of registered services and the response time was calculated for these requests .figure [ fig : scale1 ] shows the average response time for these discovery requests for various registry sizes . through the whole experiment , the mobile device acting as the registry node was moving continuously within the network .further , the size of the registry increased linearly with the increasing number of registered services .even with thousands of services registered , the registry size was under 10 mb ( shown in figure [ fig : scale2 ] ) .it should be noted here that we made use of sqlite for implementing the service and group registries over android mobile devices .our fifth experiment aimed at evaluating the feasibility of the proposed approach when running simultaneously with other phone activities . 
in this experiment, we conducted a comparative study on the difference in response time for the service discovery requests a ) when a volunteer was answering a phone call and , b ) when the device was idle .for this experiment , we sent 50 requests for group discovery to the volunteer s mobile device .first , we observed the response behavior when the device was idle in the volunteer s pocket . during this phase the volunteer was randomly moving within the networkthe response time behavior during this period is shown in figure [ fig : wocall ] .next , we made a phone call on the volunteer s mobile phone and observed the response time during the call .the results demonstrate that there is a change in response time , but this change is well within acceptable limits .response time behavior during the call is shown in figure [ fig : call ] .there is an initial peak in the response time when the call is made . from the initial results ,we conclude that the mobile device takes a little time to respond to the first discovery request .this is due to the fact that some processing time is required to _ awaken _ the sleeping mobile application .we feel , therefore , that _ piggybacking of incoming requests _ could be a good approach to reduce the energy overhead .the sixth experiment involved an evaluation of the reliability of the approach .we toggled the availability status of the service provider from _ available _ to _ unavailable _ and back to _ available _ with a time difference of 10 seconds .these toggles were repeated 120 times at the registry node . during this time, we continuously probed the registry node from the service consumers to get the status of the service provider .the initial results shown by the experiment have less than 1% false negative , where false negative implies - registry node returns availability information as _ available _ when the service provider has updated it to _ unavailable _ and vice versa .although the scale of these experiments was limited , the results were promising. it will be interesting to explore the performance of this approach for a much larger number of service providers , and registry clients and for much longer durations ( a few days ) .( interested readers may refer to https://goo.gl/4vg895 for further details about the architecture and experimental setup . )nonetheless , we present the speculated trend of our experiments with a large number of devices : 1 .effects on battery : with increasing number of devices there would be more numbers of registry requests and responses ; hence more battery power would be needed . however , in such scenarios the service groups and navigator nodes would play a crucial role .more specialized service groups would be formed as suggested in section [ subsec : concept ] , this would keep the number of devices in a service group within limits in turn keeping the battery usage in check ( irrespective of any number of registered services ) .effects on data exchange : with increasing number of devices there would be more exchange of data .however , the split service groups would keep a check on the data exchange .further , keeping in view the current data usage by modern smartphones , the data usage by large number of devices should be in acceptable limits .effects on service registration time : we believe that with increasing number of devices and service registrations , the response time would increase but would be well within the accepted range . 
also , this is a onetime process for a service, it would be acceptable for all practical purposes .effects on service discovery time : figure [ fig : scale1 ] shows the discovery time for a large number of registered services varying from one thousand to a few hundreds thousand .this is still within acceptable limits .5 . effects on registry size : the registry size will increase linearly for increasing number of register services .trend is shown in the figure [ fig : scale2 ] . while other experiments are dependent on the individual mobile device without depending on the other device s behavior .further , the effects of other contextual parameters on increasing number of devices ( e.g. network , isp , terrain , carrier type etc . ) would be interesting to look into but unfortunately these are not within the scope of current focus .the proposed architecture caters to the issues of providing a dynamic service registry in mobile environments .it manage to incorporate the requirements outlined in section [ sec : req ] ._ r1 : management of transient web services : _ the approach effectively manages the availability information on each registered web service and is capable of dynamically updating it .this helps in satisfying requirement _ r1 _ and keeping the service registry up - to - date irrespective of random entry and exit of transient web services ._ r2 : lightweight : _ the architecture seeks information that is just enough to uniquely identify and manage a web service from the registrant mobile device for registry related operations .the registries manage information that is less likely to change and whatever change does happen is updatable via a watchdog process .this keeps the registry architecture as lightweight as possible and thus satisfies requirement _r2_. _ r3 : minimum communication overhead : _ we have made use of just three xml stanzas in order to minimize communication overhead , satisfying requirement _r3_. during our experiments , we observed the exchange of just a few kilobytes of data for hundreds of request transfers .this small overhead also helped us minimize battery utilization , as reflected in table [ table : power ] and [ table : network ] .furthermore , the approach uses just one stanza `` iq stanza '' for easier service registration and de - registration ._ r4 : distributed service registry : _ the proposed approach manages the service registry over dispersed registry nodes and navigator nodes . thus satisfying requirement _ r4 _ and improving fault localization . these features ,contribute to a light weight and autonomous service registry effective for mobile environments ._ r5 : run time search : _ the dynamic availability information , minimal data transfer , and faster response time helped in performing effective run time searches and in the process satisfied requirement _r5_. further , the proposed dynamic mobile service registry is compatible with existing uddi and other registries .the dynamic registry can register external uddi and other service registries as one of the registered services along with their access points .contrary the proposed registry can be registered with other registries and uddi as any of the registered service .there are certain important assumptions that have been made in the proposed service registry framework .these are listed as follows : 1 ) the load balancing of the incoming registry requests is assumed to be handled at the device level. 
there could be a threshold capacity limit , at each device depending on its hardware capacity . on exceeding this capacity limit ,the incoming requests are not entertained and these surplus requests are handled by other registry nodes that have access to the common access channel .2 ) there is no reward system suggested in the current state of the work .hence , it is assumed that the mobile device owners are self - motivated to provide their respective devices for serving as the registry .we may look into a system of rewards / incentive as part of future work .3 ) the updates are performed to keep the local registry replica updated ( at the navigator / registry nodes ) in an automated manner without manual intervention. a few important limitations of the current work are listed as follows : a ) the current design of the framework does not deal with the privacy issues of the mobile phone users associated with service registry and service provisioning .b ) the current design of the mobile registry does not handle the qos ( quality of service ) aspect of the mobile web service .the qos may be handled by providing a link to the data server ( external to the framework ) that could have qos and other details on the service description .c ) though we have a system in place to detect the unavailability of registry nodes through heartbeat signals , the behavior of the mobile device owner , poor network connectivity , and physical damage to the mobile device could result in the abrupt unavailability of the registry / navigator node .these have not been dealt with in this work .d ) finally , the current evaluation of the approach was conducted within a supervised lab environment .the registry power and data requirements , service discovery performance , and other results were therefore well within acceptable range. there could be slight variations in these if the experiments were carried out at a much larger scale including thousands of mobile devices .service discovery is an important aspect of service - oriented architecture .two types of approaches are primarily adopted for service discovery : registry based approach and registry - less approach .the registry - less approach usually makes use of overlay networks , hash tables , and other broadcasting / multi - casting techniques . in the proposed work ,we perform service discovery using the former i.e. 
`` registry based approach '' .we surveyed existing literature from the late 90 s .we classified the registry based approach into two broad categories : the centralized registry approach and the distributed registry approach .these two can further be classified into those for mobile environments and those for non - mobile environment .our survey includes works from the areas of soa , peer - to - peer networking , mobile ad - hoc networking .the centralized service registry approach is used in several popular technologies : service location protocols , sun s jini architecture , service discovery services , microsoft s universal plug and play ( upnp ) .these service discovery infrastructures rely on a central registry for discovering capable services .the service information is stored at a centralized registry .all registry related operations are performed by a single entity .one well known example of centralized service registry architecture for web services is uddi ( universal description discovery and integration ) .uddi is not the service registry itself .however , uddi is the specification of a framework for describing web - services , registering web - service , and discovering web - service .several data structures and apis have been published for describing , registering and querying web - services through the uddi .the ebxml ( electronic business xml ) standard is another example of a centralized web service registry architecture .hoschek presented a grid based hyper registry for web services in peer - to - peer networks .the registry is an xquery based centralized database that manages dynamic distributed contents .juric et al. proposed an extension to the uddi for incorporating version support for services .they presented modifications to the category tag of business service and tmodel of the uddi infoset with the intent to introduce service interface versions in uddi and wsdl .bernstein and vij proposed the use of xmpp for intercloud topology , security , authentication and service invocations . in some waysthis work is similar to ours .however , our work focuses on registry management in mobile based service oriented architecture .we focus on managing the service registry in a distributed manner over resource constrained mobile devices .seto et al . proposed a service registry for ubiquitous networks to dynamically discover service resources .their registry divides the service operation into the source , transformation , and sink , specifies physical meta - data to manage devices , and associate a keyword with it .feng et al . proposed a registry framework to include interoperability among various semantic web service models .for this , they made use of registry meta model for interoperability-7 and several mapping rules for handling semantic mismatch between services .feng et al . proposed a service evolution registry where providers can register their service evolution information and consumers can be sent an alert regarding the service evolution .their work manages service versions along with their dependencies and discrepancies .more recently presents the idea of using object relational databases in the service registry .the registry extends the search to include search on the basis of service commitments and service expectations .some existing work discusses the possibility of centralised service registries for mobile environments .diehl et al . 
talk about centralized service registries that store the service domain , service types , location and access rights to manage service mobility and adoption of services in wireless networks .beck et al . propose an adaptable service framework for mobile devices that relies on a central service registry for dynamic service registration and discovery .doulkeridis et al . discuss the idea of managing contextual information in service registries .the main focus of the work is to ease service discovery in mobile environments by maintaining context aware service registries .deepa and swamynathan talk about a directory based architecture that make use of two integrated architectures : backbone - based and cluster based .their work was intended to facilitate service discovery in mobile ad - hoc networks and to achieve improved network traffic , response time , and hit ratio .chen et al . and sivashanmugam et al .+ discuss a few initial approaches to maintain uddi in a federated environment .one approach supports qos based discovery from requesters and provides an aggregated result from the federated registries. the other approach suggests the use of various metadata and ontologies to manage uddi in a federated environment .verma et al. propose meteor - s wsdi , that focusses on providing registries in distributed and federated environments .extended registries ontology was used to provide access to these distributed registries and organizing them in domain based categorization .the approach discussed in presents a distributed service registry for grid application .the approach utilises xpath queries and ontological trees for domain based service discovery .baresi and miraz talk about an approach to enable heterogeneous federated registries to exchange service information .the approach is based on the publish and subscribe model .ad - uddi is a distributed registry architecture that adopts an active monitoring mechanism .the approach extends uddi to incorporate automated service updates in a federated registry environment .treiber and dustdar propose an active web service registry that make use of atom news formats .rss software is used in the approach to form an active distributed registry .shah et al . also propose an rss - based distributed service registry in the move to achieve global soa .the proposed registry is intended to provide dynamic discovery using rss and tries to resolve synchronization issues in rss .jaiswal et al . introduce a decentralized registry using the chord protocol for peer - to - peer environments .the registry comprises distributed hash tables of web - service names and web - service ips .their method claims to cater to demand driven web - service provisioning .another direction in distributed service registry systems is one meant for cloud environments and .lin et al . present a hadoop - based service registry for the cloud environment .the work proposes geographical knowledge service registries that are designed to simplify service registration , improve discovery and other registry operations for cloud services .elgazzar et al . propose to manage a local service registry at the provider s site for the offered services .this local service registry is managed in a distributed manner and has two types of services : local and remote .the paper proposes discovery - as - a - service in the cloud environment .das gupta et al . 
in a more recent paper , discuss about the possibility of a federated registry system for p2p networks .the work makes use of multi - agent based distributed service discovery for non - deterministic and dynamic environments .they propose that super peer nodes manage the distributed service registry and other peers register their services with these registries .zhang et al . discuss the integration of peer to peer technology with soa .they talk about self - organizing , semi - structured p2p frameworks for support and propose to use a private service registry at each peer for discovering the manufacturing services .the local private registry of the peer is traversed first to discover a service .handorean and roman propose probably the first work that discusses the possibility of distributed service registries in mobile environments . in their approach ,the availability of services is shown in the registry along with an atomic update facility to maintain consistency .konark is a distributed xml based registry that has a tree structure .a top - down approach is used for the tree , with generic classification of services at the top and specific classification at the bottom .every node maintains a service registry , where it stores information about its own services and also about services that other nodes provide .the approach provides a semantic service registry and enables servers as well as clients to register and discover services in a distributed manner .schmidt and parashar present a distributed hash table based approach for distributed registries in peer to peer networks .the approach supports service discovery based on keywords , wild cards on an internet scale .indexing of keywords associated with the web service description document is managed at the peers .tyan and mahmoud discuss an approach for service discovery in mobile ad - hoc environments .the work considers a registry as a tree and makes this registry available to every node in the network .the approach makes use of a location aware routing protocol and divides the network into hexagonal grids with a gateway for each that have the service registry .golzadeh and niamanesh discuss an approach for a service registry system for mobile ad - hoc networks .the approach divides the manet into clusters with one head for each cluster that acts as a directory for the cluster .the head node has two types of registries : the service provider s registry and the other head node registry .a decentralized service registry for location based web services is discussed in .the approach is relied on cellular network system and base transceiver station for retrieving the local registry address .these addresses are broadcasted by base station and interested mobile devices download the registry address for location based services .one of the latest works by jo et al . 
makes use of bloom filters to manage a distributed service registry for mobile iot environments . the proposed work uses hierarchical bloom filters to reduce message exchanges among registries when searching for available services . although a good amount of work has been done towards developing effective service registries , an architecture that enables mobile devices to host mobile service registries and that takes the distinct features of mobile environments into account , such as intermittent connectivity , dynamic behaviour , and frequent service description changes , is still lacking . our work focuses on registry management in mobile based service oriented architecture . we focus on managing the service registry in a distributed manner on resource constrained mobile devices . this service registry contains minimal information about the registered services , just enough to uniquely discover them . the proposed work is an extension of some of our previous work . in this paper we looked into , perhaps , the most challenging aspect of implementing an soa in mobile environments : effective service registries . our studies show that traditional approaches for implementing service registries ( such as uddi ) can not be directly adopted in mobile environments , given the dynamic , volatile and uncertain nature of such environments . a novel approach to manage service registries ` solely ' over mobile devices was proposed that effectively addresses issues specific to mobile environments and enables run time service discoveries . we evaluated the approach by developing a prototype and deploying it over real mobile devices . to emulate real world usage as closely as possible , we requested volunteers to deploy our prototype on their personal mobile devices and continue doing their routine tasks . the experimental results indicate that the proposed solution is an effective enabler for soa in mobile environments . we performed several experiments to confirm the efficacy of the prototype across several parameters , such as timely performance , battery consumption , the effect of the random / nomadic behaviour of people carrying mobile devices , conflict with native mobile apps , and reliability . future work in this direction would be towards mobile service registries that focus on qos factors unique to mobile environments . future work will also tackle security related issues and dynamic service group splitting in mobile service registries . further , the availability of the service registries will be a prime focus . we would like to thank tanveer ahmed and dheeraj rane for their valuable insights . this work was supported by the ministry of human resource development , government of india .
| advancements in technology have transformed mobile devices from being mere communication widgets to versatile computing devices . proliferation of these hand held devices has made them a common means to access and process digital information . most web based applications are today available in a form that can conveniently be accessed over mobile devices . however , web - services ( applications meant for consumption by other applications rather than humans ) are not as commonly provided / consumed over mobile devices . facilitating this and in effect realizing a service - oriented system over mobile devices has the potential to further enhance the potential of mobile devices . one of the major challenges in this integration is the lack of an efficient service registry system that caters to issues associated with the dynamic and volatile mobile environments . existing service registry technologies designed for traditional systems fall short of accommodating such issues . in this paper , we propose a novel approach to manage service registry systems provided ` solely ' over mobile devices , and thus realising an soa without the need for high - end computing systems . the approach manages a dynamic service registry system in the form of light weight and distributed registries . we assess the feasibility of our approach by engineering and deploying a working prototype of the proposed registry system over actual mobile devices . a comparative study of the proposed approach and the traditional uddi ( universal description , discovery , and integration ) registry is also included . the evaluation of our framework has shown propitious results in terms of battery cost , scalability , hindrance with native applications . service - oriented systems ; mobile computing ; mobile web services ; web service registry ; web service discovery . |
in many multimedia applications , a stream of data packets is required to be sequentially encoded and decoded under strict latency constraints .for such a streaming setup , both the fundamental limits and optimal schemes can differ from classical communication systems . in recent years, there has been a growing interest in the characterization of fundamental limits for streaming data transmission . in ,coding techniques based on tree codes were proposed for streaming setup with applications to control systems . in ,khisti and draper established the optimal diversity - multiplexing tradeoff ( dmt ) for streaming over a block - fading multiple - input multiple - output channel . in ,the same authors proposed a coding technique using finite memory for streaming over discrete memoryless channels ( dmcs ) that attains the same reliability as previously known semi - infinite coding techniques with growing memory . in ,the error exponent was studied in a streaming setup of distributed source coding .we note that these prior works assumed that the code operates in the large deviations regime in which the rate is bounded away from capacity ( or the rate pair is strictly inside the optimal rate region for compression problems ) and the error probability decays exponentially as the blocklength increases .other interesting asymptotic regimes include the central limit and moderate deviations regimes .let denote the blocklength of a single message henceforth .in the central limit regime , the rate approaches to the capacity at a speed proportional to and the error probability does not vanish as the blocklength increases . in the moderate deviations regime ,the rate approaches to the capacity strictly slower than and the error probability decays sub - exponentially fast as the blocklength increases . for block coding problems ,both regimes have received a fair amount of attention recently . these worksaim to characterize the fundamental interplay between the coding rate and error probability .the most notable early work on channel coding in the central limit regime ( also known as second - order asymptotics or the normal approximation regime ) is that of strassen , who considered dmcs and showed that the backoff from capacity scales as when the error probability is fixed .strassen also deduced the constant of proportionality , which is related to the so - called _ dispersion _hayashi considered dmcs with cost constraints as well as discrete channels with markovian memory . refined the asymptotic expansions and also compared the normal approximation to the finite blocklength ( non - asymptotic ) fundamental limits . for a review and extensions to multi - terminal models ,the reader is referred to . for the moderate deviations regime , he _et al . _ considered fixed - to - variable length source coding with decoder side information .altu and wagner initiated the study of moderate deviations for channel coding , specifically dmcs .polyanskiy and verd relaxed some assumptions in the conference version of altu and wagner s work and they also considered moderate deviations for additive white gaussian noise ( awgn ) channels .however , this line of research has not been extensively studied for the streaming setup . to the best of our knowledge , there has been no prior work on the streaming setup in the moderate deviations and central limit regimes with the exception where the focus is on source coding . 
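for reference , the standard block - coding formulations of these regimes ( blocklength n , capacity c , channel dispersion v , and q the gaussian tail function ) can be summarized as follows ; these are textbook statements quoted for orientation , not results of this paper :

\[ \text{(large deviations)}\qquad R = C-\delta,\ \ \delta>0 \ \text{fixed},\qquad \epsilon_n \le e^{-nE_{\mathrm{r}}(R)}\ \text{with}\ E_{\mathrm{r}}(R)>0 , \]
\[ \text{(central limit)}\qquad R_n = C-\frac{L}{\sqrt{n}},\qquad \lim_{n\to\infty}\epsilon_n = Q\!\left(\frac{L}{\sqrt{V}}\right), \]
\[ \text{(moderate deviations)}\qquad R_n = C-\rho_n,\ \ \rho_n\to 0,\ \ n\rho_n^2\to\infty,\qquad \lim_{n\to\infty}\frac{1}{n\rho_n^2}\log\epsilon_n = -\frac{1}{2V}. \]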
in this paper, we study streaming data transmission over a dmc in the moderate deviations and central limit regimes .our streaming setup is illustrated in fig .[ fig : streaming ] . in each block of length ,a new message is given to the encoder at the beginning , and the encoder generates a codeword as a function of all the past and current messages and transmits it over the channel .the decoder , given all the past received channel output sequences , decodes each message after a delay of blocks .this streaming setup introduces a new dimension not present in the block coding problems studied previously .in the special case of , the setup reduces to the block channel coding problem .if , however , there exists an inherent tension in whether we utilize a block only for the fresh message or use it also for the previous messages with earlier deadlines .it is not difficult to see that due to the memoryless nature of the model , a time sharing scheme will not provide any gain compared to the case of .a natural question is whether a joint encoding of fresh and previous messages would improve the performance when .our results indicate that the fundamental interplay between the rate and error probability can be greatly improved when delay is allowed in the streaming setup . in the moderate deviations regime, the moderate deviations constant is shown to improve over the block coding or non - streaming setup by a factor of . in the central limit regime , the second - order coding rate is shown to improve by a factor of approximately for a wide range of channel parameters .for both asymptotic regimes , we propose coding techniques that incorporate a joint encoding of fresh and previous messages . for the moderate deviations regime ,we propose a coding technique in which , for every block , the encoder jointly encodes all the previous and fresh messages and the decoder re - decodes all the previous messages in addition to the current target message . for the error analysis of this coding technique , we develop a refined and non - asymptotic version of the moderate deviations upper bound in ( * ? ? ?* theorem 3.7.1 ) that allows us to uniformly bound the error probabilities associated with the previous messages . on the other hand , for the central limit regime, we can not apply such a coding technique whose memory is linear in the block index . in the error analysis in the central limit regime, we encounter a summation of constants as a result of applications of the central limit theorem .if the memory is linear in the block index , this summation causes the upper bound on the error probability to diverge as the block index tends to infinity .hence , for the central limit regime , we propose a coding technique with _ truncated _ memory where the memory at the encoder varies in a periodic fashion .our proposed construction judiciously balances the rate penalty imposed due to the truncation and the growth in the error probability due to the contribution from previous messages . 
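the central limit analysis later in the paper controls the per - message error probability by a tail sum of the form \sum_{j\ge T} Q(\sqrt{j}\,L/\sqrt{V}) , which converges because the summands decay exponentially in j . the short numerical check below ( illustrative parameter values only , not results from the paper ) evaluates this sum and solves for the l that meets a target error probability ; the required l shrinks roughly like 1/\sqrt{T} , consistent with the improvement of the second - order coding rate discussed above .

```python
import numpy as np
from scipy.stats import norm        # norm.sf(x) is the gaussian q-function Q(x)
from scipy.optimize import brentq

def eps_bound(L2, T, V, j_max=10000):
    """Leading term of the per-message error bound: sum_{j>=T} Q(sqrt(j)*L/sqrt(V))."""
    j = np.arange(T, j_max)
    return norm.sf(np.sqrt(j) * L2 / np.sqrt(V)).sum()

V, eps_target = 1.0, 1e-3
for T in (1, 2, 4, 8, 16):
    L_req = brentq(lambda x: eps_bound(x, T, V) - eps_target, 1e-3, 20.0)
    print(T, round(L_req, 3), round(L_req * np.sqrt(T), 3))  # last column ~ constant
```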
by analyzing the second - order coding rate of our proposed setup ,we conclude that the channel dispersion parameter also decreases approximately by a factor of for a wide range of channel parameters .furthermore , we explore interesting variants of the basic streaming setup in the moderate deviations regime .first , we consider a scenario where there is an erasure option at the decoder and analyze the undetected error and the total error probabilities , extending a result by hayashi and tan .next , by utilizing the erasure option , we analyze the rate of decay of the error probability when a variable decoding delay is allowed .we show that such a flexibility in the decoding delay can dramatically improve the error probability in the streaming setup .this result is the analog of the classical results on variable - length decoding ( see e.g. , ) to the streaming setup . finally , as a simple example for the case where the message rates are not constant , we consider a scenario where the rate of the messages in odd block indices and the rate of the messages in even block indices are different and analyze the moderate deviations constants separately for the two types of messages .this setting finds applications in video and audio coding where streams of data packets do not necessarily have a constant rate .the rest of this paper is organized as follows . in section [ sec :model ] , we formally state our streaming setup .the main theorems are presented in section [ sec : main ] and proved in section [ sec : proof ] . in section [ sec : extension ] , the moderate deviations result for the basic streaming setup is extended in various directions .we conclude this paper in section [ sec : conclusion ] .the following notation is used throughout the paper .we reserve bold - font for vectors whose lengths are the same as blocklength . for two integers and , ] , denotes the vector and denotes } ] .this notation is naturally extended for vectors , random variables , and random vectors . for an event denotes the indicator function , i.e. , it is 1 if is true and 0 otherwise . and denote the ceiling and floor functions , respectively .for a dmc and an input distribution , we use the following standard notation and terminology in information theory : * information density : where denotes the output distribution .we note that depends on and but this dependence is suppressed .the definition can be generalized for two vectors and of length as follows : * mutual information : \\ & = \sum_{x\in \mathcal{x}}\sum_{y\in \mathcal{y } } p(x)w(y|x)\log\frac{w(y|x)}{pw(y)}.\end{aligned}\ ] ] * unconditional information variance : .\end{aligned}\ ] ] * conditional information variance : .\end{aligned}\ ] ] * capacity : where denotes the probability simplex on . *set of capacity - achieving input distributions : * channel dispersion where is from ( * ? ? ?* lemma 62 ) , where it is shown that for all .consider a dmc .a streaming code is defined as follows : [ def : basic ] an -streaming code consists of * a sequence of messages each distributed uniformly over ] has been already decoded at the end of block .nevertheless , the decoder re - decodes at the end of , because the decoder needs to decode to decode and the probability of error associated with becomes lower ( in general ) by utilizing recent channel output sequences .] let denote the estimate of at the end of block .the decoder decodes sequentially from to as follows : * given } ] , let .in is defined in terms of and .this dependence is suppressed henceforth . 
] if there is none or more than one such , let . *if , repeat the above procedure by increasing to . if , the decoding procedure terminates and the decoder declares that the -th message is .now , fix an arbitrary . by applying the chain of inequalities ( * ? ? ?* eq . ( 53)-(56 ) ) , we have ^+\right\}\cr & \leq \mathbbm{1}\left\{\sum_{l=1}^{n(t_k - j+1 ) } i(x_l;y_l)\leq ( t_k - j+1)n(c-\lambda\rho_n)\right\ } + \exp\left\{-(t_k - j+1)n(1-\lambda)\rho_n\right\}. \label{eqn : spr}\end{aligned}\ ] ] combining the bounds in and , we obtain for sufficiently large , where is some non - negative constant dependent only on the input distribution and channel statistics and is from the moderate deviations upper bound in lemma [ lemma : nonasymptotic_md ] , which is relegated to the end of this subsection . also see remark [ rem : md ] .now , we have \cr & \leq \sum_{j=1}^k \left ( \exp\left\{-(t_k - j+1)n\rho_n^2\lambda^2\left(\frac{1}{2v}-\lambda \rho_n \tau\right)\right\}+\exp\left\{-(t_k - j+1)n(1-\lambda)\rho_n\right\ } \right)\\ & \leq \sum_{j = t}^{t_k } \left ( \exp\left\{-jn\rho_n^2\lambda^2\left(\frac{1 } { 2v}-\lambda \rho_n \tau\right)\right\}+\exp\left\{-jn(1-\lambda)\rho_n\right\}\right ) \\ & \leq \frac{\exp\left\{-tn\rho_n^2\lambda^2\left(\frac{1 } { 2v}-\lambda \rho_n \tau\right)\right\}}{1-\exp\{-n\rho_n^2\lambda^2\left(\frac{1 } { 2v}-\lambda \rho_n \tau\right)\}}+\frac{\exp\left\{-tn(1-\lambda)\rho_n\right\}}{1-\exp\left\{-n(1-\lambda)\rho_n\right\ } } \label{eqn : md_sumup}\end{aligned}\ ] ] for sufficiently large , which leads to \leq -\frac{t\lambda^2}{2v}.\end{aligned}\ ] ] finally , by taking , we have \leq -\frac{t}{2v}.\end{aligned}\ ] ] hence , there must exist a sequence of codes that satisfies , which completes the proof .the following lemma used in the proof of theorem [ thm : md ] corresponds to a non - asymptotic upper bound of the moderate deviations theorem ( * ? ? ?* theorem 3.7.1 ) , whose proof is in appendix [ appendix : finite_md ] . [lemma : nonasymptotic_md ] let be a sequence of i.i.d .random variables such that = 0 ] , and its cumulant generating function ] is finite . for a sequence satisfying the moderate deviations constraints , i.e. , and , the following bound holds : for sufficiently large .[ rem : md ] let us comment on the assumption in lemma [ lemma : nonasymptotic_md ] that is finite . in our application, then , we have \\ & = -s i(x_1;y_1 ) + \log \e \left [ \big(\frac{w(y_1 |x_1)}{p_xw(y_1 ) } \big)^s\right ] . \end{aligned}\ ] ] by differentiating thrice , we can show that is continuous in . restricting to ] with a period of blocks , after an initialization step of the first blocks .let us first describe our message - codeword mapping rule for the case of and , which is illustrated in fig .[ fig : mapping ] . for the first nine blocks , the encoder maps all the previous messages to a codeword .since the maximum encoding memory is nine in this example , we _ truncate _ the messages that are mapped to a codeword on and after the tenth block , so that the encoding memory is periodically time - varying from four to nine with a period of six blocks .for instance , let us consider the first period from the tenth block to the fifteenth block . in the tenth block , the encoder maps the messages to a codeword , thus ensuring that the encoding memory is four . in block ] and , generate in an i.i.d .manner according to . in block ] . 
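as a sanity check on the toy example just described ( full memory over the first nine blocks , then memory cycling between four and nine with a period of six ) , the short script below reproduces the encoding window block by block ; the closed - form index arithmetic is our own reading of the mapping rule and figure , not an expression taken from the paper .

```python
def encoded_messages(k):
    """Messages jointly mapped to the codeword of block k in the toy example
    (full memory for blocks 1-9, then memory cycling 4,5,...,9 with period 6)."""
    if k <= 9:
        return list(range(1, k + 1))      # blocks 1..9: all messages so far
    start = 7 + 6 * ((k - 10) // 6)        # message window restarts every 6 blocks
    return list(range(start, k + 1))

for k in range(8, 18):
    w = encoded_messages(k)
    print(f"block {k:2d}: messages {w[0]}..{w[-1]}  (memory {len(w)})")
```

running it gives , e.g. , messages 7 - 10 in block 10 and a window restart at message 13 in block 16 , matching the periodic pattern described above .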
on the other hand, we note that our message - codeword mapping rule is also periodic in the ( vertical ) axis of message index .we can group the messages according to the maximum block index to which a message is mapped .let for denote the -th group of messages that are mapped to a codeword up to block , which is illustrated in fig .[ fig : mapping ] for the example of and .this grouping rule is useful for describing the decoding rule .the decoding rule of at the end of block is exactly the same as that for the moderate deviations regime .hence , from now on , let us focus on the decoding of for at the end of block . at the end of block ,the decoder decodes not only , but also all the messages in the previous group and the previous messages in the current group , for ] , among the channel output sequences from block to block , we utilize the channel output sequences in which the -th message is involved . according to the above rules , the blocks to be considered for the decoding of messages are as follows : a. for , blocks are involved is if , and it is otherwise .in other words , the last block index to which the messages in are involved is . ]indexed from 10 to , b. for for ] , blocks indexed from to . in particular , since the pairs of the first block index and the last block index to be considered for the decoding of messages are the same , we decode simultaneously . by keeping this in mind , our decoding procedure for for the example of , and is formally stated as follows : a. if there is a unique index vector } ] , let }=g_{[7:10]} ] , let }=(1,\cdots , 1) ] , the decoder chooses according to the following rule . if there is a unique index that satisfies }(\hat{g}_{t_k,[7:j-1 ] } , g_{[j:\nu_k ] } ) , \mathbf{y}_{[j:\nu_k ) ] } ) & > ( \nu_k - j+1 ) \cdot \log m_n \label{eqn : dec_rule_cl2_toy}\end{aligned}\ ] ] for some } ] , the decoder chooses according to the following rule . if there is a unique index that satisfies }(\hat{g}_{t_k,[7:j-1 ] } , g_{[j : t_k ] } ) , \mathbf{y}_{[j : t_k ] } ) & > ( t_k - j+1 ) \cdot\log m_n \label{eqn : dec_rule_cl3_toy}\end{aligned}\ ] ] for some } ] , let }=g^b ] .b. the decoder sequentially decodes from to as follows : * given } ] , let . if there is none or more than one such , let .* if , repeat the above procedure by increasing to . if , proceed to the next decoding procedure .c. the decoder sequentially decodes from to as follows : * given } ] , let .if there is none or more than one such , let .* if , repeat the above procedure by increasing to .if , the whole decoding procedure terminates and the decoder declares that the -th message is . 
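all of the decoding steps above rely on the same kind of test : a candidate block of messages spanning a stretch of received blocks is accepted only when its accumulated information density exceeds ( number of blocks ) \cdot \log m_n , and a decoding step succeeds only if exactly one candidate passes . the toy function below mimics this uniqueness - plus - threshold check on per - block information densities supplied by hand ; it is a schematic illustration of the rule , not a decoder for an actual channel .

```python
import numpy as np

def unique_above_threshold(cand_densities, log_M):
    """cand_densities: dict mapping a candidate message block to its per-block
    information densities (one value per channel block it spans).  A candidate
    passes when its accumulated density exceeds (#blocks) * log_M; decoding
    succeeds only if exactly one candidate passes, otherwise a default is used."""
    passing = [g for g, d in cand_densities.items() if np.sum(d) > len(d) * log_M]
    return passing[0] if len(passing) == 1 else None

# made-up numbers: three blocks, threshold log_M = 0.5 per block
cands = {"g1": [0.9, 0.8, 1.1], "g2": [0.2, 0.4, 0.3]}
print(unique_above_threshold(cands, 0.5))   # -> "g1"
```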
by exploiting the symmetry of the message - codeword mapping rule , the decoding rule for proceeds similarly .we note that the superscript in each error event represents the decoding step in which the error event is involved .now , we have & \leq \pr(\mathcal{e}_{k}^{\mathrm{(i)}})+\pr ( \tilde{\mathcal{e}}_{k}^{\mathrm{(i)}})+ \sum_{j = b+1}^{a - b+1 } \pr(\mathcal{e}_{k , j}^{\mathrm{(ii ) } } ) + \sum_{j = b+1}^{a - b+1 } \pr ( \tilde{\mathcal{e}}_{k , j}^{\mathrm{(ii ) } } ) \cr & \quad \quad \quad\quad \quad+\sum_{j = a - b+2}^{k } \pr ( \mathcal{e}_{k , j}^{\mathrm{(iii ) } } ) + \sum_{j = a - b+2}^{k } \pr ( \tilde{\mathcal{e}}_{k , j}^{\mathrm{(iii ) } } ) .\label{eqn : first_case_sum}\end{aligned}\ ] ] let us bound each term in the rhs of ( [ eqn : first_case_sum ] ) .first , is upper - bounded as follows : }(g^{\alpha } ) , \mathbf{y}_{[b:\alpha]})\leq \alpha\cdot \log m_n\right)\\ & \leq \pr\left(\sum_{l=1}^{n(\alpha - b+1 ) } i(x_l;y_l)\leq \alpha \cdot \log m_n\right)\\ & \overset{(a)}{\leq } \pr\left(\sum_{l=1}^{n(\alpha - b+1 ) } i(x_l;y_l)\leq ( \alpha - b+1)(nc - l\sqrt{n})\right)\\ & \overset{(b)}{\leq } q\left(\frac{\sqrt{\alpha - b+1}}{\sqrt{v}}l\right)+\frac{\tau_1}{\sqrt{(\alpha - b+1)n } } \label{eqn : berry_1}\end{aligned}\ ] ] for some non - negative constant that is dependent only on the input distribution and the channel statistics , where s are i.i.d .random variables each generated according to , is from the choice of in , and is from the berry - esseen theorem ( e.g. , ) . similarly , we can show and next , is upper - bounded as follows : }(g^{\alpha } ) , \mathbf{y}_{[b:\alpha ] } ) > \alpha \cdot \log m_n \mbox { for some } g^{\alpha } \mbox { such that } g^{b } \neq g^b)\\ & \leq m_n^{\alpha } \cdot \pr\left(\sum_{l=1}^{n(\alpha - b+1 ) } i(x_l;\bar{y}_l ) >\alpha\cdot \log m_n\right)\\ & \overset{(a)}{= } m_n^{\alpha}\cdot \e\left[\exp\left\{-\sum_{l=1}^{n(\alpha - b+1 ) } i(x_l;y_l)\right\}\right .\cdot \left.\mathbbm{1}\big\{\sum_{l=1}^{n(\alpha - b+1 ) } i(x_l;y_l)>\alpha \log m_n\big\}\right]\\ & \overset{(b)}{\leq } \frac{\tau_2}{\sqrt{(\alpha - b+1)n } } \ ] ] for some non - negative constant that is dependent only on the input distribution and channel statistics , where s are i.i.d .random variables each generated according to , is due to an elementary chain of equalities given in appendix [ appendix : chain ] , and is from ( * ? ? 
?* lemma 47 ) .similarly , we can show and by substituting the above bounds into the rhs of , we obtain \cr & \leq \sum_{j = b}^{a - b+1 } \left(q\left(\frac{\sqrt{\alpha - j+1}}{\sqrt{v}}l\right)+\frac{\tau_1+\tau_2}{\sqrt{(\alpha - j+1)n } } \right)\cr & \qquad\qquad+\sum_{j = a - b+2}^{k } \left(q\left(\frac{\sqrt{t_k - j+1}}{\sqrt{v}}l\right)+\frac{\tau_1+\tau_2}{\sqrt{(t_k - j+1)n}}\right ) \\ & \leq \sum_{j=\alpha - a+b}^{\alpha - b+1 } \left(q\left(\frac{\sqrt{j}}{\sqrt{v}}l\right)+\frac{\tau_1+\tau_2}{\sqrt{jn } } \right)+\sum_{j = t}^{t_k - a+b-1 } \left(q\left(\frac{\sqrt{j}}{\sqrt{v}}l\right)+\frac{\tau_1+\tau_2}{\sqrt{jn}}\right ) \label{eqn : sumall}\\ & \overset{(a)}{\leq } \sum_{j = b}^{a - b+1 } \left(q\left(\frac{\sqrt{j}}{\sqrt{v}}l\right)+\frac{\tau_1+\tau_2}{\sqrt{jn } } \right)+\sum_{j = t}^{a - b+t } \left(q\left(\frac{\sqrt{j}}{\sqrt{v}}l\right)+\frac{\tau_1+\tau_2}{\sqrt{jn}}\right),\label{eqn : sumall_ub}\end{aligned}\ ] ] where is because if , which implies , the rhs of is upper - bounded as follows : and if , which implies , the rhs of is upper - bounded as follows : now , the rhs of is bounded as follows : where is from lemma [ lemma : sum_integral ] ( with the identification of ) , which is relegated to the end of this subsection , and is obtained by applying similar steps as in the proof of corollary [ coro : cl_asymp ] .can be obtained by replacing by in the rhs of . ]now let us choose and for . by substituting this choice of and into the rhs of and the rhs of, we obtain and \leq \sum_{j = t}^{\infty } q\left(\frac{\sqrt{j}}{\sqrt{v}}l\right)+o(n^{-\delta/2 } ) , \label{eqn : ensemble_cl}\end{aligned}\ ] ] respectively . due to the symmetry of the decoding procedure , the bound holds for for . for , by defining the error events in the same way as for the moderate deviations regime and then applying similar bounding techniques used in the above , it can be verified that &\leq \sum_{j = t}^{t_k } q\left(\frac{\sqrt{j}}{\sqrt{v}}l\right)+\frac{\tau_1+\tau_2}{\sqrt{jn}}\\ & \leq \sum_{j = t}^{a - b+t } q\left(\frac{\sqrt{j}}{\sqrt{v}}l\right)+\frac{\tau_1+\tau_2}{\sqrt{jn } } \\ & \leq \sum_{j = t}^{\infty } q\left(\frac{\sqrt{j}}{\sqrt{v}}l\right)+o(n^{-\delta/2}).\end{aligned}\ ] ] hence , there must exist a sequence of codes that satisfies and , which completes the proof .the following basic lemma is used in the proof of theorem [ thm : cl ] , whose proof is omitted .[ lemma : sum_integral ] assume two integers and such that .if is monotonically decreasing and integrable on ] , * a sequence of encoding functions that maps the message sequence to the channel input codeword , and * a sequence of decoding functions that maps the channel output sequences to a message estimate or an erasure symbol , that satisfies i.e. , the total error probability does not exceed , and i.e. , the undetected error probability does not exceed .the following theorem presents upper bounds on the undetected error and the total error probabilities .the proof of this theorem is provided in appendix [ appendix : erasure ] .[ thm : erasure ] consider a dmc with and any sequence of integers such that , where and .for any , there exists a sequence of -streaming codes with an erasure option such that theorem [ thm : erasure ] indicates that for our proposed scheme , the undetected error probability decays much faster than the total error probability , i.e. 
, the exponent of the undetected error probability is the order of , whereas that of the total error probability is the order of .we note that when and for and , theorem [ thm : erasure ] reduces to ( * ? ? ?* theorem 1 ) . in the streaming setup, both the exponents of the total error and the undetected error probabilities improve over the block coding or non - streaming setup in ( * ? ? ?* theorem 1 ) by factors of .we note that the decoding delay is assumed to be fixed to up to this point . in this subsection, we relax this constraint by requiring the _ average _ decoding delay not to exceed .a streaming code with average delay constraint is defined as follows : an -streaming code with average delay constraint consists of * a sequence of messages each distributed uniformly over ] that satisfies and }{n}\leq t,\end{aligned}\ ] ] where for denotes the ( random ) decoding delay of the -th message . is required to be decoded at the end of every block on and after the -th block in this definition .one may wonder why the decoder does not stop decoding after it outputs an estimate of , not an erasure .we note that our definition includes such a operation as a special case by letting the decoder simply fix the estimate of once it outputs a message estimate . ] for block channel coding with feedback , it is known that the error exponent can be significantly improved by allowing variable decoding delay , e.g. , . for streaming setup , the following theorem , which is proved in appendix [ appendix : variable ] , shows that such an improvement can be obtained in the absence of feedback .[ thm : variable ] consider a dmc with and any sequence of integers such that , where and .for any , there exists a sequence of -streaming codes with average delay constraint such that we note that the exponent of the error probability is of the order ( instead of as in ) , and hence it is improved tremendously by allowing variable decoding delay .we note that the rates of the messages are assumed to be fixed across time thus far . in many practical streaming applications ,however , a stream of data packets does not have a constant rate .for example , in the mpec standard for video coding , i frames have higher rates than p frames in general .similarly , in audio coding , voice packets have higher rates than silent packets . in this subsection , to obtain useful insights when the message rates vary across time , we assume a simple example where the rate of the messages in odd block indices and the rate of the messages in even block indices are different . a streaming code with alternating message rates is defined as follows : an -streaming code with alternating message rates . 
]consists of * a sequence of messages where message for is distributed uniformly over ] , * a sequence of encoding functions that maps the message sequence \ } \cup \{g_{2j } : j\in [ 1:\lfloor k/2 \rfloor ] \ }\in \mathcal{g}_1^{\lceil k/2 \rceil}\times \mathcal{g}_2^{\lfloor k/2 \rfloor } ] .it is easy to check that , = 0 ] .now , we take plugging this into and yields where the final inequality holds for all sufficiently large since and as and thus .the following chain of equalities is used in the proof theorem [ thm : cl ] .\label{eqn : chain4}\end{aligned}\ ] ]consider a dmc with and any sequence of integers such that , where and .we denote by an input distribution that achieves the dispersion .fix .the encoding procedure is the same as that for the basic streaming setup in section [ subsec : md_pf ] .let us consider the decoding of at the end of block .the decoding procedure is modified from that for the basic streaming setup in section [ subsec : md_pf ] as follows : * the decoding test is modified as follows : }(\hat{g}_{t_k , [ 1:j-1 ] } , g_{[j : t_k ] } ) , \mathbf{y}_{[j : t_k]})>(t_k - j+1 ) \cdot ( \log m_n+ \gamma n\rho_n ) , \label{eqn : dec_rule_erasure}\end{aligned}\ ] ] i.e. , the threshold value is increased proportional to . *if there is none or more than one that satisfies the decoding test for some } ] , let us bound and .similarly as in section [ subsec : md_pf ] , s denote i.i.d .random variables each generated according to in the following .first , we have for sufficiently large , where is some non - negative constant dependent only on the input distribution and channel statistics and is from lemma [ lemma : nonasymptotic_md ] in section [ subsec : md_pf ] .next , we have \\ & \leq m_n^{t_k - j+1}\cdot \exp\left\{-(t_k - j+1 ) \cdot ( \log m_n+\gamma n\rho_n)\right\}\\ & = \exp\left\{-(t_k - j+1)\gamma n\rho_n\right\},\end{aligned}\ ] ] where is obtained by applying a chain of equalities similar to that in appendix [ appendix : chain ] . hence, we obtain &\leq \sum_{j=1}^k \big ( \exp\left\{-(t_k - j+1)n\rho_n^2(1-\gamma)^2\left(\frac{1}{2v}-(1-\gamma ) \rho_n \tau\right)\right\}\cr & \qquad \qquad \qquad + \exp\left\{-(t_k - j+1)n\gamma\rho_n\right\ } \big)\\ & \leq \sum_{j = t}^{t_k } \left ( \exp\left\{-jn\rho_n^2(1-\gamma)^2\left(\frac{1 } { 2v}-(1-\gamma ) \rho_n \tau\right)\right\}+\exp\left\{-jn\gamma \rho_n\right\}\right ) \\ & \leq \frac{\exp\left\{-tn\rho_n^2(1-\gamma)^2\left(\frac{1 } { 2v}-(1-\gamma)\rho_n \tau\right)\right\}}{1-\exp\{-n\rho_n^2(1-\gamma)^2\left(\frac{1 } { 2v}-(1-\gamma ) \rho_n \tau\right)\}}+\frac{\exp\left\{-tn\gamma\rho_n\right\}}{1-\exp\left\{-n\gamma \rho_n\right\ } } \label{eqn : averg_total}\end{aligned}\ ] ] and &\leq\frac{\exp\left\{-tn\gamma\rho_n\right\}}{1-\exp\left\{-n\gamma \rho_n\right\ } } \label{eqn : averg_undetec}\end{aligned}\ ] ] for sufficiently large . to show the existence of a deterministic code , we apply markov s inequality as follows : } { n}\right)<\frac{1}{2 } \\ & \pr\left(\limsup_{n\rightarrow \infty } \sum_{k=1}^n\frac{\pr(\hat{g}_k\neq g_k , \hat{g}_k\neq 0|\mathcal{c}_n)}{n}>2\limsup_{n\rightarrow \infty } \sum_{k=1}^n\frac{\e_{\mathcal{c}_n}[\pr(\hat{g}_k\neq g_k , \hat{g}_k\neq 0|\mathcal{c}_n)]}{n } \right)<\frac{1}{2}. 
\end{aligned}\ ] ] then ,from the union bound , we have } { n } \mbox { or } \cr & \limsup_{n\rightarrow \infty } \sum_{k=1}^n\frac{\pr(\hat{g}_k\neq g_k , \hat{g}_k\neq 0|\mathcal{c}_n)}{n}>2\limsup_{n\rightarrow \infty } \sum_{k=1}^n\frac{\e_{\mathcal{c}_n}[\pr(\hat{g}_k\neq g_k , \hat{g}_k\neq 0|\mathcal{c}_n)]}{n } \big)<1.\end{aligned}\ ] ] therefore , there must exist a sequence of codes that satisfies and which completes the proof . consider a dmc with and any sequence of integers such that , where and .we denote by an input distribution that achieves the dispersion .fix and .the encoding procedure is the same as that for the basic streaming setup in section [ subsec : md_pf ] .let us consider the decoding of message at the end of block for . at the end of _every _ block for . ]if ] for the decoding of for ] for is continuously differentiable . for ,the following bound holds : where is the _ rate function _ defined as follows : for any , we obtain by applying the same steps used to obtain in the proof of lemma [ lemma : nonasymptotic_md ] . since is arbitrary ,we obtain the following bound furthermore , because and , we conclude that .s. c. draper and a. khisti , `` truncated tree codes for streaming data : infinite - memory reliability using finite memory , '' in _ proc . international symposium on wireless communication systems ( iswcs ) _ , nov .2011 , pp . 136140 .v. strassen , `` asymptotische abschtzungen in shannons informationstheorie , '' in _ trans .third prague conf .theory _ , prague , 1962 ,689723 , http://www.math.cornell.edu//strassen.pdf . v. y. f. tan , _asymptotic estimates in information theory with non - vanishing error probabilities_.1em plus 0.5em minus 0.4em foundations and trends in communications and information theory , 2015 , vol . 11 , no . 1 - 2. y. polyanskiy and s. verd , `` channel dispersion and moderate deviations limits for memoryless channels , '' in _ proc . 48th annual allerton conference on communication , control , and computing _ , monticello , il , 2010 .z. lin , v. y. f. tan , and m. motani , `` on error exponents and moderate deviations for lossless streaming compression of correlated sources , '' _ ieee trans .inf . theory _ , submitted for publication .[ online ] .available : http://arxiv.org/abs/1507.03190 . | we consider streaming data transmission over a discrete memoryless channel . a new message is given to the encoder at the beginning of each block and the decoder decodes each message sequentially , after a delay of blocks . in this streaming setup , we study the fundamental interplay between the rate and error probability in the central limit and moderate deviations regimes and show that i ) in the moderate deviations regime , the moderate deviations constant improves over the block coding or non - streaming setup by a factor of and ii ) in the central limit regime , the second - order coding rate improves by a factor of approximately for a wide range of channel parameters . for both regimes , we propose coding techniques that incorporate a joint encoding of fresh and previous messages . in particular , for the central limit regime , we propose a coding technique with truncated memory to ensure that a summation of constants , which arises as a result of applications of the central limit theorem , does not diverge in the error analysis . furthermore , we explore interesting variants of the basic streaming setup in the moderate deviations regime . 
we first consider a scenario with an erasure option at the decoder and show that both the exponents of the total error and the undetected error probabilities improve by factors of . next , by utilizing the erasure option , we show that the exponent of the total error probability can be improved to that of the undetected error probability ( in the order sense ) at the expense of a variable decoding delay . finally , we also extend our results to the case where the message rate is not fixed but alternates between two values . |
orthogonal frequency division multiplexing ( ofdm ) has been widely adopted in both cellular systems , such as long - term evolution ( lte ) , and wi - fi systems . the main advantage of ofdm modulation is that it converts an intersymbol interference ( isi ) channel into multiple isi - free subchannels and thus reduces the demodulation complexity at the receiver . however , since each symbol is only transmitted over a parallel flat fading subchannel , the conventional ofdm technique may not collect multipath diversity , and it thus performs worse than single carrier transmission . furthermore , ofdm has a high peak - to - average power ratio ( papr ) of the transmitted signals , which may affect its applications in broadband wireless communications . single - carrier frequency domain equalization ( sc - fde ) is an alternative approach to deal with isi with low transmission papr . however , because both fast fourier transform ( fft ) and inverse fft ( ifft ) operations are performed at the receiver , sc - fde suffers from the drawback that transmitter and receiver have unbalanced complexities . as a result , ofdm is more suitable for the downlink with high transmission speed , whereas sc - fde can be applied for the uplink to reduce papr and transmitter complexity , as in lte . vector ofdm ( v - ofdm ) for single transmit antenna systems , first proposed in , converts an isi channel to multiple vector subchannels , where the vector size is a pre - designed and flexible parameter . for each vector subchannel , the information symbols of a vector may be in isi together . since the vector size is flexible , when it is , v - ofdm coincides with the conventional ofdm . when the vector size is , each vector subchannel may have two information symbols in isi . when the vector size is large enough , say , the same as the ifft size , then the maximal number of information symbols are in isi and it is then equivalent to sc - fde . therefore , v - ofdm naturally builds a bridge between ofdm and sc - fde in terms of both isi level and receiver complexity , and it has attracted recent interest . for v - ofdm , an adaptive vector channel allocation scheme was proposed . some key techniques , such as carrier / sampling - frequency synchronization and guard - band configuration in v - ofdm systems , were designed and compared with the conventional ofdm systems . iterative demodulation and decoding under the turbo principle is an efficient approach for the v - ofdm receiver . constellation - rotated v - ofdm was proposed with improved multipath diversity . linear receivers and the corresponding diversity order analyses were recently given in , and the influence of phase noise is investigated in . for a very broadband channel , the ifft size of an ofdm system needs to be very large , which may cause practical implementation problems , such as high papr and high complexity . in contrast , for v - ofdm , the ifft size can be fixed and independent of the bandwidth , while the vector size can be increased to accommodate the increased bandwidth . in this paper , we are interested in v - ofdm over a broadband sparse channel , in the sense that it has a large time delay spread but only a few nonzero taps .
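the vector - blocking idea above can be made concrete with a short sketch . the following python snippet is only an illustration of the transmitter structure , not the exact scheme of the papers cited here : it assumes n = l * m data symbols , groups them into l vector blocks of size m , applies a component - wise l - point ifft across the block index , and prepends a cyclic prefix . with m = 1 it behaves like an ordinary ofdm modulator , and with m = n it degenerates to a single block , mimicking single - carrier transmission ; all variable names and the normalization convention are assumptions .

```python
import numpy as np

def vofdm_modulate(symbols, M, cp_len):
    """Illustrative V-OFDM modulator (conventions assumed, not taken from the paper).

    symbols : 1-D array of N = L*M data symbols.
    M       : vector-block size (M = 1 ~ plain OFDM, M = N ~ single carrier).
    cp_len  : cyclic-prefix length (should exceed the channel delay spread).
    """
    N = len(symbols)
    assert N % M == 0, "N must be a multiple of the vector-block size M"
    L = N // M

    # Group the symbols into L vector blocks of size M (one block per row).
    X = symbols.reshape(L, M)

    # Component-wise L-point IFFT across the block index: each of the M
    # polyphase components is treated as an ordinary OFDM stream of length L.
    x = np.fft.ifft(X, axis=0) * np.sqrt(L)      # normalization is a choice

    # Serialize block by block and prepend the cyclic prefix.
    serial = x.reshape(-1)
    return np.concatenate([serial[-cp_len:], serial])

# Example: 64 QPSK symbols, vector blocks of size 4, a cyclic prefix of 8 samples.
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=(64, 2))
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
tx = vofdm_modulate(qpsk, M=4, cp_len=8)
print(tx.shape)   # (72,)
```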
for sparse channels ,there have been many studies in the literature , see for example .recently , sparse fft ( sfft ) theory was proposed by the computer science & artificial intelligence lab , massachusetts institute of technology .if a signal has a small number of nonzero fourier coefficients , the output of the fourier transform can be represented succinctly using only coefficients . for such signals ,the runtime is sublinear in the signal size rather than .furthermore , several new algorithms for sfft are presented , i.e. , an -time algorithm for the exactly -sparse case , an -time algorithm for the general case . in this paper , a sparse channel estimation and decoding scheme for v - ofdm systems is proposed .inspired by the idea of sfft under the condition of signals with only a few nonzero fourier coefficients , we first use pilot symbols to obtain channel frequency response ( cfr ) , and then estimate channel impulse response ( cir ) by using sparse ifft ( sifft ) .based on the estimation of nonzero channel coefficients and their corresponding coordinates , an efficient partial intersection sphere ( pis ) decoding is investigated and it achieves the same diversity order as the maximum likelihood ( ml ) decoding .the main contributions of the paper are summarized as follows . * we find a connection between the exactly and approximately sparse channel models in the estimation of a v - ofdm sparse multipath channel . for a multipath channel with only a few nonzero taps ,if there is no noise during the transmission , then the sparse channel can be estimated by the exactly sparse multipath channel algorithm that corresponds to algorithm 3.1 for exactly sparse fft in .when there is additive white gaussian noise ( awgn ) during the transmission , the sparse multipath channel can be estimated by the approximately sparse multipath channel algorithm that corresponds to algorithms 4.1.2 for generally sparse fft in .* by using the sifft - based algorithms , one can directly recover the nonzero channel coefficients and their corresponding coordinates , which is significant to the pis decoding process .* for the pis decoding in v - ofdm systems , the bit error rate ( ber ) is dependent of nonzero taps in a sparse channel and the vector size to some extent , but roughly independent of the maximum delay .* for any given small sphere radius , the proposed pis decoding and ml decoding are of the same diversity order , which is equal to the cardinality of the set of reminder coordinates after mod , but the pis decoding can substantially reduce the computational complexity with probability .the reminder of the paper is organized as follows . in sectionii , the system model of v - ofdm is reviewed . in section iii ,sifft - based channel estimation schemes for the exactly sparse case and the approximately sparse case are introduced . in sectioniv , a pis decoding for v - ofdm systems is proposed and analyzed . in sectionv , simulation results are presented and discussed . in sectionvi , this paper is concluded .we first briefly recall a v - ofdm system for single transmit antenna , which is shown in fig .1 . the description of system model follows the notations in below . in v - ofdm systems , symbols are blocked into vectors ( called vector blocks ( vb ) ) of size .denote the transmitted vb in as ^{\mathrm t},~l=0,1,\ldots , l-1\ ] ] where denotes the transpose .assume the average power is normalized , i.e. 
, , where denotes the mathematical expectation .accordingly , is defined as the normalized vb - based ifft of size , i.e. , here , is a column vector of size represented as ^{\mathrm t} ] . in order to avoid the interblock interference ( ibi ) ,the length of cp denoted by should not be shorter than the maximum time delay of a multipath channel .note that does not need to be divisible by . at the transmitter ,the signal sequence inserted by cp , is transmitted serially through the channel with the order ^{\mathrm t} ] .it is derived from that the relationship between the transmitted vb and received vb as where is a blocked channel matrix of the original isi channel as where is the polyphase component of , .the additive noise in ( 6 ) is the blocked version of whose entries have the same power spectral density as in that are i.i.d . complex gaussian random variables .note that can be diagonalized as where is a unitary matrix whose entries {r , c}=\frac{1}{\sqrt{m}}\mathrm e^{-\mathrm j\frac{2\mathrm\pi}{n}(l+rl)c},~r , c=0,1,\ldots , m-1 ] , and is a diagonal matrix defined as it can be seen from ( 6 ) that the original isi channel of symbols interfered together is converted to vector subchannels , each of which may have symbols interfered together .note that is the vector size and can be flexibly designed .when , ( 6 ) is back to the original ofdm , i.e. , no isi occurs in each subchannel . when , all symbols are interfered together and it is back to the sc - fde .now , we rewrite the relationship of inputs and outputs in ( 6 ) for the better understanding of channel transmission structure it is straightforward to show that after the unitary transformation , the vb is transmitted parallel over the subchannels .mathematically , is a kind of rotation matrix , can be thus viewed as the equivalent channel fourier coefficients .denote as the number of pilot channels and assume is divisible by .if the vb ^{\mathrm t} ] and is the estimator of .it is convenient to estimate by the least squares approach such that ^{-1}\widetilde{\bm y}_l ] , . with the knowledge of column vector , we can further obtain column vector by implementing the ifft operation without normalization . since the additive noise is an i.i.d .random sequence whose entries \thicksim\mathcal{cn}\left(0,\sigma^2\right) ] .now , we check the mse of estimator as =\frac{\mathbf f_{mp } \mathbf\sigma\mathbf f_{mp}^{-1}}{mp}\ ] ] where is a diagonal matrix whose diagonal entries {p+mp}=\left[\mathbf \sigma_{l_p}\right]_m,~m=0,1,\ldots , m-1,~p=0,1,\ldots , p-1 ] + now , we consider a more practical scenario that the pilot signals are transmitted through a sparse channel with only nonzero taps spread and awgn , which is called an approximately sparse multipath channel .since awgn is induced during the transmission , the estimator is no longer with only nonzero entries . in fact, the estimator has dominant entries and the rest entries are small , when the snr is not low .for the approximately sparse vector , define the parameter as the maximum expectation power ratio of the selected entries to the rest entries such that where denotes the norm of a vector . reflects how approximately the sparse multipath channel is and determines the root - mean - square error ( rmse ) of sifft algorithm . in particular ,exactly sparse is an extreme case for .the sifft - based channel estimation for approximately sparse multipath channel is shown in algorithm 2 that has the following basic idea . 
to deal with noise , the algorithm estimates the nonzero coordinates and their corresponding values separately . for the coordinate estimation ,all the coordinates are first divided into small regions .the input is permutated randomly by and , respectively , then multiplied by flat window filtering .the phase difference between these two permutations determines the circular distance to each region .select the appropriate regions with the nearest circular distance and get one vote . after repeating the above process times ,choose the final regions with more than votes . by narrowing the regions of nonzerocoordinates in each iteration , the algorithm eventually obtains the nonzero coordinates . for the value estimation , after the permutation and filtering , the nonzero values corresponding to the coordinates estimated before are obtained by permutation . repeating times and choose the median as the estimations of the values such that the estimation error decreases exponentially with .repeat the above process times and ultimately recover with dominant taps .the algorithm includes five functions , in which hashtobins is defined the same as in algorithm 1 .* approximatelysparseifft : iterate coordinate and value , then update . in each iteration , reduce -sparse to -sparse , repeat times and eventually find with dominant entries .* coordinate : access to range and narrow the range of dominant coordinates , repeat times until the dominant coordinates are uniquely determined . *range : permute randomly with times , divide all the coordinates into several regions , find the appropriate regions with the nearest circular distance and then gets one vote .after repeating times , choose the final regions with more than votes .* value : access to hashtobins and obtain the estimations of the values , repeat times and take the median of such values with real and imaginary parts , respectively .* initialization : * + choose randomly from choose randomly from * initialization : * + {\frac{n}{b}j} ] {m,(m - i_{\kappa-1})\bmod m}\big] ] , that lie in the certain sphere of radius around the received signal and generate the set of symbol sequences as where is the entry of the column vector .3 . for each , construct an injective mapping of coordinates . for each symbolsequence , where is the set of entire symbol sequences generated from the previous iteration , compare the current symbol sequence for the coordinates belonging to the partial intersection with the existed symbol sequence , namely , if holds for all , where stands for the entry in , then is put into the set of symbol sequences , which can be expressed as .then , insert the symbols whose coordinates belong to the complement of the partial intersection to each symbol sequence , i.e. , , where stands for the complement of in , set , insert to each symbol sequence in and generate , the new set of symbol sequences is thus updated as .repeat step 3 by enumerating all .then the set of entire symbol sequences is obtained by the union of all , i.e. , . is updated to as the existed coordinates of nonzero entries for the next iteration .after iterations , the set of possible vb sequences can be ultimately obtained , then choose the symbol sequence with the minimum distance of as the estimation of the transmitted vb . 
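as a rough illustration of what the channel - estimation stage produces , the sketch below recovers a k - sparse cir from pilot - tone measurements by a plain least - squares / ifft step and then keeps the k dominant taps . this is a dense - transform stand - in used only to show the inputs and outputs of the procedure : it runs in o ( p log p ) rather than the sublinear time of the sfft / sifft algorithms discussed above , and the pilot pattern , noise level , and variable names are all assumptions .

```python
import numpy as np

def estimate_sparse_cir(rx_pilots, tx_pilots, k):
    """Toy sparse channel estimator (assumed setup, not the sifft algorithm itself).

    rx_pilots : received frequency-domain samples on P equally spaced pilot tones.
    tx_pilots : the known pilot symbols on those tones.
    k         : number of dominant taps to keep.

    A dense P-point IFFT is used, so the cost is O(P log P); the sifft approach
    replaces this step with a sublinear sparse transform.  Because only every
    (N/P)-th tone is observed, delays are recovered modulo P in this toy setup.
    """
    cfr = rx_pilots / tx_pilots                  # least-squares CFR on the pilots
    cir = np.fft.ifft(cfr)                       # delay-domain response (folded mod P)
    coords = np.sort(np.argsort(np.abs(cir))[-k:])
    return coords, cir[coords]                   # coordinates and values of the taps

# Toy check: 3 taps with delays below P = 16, observed on 16 of 64 tones.
rng = np.random.default_rng(1)
n_fft, P, k = 64, 16, 3
delays = rng.choice(P, size=k, replace=False)
h = np.zeros(n_fft, dtype=complex)
h[delays] = (rng.standard_normal(k) + 1j * rng.standard_normal(k)) / np.sqrt(2)
cfr_pilots = np.fft.fft(h)[:: n_fft // P]        # true CFR on the pilot grid
noise = 0.03 * (rng.standard_normal(P) + 1j * rng.standard_normal(P))
coords, taps = estimate_sparse_cir(cfr_pilots + noise, np.ones(P), k)
print(np.sort(delays), coords)                   # the coordinates should match
```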
* initialization : * calculate according to ( 7 ) are entries in with the ascending order is the entry of column vector is vector aligned as the row and , columns of + for a v - ofdm system with the pis decoding , assume the cir and the average power of complex awgn are known at the receiver . denote as the correct symbol sequence corresponding to the transmitted symbols , i.e. , is extracted from the , , , entries of and generated as {(m - i_0)\bmod m},\left[\bm x_l\right]_{(m - i_1)\bmod m},\ldots ] .hence , the distance between and , i.e. , , is rayleigh distributed with mean and variance . according to the cumulative distribution function of rayleigh distribution , the probability that the transmitted symbol lies in the sphere radius in the iteration is from the previous analysis , the additive noise in ( 6 ) is an vector whose entries are i.i.d .complex gaussian random variables . hence , for , the events that the transmitted symbol lies in the sphere are independent and the probabilities that each event occurs are the same .after iterations , the occurrence of event is equivalent to the occurrence of all the events , then we have note that is a set of possible vb sequences whose corresponding -dimensional symbol sequences lie in the sphere radius such that ( 28 ) holds for all rows in ( 6 ) .in fact , the choice of sphere radius is a tradeoff between the symbol error rate ( ser ) performance and the computational complexity . with the increase of , also increases which consequently improves the ser performance .however , this means that more possible symbol sequences need to be compared in each iteration .the ser of the proposed pis decoding is a function of sphere radius and calculated by the law of total probability note that for , there is no doubt that since is chosen from .denote as the probability that symbol error occurs conditioned on , i.e. , substituting ( 32 ) into ( 31 ) , can be further simplified as for a v - ofdm system , the signal vector needs to be specifically rotated / transformed to achieve full diversity for the ml decoding as done in . in section ii b , it was analyzed that if subchannels with the indices are allocated to transmit pilot symbols , the ifft / sifft - based channel estimation can be applied to recover cir . for the remaining subchannels allocated to transmit data symbols , it is proved in appendix b that the diversity order of the ml decoding for a sparse multipath channel is .we describe the diversity order by the exponential equality , which is mathematically defined as , where the ser of the ml decoding can be found in . instead of the ml decoding that enumerates symbol constellation and estimates the transmitted symbols with the minimum distance , the pis decoding first generates the set of possible transmitted symbols and then chooses the transmitted symbols only from with the minimum distance .it is proved in appendix c that .accordingly , for sufficiently large , is exponentially less than or equal to , which can be expressed as , i.e. , .therefore , from ( 33 ) , is exponentially less than or equal to if and only if is exponentially less than or equal to , i.e. , we say the sphere radius is the asymptotically greater than or equal to the sphere radius , denoted by , when for a sufficiently large , the infinitesimal approximates to . substituting ( 34 ) into ( 35 ) and supposing is sufficiently large , we have . 
furthermore , for a sufficiently large , the term can be neglected compared with , then the necessary and sufficient condition of is it is well known that the ser of the proposed pis decoding can not be better than that of the ml decoding . according to ( 36 ) , we have the following lemma that gives the criterion of sphere radius satisfying .the ser of the proposed pis decoding is exponentially equal to that of the ml decoding in the choice of sphere radius for a sufficiently large . for the v - ofdm system, it is known that the complexities with respect to complex multiplication operation of mmse decoding and ml decoding are and , respectively .the pis decoding only needs trials with complex multiplication operation in each trial .hence , the complexity with respect to complex multiplication operation of the pis decoding is .besides , the evaluation and comparison operations should be taken into account in pis decoding , which in fact , may vary from to and are related to the cardinality of and ultimately dependent of sphere radius . as illustrated in algorithm 3 ,the evaluation operation is an operator used for assignment where the source is a complex number and the destination is the entry in the symbol sequence , i.e. , , while the comparison operation is one of relational operator used to check the equality of two complex numbers and , i.e. , , if the equality holds return , otherwise return . in assembly language , an evaluation operation or a comparison operation usually executes instruction cycle , whereas a real multiplication operation executes instruction cycles or slightly more due to hardware .although different operations may have different execute time , the number of instruction cycles for any operation is fixed and can be seen as a constant .then , the total complexity depends ultimately on the number of operations executed in the program . with the increase of ,it is wise to decrease the sphere radius such that the computational complexities with respect to the evaluation and comparison operations can reach the lower bound with probability .note that , then we have the following theorem .for any given small sphere radius , the pis decoding can achieve the diversity order which is the same as the ml decoding for a sparse multipath channel , but the computational complexity decreases from to with probability .see appendix d for the proof .therefore , by choosing asymptotically equal to , the proposed pis decoding algorithm can balance the tradeoff between the ser performance and the computational complexity . since the diversity order is that depends on the set of the reminders of the nonzero channel coefficient coordinates modulo , in practice , for a given channel model , i.e. 
, for a given set of coordinates of nonzero channel coefficients , one may properly choose such that is maximized .in this section , we provide simulation results to verify the previous analysis .the bpsk modulation is employed in the v - ofdm system .sparse multipath channel is modelled as i.i.d .complex gaussian distributed nonzero taps randomly distributed within the maximum delay .we first employ the rmse to evaluate the performances of the sifft - based sparse channel estimation .then , we give an example of different channels with deterministic nonzero coordinates to make a comparison of the diversity order .besides , we investigate the relationship between the ber performance of pis decoding and the parameters , , , respectively .furthermore , the pis decoding is compared with the conventional zf , mmse , ml decoding schemes in the v - ofdm system .finally , channel estimation and decoding algorithm are jointly considered to show the ber performances in both ofdm and v - ofdm systems .3 and 4 show the rmse performances of the sifft - based sparse channel estimation with and without noise , respectively . in the simulation of the sifft - based exactly sparse multipath channel estimation ,the parameters and are set to such that is a constant regardless of .it can be seen from fig .3 that the rmse of the channel estimation is below but reduces the complexity to , where is pilot channel number .the estimation error is mainly caused by the imperfect permutation that the nonzero entries are not separated into different bins .for the sifft - based approximately sparse multipath channel estimation , suppose the parameters and that can keep the collision at a relatively low level .4 indicates that with the increase of , dominant entries are slightly influenced by the rest entries , which consequently , reduces the rmse of channel estimation .for instance ,when , the rmse of sparse channel estimation is below .whereas when , however , there is a sharp decrease for since the condition does not hold in this case .the complexity for the sifft - based approximately sparse multipath channel estimation is . in fig . 5, we give an example of different channels with deterministic nonzero coordinates and for each channel , the nonzero channel coefficients are i.i.d .complex gaussian distribution .suppose , , , and the nonzero coordinates for channel a : , channel b : , channel c : , channel d : , channel e : , channel f : .accordingly , the reminders of the nonzero coordinates modulo for channel a : , channel b : , channel c : , channel d : , channel e : , channel f : .it can be seen from fig .5 that the diversity order of channel a is , channel b and channel c are , channel d and channel e are , channel f is .it is pointed out that although and , their corresponding diversity orders are different . as a result, we can verify the previous analysis that the diversity order of sparse multipath channel is determined by the cardinality of the set of reminder coordinates after mod , rather than the cardinality of coordinate set itself .6 show how the parameters , , influence on the ber performance of pis decoding , respectively .suppose the transmitted snr . 
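the random sparse channel assumed in these simulations can be generated as in the short sketch below ; this is only a guess at the setup implied by the text ( i.i.d . complex gaussian taps placed at distinct random delays within the maximum delay , then normalized to unit energy ) , and every parameter name is an assumption .

```python
import numpy as np

def random_sparse_channel(k, max_delay, rng):
    """k i.i.d. CN(0,1) taps at distinct random delays below max_delay,
    normalized to unit channel energy (an assumed convention)."""
    h = np.zeros(max_delay, dtype=complex)
    coords = rng.choice(max_delay, size=k, replace=False)
    h[coords] = (rng.standard_normal(k) + 1j * rng.standard_normal(k)) / np.sqrt(2)
    return h / np.linalg.norm(h)

rng = np.random.default_rng(2)
h = random_sparse_channel(k=4, max_delay=32, rng=rng)
print(np.flatnonzero(h), round(np.linalg.norm(h), 6))   # 4 nonzero taps, unit norm
```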
in fig 6 ,we compare the ber with respect to the maximum delay .it can be seen that the ber is roughly irrelevant to the variation of , since nonzero taps are randomly distributed within taps .7 investigates the relationship between the number of nonzero taps and the ber performance .simulation result indicates that with the increase of , the ber decreases almost linearly in the logarithmic scale .the reason is that the diversity order can be directly determined by and increases with with large probability .8 shows the ber performance with respect to vb size in the v - ofdm system . the ber first decreases with the increase of for , whereas for , the ber increases with instead . on one hand, a larger can avoid nonzero taps interacting with each other better after mod , in this case , with high probability which may improve the ber performance . on the other hand , for a given sphere radius , with the increase of , the probability that the transmitted symbols lie in the certain sphere decreases exponentially according to ( 30 ) , and thus diminishes the advantage of the pis decoding .therefore , one can improve the ber performance of the pis decoding by choosing an appropriate vb size .9 compares different decoding approaches in the v - ofdm system .suppose the parameters , , , , .since the condition does not hold , with not a small probability which may diminish the multipath diversity orders of the mmse decoding , ml decoding and pis decoding .in fact , when is sufficiently large such that the reminders of the nonzero coordinates modulo can be regarded randomly distributed at the coordinates , for the given and , the probability mass function of is . after averaging over the random nonzero coordinates of channel , the diversity orders of the zf decoding , mmse decoding , ml decoding and pis decoding are corresponding to the minimum of and thus equal to . for the different decoding approaches , denote the bers of zf decoding , mmse decoding , ml decoding , pis decoding as , , , , respectively .it is well known that .simulation result indicates that the pis decoding loses certain ber performance since ( 36 ) does not hold if is not large enough , while for , the proposed pis decoding outperforms the zf decoding and mmse decoding and gradually approximates to the ml decoding with the increase of .furthermore , the complexity of the pis decoding decreases with and is much less than the ml decoding . and.,width=336 ] with and .,width=336 ] with and .,width=336 ] with and .,width=336 ] in fig .10 , we consider the channel estimation and decoding algorithms jointly .the ber performance is not only dependent of the decoding approaches , but also influenced by the channel estimation accuracy .suppose the parameters , .for the ofdm system , the receiver employs the fft - based interpolation for channel estimation and symbol - by - symbol decoding with parameters , pilot channel number . for the v - ofdm system with linear receivers, we estimate channel by the conventional ifft - based approach , and employ the zf decoding and the mmse decoding with parameters , , , respectively . for the v - ofdm system with the ml decoding, we estimate the channel by the conventional ifft - based approach as well with the parameters , , . 
for the v - ofdm system with the pis decoding ,the sifft - based algorithm is employed for sparse multipath channel estimation with parameters , , .if a slight bias is induced during the process of channel estimation , the sphere radius should not be extremely small since a robust sphere radius is needed to guarantee that the probability of the transmitted symbols lying in the sphere does not decrease . an empirical method to balancethe tradeoff between the estimation error and the complexity is to choose sphere radius .it can be seen from fig .10 that v - ofdm system outperforms the conventional ofdm system . compared with the zf decoding and the mmse decoding , the proposed sifft - based channel estimation and pis decoding reduces the ber significantly . andin this paper , we investigate sparse multipath channel estimation and decoding for broadband v - ofdm systems . from the system model ,if the pilot channels are evenly allocated over multiple subcarriers , the pilot symbols are evenly distributed over the equivalent channels . for the sparse multipath channel estimation , we first design a type of pilot symbols that can minimize the mse of an estimator. then , we give sifft - based algorithms for exactly and approximately sparse multipath channel estimations corresponding to the cases with and without awgn induced during the transmission , respectively .the remarkable significance of the sifft - based approach is to estimate the nonzero channel coefficients and their corresponding coordinates directly . for the pis decoding algorithm ,the diversity order is determined by not only the number of nonzero taps , but also the coordinates of nonzero taps .simulation results indicate that the ber performance of the pis decoding is comparable to that of the ml decoding with certain sphere radius for a sufficiently large snr , but reduces the complexity substantially .for the pilot signals being transmitted through multipath channel with awgn , it was derived in ( 18 ) that is an unbiased estimator of .the diagonal entries of ( 19 ) corresponding to the variance of , which are all equal to .hence , is a random vector and can be regarded as with an additive noise whose entries are identically distributed with complex gaussian noise , i.e. , , but may not be white . for the sparse channel with only nonzero taps , approximates the expectation power ratio of dominant entries in to the rest entries such that considering the sparse channel that , ( 37 ) can thus be further simplified as it was listed in table i that the designed pilot symbols can reduce the interference efficiently .suppose norm of the sparse channel is normalized , i.e. , .since is proportional to , we have where the typical value of is and roughly independent of .therefore , it can be naturally concluded that if the designed pilot symbols are transmitted through the normalized sparse multipath channel with awgn , then the estimator is an approximately sparse vector with the parameter , where denotes the transmitted snr , the typical value of is and roughly independent of .assume the ideal channel state information is known at the receiver .denote as the estimation of with the ml decoding , then the ser of the ml decoding conditioned on is written as . 
according to the ml decoding , for the transmitted symbol and a distinct symbol from the symbol constellation ,if holds , then the symbol error occurs .since the event is equivalent to , it is not difficult to derive a lower bound of the ser as and an upper bound of the ser as furthermore , the total number of elements in the symbol constellation is , then the upper bound ( 41 ) can be further simplified as therefore , the upper and low bounds of the ser have the same tendency and only differ by a constant multiplier .denote , then we have where the -function is define as . substituting ( 8) into ( 43 ) andnote that the distance does not change after the unitary transformation , ( 43 ) can be simplified as consider the sparse channel has only i.i.d .nonzero taps and each nonzero entry is a complex gaussian random variable with zero mean and unit variance . recall that is the set of coordinates of the nonzero taps .suppose are the entries in with the ascending order .we construct a vector ^{\mathrm t}]th rows , , , , columns of the -point fft matrix without normalization , i.e. , {r , c}=\mathrm e^{-\mathrm j\frac{2\mathrm\pi}{n}(l+rl)j_c} ] , where the column vector }\big]^{\mathrm t} ] . if there exists a distinct and an integer such that still holds , then .hence , the vectors and are linearly dependent of . since there are only such in total ,if let $ ] , we have . since the vectors in each column of a dft matrix are linearly independent , the vector columns which are equal to the columns of the -point fft matrix without normalization while multiplied by the factor are also linearly independentthen , the maximal number of linearly independent columns of is .thus , we have . since is an invertible matrix , . as a result, it can be concluded that the diversity order of the ml decoding for a sparse multipath channel is .denote the estimation symbol sequence as the conventional ml decoding of .in contrast to the pis decoding , the ser of the ml decoding can be written as since , we have sparse channel has only i.i.d .nonzero taps and each channel coefficient follows complex gaussian random distribution .it is not difficult to find that the nonzero entries in are also complex gaussian random variables but the variances may not be equal . recall that was defined in ( 28 ) that all possible symbol sequences lying in the certain sphere of radius around the received signal . is the correct symbol sequence corresponding to the transmitted symbols . is the distance between and , i.e. , . since the noise in ( 6 ) is complex awgn, is rayleigh distributed with mean and variance .according to the triangle inequality , , we have compared with defined in ( 28 ) , it can be found that . if is chosen exponentially equal to such that lemma 1 is satisfied , then as . due to rayleigh distribution , with probability as . since the probability density function of each nonzero entry in is a complex gaussian function , for any given ,if , we have it is found that for , such that ( 53 ) approximates to . since has only a bounded finite elements , we have , thus , has only one entry with probability when . obviously , with probability 1 as . since is a subset of , also has only one entry with probability . 
for any given small sphere radius ,lemma 1 is satisfied as such that the pis decoding achieves the same multipath diversity order as the ml decoding , which is equal to .furthermore , the cardinality of the set of possible symbol sequences in algorithm 3 remains in each iteration that the complexity for updating can be reduced significantly . for , the evaluation and comparison operationsare performed times in the iteration .consider iterations and subchannels , the complexities of the evaluation and comparison operations are with probability . in the previous analysis of the pis decoding ,the complexity of complex multiplication operation is .since the evaluation and comparison operations for a complex number are faster than a complex multiplication operation , the total complexity of the pis decoding is with probability .d. falconer , s. l. ariyavisitakul , a. benyamin - seeyar , and b. eidson , `` frequency domain equalization for single - carrier broadband wireless systems , '' _ ieee commun . mag .4 , pp . 58 , apr . 2002 . x .- g .xia , `` precoded and vector ofdm robust to channel spectral nulls and with reduced cyclic prefix length in single transmit antenna systems , '' _ ieee trans .49 , no . 8 , pp . 1363 , aug .2001 .h. zhang , x .-xia , q. zhang , and w. zhu , `` precoded ofdm with adaptive vector channel allocation for scalable video transmission over frequency selective fading channels , '' _ ieee trans . mobile comput ._ , vol . 1 , no. 2 , pp . 132 , apr . 2002 .h. zhang , x .-xia , l. j. cimini , and p. c. ching , `` synchronization techniques and guard - band - configuration scheme for single - antenna vector - ofdm systems , '' _ ieee trans .wireless commun .5 , pp . 2454 , sep . 2005 . c. han , t. hashimoto , and n. suehiro , `` constellation - rotated vector ofdm and its performance analysis over rayleigh fading channels , '' _ ieee trans . commun .828 , mar .2010 . c. han and t. hashimoto , `` tight pep lower bound for constellation - rotated vector - ofdm under carrier frequency offset and fast fading , '' _ ieee trans .62 , no . 6 , pp . 1931 , june 2014 .i. ngebani , y. li , x .-xia , s. a. haider , a. huang , and m. zhao , `` analysis and compensation of phase noise in vector ofdm systems , '' _ ieee trans . signal process .6143 , dec . 2014 .m. z. win and r. a. scholtz , `` characterization of an ultra - wideband wireless indoor channel : a communication - theoric view , '' _ ieee j. select areas commun .1613 , dec . 2002 . c. r. berger , s. zhou , j. c. preisig , and p. willett , `` sparse channel estimation for multicarrier underwater acounstic communication : from subspace methods to compressed sensing , '' _ ieee trans .signal process ._ , vol 58 , no .3 , pp . 1708 , mar . 2010 .h. vikalo and b. hassibi , `` on the sphere - decoding algorithm ii .generalizations , second - order statistics , and applications to communications , '' _ ieee trans . signal process .2819 , aug .2005 .r. prasad , c. r. murthy , and b. d. rao , `` joint approximately sparse channel estimation and data detection in ofdm systems using sparse bayesian learning , '' _ ieee trans .signal process .3591 , july 2014 .v. tarokh , n. seshadri , and a. r. calderbank , `` space - time codes for high data rate wireless communication : performance criterion and code construction , '' _ ieee trans .inf . theory _2 , pp . 744 , mar . 1998 . 
| vector orthogonal frequency division multiplexing ( v - ofdm ) is a general system that builds a bridge between ofdm and single - carrier frequency domain equalization in terms of intersymbol interference and receiver complexity . in this paper , we investigate sparse multipath channel estimation and decoding for broadband v - ofdm systems . unlike non - sparse channel estimation , sparse channel estimation only needs to recover the nonzero taps , with reduced complexity . we first consider a simple noiseless case in which the pilot signals are transmitted through a sparse channel with only a few nonzero taps , and then consider a more practical scenario in which the pilot signals are transmitted through a sparse channel with additive white gaussian noise interference . the exactly and approximately sparse inverse fast fourier transform ( sifft ) can be employed for these two cases . the sifft - based algorithm recovers the nonzero channel coefficients and their corresponding coordinates directly , which is significant for the proposed partial intersection sphere ( pis ) decoding approach . unlike the maximum likelihood ( ml ) decoding , which enumerates the symbol constellation and estimates the transmitted symbols with the minimum distance , the pis decoding first generates the set of possible transmitted symbols and then chooses the transmitted symbols with the minimum distance only from this set . the diversity order of the pis decoding is determined not only by the number of nonzero taps but also by the coordinates of the nonzero taps , and the bit error rate ( ber ) is also influenced by the vector block size to some extent but is roughly independent of the maximum time delay . simulation results indicate that , with an appropriately chosen sphere radius , the ber performance of the pis decoding is better than that of the conventional zero - forcing decoding and minimum mean square error decoding , and approaches that of the ml decoding as the signal - to - noise ratio increases , while reducing the computational complexity significantly . vector orthogonal frequency division multiplexing , sparse multipath channel , sparse inverse fast fourier transform , partial intersection sphere decoding , diversity order . |
quantum adiabatic processes are a powerful strategy to implement quantum state engineering , which aims at manipulating a quantum system to attain a target state at a designed time t. in the adiabatic scenario , the quantum system evolves under a sufficiently slowly - varying hamiltonian , which prevents changes in the populations of the energy eigenlevels . in particular , if the system is prepared in an eigenstate of the hamiltonian at a time , it will evolve to the corresponding instantaneous eigenstate at later times . this transitionless evolution is ensured by the adiabatic theorem , which is one of the oldest and most explored tools in quantum mechanics . the large number of applications of adiabatic behavior has motivated renewed interest in the adiabatic theorem , which has led to its rigorous formulation as well as to new bounds for adiabaticity . in quantum information processing , the adiabatic theorem is the basis for the methodology of adiabatic quantum computation ( aqc ) , which was originally proposed as an approach for the solution of hard combinatorial search problems . more generally , aqc has been proved to be universal for quantum computing , being equivalent to the standard circuit model of quantum computation up to polynomial resource overhead . moreover , it is a physically appealing approach , with a number of experimental implementations in distinct architectures , e.g. , nuclear magnetic resonance , ion traps , and superconducting flux quantum bits ( qubits ) through the d - wave quantum annealer . recently , the circuit model has been directly connected with aqc via hybrid approaches . an adiabatic circuit can then be designed based on the adiabatic realization of quantum gates , which allows for the translation of the quantum circuit to the aqc framework with no further resources required . in particular , it is possible to implement universal sets of quantum gates through controlled adiabatic evolutions ( cae ) . in turn , cae are used to perform one - qubit and two - qubit gates , allowing for universality through the set of one - qubit rotations combined with an entangling two - qubit gate . however , since these processes are governed by the adiabatic approximation , it turns out that each gate of the adiabatic circuit will be implemented only with some fixed probability ( for a finite evolution time ) . moreover , the time for performing each individual gate will be bounded from below by the adiabatic time condition . for a recent analysis of adiabatic control of quantum gates and the corresponding non - adiabatic errors , see ref . . in order to resolve the limitations of adiabaticity in the hybrid model , we propose here a general shortcut to cae through simple time - independent counter - diabatic assistant hamiltonians within the framework of the superadiabatic theory . the physical resources spent by this strategy will be governed by the quantum circuit complexity , but no adiabatic constraint will be required in the individual implementation of the quantum gates .
moreover ,the gates will be deterministically implemented with probability one as long as decoherence effects can be avoided .in particular , we discuss the realization of rotation gates and arbitrary n - qubit controlled gates , which can be used to design different sets of universal quantum gates .this analog approach allows for fast implementation of individual gates , whose time consumption is only dictated by the quantum speed limit ( qsl ) ( for closed systems , see refs .indeed , the time demanded for each gate will imply in an energy cost , which increases with the speed of the evolution . in this context , by analyzing the energy - time complementarity , we will show that the qsl provides an energy cost for superadiabatic evolutions that upper bounds the cost of adiabatic implementations .let us begin by discussing the design of adiabatic quantum circuits as introduced by hen through the implementation of quantum gates via cae . in order to define quantum gates through cae, we will introduce a discrete bipartite system associated with a hilbert space . the system is composed by a target subsystem and an auxiliary subsystem , whose individual hilbert spaces and have dimensions and , respectively .the dynamics of will be governed by a hamiltonian in the form + g\left ( t\right ) \left [ \sum\nolimits_{k}p_{k}\otimes h_{k}^{\left ( f\right ) } \right ] , \label{sce.1.1}\]]where , , and denotes a complete set of orthogonal projectors over , so that they satisfy and .alternatively , we can write eq .( [ sce.1.1 ] ) as denoting a hamiltonian that acts on .suppose now that we prepare the system in the initial state , where is an arbitrary state of and is the ( non - degenerate ) ground state of .then is the ground state of the initial hamiltonian . by applying the adiabatic theorem ,a sufficiently slowing - varying evolution of will drive the system ( up to a phase ) to the final state where is the ground state of .we can perform a single - qubit unitary transformation through a general rotation of an angle around a direction on the bloch sphere . in this direction , we begin by preparing the system , taken here as two qubits , in the initial state , where are the computational states of the auxiliary system .then , we let the system adiabatically evolve driven by the hamiltonian and are adiabatically - evolved hamiltonians , whose effect will be restricted to the respective subspaces of the projectors , where is a unitary vector on the bloch sphere associated with and , with denoting the set of pauli matrices .the hamiltonians are taken as \right\} ] .note that each projector is associated with a hamiltonian .for instance , for the adiabatic implementation of -controlled gates , we have defined the hamiltonian in eq .( [ adg.1.6 ] ) by linking the set with and by linking the remaining projector with .the next step is to obtain the counter - diabatic hamiltonian that implements the shortcut to the adiabatic evolution of . in this direction, we use the eigenstates of as given by eq .( [ sce1.5 ] ) .then , we get , \label{sce1.5.a}\]]with . 
therefore \nonumber \\ & = & \sum\nolimits_{l}\left [ p_{l}\otimes h_{l}^{cd}\left ( t\right ) \right ] , \label{sce1.5.b}\end{aligned}\]]where and is the counter - diabatic hamiltonian to be associated with the piecewise adiabatic contribution acting over subsystem , which reads .\label{sce1.6}\]]hence , from eq .( [ sfa.1.1 ] ) , we can implement the shortcut dynamics through the superadiabatic hamiltonian is the piecewise superadiabatic hamiltonian .note that the cost of performing superadiabatic evolutions requires the knowledge of the eigenvalues and eigenstates of .for the implementation of general -controlled gates , this is a hamiltonian acting over a single qubit , which is independent of the circuit complexity .moreover , we can show that , for an arbitrary -controlled quantum gate , the counter - diabatic hamiltonians ( ) associated with shortcuts to adiabatic evolutions driven by \right\} ] is the bures metric for pure states and for superadiabatic evolutions, the initial state evolves to , where denotes the instantaneous ground state of the adiabatic hamiltonian . by using the parametrized time , we can show from eqs .( [ qsl.1 ] ) and ( [ qsl.2 ] ) that the total time that mimics the adiabatic evolution within the superadiabatic approach can be reduced to an arbitrary small value .more specifically , the addition of a counter - diabatic hamiltonian implies into the qsl bound with and , as shown in section _methods_. therefore , the qsl bound reduces to with and defined by the superadiabatic hamiltonian .this means that the superadiabatic implementation is compatible with an arbitrary reduction of the total time , which holds _ independently _ of the boundary states and .naturally , a higher energetic cost is expected to be involved for a smaller evolution time .in particular , saturation of eq .( [ qsl.final2 ] ) is achieved for either or , with both cases implying in .note that this limit is forbidden in the adiabatic regime for finite , since the energy gap is proportional to , which implies in an adiabatic time of the order , with .hence , eq . ( [ qsl.final2 ] ) leads to a flexible running time in a superadiabatic implementation , only limited by the energy - time complementarity .let us show now that time and energy are complementary resources in superadiabatic implementations of quantum evolutions .we shall define the energetic cost associated with a superadiabatic hamiltonian through given by eq .( [ sce1.7 ] ) and the norm provided by the hilbert - schmidt norm } $ ] . since is hermitian , we can write } dt \nonumber \\ & = & \frac{1}{\tau } \int_{0}^{\tau } \sqrt{\text{tr}\left[ { h}^{2}\left ( t\right ) + { h}_{cd}^{2}\left ( t\right ) \right ] } dt . \label{cost1.2}\end{aligned}\ ] ] to derive eq . ( [ cost1.2 ] ) , we have used that .this can be obtained by computing the trace in the eigenbasis of and noticing that the expectation value of taken in an eigenstate of vanishes , i.e. . in particular , let us define the energetic cost to the adiabatic hamiltonian as } dt .\]]then , it follows that the energetic cost in superadiabatic evolutions supersedes the energetic cost for a corresponding adiabatic physical process . in order to evaluate adopt the basis of eigenstates of the adiabatic hamiltonian . by using eq .( [ sce1.5 ] ) , this yields } dt , \label{cost1.4}\ ] ] where are the energies of the adiabatic hamiltonian and in order to analyze the energetic cost as provided by eq .( [ cost1.4 ] ) for superadiabatic qubit rotation gates , we set and . 
moreover , by using eq .( [ sce1.5 ] ) , we obtain , which leads to [ see eqs .( [ cqa.2.5a ] ) and ( [ cqa.2.5b ] ) in section _ methods _ ] . hence as a function of for different values of . ]we illustrate the behavior of in fig .[ graph1 ] , where it is apparent that the energetic cost increases inversely proportional to the total time of evolution .in particular , note also that , for a fixed energetic cost , the optimal choice requires a longer evolution .this is because of the fact that , in this case , the final state associated with the auxiliary qubit is orthogonal to its initial state , so it is farther in the bloch sphere . in the more general case of controlled gates , the analysis is similar as in the case of single - qubit gates .however , we must take into account the number of projectors composing the set . more specifically , the sum over in eq .( [ cost1.4 ] ) shall run over to , which is the number of projectors over the subsystem .thus we can show that energetic cost to implement controlled gates is .let us explicitly design here the superadiabatic implementation of controlled evolutions for piecewise hamiltonians as provided by eqs .( [ pw - h ] ) . to this end , consider the eigenvalue equation , where from eq .( [ sce1.5 ] ) , it follows that the eigenstates for the adiabatic hamiltonian governing the composite system are given by the sets and associated with the set of eigenvalues and , respectively . by evaluating the eigenvalues of and , we obtain that their spectra are equal , being provided by .thus , exhibits doubly degenerate levels , with and associated with levels and , respectively . by using now eqs .( [ cqa.2.5a ] ) and ( [ cqa.2.5b ] ) , we obtain , for any and .then , from eq .( [ sce1.5.b ] ) , we obtain that the counter - diabatic hamiltonian is , which leads to the time - independent counter - diabatic hamiltonian given by eq .( [ sce1.7.ad ] ) .the extension to the case of -controlled gates can be achieved as follows . from eq .( [ sce1.5 ] ) , the eigenstates of read where , and . by computing the eigenvalues of , we obtain that the spectrum of is -degenerate , with and associated with the levels and , respectively . by using these results into eq .( [ sce1.5.b ] ) , we obtain that the counter - diabatic piecewise hamiltonian is given by eq .( [ sce1.7.ad ] ) .hence , the implementation any -controlled gate is achieved through a time - independent counter - diabatic hamiltonian .let us apply here the qsl bound to superadiabatic evolutions . by using the fact than the evolves in the ground state of and that is given by eq .( [ sfa.1.1 ] ) , we have where is the instantaneous ground state energy of .now we use eq .( [ sfa.1.2 ] ) and the inequality , which yields using the parametrized time , we obtain the parameters ( ) are given by . sincethe ground state energy for the adiabatic hamiltonian in the case of -controlled gates is [ see eqs .( [ pw - h ] ) and ( [ adg.1.6 ] ) ] , we write , with . moreover , we define . then us now analise the term . first , note that ( see proof in ref . ) , which yields we have used the inequality . from the definition of the buresmetric , we have .hence , , which implies into eq .( [ qsl.final2 ] ) .we have proposed a scheme for implementing universal sets of quantum gates within the superadiabatic approach . 
in particular , we have shown that this can be achieved by applying a _ time - independent _ counter - diabatic hamiltonian in the auxiliary qubit to induce fast controlled evolutions .remarkably , this hamiltonian is universal , holding both for performing single - qubit and -controlled qubit gates .therefore , a shortcut to the adiabatic implementation of quantum gates can be achieved through a rather simple mechanism . in particular, different sets of universal quantum gates can be designed by using essentially the same counter - diabatic hamiltonian .moreover , we have shown that the flexibility of the evolution time in a superadiabatic dynamics can be directly traced back from the qsl bound . in this context , the running time is only constrained by the energetic cost of the superadiabatic implementation , within a time - energy complementarity relationship .implications of the superadiabatic approach under decoherence and a fault - tolerance analysis of superadiabatic circuits are further challenges of immediate interest . in a quantum open - systems scenario ,there is a compromise between the time required by adiabaticity and the decoherence time of the quantum device .therefore , the superadiabatic implementation may provide a direction to obtain an optimal running time for the quantum algorithm while keeping an inherent protection against decoherence . in turn, a basis for such development may be provided by the generalization of the superadiabatic theory for the context of open systems . concerning error - protection, it may also be fruitful the comparison of our approach with non - adiabatic holonomic quantum computation , where non - adiabatic geometric phases are used to perform universal quantum gates ( see , e.g. recent proposals in refs . ) . moreover, the behavior of correlations such as entanglement may also be an additional relevant resource for superadiabaticity applied to quantum computation .these investigations as well as experimental proposals for superadiabatic circuits are left for future research .we are grateful to itay hen and adolfo del campo for useful discussions .m. s. s. thanks daniel lidar for his hospitality at the university of southern california .we acknowledge financial support from the brazilian agencies cnpq , capes , and faperj . this work has been performed as part of the brazilian national institute of science and technology for quantum information ( inct - iq ) .richerme , p. , senko , c. , smith , j. , lee , a. , korenblit , s. & monroe , c. experimental performance of a quantum simulator : optimizing adiabatic evolution and identifying many - body ground states .rev . a _ * 88 * , 012334 ( 2013 ) .torrontegui , e. , ibanez , s. , martinez - garaot , m. , modugno , m. , del campo , a. , guery - odelin , d. , ruschhaupt , a. , chen , x. & muga , j. g. shortcuts to adiabaticity _ adv .atom . mol .phys . _ * 62 * , 117 ( 2013 ) . | adiabatic state engineering is a powerful technique in quantum information and quantum control . however , its performance is limited by the adiabatic theorem of quantum mechanics . in this scenario , shortcuts to adiabaticity , such as provided by the superadiabatic theory , constitute a valuable tool to speed up the adiabatic quantum behavior . here , we propose a superadiabatic route to implement universal quantum computation . our method is based on the realization of piecewise controlled superadiabatic evolutions . remarkably , they can be obtained by simple time - independent counter - diabatic hamiltonians . 
in particular , we discuss the implementation of fast rotation gates and arbitrary n - qubit controlled gates , which can be used to design different sets of universal quantum gates . concerning the energy cost of the superadiabatic implementation , we show that it is dictated by the quantum speed limit , providing an upper bound for the corresponding adiabatic counterparts . |
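a counter - diabatic term of the kind used throughout the paper above can be evaluated numerically from the instantaneous eigenbasis via the standard transitionless - driving formula h_cd ( t ) = i hbar sum_n ( |d_t n><n| - <n|d_t n> |n><n| ) . the sketch below does this by finite differences for a generic single - qubit adiabatic hamiltonian ; the particular interpolation and the hbar = 1 units are assumptions for illustration and are not the specific piecewise hamiltonians of that paper .

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h_adiabatic(s, omega=1.0):
    """Assumed single-qubit interpolation: H(s) = -(omega/2) [(1-s) sz + s sx]."""
    return -0.5 * omega * ((1 - s) * sz + s * sx)

def h_counter_diabatic(s, ds=1e-6, omega=1.0):
    """H_cd = i * sum_n (|dn><n| - <n|dn> |n><n|) with hbar = 1, evaluated by
    finite differences of the instantaneous eigenvectors (phases gauge-fixed)."""
    _, v0 = np.linalg.eigh(h_adiabatic(s, omega))
    _, v1 = np.linalg.eigh(h_adiabatic(s + ds, omega))
    hcd = np.zeros((2, 2), dtype=complex)
    for n in range(2):
        a, b = v0[:, n], v1[:, n]
        b = b * np.exp(-1j * np.angle(np.vdot(a, b)))   # continuity of the phase
        dn = (b - a) / ds                                # approximate d|n>/ds
        hcd += 1j * (np.outer(dn, a.conj())
                     - np.vdot(a, dn) * np.outer(a, a.conj()))
    return hcd   # multiply by ds/dt if s is a rescaled time

print(np.round(h_counter_diabatic(0.3), 4))   # approximately proportional to sigma_y
```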
clock synchronization is a well - studied problem with many practical and scientific applications . in the special theory of relativity there are two standard methods for synchronizing a pair of spatially separated clocks , einstein synchronization and eddington 's slow clock transport . recently , two new quantum protocols have been proposed for synchronizing remote clocks . the first one uses prior quantum bit entanglement between the two parties and was proposed by _ jozsa et al _ . this protocol is based on the assumption that the entanglement can be achieved without any relative phase error . however , the validity of this assumption has been discussed and questioned in a number of papers . once and if this entanglement can be obtained , their algorithm determines the time difference between the clocks by essentially monitoring the oscillation of a function , and thus requires shared singlets . the second protocol was proposed by _ i. chuang _ , and obtains to bits of accuracy by communicating only qubits and using an range of frequencies . after communicating the bits according to his protocol , they are in the state corresponding to the fourier transform ( over ) of the state , for some fixed and known . as a result , one can apply an inverse fourier transform and subsequently measure the value of and hence . in this paper , we improve significantly on chuang 's result by presenting an algorithm that is able to calculate to bits of accuracy while communicating only one qubit in one direction and using an range of frequencies . further , we prove that , under our computational model , the product of the frequency range and the number of transmitted qubits must be , and conclude that our algorithm is optimal in this model . in our protocol alice sends a photon to bob with some tick rate . the state of the received photon is , where is the time the photon spent in transit and is the pauli matrix . even though we only use one - way communication from alice to bob , for the purposes of proving computational lower bounds we assume an even stronger model , where the two parties can exchange photons back and forth . the information they get about the time difference between the two clocks comes from a phase change in the state of the qubit , which depends on and the tick rate . let us now define this procedure and see how we can actually implement it . the input to this procedure will be a quantum register which holds the tick rate and a qubit . the output is a state that has a phase which depends on and . let tqh be a black box quantum procedure defined by the equation where is the time difference between the two parties and is a known base tick rate . this is a very reasonable and powerful model , since we know that all the information one can get about the time difference via such photon communications is in the form of a relative phase change . here the first register handles the tick rate of the photon to be transmitted and the second register is the photon that alice communicates to bob ( or bob to alice ) . the implementation of this black box is based on the ticking qubit handshake protocol ( tqh ) described in i. chuang 's paper . suppose alice wants to create the state . she first sends the qubit to bob with ticking rate .
along a classical channelshe also tells him her time at the moment of the quantum communication .bob receives at time ( according to bob s clock ) a quantum state , where is the time the qubit spent in transit .finally , bob applies a phase change and thus the final state of the qubit is .we are going to describe a protocol for synchronizing two remote clocks by communicating one photon . in this algorithm , alice starts by preparing a register of qubits in a certain superposition . then she sends a photon to bob with the superposition of tick rates specified in .bob measures the received photon and alice obtains to bits of accuracy by processing a phase estimation on . in more detail , 1 .alice starts with a register of qubits initialized to , and after applying a fourier transform to them she obtains she also prepares a photon with polarization state .alice now transmits the prepared photon with the tick rate described by her first register .if the photon had a definite tick rate and polarization state the final state would be . since the register described in step 1 is in a superposition of tick rates , the outcome will be in a superposition of states 3 .bob measures the received photon .assuming without loss of generality that the outcome is , alice s register becomes 4 .alice then applies an inverse fourier transform , obtaining the state in .it is easy to see that this algorithm is an application of the general procedure known as _ phase estimation_. in this procedure , we assume a unitary operator with an eigenvector and eigenvalue .the goal is to estimate to bits of accuracy . to perform the estimation we start with two registers , the first one in a uniform superposition over all states in and the second one in the state .then we apply the unitary operation to the second register times , where is the content of the first register . by analyzing the performance of this procedureit can be seen that our algorithm obtains to bits of accuracy with constant probability .we can boost the probability of success to by increasing the size of the first register to .further analysis can be found in , page 221 .in this section we will prove a lower bound on the product of the range of tick rates ( frequencies ) we use and the number of qubits we communicate .if a quantum algorithm in the tqh model makes only queries to the black box with a single tick rate , then it must make a total number of queries in order to obtain to digits of accuracy . by making a query to the black box ,the input will become ; after applying a hadamard transform we obtain the quantum state from this we see that the problem of determining is equivalent to estimating the amplitude of ( or ) . the problem of estimating the amplitude of a quantum state , which is equivalent to the problem of counting the number of solutions to a quantum problem , is well - studied . in prove that queries are required for a -approximate count , where is the number of solutions , is the set of possible inputs and defines the closeness of the approximation .if we use this lower bound for the case of amplitude estimation , we get a lower bound of , since the amplitude is . 
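The phase-estimation view of the protocol described above can be simulated directly. In the sketch below the register of n qubits labels tick rates j (j = 0, ..., 2^n - 1), each branch acquires a phase exp(2*pi*i*j*Delta) with Delta normalised to [0,1), and an inverse Fourier transform concentrates the amplitude on the best n-bit approximation of Delta. The normalisation of Delta and the linear spacing of tick rates are assumptions made purely for illustration.

```python
import numpy as np

# Phase-estimation sketch: a register in a uniform superposition of tick rates
# j = 0..2^n - 1 picks up phases exp(2*pi*i*j*Delta); the inverse Fourier
# transform then peaks at the best n-bit approximation of Delta.

n = 8
N = 2 ** n
delta = 0.390625                      # = 100/256, exactly representable with 8 bits

j = np.arange(N)
state = np.exp(2j * np.pi * j * delta) / np.sqrt(N)   # register after the phase kicks

# inverse discrete Fourier transform (the inverse QFT on the register)
F = np.exp(-2j * np.pi * np.outer(j, j) / N) / np.sqrt(N)
state = F @ state

probs = np.abs(state) ** 2
estimate = probs.argmax() / N
print(estimate, delta)                # the most likely outcome encodes Delta to n bits
```

With n = 8 and Delta chosen to be exactly representable with 8 bits the printed estimate coincides with Delta; for generic Delta the peak lands on the nearest n-bit value with constant probability, which can be boosted by enlarging the register as noted above.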
in our case , can take any value in , must be less than and , so we obtain the lower bound of qubits .+ suppose we are able to query the black box with frequencies in the range ] black box with tick rate is equivalent to consecutive queries with tick rate , using the output of one query as the input to the next .notice that a superposition of queries to the $ ] black box does not pose any challenge to the simulation , since we can also query the one - tick rate black box in a superposition of times .for such an input , the number of queries is defined to be the maximum over all states of the superpositions . since in all caseswe query the black box at most times , this means that the one - tick rate version will run with at most queries . now , in lemma 1 we have already proved that when we use only one tick rate , we need to communicate at least qubits , and therefore . | the clock synchronization problem is to determine the time difference between two spatially separated parties . we improve on i. chuang s quantum clock synchronization algorithm and show that it is possible to obtain to bits of accuracy while communicating only one qubit in one direction and using an frequency range . we also prove a quantum lower bound of for the product of the transmitted qubits and the range of frequencies , thus showing that our algorithm is optimal . |
when taxicab geometry is mentioned , the context is often a contrast with two - dimensional euclidean geometry . whatever the emphasis , the depth of the discussion frequently does not venture far beyond how the measurement of length is affected by the usual euclidean metric being replaced with the taxicab metric where the distance between two points and is given by papers have appeared that explore length in taxicab geometry in greater detail , and a few papers have derived area and volume formulae for certain figures and solids using exclusively taxicab measurements .however , this well - built foundation has left a number of open questions . what is the taxicab length of a curve in two dimensions ? in three dimensions ?what does area actually mean in two - dimensional taxicab geometry ?is the taxicab area of a `` flat '' surface in three dimensions comparable to the taxicab area of the `` same '' surface ( in a euclidean sense ) in two dimensions ?these are all fundamental questions that do not appear to have been answered by the current body of research in taxicab geometry . in this paperwe wish to provide a comprehensive , unified view of length , area , and volume in taxicab geometry .where suitable and enlightening , we will use the value as the value for in taxicab geometry .the first dimensional measure we will examine is the simplest of the measures : the measurement of length .this is also , for line segments at least , the most well understood measure in taxicab geometry .the simplest measurement of length is the length of a line segment in one dimension . on this point ,euclidean and taxicab geometry are in complete agreement .the length of a line segment from a point to another point is simply the number of unit lengths covered by the line segment , in two dimensions , however , the euclidean and taxicab metrics are not always in agreement on the length of a line segment . for line segments parallel to a coordinate axis , such as the line segment with endpoints and ,there is agreement since both metrics reduce to one - dimensional measurement : . only when the line segment is not parallel to one of the coordinate axes do we finally see disagreement between the euclidean and taxicab metrics .the taxicab length of such a line segment can be viewed as the sum of the euclidean lengths of the projections of the line segment onto the coordinate axes ( figure [ taxicab_length ] ) , the pythagorean theorem tells us the euclidean and taxicab lengths will generally not agree for line segments that are not parallel to one of the coordinate axes .line segments of the same euclidean length will have various taxicab lengths as their position relative to the coordinate axes changes .if one were to place a scale on a diagonal line , the euclidean and taxicab markings would differ with the largest discrepancy being at a angle to the coordinate axes ( figure [ scale_diff ] ) .these cases and types of length measurement are well known and are well understood to those familiar with taxicab geometry .but , even in two dimensions , there is at least one other type of length measurement in euclidean geometry : the length of a curve .how is the length of a ( functional ) curve in two dimensions measured in taxicab geometry ? 
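Before turning to curves, the straight-segment case can be made concrete with a few lines of code: for a segment of unit Euclidean length at angle t to the x-axis, the taxicab length is |cos t| + |sin t|, ranging from 1 (axis-parallel) up to sqrt(2) at 45 degrees, which is the maximal discrepancy mentioned above.

```python
import numpy as np

# Taxicab vs. Euclidean length of a straight segment of unit Euclidean length,
# as a function of its angle to the x-axis.

def taxicab(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def euclidean(p, q):
    return np.hypot(p[0] - q[0], p[1] - q[1])

origin = (0.0, 0.0)
for deg in [0, 15, 30, 45, 60, 90]:
    t = np.radians(deg)
    end = (np.cos(t), np.sin(t))
    print(deg, euclidean(origin, end), taxicab(origin, end))
# taxicab length = |cos t| + |sin t|, i.e. between 1 and sqrt(2) ~ 1.414
```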
in euclidean geometry , the arc length of a curve described by a function with a continuous derivative over an interval ] into subintervals with widths by defining points between and ( figure [ arc_length_2d ] ) .this allows for definition of points on the curve whose -coordinates are . by connecting these points, we obtain a polygonal path approximating the curve . at this crucial point of the derivationwe apply the taxicab metric instead of the euclidean metric to obtain the taxicab length of the line segment , by the mean value theorem , there is a point between and such that .thus , the taxicab length of the segment becomes and the taxicab length of the entire polygonal path is by increasing the number of subintervals while forcing , the length of the polygonal path will approach the arc length of the curve .so , the right side of this equation is a definite integral , so the taxicab arc length of the curve described by the function over the interval ] is given by which is precisely one - fourth of the circumference of a taxicab circle of radius .a more interesting application involves the northeast quadrant of the euclidean circle of radius at the origin described by . the distance is the same as if we had traveled along the taxicab circle between the two endpoints !while this is a shocking result to the euclidean observer who is accustomed to distinct paths between two points generally having different lengths , the taxicab observer merely shrugs his shoulders .a curve such as a euclidean circle can be approximated with increasingly small horizontal and vertical steps near its path ( figure [ different_paths]b ) . as any good introduction to taxicab geometry teaches us , such as krause ,multiple straight - line paths between two points have the same length in taxicab geometry ( see figure [ different_paths]a ) .so , each of these approximations of the euclidean circle will have the same taxicab length .therefore , we should expect the limiting case to also have the same length . to further make the point, we can follow the euclidean parabola in the first quadrant and arrive at the same result ( figure [ multiple_curves ] ) . to formalize these observations , we have the following theorem .[ thm_multiple_paths ] if a function is monotone increasing or decreasing and differentiable with a continuous derivative over an interval ] is ( i.e. the path from to is independent of the function under the stated conditions ) .the taxicab arc length of over ] divide the interval into subintervals . using a linear approximation of the curve over each subinterval, we revolve each line segment about the -axis . in the euclidean derivation, this creates the frustrum of a cone . in taxicab geometry ,the discussion above shows us this will create the frustrum of a taxicab cone , which is equivalent to the frustrum an euclidean right square pyramid ( partially shown in figure [ fig_revolve ] ) . to find the taxicab surface area of the frustrum, we will compute the euclidean area and then scale the area by examining the rotations of the frustrum sides to the -plane .the euclidean surface area of the frustrum of a pyramid is where and are the perimeters of the `` top '' and `` bottom '' edges of the frustrum and is the slant height . for the subinterval ,the euclidean perimeters are and . 
to compute the slant height , we project the shaded triangle in figure [ fig_revolve ] onto a plane parallel to the -plane .the slant height is the hypotenuse of the right triangle .the projected triangle is an isoceles right triangle with legs of length ( labeled in the figure ) , so the altitude of this triangle is .the other leg of triangle has length ( labeled in the figure ) .therefore , the slant height is ^ 2}\ ] ]so , we have the euclidean surface area of frustrum of the approximation : ^ 2}\\ & = & 2\sqrt{2}(f(x_{k-1})+f(x_k))\sqrt{(\triangle x_k)^2+\frac{1}{2}[f(x_{k})-f(x_{k-1})]^2}\end{aligned}\ ] ] the sides of this frustrum form euclidean angles of with the -plane using a cross - sectional plane parallel to the -plane .this gives one scaling factor for the taxicab surface area of .the other scaling factor is dependent on the sum of the cosine and sine of the angle of the linear approximation of the function and the -axis .this gives another scaling factor of ^ 2}}\ ] ] therefore , using equation ( [ eq_scaling ] ) the taxicab surface area of the frustrum of the approximation is ^ 2}}{\sqrt{(\triangle x_k)^2+[f(x_k)-f(x_{k-1})]^2 } } } \ ] ] as the derivation now continues along the same lines as the euclidean version , the intermediate value theorem and the mean value theorem give us values and such that and .therefore , ^ 2}}{\sqrt{(\triangle x_k)^2+[f'(x_k^{**})\triangle x_k]^2}}\\ & = & \frac{8f(x_k^*)(1+|f'(x_k^{**})|)\sqrt{1+\frac{1}{2}[f'(x_k^{**})]^2}}{\sqrt{1+[f'(x_k^{**})]^2 } } \triangle x_k\\ & = & 8f(x_k^*)(1+|f'(x_k^{**})|)\sqrt{1-\frac{[f'(x_k^{**})]^2}{2(1+[f'(x_k^{**})]^2 ) } } \triangle x_k\end{aligned}\ ] ] accumulating the frustrums and taking the limit yields the formula for the taxicab surface area of a solid of revolution : ^ 2}{2(1+[f'(x)]^2 ) } } \ , dx\ ] ] this is a very interesting formula .the portion outside the radical is very reminiscent of the euclidean formula with the expected changes in the circle circumference factor ( ) and the arc length factor ( ) due to the taxicab metric .the extra radical represents a scaling factor based on the ratio of the slant height to the linear approximation of the curve .the greater the difference between these two quantities the more the frustrum sides are rotated with respect to the -plane thus requiring more scaling .if these two quantities are close , the frustrum sides are rotated very little and therefore require little scaling . to complete the analysis inspired by janssen , half of a taxicab sphere of radius is obtained by revolving the function over the interval ] . ^ 2 \ , dx\ ] ] continuing our primary example , the upper half of a taxicab circle of radius centered at the origin is described by using equation ( [ eq_solid_volume ] ) the volume of a taxicab sphere obtained by revolving the upper half of the circle about the -axis is ^{2 } \, dx\\ & = & \frac{1}{2}\pi_t \int_{-r}^{0 } \ ! ( x+r)^{2 } \ , dx + \frac{1}{2}\pi_t \int_{0}^{r } \ ! 
( -x+r)^{2 } \ , dx\\ & = & \left.\frac{1}{6}\pi_t ( x+r)^{3 } \ , \right|_{-r}^{0 } - \left.\frac{1}{6}\pi_t ( -x+r)^{3 } \ , \right|_{0}^{r}\\ & = & \frac{1}{3}\pi_t r^{3}\end{aligned}\ ] ] this result agrees precisely with the volume of a taxicab sphere as a special case of a tetrahedron providing some assurance of our concept of volume and computational approach .it should also be noted that the surface area of a sphere is not the derivative of the volume with respect to the radius .this is a consequence of the radius of the sphere not being everywhere perpendicular to the surface .to conclude our discussion , we can derive surface area and volume formulae for the taxicab equivalents of a few common solids . using the standard definition of a taxicab parabola described in , half of a ( horizontally ) parallel case of the parabola with focus and directrix given by restricting the total `` height '' of the parabola to and revolving the curve about the -axis yields an open - top taxicab paraboloid ( figure [ fig_paraboloid ] ) surface area as we would expect based on figure [ fig_paraboloid ] , this is the sum of the surface area of a cylinder with radius and height and half a sphere of radius . for the volume of a paraboloid we have again , as expected , this is the sum of the volume of a cylinder of radius and height and half a sphere of radius . in the defining paper concerning conics in taxicab geometry , nondegenerate ( or `` true '' ) two - focitaxicab ellipses are described as taxicab circles , hexagons , and octagons .if we revolve half of one of these figures about the -axis we obtain a taxicab ellipsoid ( figure [ fig_ellipsoid ] ) .if we consider a taxicab ellipse with major axis length , minor axis length , and the sum of the distances from a point on the ellipse to the foci , the function describes the upper half of an ellipse centered at the origin .this function will generally cover the sphere ( and ) , hexagon ( and ) , and octagon ( ) cases for a taxicab ellipse .the ellipsoid solid of revolution will have volume for the case of a spherical taxicab ellipsoid ( figure [ fig_ellipsoid]c ) , this formula reduces to in agreement with our previous result . for the case of a hexagon ( figure [ fig_ellipsoid]b ), this formula reduces to which is the sum of the volume of a taxicab sphere of radius and a taxicab cylinder of radius and height .this is to be expected since a hexagonal taxicab ellipsoid is composed of a taxicab cylinder capped by two taxicab half - spheres . in euclidean geometry ,there is not a closed form for the surface area an ellipsoid .since a taxicab ellipse is composed of straight lines , this problem is avoided in taxicab geometry .the surface area of an taxicab ellipsoidal solid of revolution is for a hexagonal ellipsoid , the first term is the surface area of the ends which combine to equal a taxicab sphere of radius ; the second and last terms are zero ; and , the third term is the area of a taxicab cylinder of radius and length composing the middle of the ellipsoid . for the octogonal ellipsoid ( figure [ fig_ellipsoid]a ) , the first term is an overestimate since the top of the sphere is not present on either end .this is corrected by the subtraction of the second term amounting to a taxicab sphere of radius .( a similar correcion term is seen in the volume formula above . 
)the last term accounts for the area of the ends of the ellipsoid which are taxicab circles of radius .generalizing the patterns we have set forth in this paper , it appears that the taxicab measure of a -dimensional figure in -dimensional space will agree with the euclidean measure of the figure .however , in -dimensional space , the taxicab measure of the figure will in general depend on its position in the space when compared with the euclidean measure . alongthe way we generalized the observation that multiple paths between two points can have the same taxicab length and described a strategy for dealing with figures living in a dimension higher than themselves .we also developed in taxicab geometry the strategy of revolving a function about the -axis axis by observing that the shape of the cross - section of the solid in the -plane should be a taxicab circle .as in euclidean geometry , this has yielded a simple method for deriving surface area and volume formulas for some standard taxicab solids created as solids of revolution . | while the concept of straight - line length is well understood in taxicab geometry , little research has been done into the length of curves or the nature of area and volume in this geometry . this paper sets forth a comprehensive view of the basic dimensional measures in taxicab geometry . |
consider a quantum state of some system consisting of many particles . this system could be a collection of cold atoms in an optical lattice , or of atoms in cavities , coupled by light , or entirely optical systems .assume that one is capable of performing local projective measurements on that system , however there is no way to realize a controlled coherent evolution .can one perform universal quantum computing in such a setting ?perhaps surprisingly , this is indeed the case : the _ one - way model _ of refs . demonstrates that local measurements on the _ cluster state _ a certain multi - particle entangled state on an array of qubits do possess this computational power .the insight gives rise to an appealing view of quantum computation : one can in principle abandon the need for any unitary control , once the initial state has been prepared .the local measurements a feature that any computing scheme would eventually embody then take the role of preparation of the input , the computation proper , and the read - out .this is of course a very desirable feature : quantum computation then only amounts to ( i ) preparing a universal resource state and ( ii ) performing local projective measurements [ 26 ] .but what about other entangled quantum states , different from cluster or graph states ?can they form a resource for universal computation ?is it possible to tailor resource states to specific physical systems ?for some experimental implementations e.g. , cold atoms in optical lattices , atoms in cavities , optical systems [ 11 - 13 ] , ions in traps , or many - body ground states it may well be that preparation of cluster states is unfeasible , costly , or that they are particularly fragile to finite temperature or decoherence effects .also , from a fundamental point of view , it is clearly interesting to investigate the computational power of many - body states either for the purpose of building measurement - based quantum computers or else for deciding which states could possibly be classically simulated . interestingly , very little progress has been made over the last years when it comes to going beyond the cluster state as a resource for measurement - based quantum computation ( mbqc ) . to our knowledge , no single computational model distinct from the one - way computer has been developed which would be based on local measurements on an algorithm - independent qubit resource state .the apparent lack of new schemes for mbqc is all the more surprising , given the great advances that have been made toward an understanding of the structure of cluster state - based computing itself .for example , it has been shown that the computational model of the one - way computer and teleportation - based approaches to quantum computing are essentially equivalent .a particularly elegant way of realizing this equivalence was discovered in ref . : they pointed out that the maximally entangled states used for the teleportation need not be physical .instead , the role can be taken on by virtual entangled pairs used in a `` valence bond '' description of the cluster state .this point of view is closely related to our approach to be described below .further progress includes a clarification of the temporal inter - dependence of measurements . in ref . a first non - cluster ( though not universal , but algorithm - dependent ) resource has been introduced , which includes the natural ability of performing three - qubit gates .recently , refs . 
initiated a detailed study of resource states which can be used to prepare cluster states ( see section [ sec : propertiesdiscussion ] ) . in this work ,we describe methods for the systematic construction of new mbqc schemes and resource states .this continues a program initiated in ref . in a more detailed fashion .we analyze mbqc in terms of `` computational tensor networks '' , building on a familiar tool from many - body physics known by the names of matrix - product states , finitely correlated states or projected entangled pair states .the problem of finding novel schemes for measurement - based computation can be approached from two different points of view .firstly , one may concentrate on the _ quantum states _ which provide the computational power of measurement - based computing schemes and ask 1 . _what are the properties that render a state a universal resource for a measurement - based computing scheme for a discussion , in particular in relation to ref . . ] ? _ secondly , putting the emphasize on _ methods _ , the central question becomes 1 ._ how can we systematically construct new schemes for measurement - based quantum computation ?is there a framework which is flexible enough to allow for the construction of a variety of different models ? _ both of these intertwined questions will be addressed in this work .as our main result , we present a plethora of new universal resource states and computational schemes for mbqc .the examples have been chosen to demonstrate the flexibility one has when constructing models for measurement - based computation .indeed , it turns out that many properties one might naturally conjecture to be necessary for a state to be a universal resource can in fact be relaxed .needless to say , the weaker the requirements are for a many - body state to form a resource for quantum computing , the more feasible physical implementations of mbqc become .below , we enumerate some specific results concerning the properties of resource states .the list pertains to question 1 given in the introduction .* in the cluster state , every particle is maximally entangled with the rest of the lattice . also , the localizable entanglement is maximal ( i.e. one can deterministically prepare an maximally entangled state between any two sites , by performing local measurements on the remainder ) . while both properties are essential for the original one - way computer , they turn out not to be necessary for computationally universal resource states . to the contrary, we construct _ universal states which are locally arbitrarily pure_. * for previously known schemes for mbqc , it was essential that far - apart regions of the state were uncorrelated .this feature allowed one to logically break down a measurement - based calculation into small parts corresponding to individual quantum gates .our framework does not depend on this restriction and resources with _ non - vanishing correlations _ between any two subsystems are shown to exist .this property is common e.g. , in many - body ground - states .* cluster states can be prepared step - wise by means of a bi - partite _ entangling gate _ ( controlled - phase gate ) .this property is important to the original universality proof .more generally , one might conjecture that resource states must always result from an entangling process making use of mutually commuting entangling gates , also known as a unitary _ quantum cellular automaton _once more , this requirement turns out not to be necessary . 
*the cluster states can be used as _ universal preparators _ : any quantum state can be distilled out of a sufficiently large cluster state by local measurements .once more , this property is essential to the original one - way computer scheme .however , computationally universal resource states not exhibiting this properties do exist ( the reader is referred to ref . for an analysis of resource states which are required to be preparators ; see also the discussion in section [ sec : propertiesdiscussion ] ) .more strongly , we construct universal resources out of which not even a single two - qubit maximally entangled state can be distilled . *a genuine _ qu - trit _ resource is presented ( distinct , of course , from a qu - trit version of the cluster state ) .measurement - based quantum computing as generalization of the one - way model as being considered in this work .initially , an entangled resource state is available , different from the cluster state , followed by local projective measurements on all individual constituents in the regular not necessarily cubic lattice . in all figures ,dark gray circles denote individual physical systems.,width=207 ] we will further see that there is quite some flexibility concerning the computational model itself ( addressing question 2 mentioned in the introduction ) : * the new schemes differ from the one - way model in the way the _ inherent randomness _ of quantum measurements is dealt with . *we generalize the well - known concept of _ by - product operators _ to encompass any finite group .e.g. we show the existence of computational models , where the by - product operators are elements of the entire single - qubit clifford group , or the dihedral group .* we explore schemes where each logical qubit is encoded in _ several neighboring correlation systems _ ( see section [ sec : ctn ] for a definition of the term `` correlation system '' ) .* one can find ways to construct schemes in which interactions between logical qubits are controlled by `` routing '' the qubits towards an `` interaction zone '' or keeping them away from it . * in many schemes , we adjust the layout of the measurement pattern dynamically , incorporating information about previous measurement outcomes as we go along . in particular , the expected length of a computation is random ( this constitutes no problem , as the probability of exceeding a finite expected length is exponentially small in the excess ) .what are the properties from which a universal resource state derives its power ? after clarifying the terminology , we will argue that an answer to this question desirable as it may be faces formidable obstacles .quantum computation can come in a variety of different incarnations , as diverse as e.g. , the well - known gate - model , adiabatic quantum computation or mbqc .all these models turn out to be equivalent in that they can simulate each other efficiently . for measurement - based schemes , the `` hardware '' consists of a multi - particle quantum system in an algorithm - independent state and a classical computer .the input is a gate - model description of a quantum computation . in every step of the computation, a local measurement is performed on the quantum state and the result is fed into the classical computer .based on the outcomes of previous steps , the computer calculates which basis to use for the next measurements and , finally , infers the result of the computation from the measurement outcomes . 
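This classical control loop can be written schematically as follows. The measurement back-end and the basis-adaptation rule in the sketch are placeholder stubs (a random-outcome generator and an arbitrary rule), standing in for whatever resource state and compilation scheme is actually used; they are not the update rules of any particular scheme discussed here.

```python
import random

# Schematic classical control loop of a measurement-based computation.

def measure_site(site, basis):
    """Stub for a local projective measurement; returns a random outcome."""
    return random.randint(0, 1)

def next_basis(site, outcomes):
    """Stub adaptation rule: choose the next basis from all previous outcomes."""
    return "X" if sum(outcomes) % 2 == 0 else "Y"

def run_mbqc(num_sites):
    outcomes = []
    for site in range(num_sites):
        basis = next_basis(site, outcomes)   # classical side-processing
        outcomes.append(measure_site(site, basis))
    # the result of the computation is inferred from the outcome record
    return sum(outcomes) % 2

print(run_mbqc(20))
```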
having this procedure in mind , we call a quantum state a _ universal resource _ for mbqc , if a classical computer assisted by local measurements on this states can efficiently predict the outcome of any quantum computation. the reader should be aware that another approach has recently been described in the literature .the cluster state has actually a stronger property than the one just used for the definition of universality : it is a universal preparator .this means that one can prepare any given quantum state on a given sub - set of sites of a sufficiently large cluster by means of local measurements .hence , cluster states could in principle be used for information processing tasks which require a quantum output . referred to this scenario as _ cq - universality _ i.e. universality for problems which require a classical input but deliver a quantum output .this observation is the basis of ref . , where a state is called a universal resource if it possesses the strong property of being a universal preparator , or , equivalently , of being cq - universal . clearly , any efficient universal preparator is also a computationally universal resource for mbqc ( since one can , in particular , prepare the cluster state ) .but the converse is not true , as our results show . indeed , while it proves possible to come up with necessary criteria for a state to be a universal preparator , we will argue below that the current limited understanding of quantum computers makes it extremely hard to specify necessary conditions for computational universality . in order to pinpoint the source of the quantum speedup, we might try to find schemes where more and more work is done by the classical computer , while the employed quantum states become `` simpler '' ( e.g.,smaller or less entangled ) .how far can we push this program without losing universality ?the answer is likely to be intractable .currently , we are not aware of a proof that quantum computation is indeed more powerful than classical methods .hence , it can presently not be excluded that no assistance from a quantum state is necessary at all .[ obs : assumption ] if one is unwilling to _ assume _ that there is a separation between classical and quantum computation ( i.e. , bpp bqp ) , then it is impossible to rule out any state as a universal resource .it is , however , both common and sensible to assume superiority of quantum computers and we will from now on do so .observation [ obs : assumption ] still serves a purpose : it teaches us that the only known way to rule out universality is to invoke this assumption ( this avenue was taken , e.g. , in refs . ) .[ obs : onlyway ] the only currently known method for excluding the possibility that a given quantum state forms a universal resource is to show that any measurement - based scheme utilizing the state can be efficiently simulated by a classical computer .thus , the situation presents itself as follows : there is a tiny set of quantum states for which it is possible to prove that any local measurement - based scheme can be efficiently simulated .on the other extreme , there is an even tinier set for which universality is provable . 
for the vast majorityno assessment can be made .furthermore , given the fact that rigorously establishing the `` hardness '' of many important problems in computer science turned out to be extremely challenging , it seems unlikely that this situation will change dramatically in the foreseeable future .we conclude that a search for necessary conditions for universality is likely to remain futile .the converse question , however , can be pursued : it is possible to show that many properties that one might naively assume to be present in any universal resource are , in fact , unnecessary .the current section is devoted to an in - depth treatment of a class of states known respectively as valence - bond states , finitely correlated states , matrix product states or projected entangled pairs states , adapted to our purposes of measurement - based quantum computing .this family turns out to be especially well - suited for a description of a computing scheme . indeed , any systematic analysis of resources states requires a framework for describing quantum states on extended systems .we briefly compile a list of desiderata , based on which candidate techniques can be assessed .* the description should be _ scalable _ , so that a class of states on systems of arbitrary size can be treated efficiently .* as quantum states which are naturally described in terms of one - dimensional topologies have been shown to be classically simulable , the framework ought to handle _ two- or higher dimensional topologies _ naturally . *the basic operation in measurement - based computation are _local measurements_. it would be desirable to describe the effect of local measurements in a local manner .ideally , the class of efficiently describable states should be closed under local measurements .* the class of describable states should include elements which show features that naturally occur in _ ground states _ of quantum many - body systems , such as _ non - maximal local entropy of entanglement _ or _ non - vanishing two - point correlations _ , etc . the description of states to be introduced below complies with all of these points .we will introduce the construction in several steps , starting with one - dimensional matrix product states .the new view on the processing of information is that the matrices appearing in the description of resource states are taken literally , as operators processing quantum information. 
a _ matrix product state _ ( mps ) for a chain of systems of physical dimension ( so for qubits ) is specified by * an _ auxiliary dimensional vector space _ ( being some parameter , describing the amount of correlation between two consecutive blocks of the chain ) , * for each system a set of -_matrices _ , j\in\{0\dots d-1\}| \psi \rangle| l \rangle| s_1 , \dots , s_n \rangle ] , so the mps is translationally invariant up to the boundary conditions .we take the freedom of disregarding normalization whenever this consistently possible .let us spend a minute interpreting eq .( [ eqn : linearmps ] ) .assume we have measured the first site in the computational basis and obtained the outcome .one immediately sees that the resulting state vector on the remaining sites is again a mps , where the left - hand side boundary vector now reads {\mbox{}}.\ ] ] hence the state of the auxiliary system gets changed according to the measurement outcome .so we find that the correlations between the state of the first site and the rest of the chain are mediated via the auxiliary space , which will thus be referred to as _ correlation space _ in the sequel . in the past, the matrices appearing in the definition of have been treated mainly as a collection of variational parameters , used to parametrize ansatz states for ground states of spin chains .however and that is the basic insight underlying our view on mbqc eq . ( [ eqn : firstcomp ] ) can also be read as an operator \langle j || i \rangle| l\rangle\langle r || l \rangle\langle \phi | 1 \rangle|l \rangle]. ] by &{*+[f]{a[0]}}\ar[r ] & } } & = & { \mbox{}}_r { \mbox{}}_l , \\\label{eqn : clustermatrix1 } { \xymatrix=5mm{\ar[r]&{*+[f]{a[1]}}\ar[r ] & } } & = & { \mbox{}}_r { \mbox{}}_l .\end{aligned}\ ] ] the intuition behind this choice is as follows . by the elementary relations the contraction in the middle of &{*+[f]{a[s_1]}}\ar@{-}[r]&{*+[f]{a[s_2]}}\ar[r ] & } } \ ] ] will yield a sign of `` '' exactly if . indeed , setting the boundary vectors to one checks easily that \dots a[s_1]{\mbox{ } } = 2^{-n/2 } ( -1)^p,\ ] ] which is exactly the value required by eq .( [ eqn : clustercoefficients ] ) .below , we will interpret the correlation system of a 1-d chain as a single logical quantum system .for this interpretation to be viable , we must check that the following basic operations can be performed deterministically by local measurements : i ) prepare the correlation system in a known initial state , ii ) transport that state along the chain ( possibly subject to known unitary transformations ) and iii ) read out the final state . to set the state of the correlation system to a definitive value, we measure some site say the -th in the -eigenbasis . throughout this work, we will choose the notation , , and for the _pauli operators_. denote the measurement outcome by . in case of , eq.([eqn : clustermatrix0 ] ) tells us that the state of the correlation system to the right of the -th site will be ( up to an unimportant phase ) .likewise , a outcome prepares the correlation system in , according to eq .( [ eqn : clustermatrix1 ] ) .it follows that we can use -measurements for preparation . 
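The matrices and boundary vectors themselves are not legible in the extracted equations above, so the following sketch adopts one choice consistent with the sign rule just quoted (a "-" sign exactly when two neighbouring sites are both 1): A[0] = |+><0|, A[1] = |-><1|, with boundary vectors <L| = <0| and |R> = |+>. It then checks numerically that the resulting matrix product state reproduces, up to normalisation, the 1-d cluster state obtained from |+>^n and nearest-neighbour controlled-phase gates.

```python
import numpy as np
from itertools import product

# Assumed 1-d cluster MPS:  A[0] = |+><0|,  A[1] = |-><1|,  <L| = <0|,  |R> = |+>.
ket0, ket1 = np.array([1., 0.]), np.array([0., 1.])
plus, minus = (ket0 + ket1) / np.sqrt(2), (ket0 - ket1) / np.sqrt(2)
A = {0: np.outer(plus, ket0), 1: np.outer(minus, ket1)}
L, R = ket0, plus

n = 6
# MPS amplitudes:  <L| A[s_n] ... A[s_1] |R>
mps = np.zeros(2 ** n)
for idx, s in enumerate(product([0, 1], repeat=n)):      # s = (s_1, ..., s_n)
    M = R
    for si in s:
        M = A[si] @ M
    mps[idx] = L @ M

# Reference: cluster state from |+>^n and nearest-neighbour controlled-Z gates,
# amplitude(s) = 2^(-n/2) * (-1)^(sum_i s_i s_{i+1})
ref = np.zeros(2 ** n)
for idx, s in enumerate(product([0, 1], repeat=n)):
    ref[idx] = 2 ** (-n / 2) * (-1) ** sum(s[i] * s[i + 1] for i in range(n - 1))

mps /= np.linalg.norm(mps)
print(np.allclose(mps, ref))   # True: the MPS reproduces the cluster state
```

For this particular choice the boundary vectors only contribute an s-independent constant, which is why normalisation can be disregarded as stated above.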
how to cope with the intrinsic randomness of quantum measurementswill concern us later .secondly , consider the operators &{*+[f]{a[+]}}\ar[r ] & } & = & 2^{-1/2 } ( \xymatrix=3mm{\ar[r]&{*+[f]{a[0]}}\ar[r ] & } + \xymatrix=3mm{\ar[r]&{*+[f]{a[1]}}\ar[r ] & } ) \nonumber \\ & \propto&{\mbox{}}{\mbox{}}+{\mbox{}}{\mbox{}}=h , \label{eqn : clustertransport+ } \\ \nonumber \\\xymatrix=3mm{\ar[r]&{*+[f]{a[-]}}\ar[r]&}&\propto&h z , \label{eqn : clustertransport- } \end{aligned}\ ] ] where is the hadamard - gate .we see immediately that measurements in the -eigenbasis give rise to a unitary evolution on the correlation space . similarly, one can show that one can generate arbitrary local unitaries by appropriate measurements in the - plane .below , we will frequently be confronted with a situation like the one presented in eqs.([eqn : clustertransport+],[eqn : clustertransport- ] ) , where the correlation system evolves in one of two possibilities , dependent on the outcome of a measurement .it will be convenient to introduce a compact notation that encompasses both cases in a single equation .so eqs . ( [ eqn : clustertransport+],[eqn : clustertransport- ] ) will be represented as &{*+[f]{a[x]}}\ar[r ] & } } = h z^x.\ ] ] here corresponds to the outcome in an -measurement , whereas corresponds to the outcome . in general , a physical observable given as an argument to a tensor corresponds to a measurement in the observable s eigenbasis .the measurement outcome is assigned to a suitable variable as in the above example .lastly , we must show how to physically read out the state of the purely logical correlation system .it turns out that measuring the -th physical system in the -eigenbasis corresponds to a -measurement of the state of the correlation system just after site .indeed , suppose we have measured the first systems and obtained results corresponding to the local projection operator .further assume that as a result of these measurements the correlation system is in the state : {l}\ar@{-}[r]&*+[f]{a[\phi_1]}\ar@{-}[r]&\dots\ar@{-}[r]&*+[f]{a[\phi_i]}\ar[r ] & } = { \mbox{}}.\ ] ] using eq .( [ eqn : clustermatrix1 ] ) we have that {l}\ar@{-}[r]&*+[f]{a[\phi_1]}\ar@{-}[r]&\dots\ar@{-}&*+[f]{a[\phi_i]}{\ar@{-}[r ] } & * + [ f]{a[1]}\ar[r ] & } \\ & \propto & { \mbox{ } } { \mbox{}}=0.\nonumber\end{aligned}\ ] ] but then it follows from eq .( [ eqn : transport ] ) that the probability of obtaining the result for a -measurement on site is equal to zero . in other words : if the _ correlation system _ is in the state after the -th site , then the -th _ physical site _ must also be in the state . an analogous argument forthe -case completes the description of the read - out scheme .the graphical notation greatly facilitates the passage to 2-d lattices . here ,the tensors ] , which will be contracted with the indices of the left , right , upper and lower neighboring tensors respectively . 
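Before the two-dimensional case is developed further, the single-qubit operations just described for the 1-d chain can be verified directly with the same (assumed) matrices: the two X-basis outcomes yield H and HZ on the correlation space, and an outcome in the x-y plane of the Bloch sphere yields a Hadamard times a z-rotation, which suffices for arbitrary single-qubit operations in the usual one-way-computer fashion.

```python
import numpy as np

# Measurement-induced operations on the correlation space of the 1-d cluster
# MPS (same assumed matrices as above: A[0] = |+><0|, A[1] = |-><1|).
ket0, ket1 = np.array([1., 0.]), np.array([0., 1.])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Z = np.diag([1., -1.])
A0 = np.outer(H @ ket0, ket0)          # |+><0|
A1 = np.outer(H @ ket1, ket1)          # |-><1|

# X-basis outcomes:  A[+] ~ A0 + A1 = H,   A[-] ~ A0 - A1 = H Z
print(np.allclose(A0 + A1, H))
print(np.allclose(A0 - A1, H @ Z))

# Outcomes in the x-y plane:  A0 + exp(i*phi)*A1 = H * diag(1, exp(i*phi)),
# i.e. a Hadamard times a z-rotation of the correlation-space qubit.
phi = 0.73
lhs = A0 + np.exp(1j * phi) * A1
rhs = H @ np.diag([1, np.exp(1j * phi)])
print(np.allclose(lhs, rhs))
```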
after choosing a set of boundary conditions , the expansion coefficients of the state vector are computed as illustrated in the following example on a -lattice : {u } } & { * + [ f]{u } } \\ { * + [ f]{l}}{\ar@{-}[r]}&{*+[f]{a[s_{1,1}]}}{\ar@{-}[u]}{\ar@{-}[r]}&{*+[f]{a[s_{2,1}]}}{\ar@{-}[u]}{\ar@{-}[r]}&{*+[f]{r } } \\ { * + [ f]{l}}{\ar@{-}[r]}&{*+[f]{a[s_{1,2}]}}{\ar@{-}[u]}{\ar@{-}[r]}&{*+[f]{a[s_{2,2}]}}{\ar@{-}[u]}{\ar@{-}[r]}&{*+[f]{r } } \\ & { * + [ f]{d}}{\ar@{-}[u]}&{*+[f]{d}}{\ar@{-}[u ] } } } \end{xy}.\end{aligned}\ ] ] in the 1-d case , we thought of the quantum information as moving along a single correlation system from the left to the right .for higher - dimensional lattices , a greater deal of flexibility proves to be expedient . for example , sometimes it will be natural to interpret the tensor as specifying the matrix elements of an operator mapping the left and the lower correlation systems to the right and the upper ones : &{*+[f]{a}}\ar[u]\ar[r ] & \\ & \ar[u]&\\ } } \end{xy}.\ ] ] often , on the other hand , the interpretation &\\ \ar[r]&{*+[f]{a}}\ar[r ] & \\ & \ar[u]&\\ } } \end{xy}\ ] ] or yet another one is to be preferred .we have seen in section [ 1dtn ] that the correlation system of a one - dimensional matrix product state can naturally be interpreted as a single quantum system subject to a time evolution induced by local measurements .it would be desirable to carry this intuition over to the 2-d case .indeed , most of the examples to be discussed below are all similar in relying on the same basic scenario : some horizontal lines in the lattice are interpreted as effectively one - dimensional systems , in which the logical qubits travel from the left to the right .the vertical dimension is used to either couple the logical systems or isolate them from each other ( see fig .[ fig : flow ] ) .the reader should recall that this setting is very similar to the original cluster state based - techniques .clearly , it would be interesting to devise schemes not working in this way and the example presented in section [ sec : toric2 ] takes a first step in this direction .once again the cluster state serves as an example .one can work out the tensor network representation of the 2-d cluster state vector in the same way utilized for the 1-d case in section [ sec:1dcluster ] .the resulting tensors are : &{*+[f]{a[0]}}\ar[u]\ar[r ] & \\ & \ar[u ] & } } \end{xy } = { \mbox{}}_r{\mbox{}}_u\,{\mbox{}}_l{\mbox{}}_d , \\\label{eqn:2dcluster1 } \begin{xy } * !c\xybox{\xymatrix=3mm=3 mm { & & \\\ar[r]&{*+[f]{a[1]}}\ar[u]\ar[r ] & \\ & \ar[u]&\ } } \end{xy } = { \mbox{}}_r{\mbox{}}_u\,{\mbox{}}_l{\mbox{}}_d , \\ { \mbox{}}={\mbox{ } } = { \mbox{ } } , \qquad { \mbox{}}={\mbox{}}={\mbox{}}.\end{aligned}\ ] ] an important property of eqs .( [ eqn:2dcluster0 ] , [ eqn:2dcluster1 ] ) is that the tensors | 0 \rangle| + \rangle ] effectively de - couple their respective indices .based on this fact , we will see momentarily how -measurements can be used to stop information from flowing through the lattice .indeed , suppose three vertically adjacent sites are measured , from top to bottom , respectively in the , and -eigenbasis : &{*+[f]{a[z_u]}}\ar[u]\ar[r]&\\ \ar[r]&{*+[f]{a[x]}}\ar@{-}[u]\ar[r]&\\ \ar[r]&{*+[f]{a[z_d]}}\ar@{-}[u]\ar[r]&\\ & \ar[u ] & } } \end{xy}.\ ] ] denote the measurement results by .as before , these numbers correspond to for and for , as well as for and for .in fact , we are mainly interested in the indices of the middle tensor , as they will be the ones which 
carry the logical information . to this end eq .( [ eqn : factor ] ) is of use , as it says that the upper and lower tensors factor and hence it makes sense to dis - regard all of their indices which do not influence the middle part .it hence suffices to consider {a[z_u]}}\ar@{-}[d]&\\ \ar[r]&{*+[f]{a[x]}}\ar[r]&\\ & { * + [ f]{a[z_d]}}\ar@{-}[u]&\\ } } \end{xy}.\ ] ] as a first step , we calculate {0}}\ar@{-}[d]&\\ \ar[r]&{*+[f]{a[0]}}\ar[r]&\\ & { * + [ f]{+}}\ar@{-}[u]&\\ } } \end{xy } & = & \begin{xy } * ! c\xybox{\xymatrix=3mm=3 mm { & & { * + [ f]{0}}{\ar@{-}[d]}&\\ & & { * + [ f]{+}}\\ \ar[r]&{*+[f]{0}}&&{*+[f]{+}}\ar[r]&\\ & & { * + [ f]{0}}{\ar@{-}[d]}&\\ & & { * + [ f]{+ } } } } \end{xy } = 2^{-1 } { \mbox{}}{\mbox{}},\end{aligned}\ ] ] having used eq .( [ eqn : factor ] ) and the basic fact {+}}{\ar@{-}[r]}[r]&{*+[f]{0 } } } } = { \mbox{}}=2^{-1/2}.\ ] ] a similar calculation where ] yields .hence , for \propto a[0]+a[1]| + \rangle\langle 0|| - \rangle\langle 1 | ] is given by the hadamard gate , instead of the pauli operator : &{*+[f]{a[0]}}\ar[r ] & } } = h.\ ] ] this state shares all the defining properties of the original : it is the unique ground - state of a spin-1 nearest neighbor frustration free gapped hamiltonian ( see appendix [ sec : akltappendix ] ) .against the background of our program , the obvious question to ask is whether these matrices can be used to implement any evolution on the correlation space . to show that this is indeed the case ,let us first analyze a measurement in the -basis , where . in a mild abuse of notation ,we will hence write for state vectors in the subspace spanned by instead of . from eqs.([eqn : akltorg]-[eqn : aklth ] ) one finds that depending on the measurement outcome , the operation realized on the correlation space will be one of or . at this point , we have to turn to an important issue : how to compensate for the randomness of quantum measurement outcomes .assume for now that we intended to just transport the information faithfully from left to right .in this case , we consider the operator as an unwanted _ by - product _ of the scheme . the one - way computer based on cluster states has the remarkable property that the by - products can be dealt with by adjusting the measurement - bases depending on the previous outcomes , without changing the general `` layout '' ( in the sense of fig .[ fig : flow ] ) of the computation . for more general models , as the ones considered in this work , such a simple solution seems not available .fortunately , we can employ a `` trial - until - success '' strategy , which proves remarkably general .the key points to notice are that i ) the three possible outcomes and generate a finite group and ii ) the probability for each outcome is equal to , independent of the state of the correlation system. we will refer to as the model s _ by - product _ group .now suppose we measure adjacent sites in the -basis . the resulting overall by - product operator will be a product of generators .so by repeatedly transporting the state of the correlation system to the right , the by - products are subject to a random walk on . 
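This random walk of by-product operators can be simulated in a few lines. The by-product group of this particular model is not fully legible in the extracted text, so the sketch below uses the single-qubit Pauli group modulo phases (generated by X, Y and Z) as a stand-in; it estimates the expected number of transport steps until any prescribed correcting element appears.

```python
import random

# Random walk of by-product operators on a finite group.  Stand-in group: the
# single-qubit Pauli group modulo phases (~ Z2 x Z2).  Elements are encoded as
# bit pairs and multiplication is bitwise XOR.
I, X, Z, Y = (0, 0), (1, 0), (0, 1), (1, 1)
generators = [X, Y, Z]          # possible by-products of one transport step

def mul(g, h):
    return (g[0] ^ h[0], g[1] ^ h[1])

def steps_until(target, trials=20000):
    total = 0
    for _ in range(trials):
        g, n = I, 0
        while True:
            g = mul(g, random.choice(generators))
            n += 1
            if g == target:
                break
        total += n
    return total / trials

for target in [I, X, Y, Z]:
    print(target, steps_until(target))
```

For this stand-in group every target element is reached after roughly three to four steps on average, illustrating why the expected overhead per gate is a small constant.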
because is finite, every element will occur after a finite expected number of steps ( as one can easily prove ) .the group structure opens up a way of dealing with the randomness .indeed , assume that initially the state vector of the correlation system is given by , for some unwanted .transferring the state along the chain will introduce the additional by - product operator after some finite expected number of steps , leaving us with as desired .the technique outlined here proves to be extremely general and we will encounter it in further examples presented below . [obs : randomness ] possible sets of by - product operators are not limited to the pauli group .a way of compensating randomness for other finite by - product operator groups is to adopt a `` trial - until - success strategy '' , which gives rise to a random length of the computation .this length is in each case shown to be bounded on average by a constant in the system size . by the preceding paragraphs, we can implement any element of on the correlation space .we next address the problem of realizing a phase gate for some . to this end , consider a measurement on the -basis .there are three cases * the outcome corresponds to . in this case, we get on the correlation space and are hence done . *the outcome corresponds to .we get , which is the desired operation , up to an element of the by - product group , which we can rid ourselves of as described above . *lastly , in case of , we implement on the correlation space . as ,we can `` undo '' it and then re - try to implement the phase gate .hence , we can implement any element of as well as on the correlation space .this implies that is also realizable and therefore any single - qubit unitary , as is generated by operations of the form and .the state of the correlation system can be prepared by measuring in the computational basis . in case oneobtains a result of `` '' or `` '' , the state of the correlation system will be or respectively , irrespective of its previous state .a `` ''-outcome will not leave the correlation system in a definite state .however , after a finite expected number of steps , a measurement will give a non-``0''-result .lastly , a read - out scheme can be realized similarly ( c.f .section [ sec:1dcluster ] ) .ground states of one - dimensional gapped nearest - neighbor hamiltonians may serve as resources for transport and arbitrary rotations .[ sec : aklt2d ] a universal resource deriving from the aklt - model.,width=245 ] several horizontal 1-d aklt - type states can be coupled to become a universal 2-d resource .the coupling can be facilitated by performing a controlled - z operation , embedded into the three - dimensional spin-1 space , between vertically adjacent nearest neighbors .more specifically , we will use the operation , which introduces a -phase between two systems exactly if both are in the state .the tensor network representation of this resource is given by &{*+[f]{a[0]}}\ar[u]\ar[r ] & \\ & \ar[u ] & } } \end{xy } & = & h_{l\to r}\otimes { \mbox{}}_u{\mbox{}}_d , \\ \begin{xy } * !c\xybox{\xymatrix=3mm=3 mm { & & \\\ar[r]&{*+[f]{a[1]}}\ar[u]\ar[r ] & \\ & \ar[u]&\ } } \end{xy } & = & 2^{-1/2 } { \mbox{}}_r{\mbox{}}_l\otimes { \mbox{}}_u{\mbox{}}_d , \\ \begin{xy } * !c\xybox{\xymatrix=3mm=3 mm { & & \\ \ar[r]&{*+[f]{a[2]}}\ar[u]\ar[r ] & \\ & \ar[u]&\ } } \end{xy } & = & 2^{-1/2 } { \mbox{}}_r{\mbox{}}_l\otimes { \mbox{}}_u{\mbox{}}_d,\end{aligned}\ ] ] as one can check in analogy to sec . [ sec:2dcluster ] . 
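The coupling operation invoked for this 2-d construction, a controlled-Z embedded in the three-dimensional spin-1 space, can be written down explicitly. The extracted text does not specify which level acquires the phase, so the sketch assumes, purely for illustration, that the -1 phase occurs when both neighbours are in |2>; changing which level carries the phase only relabels the basis. Restricted to a two-level subspace the operation is then the ordinary two-qubit controlled-Z.

```python
import numpy as np

# Embedded controlled-Z on two spin-1 (three-level) systems, with the -1 phase
# assumed (for illustration) to act on the |2>|2> component only.
d = 3
U = np.eye(d * d, dtype=complex)
U[2 * d + 2, 2 * d + 2] = -1

print(np.allclose(U.conj().T @ U, np.eye(d * d)))   # unitary (diagonal signs)

# Restricted to the subspace spanned by {|1>, |2>} of each system, the
# operation is the ordinary two-qubit controlled-Z gate.
basis = [1, 2]
sub = np.array([[U[a * d + b, c * d + e]
                 for c in basis for e in basis]
                for a in basis for b in basis])
CZ = np.diag([1, 1, 1, -1]).astype(complex)
print(np.allclose(sub, CZ))
```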
here , to verify that the resulting 2-d state constitutes a universal resource , we need to check that a ) one can isolate the correlation system of a horizontal line from the rest of the lattice , so that it may be interpreted as a logical qubit and b ) one can couple these logical qubits to perform an entangling gate .the first step works in complete analogy to section [ sec:2dcluster ] , see fig .[ fig : aklt ] .indeed , one simply confirms that {a[z_u]}}\ar@{-}[d]&\\ \ar[r]&{*+[f]{a[s]}}\ar[r]&\\ & { * + [ f]{a[z_l]}}\ar@{-}[u]&\\ } } \end{xy } = \pm { \xymatrix=5mm{\ar[r]&{*+[f]{a[s]}}\ar[r ] & } } , \ ] ] where and denotes a measurement in the -basis .so measuring the vertically adjacent nodes in the computational basis gives us back the 1-d state , up to a possible sign .a controlled- gate can be realized in five steps : &{*+[f]{a[x]}}\ar@{-}[r ] & { * + [ f]{a[x]}}\ar@{-}[r]&{*+[f]{a[x]}}\ar@{-}[r ] & { * + [ f]{a[x]}}\ar@{-}[r]&{*+[f]{a[x]}}\ar[r ] & \\ \ar[r]&{*+[f]{a[z]}}\ar@{-}[r]\ar@{-}[u]\ar@{-}[d]&{*+[f]{a[z]}}\ar@{-}[u]\ar@{-}[d]\ar@{-}[r]&{*+[f]{a[y]}}\ar@{-}[u]\ar@{-}[d]\ar@{-}[r ] & { * + [ f]{a[z]}}\ar@{-}[u]\ar@{-}[d]\ar@{-}[r]&{*+[f]{a[z]}}\ar@{-}[u]\ar@{-}[d]\ar[r]&\\ \ar[r]&{*+[f]{a[x]}}\ar@{-}[r ] & { * + [ f]{a[x]}}\ar@{-}[r]&{*+[f]{a[x]}}\ar@{-}[r ] & { * + [ f]{a[x]}}\ar@{-}[r]&{*+[f]{a[x]}}\ar[r ] & } } \end{xy}.\nonumber\\\end{aligned}\ ] ] the pauli matrices are understood as being embedded into the -subspace .so , e.g. , denotes a measurement in the -basis . when operating the gate , we first measure all sites of the upper and lower lines in the -eigenbasis . in casethe result for the sites at position `` 0 '' ( refer to labeling above ) is different from , the gate failed . in that caseall sites on the middle line are measured in the computational basis and we restart the procedure five steps to the right .otherwise , the systems labeled by a are measured .we accept the outcome only if we obtained on sites and on sites should a different result occur , the gate is once again considered a failure and we proceed as above .lastly , the measurement on the central site is performed . in case of a result corresponding to , it is easy to see that no interaction between the upper and the lower part takes place , so this is the last possibility for the gate to fail . let us assume now that the desired measurement outcomes were realized . at site on the middle line, we obtained {a[1]}}\ar[r ] & } } , \ ] ] which prepares the correlation system of the middle line in . at site , in turn , a hadamard gate has been realized , which causes the output of site to be .the situation is similar on the r.h.s ., so that the above network at site can be re - written as & { * + [ f]{a[+]}}\ar[r ] & \\ { * + [ f]{+}}{\ar@{-}[r]}&{*+[f]{a[y]}}{\ar@{-}[r]}\ar@{-}[u ] & { * + [ f]{+}}\\ \ar[r ] & { * + [ f]{a[+]}}\ar[r]\ar@{-}[u ] & \\ } } \end{xy}.\ ] ] we will now analyze the tensor network in eq.([eqn : akltentangling ] ) step by step . 
for proving its functionality , there is no loss of generality in restricting attention to the situation where the correlation system of the lower line is initially in state , for .we compute for the lower part of the tensor network {{\mbox{}}}}\ar@{-}[r]&{*+[f]{a[+]}}\ar[r]\ar[u ] & } } \end{xy } = x{\mbox{}}_r z^c { \mbox{}}_u.\ ] ] further , plugging the output of the lower stage into the middle part , we find {+}}{\ar@{-}[r]}&{*+[f]{a[y]}}{\ar@{-}[r]}\ar[u ] & { * + [ f]{+}}\\ & { * + [ f]{z^c{\mbox{}}}}\ar@{-}[u ] } } \end{xy } \propto z^{c+y } ( { \mbox{}}+i{\mbox{}}),\ ] ] where reflects the outcome of the -measurement on the central site : in case of and for .lastly , & { * + [ f]{a[+]}}\ar[r ] & \\ & { * + [ f]{z^{c+y } ( { \mbox{}}+i{\mbox{}})}}\ar@{-}[u ] } } \end{xy } \propto s z^{c+y } x.\ ] ] in summary , the evolution afforded on the upper line is , equivalent to up to by - products .this completes the proof of universality . for completeness ,note that we never need the by - products to vanish for all logical qubits of the full computation simultaneously .hence the expected number of steps for the realization of one- or two - qubit gates is a constant in the number of total logical qubits . in the following , we present two mbqc resource states which are motivated by kitaev s toric code states contrasts with a result in ref . that mbqc on the planar toric code state itself can be simulated efficiently classically .different from the other schemes presented , the natural gate in these schemes is a two - qubit interaction , whereas local operations have to be implemented indirectly . also , individual qubits are decoupled not by erasing sites but by switching off the coupling between them .toric code states are states with non - trivial topological properties and have been introduced in the context of quantum error correction .they have a particularly simple representation in terms of peps or ctns on two centered square lattices , } & { \ar@{-}[d ] } & & & \\ & { \ar@{-}[r]}{\ar@{-}[d ] } & { * + [ f]{k_v}}{\ar@{-}[r]}{\ar@{-}[d ] } & { * + [ f]{k_h}}{\ar@{-}[r]}{\ar@{-}[d ] } & { \ar@{-}[d ] } & & \\ { \ar@{-}[r ] } & { * + [ f]{k_v}}{\ar@{-}[r]}{\ar@{-}[d ] } & { * + [ f]{k_h}}{\ar@{-}[r]}{\ar@{-}[d ] } & { * + [ f]{k_v}}{\ar@{-}[r]}{\ar@{-}[d ] } & { * + [ f]{k_h}}{\ar@{-}[r]}{\ar@{-}[d ] } & { \ar@{-}[d ] } & \\ & { \ar@{-}[r ] } & { * + [ f]{k_v}}{\ar@{-}[r]}{\ar@{-}[d ] } & { * + [ f]{k_h}}{\ar@{-}[r]}{\ar@{-}[d ] } & { * + [ f]{k_v}}{\ar@{-}[r]}{\ar@{-}[d ] } & { * + [ f]{k_h}}{\ar@{-}[r]}{\ar@{-}[d ] } & \\ & & { \ar@{-}[r ] } & { * + [ f]{k_v}}{\ar@{-}[r]}{\ar@{-}[d ] } & { * + [ f]{k_h}}{\ar@{-}[r]}{\ar@{-}[d ] } & & \\ & & & & & & } } \end{xy}\ ] ] where } & & \\ & { * + [ f]{k_h[s]}}{\ar@{-}[ru]}{\ar@{-}[rd ] } & \\ { \ar@{-}[ru ] } & & } } \end{xy}&= & \begin{xy } * !c\xybox{\xymatrix=3mm=3 mm { { \ar@{-}[rd ] } & & \\ & { * + [ f]{z^s } } { \ar@{-}[ru ] } & \\ & { * + [ f]{z^s } } { \ar@{-}[rd ] } & \\ { \ar@{-}[ru ] } & & } } \end{xy}\end{aligned}\ ] ] and } & & \\ & { * + [ f]{k_v[s]}}{\ar@{-}[ru]}{\ar@{-}[rd ] } & \\ { \ar@{-}[ru ] } & & } } \end{xy}&= & \begin{xy } * !c\xybox{\xymatrix=3mm=3 mm { { \ar@{-}[rd ] } & & & \\ & { * + [ f]{z^s } } & { * + [ f]{z^s}}{\ar@{-}[rd]}{\ar@{-}[ru ] } & \\ { \ar@{-}[ru ] } & & & } } \end{xy}\ , \end{aligned}\ ] ] i.e. 
, and are identical up to a rotation by degrees .let us first see how acts on two qubits in correlation space coming from the left .the most basic operation is a measurement in the computational basis , which simply transports both qubits to the right ( up to a correlated by - product operator ) .generalizing this to measurements in the - plane , we find that & & \\ & { * + [ f]{k_h[\phi]}}\ar[ru]\ar[rd ] & \\ \ar[ru ] & & } } \end{xy}&= & \begin{xy } * ! c\xybox{\xymatrix=3mm=3 mm { \ar[rd ] & & \\ & { * + [ f]{zz(\phi)}}\ar[ru]\ar[rd ] & \\ \ar[ru ] & & } } \end{xy}\end{aligned}\ ] ] where is the angle with the axis , and ( note that this gate is locally equivalent to the cnot gate for . )thus , the tensors in kitaev s toric code state have a _ two_-qubit operation as their natural gate in correlation space , rather than a _ single_-qubit gate . in mbqc schemes which base on these projectors ,two - qubit gates are easy to realize , whereas in order to get one - qubit gates , tricks have to be used . in the first example , we obtain single - qubit operations by introducing ancillae : a controlled phase between a logical qubit and an ancilla in a computational basis state yields a local rotation on the logical qubit . in the second example , we use a different approach : we encode each logical qubit in _ two _ qubits in correlation space . using this nonlocal encoding ,we obtain an easy implementation of both one- and two - qubit operations ; furthermore , the scheme allows for an arbitrary parallelization of the two - qubit interactions .[ obs : kitaev ] there is no need to have a one - one correspondance between logical qubits and a single correlation system .our first scheme consists of the modified tensor & & \\ & { * + [ f]{\tilde k_h[s]}}\ar[ru]\ar[rd ] & \\ \ar[ru ] & & } } \end{xy}&= & \begin{xy } * ! c\xybox{\xymatrix=3mm=3 mm { \ar[rd ] & & \\ & { * + [ f]{k_h[s]}}\ar[ru]\ar@{-}[rd ] & & \\ \ar[ru ] & & { * + [ f]{\sqrt{z}h}}\ar[rd]&\\ & & & & } } \end{xy } \label{eq : nys : mod1}\\ & = & \begin{xy } * ! c\xybox{\xymatrix=3mm=3 mm { \ar[rd ] & & \\ & { * + [ f]{\hspace*{1.2em}z^s\hspace*{1.2em } } } \ar[ru ] & \\ & { * + [ f]{\sqrt{z}hz^s } } \ar[rd ] & \\ \ar[ru ] & & } } \end{xy}\nonumber\end{aligned}\ ] ] [ with , arranged as in ( [ eqn : nys : toriccode ] ) where _ both _ and are replaced by .the extra serves the same purpose as in other schemes : it allows to leave the subspace of diagonal operations and thus to implement rotations .the need for the will become clear later ; it is connected to the fact that in the following , we show how this state can be used for mbqc .the qubits run from left to right in correlation space in zig - zag lines in eq .( [ eqn : nys : toriccode ] ) ; for the illustration in fig .[ fig : toric1 ] , we have straightened these lines , and marked the measurement - induced interactions coming from the |0,0\rangle\langle0|+ no physical system associated to them ) , and the 1-d cluster projector , cf . eqs .( [ eqn : clustermatrix0 ] ) and ( [ eqn : clustermatrix1 ] ) .thus , takes two qubits in correlation space , projects them onto the subspace , implements the 1-d cluster map up to a hadamard , and duplicates the output to two qubits .concatenating these tensors horizontally [ this takes place in ( [ eqn : nys : toriccode ] ) if all s are measured in , and one neglects pauli errors ] therefore implements a single logical qubit line , encoded in two qubits in correlation space . 
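the two-qubit phase-type gates appearing in the discussion above can be checked with a few lines of linear algebra. the following sketch (plain numpy, not tied to the tensor conventions of the text) verifies two standard facts underlying the construction: a controlled-phase gate is locally equivalent to cnot via a hadamard on the target qubit, and an ising-type interaction exp(-i(pi/4) z x z) reproduces the controlled-z gate up to local phase gates and a global phase. the specific angle at which the gate zz(phi) of the text becomes locally equivalent to cnot depends on the paper's convention for zz(phi), which is not reproduced here.

```python
# minimal check of local equivalence between two-qubit phase gates and CNOT
# (illustrative only; conventions here need not match the ones used in the text)
import numpy as np

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Z = np.diag([1.0, -1.0])
Sdg = np.diag([1.0, -1.0j])            # inverse phase gate S^dagger

CZ = np.diag([1.0, 1.0, 1.0, -1.0])    # controlled-Z
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

# 1) CZ is locally equivalent to CNOT: a Hadamard on the target converts one into the other
assert np.allclose(np.kron(I2, H) @ CZ @ np.kron(I2, H), CNOT)

# 2) exp(-i pi/4 Z x Z) equals CZ up to local S^dagger gates and a global phase
ZZ = np.kron(Z, Z)
U = np.diag(np.exp(-1j * (np.pi / 4) * np.diag(ZZ)))   # diagonal Ising-type gate
V = np.kron(Sdg, Sdg) @ U
phase = V[0, 0]                                        # strip the irrelevant global phase
assert np.allclose(V / phase, CZ)

print("CZ ~ CNOT up to a local Hadamard; exp(-i pi/4 ZZ) ~ CZ up to local phase gates")
```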
by removing the hadamard gate from , we obtain a 1-d cluster state encoded in two qubits which is thus capable of implementing any one - qubit operation on the logical qubit ; in particular , this includes intialization and read - out .we thus define the tensor & & \\ & { * + [ f]{\tilde k_v[s]}}\ar[ru]\ar[rd ] & \\ \ar[ru ] & & } } \end{xy}= \begin{xy } * ! c\xybox{\xymatrix=3mm=3 mm { \ar[rd ] & & & & \\ &{ * + [ f]{\textsc{copy}^\dagger}}\ar@{-}[r ] & { * + [ f]{a[s]}}\ar@{-}[r ] & { * + [ f]{\textsc{copy}}}\ar[rd]\ar[ru ] & \\ \ar[ru ] & & & & } } \end{xy}\ .\ ] ] then , the toric code state ( [ eqn : nys : toriccode ] ) with replaced by is universal for mbqc : initialization , one - qubit operations , and read - out are done exacly as in the 1-d cluster state .the logical qubits are decoupled up to by - product operators in correlation space by measuring the tensors in the basis .the by - products in correlation space correspond to errors on the encoded logical qubits and thus can again be dealt with as in the cluster . in order to couple two logical qubits ,we measure a tensor in the basis and obtain a controlled phase gate in correlation space , which translates to the same gate on the logical qubits . note that this model has the additional feature that as as many controlled phases ( between nearest neighbors ) as desired can be implemented simultaneously . in the light of the discussion on the initialization of the first scheme, one might see similarities between the two schemes , since in both cases the information is effectively encoded in pairs of qubits .note however that in the first scheme , the information is stored in the parity of the two qubits , and the full -dimensional space is being used ; the reason for this encoding came from the properties of the tensor used as a map in horizontal direction .in contrast , the second scheme only populates the -dimensional even parity subspace , and the qubit is rather stored in two copies of the same state ; finally , the encoding is motivated by the properties of the tensor as a map on correlation space in horizontal direction . in this section, we will consider instances of _ weighted graph states _ forming universal resources . to motivate the construction , recall that the cluster state can be prepared by applying a controlled - phase gate with phase between any two nearest neighbors of a two - dimensional lattice of qubits initially in the state .if one wants to physically implement this operation using _ linear optics _ , one encounters the situation that the controlled phase gate can be implemented only probabilistically , with the probability of success decreasing as increases .it is hence natural to ask whether one can build a universal resource using gates , in order to minimize the probability of failure weighted graph state as a universal resource .solid lines correspond to edges that have been entangled using phase gates with phase , dotted lines correspond to edges entangled with phase gates with .this shows that one can replace some edges with weakly entangled bonds.,width=241 ] expanding the discussion presented in ref . , we treat the weighted graph state shown in fig .[ fig : weighted ] .a tensor network representation of these states can be derived along the same lines as for the original cluster in section [ sec:1dcluster ] . 
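to make the notion of a weighted graph state concrete, the following sketch prepares a small example state by applying controlled-phase gates with assorted phases across the edges of a four-qubit graph to a product of |+> states, and measures the entanglement entropy across a cut; edges crossing the cut with phases strictly below pi still generate (weaker) entanglement. the graph and the phases are illustrative choices and are not the lattice of fig. [fig:weighted].

```python
# prepare a small weighted graph state prod_edges CZ(phi_e) |+>^n and compute the
# entanglement entropy across a bipartition (illustrative toy example)
import numpy as np
from itertools import product

def cphase(n, a, b, phi):
    """diagonal controlled-phase gate exp(i*phi) acting on qubits a, b of an n-qubit register."""
    diag = np.ones(2 ** n, dtype=complex)
    for idx, bits in enumerate(product([0, 1], repeat=n)):
        if bits[a] == 1 and bits[b] == 1:
            diag[idx] = np.exp(1j * phi)
    return diag

n = 4
edges = {(0, 1): np.pi, (1, 2): np.pi / 4, (2, 3): np.pi, (3, 0): np.pi / 4}

psi = np.ones(2 ** n, dtype=complex) / 2 ** (n / 2)    # |+>^{x n}
for (a, b), phi in edges.items():
    psi = cphase(n, a, b, phi) * psi                   # all gates are diagonal

# entanglement entropy of qubits {0,1} versus {2,3}
rho = np.einsum("ij,kj->ik", psi.reshape(4, 4), psi.reshape(4, 4).conj())
evals = np.linalg.eigvalsh(rho)
evals = evals[evals > 1e-12]
print("entropy across the cut:", float(-(evals * np.log2(evals)).sum()))
```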
set .the relevant tensors are given by &{*+[f]{a[0]}}\ar[lu]\ar[r]\ar[ru ] & \\\ar[ru ] & & \ar[lu ] } } \end{xy } & = & { \mbox{}}_{ru}\,{\mbox{}}_{lu}\,{\mbox{}}_r{\mbox{}}_{ld}{\mbox{}}_{rd}{\mbox{}}_l , \\\nonumber\\ \hspace{-1 mm } \begin{xy } * !c\xybox{\xymatrix=3mm=1 mm { & & \\\ar[r]&{*+[f]{a[1]}}\ar[lu]\ar[r]\ar[ru ] & \\\ar[ru ] & & \ar[lu ] } } \end{xy } & = & { \mbox{}}_{ru}\,{\mbox{}}_{lu}\,{\mbox{}}_r{\mbox{}}_{ld}{\mbox{}}_{rd}{\mbox{}}_l .\end{aligned}\ ] ] indices are labeled for `` right - up '' to for `` left - down '' . the boundary conditions are for the -directions ; otherwise .we will first describe how to realize isolated evolutions of single logical qubits in the sense of fig .[ fig : flow ] .again the strategy will be to measure the sites of one horizontal line of the lattice in the -basis and all vertically adjacent systems in the -basis .the analysis of the situation proceeds in perfect analogy to the one given in section [ sec:2dcluster ] .one obtains {a[z_{i-1,u } ] } } & & { * + [ f]{a[z_{i+1,u } ] } } \\\ar[r ] & { * + [ f]{a[x_i]}}\ar@{-}[lu]\ar@{-}[ru]\ar[r ] & \\ { * + [ f]{a[z_{i-1,d}]}}\ar@{-}[ru ] & & { * + [ f]{a[z_{i+1,d}]}}\ar@{-}[lu ] \\ } } \end{xy } = h s^{2x_i+z_i},\ ] ] where and denotes the _ gate_. the operators and generate the 24-element single qubit clifford group .following the approach of section [ sec : aklt ] , we take this as the model s by - product group .now choose some phase .re - doing the calculation which led to eq .( [ eqn:2dto1d ] ) , where we now measure in the -basis instead of on the central node , shows that the evolution of the correlation space is given by , up to by - products . in complete analogy to section [ sec : aklt ] , we see that the model allows for the realization of arbitrary operations .how to prepare the state of the correlation system for a single horizontal line and how to read read it out has already been discussed in section [ sec:1dcluster ] .hence the only piece missing for universal quantum computation is a single entangling two - qubit gate .the schematics for a controlled- gate between two horizontal lines in the lattice are given below .we implicitly assume that all adjacent sites not shown are measured in the -basis , &{*+[f]{a[x]}}\ar@{-}[r ] & { * + [ f]{a[x]}}\ar@{-}[r ] & { * + [ f]{a[x]}}\ar[r ] & \\ & & { * + [ f]{a[y]}}\ar@{-}[lu]\ar@{-}[ru]\\ \ar[r]&{*+[f]{a[x]}}\ar@{-}[r]\ar@{-}[ru]&{*+[f]{a[x]}}\ar@{-}[r]&{*+[f]{a[x]}}\ar@{-}[lu]\ar[r ] & } } \end{xy}.\ ] ] the measurement scheme realizes a controlled- gate , where the correlation system of the lower line carries the control qubit and the upper line the target qubit . in detail one would proceed as follows :first one performs the -measurements on the sites shown and the -measurements on the adjacent ones .if any of these measurements yields the result `` '' , we apply a -measurement to the central site and restart the procedure three sites to the right .this approach has been chosen for convenience : it allows us to forget about possible phases introduced by other measurement outcomes .still , the `` correct '' result will occur after a finite expected number of steps , so the overhead caused due to this simplification is only linear .it is also not hard to see that most other outcomes can be compensated for so for practical purposes the scheme could be vastly optimized .now assume that all measurements yielded `` '' .then a -measurement is performed on the central site , obtaining the result . 
as we did in section [ sec : aklt2d ] , we assume that the ( lower ) control line is in the basis state , for .the contraction of the lower - most three tensors gives {{\mbox{}}}}\ar@{-}[r]&{*+[f]{a[x]}}\ar[u]\ar@{-}[r ] & { * + [ f]{a[x]}}\ar@{-}[r]&{*+[f]{a[x]}}\ar[r]\ar[u ] & \\ } } \end{xy } \\ \nonumber\\ & = & s^c { \mbox{}}_{lu}s^c { \mbox{}}_{ru } h{\mbox{}}_r , \nonumber\end{aligned}\ ] ] where as before .we plug this result into the | + \rangle| + \rangle| + \rangle| + \rangle| + \rangle| + \rangle| + \rangle| + \rangle| + \rangle| + \rangle\langle 0 |\langle 0 || - \rangle| i \rangle\langle 1 |\langle 1 || s \rangle\langle s | ] is the standard binary entropy function . using the concavity of the entropy function, we find such that .this means that for two fixed sites , the rate at which one can distill maximally entangled pairs by performing measurements on the remaining systems is arbitrarily small .this can be seen as follows .we will aim at preparing a maximally entangled state between any two constituents of two different blocks .it is easy to see that within the same block , the probability of success can be made arbitrarily small .we hence look at a locc distillation scheme , a _ measurement - based scheme _, taking the input and producing outputs with probability , .this corresponds to a locc procedure , where each of the measurements may depend on all outcomes of the previous local measurements .let us assume that outcomes labeled for some are successful in distilling a maximally entangled state .we start by exploiting the permutation symmetry of the code words .choose a block of .assume there exists a measurement - based scheme with the property that with probability , the scheme will leave _ at least one _ system of block in a state of maximal local entropy .then there exists a scheme such that with probability , the scheme will leave _ the first _ system of block in a state of maximal local entropy . at some point of timethe scheme is going to perform the first measurement on the -th block .because of permutation invariance , we may assume that it does so on the -th system of the block .the remaining state is still invariant under permutations of the first systems .hence there is no loss of generality in assuming that the next measurement on the -th block will be performed on the -st system .if the local entropy of any of the unmeasured systems is now maximal , then the same will be true for the first one once again , by permutation invariance .also , it is easy to see that the probability that a measurement - based scheme will leave any system of block in a locally maximally mixed state is bounded from above by let be the initial probability of obtaining the outcome for a measurement on this qubit , . clearly , we consider now a local scheme potentially acting on all qubits except this distinguished one , with branches labeled , aiming at preparing this qubit in a maximally mixed state .let be the probability of the qubit ending up in a locally maximally mixed state . in case of success , so in case of the preparation of a locally maximally entangled state , we have that , in case of failure . 
combining these inequalities, we get we can hence show that there exists a family of universal resource states such that the probability that a local measurement scheme can prepare a maximally entangled qubit pair ( up to l.u .equivalence ) out of any element of that family is strictly smaller than .let be the probability that a site of block will end up as a part of a maximally entangled pair .this means that when we fix the procedure , and label as before all sequences of measurement outcomes with , one does not perform measurements on all constituents .let denote the index set labeling the cases where somewhere on the lattice a maximally entangled pair appears , so the probability for this to happen is bounded from above by according to the above bound , , giving a strict upper bound of for the overall probability of success .the family for is clearly universal , involves only a linear overhead as compared to the original cluster state and satisfies the assumptions advertised above .in this work , we have shown how to construct a plethora of novel models for measurement - based quantum computation .our methods were taken from many - body theory .the new models for quantum computation follow the paradigm of locally measuring single sites and hence abandoning any need for unitary control during the computation .other than that , however , they can be quite different from the one - way model .we have found models where the randomness is compensated in a novel manner , the length of the computation can be random , gates are performed by routing flows of quantum information towards one another , and logical information may be encoded in many correlation systems at the same time .what is more , the resource states can in fact be radically different from the cluster states , in that they may display correlations as typical in ground states , can be weakly entangled . a number of properties of resource states that we found reasonable to assume to be necessary for a state to form a universal resource could be eventually relaxed .so after all , it seems that much less is needed for measurement - based quantum computation than one could reasonably have anticipated .this new degree of flexibility may well pave the way towards tailoring computational model towards many - body states that are particularly feasible to prepare , rather than trying to experimentally realize a specific model .this work has benefited from fruitful discussions with a number of people , including k. audenaert , i. bloch , h .- j .briegel , j.i .cirac , c. dawson , w. dr , d. leung , a. miyake , m. van den nest , f. verstraete , m.b .plenio , t. rudolph , m.m . wolf , and a. zeilinger .it has been supported by the eu ( qap , qovaqial ) , the elite - netzwerk bayern , the epsrc , the qip - irc , microsoft research , and the euryi award scheme .what is the value of the two - point correlation function ? in this work , we have only introduced the behavior of the correlation system when subject to a local measurement of a rank - one observable . however , in order to evaluate the correlation function , we need `` measure the identity '' on the intermediate systems or , equivalently , trace them out .without going into the general theory , we just state that tracing out a system will cause the completely positive map \rho a[i]^\dagger\ ] ] to act on the correlation system . 
for the cluster state , using the fact that the bases and are unbiased , we can easily show that is the completely depolarizing channel , sending any to .this causes any correlation function to vanish for .how does the situation look like for the state vector defined by eq.([eqn : gstate ] ) ?we compute : so for : where and . in other words : when acting on the computational basis , implements a simple two - state markov process , which remains in the same state with probability and switches its state with probability .now , equals if an even number of state changes occurred and if that number is odd .so for the expectation value we find in section [ sec : aklt ] we discussed an aklt - type matrix product state .it was claimed that the state constitutes the unique ground - state of a spin-1 nearest neighbor frustration free gapped hamiltonian .it must be noted that in this work , we have not introduced the technical tools needed to cope with boundary effects at the end of the chain .there are at least three ways to make the above statement rigorous : a ) treat the statement as being valid asymptotically in the limit of large chains , b ) work directly with infinite - volume states , or c ) look at sufficiently large rings with periodic boundary conditions .once one chooses one of the options outlined above , the proof of this fact proceeds along the same lines as the one of the original aklt state , as presented in example 7 of ref . ( see also ref . ) . indeed , using the notions of refs . one verifies that a[i_2 ] ) { \mbox{}}&\end{aligned}\ ] ] is injective .further , if , it is checked by direct computation that .all claims follow as detailed in refs . . in particular ,let be a positive operator supported on the vector space spanned by : set , where translates its argument sites along the chain .then is a non - degenerate , gapped , frustration free , nearest neighbor hamiltonian ( called _ parent hamiltonian _ in ref . ) , whose energy is minimized by the state at hand .m. hein , j. eisert , and h .- j .briegel , phys .a * 69 * , 062311 ( 2004 ) ; r. raussendorf , d.e .browne , and h .- j .briegel , ibid . * 68 * , 022312 ( 2003 ) ; d. schlingemann and r.f .werner , ibid .* 65 * , 012308 ( 2002 ) .m. fannes , b. nachtergaele , and r.f .werner , commun .phys . * 144 * , 443 ( 1992 ) ; y.s .stlund and s. rommer , phys .lett . * 75 * , 3537 ( 1995 ) ; u. schollwck , rev .phys . * 77 * , 259 ( 2005 ) ; d. perez - garcia , f. verstraete , m.m . wolf , and j.i .cirac , quant - ph/0608197 ; j. eisert , phys .lett . * 97 * , 260501 ( 2006 ) .f. verstraete and j.i .cirac , cond - mat/0407066 ; s. richter ( phd thesis , osnabrck , 1994 ) , supervised by r.f .werner ; f. verstraete , m.m .wolf , d. perez - garcia , j.i .cirac , phys .* 96 * , 220601 ( 2006 ) .nielsen and i.l .chuang , _ quantum computation and quantum information _( cambridge university press , cambridge , 2000 ) ; j. eisert and m.m .wolf , _ quantum computing _ , in _ handbook of nature - inspired and innovative computing _ ( springer , new york , 2006 ) .j. eisert , k. jacobs , p. papadopoulos , and m.b .plenio , phys .a * 62 * , 052317 ( 2000 ) ; d. collins , n. linden , and s. popescu , ibid .* 64 * , 032302 ( 2001 ) ; d. gottesman , _ the heisenberg representation of quantum computers _ , in s.p .corney et .xxii int .group theor .( international press , cambridge , 1999 ) ; j.i .cirac , w. dr , b. kraus , and m. lewenstein , phys . rev .lett . * 86 * , 544 ( 2001 ) . 
| we introduce novel schemes for quantum computing based on local measurements on entangled resource states. this work elaborates on the framework established in [phys. rev. lett. *98*, 220503 (2007), quant-ph/0609149]. our method makes use of tools from many-body physics (matrix product states, finitely correlated states and projected entangled pair states) to show how measurements on entangled states can be viewed as processing quantum information. this work hence constitutes an instance where a quantum information problem (how to realize quantum computation) was approached using tools from many-body theory, and not vice versa. we give a more detailed description of the setting and present a large number of new examples. we find novel computational schemes, which differ from the original one-way computer, for example, in the way the randomness of measurement outcomes is handled. also, schemes are presented where the logical qubits are no longer strictly localized on the resource state. notably, we find a great flexibility in the properties of the universal resource states: they may, for example, exhibit non-vanishing long-range correlation functions or be locally arbitrarily close to a pure state. we discuss variants of kitaev's toric code states as universal resources, and contrast this with situations where they can be efficiently classically simulated. this framework opens up a way of thinking about tailoring resource states to specific physical systems, such as cold atoms in optical lattices or linear optical systems. |
in this article , we consider the limiting behaviour for a class of stochastic recursions .these recursions are natural approximations of continuous time stochastic equations .they arise as models for fast , discretely evolving random phenomena and also as numerical discretizations of continuous stochastic equations .the class is similar to the rough path schemes of ( see also , section 8.5 ) , but more general in the sense that the noise driving the recursion is not required to be a rough path , but may be an approximation ( or discretization ) of a rough path .let and where is defined by for and for , .for each , define by the recursion where , are noise sources and we use the notation to denote the matrix inner product .let be a partition of a finite time interval ] by where is the largest mesh point in with [ note that we could equally define by linear interpolation , without altering the results of the article ] .our objective is to show that the path converges to the solution of a stochastic differential equation ( sde ) driven by as tends to infinity . in the case where are the increments of a rough path ,that is , and , the recursions we consider are precisely the rough path schemes defined in .however , we only require that be _ approximations _ of rough paths .this means the class of recursions we consider is much more general than the class of rough path recursions and includes many natural approximations that do not fall under .this fact will be illustrated by the examples below . to see why such diffusion approximations should be possible ,it is best to look at a few examples .the most common variant of ( [ erecu ] ) is the `` first - order '' recursion , where , so that this resembles an euler scheme with approximated noise .hence , it is reasonable to believe that there should be a diffusion approximation ( where denotes weak convergence of random variables ) , where satisfies the sde and denotes some method of stochastic integration ( e.g. , it , stratonovich or otherwise ) .it turns out that the choice of approximating sequence of the increment has a huge influence as to what type of stochastic integral arises in the limit .we now explore this idea with a few examples .the first four examples are first - order recursions as in ( [ erec ] ) and the final two are higher order recursions , as in ( [ erecu ] ) .suppose that is a -dimensional brownian motion , let and define the partition with .then clearly defines the usual euler maruyama scheme on the time window ] where ] with mesh size . 
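as a quick numerical illustration of a first-order recursion with approximated noise, the sketch below runs the recursion with i.i.d. gaussian increments of size n^{-1/2} and the vector field f(x) = x, and compares the distribution of log x(1) with the n(-1/2, 1) law predicted by the ito limit dx = x dw. the vector field, the scalar noise and the sample sizes are illustrative choices, not the general setting of the theorems that follow.

```python
# toy check of the diffusion limit for the first-order recursion
#   X_{k+1} = X_k + f(X_k) * xi_k / sqrt(n),  f(x) = x,  xi_k iid N(0,1)
# for iid increments the limit is the Ito SDE dX = X dW, so log X(1) ~ N(-1/2, 1)
import numpy as np

rng = np.random.default_rng(0)
n, n_paths = 1000, 20000

xi = rng.standard_normal((n_paths, n))
x = np.ones(n_paths)
for k in range(n):
    x = x + x * xi[:, k] / np.sqrt(n)   # one step of the recursion

logx = np.log(x)                        # x < 0 has negligible probability at this n
print("sample mean of log X(1):", logx.mean(), " (Ito limit predicts -0.5)")
print("sample var  of log X(1):", logx.var(),  " (Ito limit predicts  1.0)")
```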
as stated above, one should regard as an approximation of the increment likewise , one should regard as an approximation of the iterated integral the only consequence of this analogy is that it influences how we define the _ path _ corresponding to the incremental processes .indeed , the increments can be anything at all , provided they satisfy the convergence properties stated in the theorem below .to recap , the recursions we consider in this article are of the form where and for some and the implied constant is uniform in .we now define the _ rough step - function _ corresponding to the increments .if is the largest grid point in such that then we similarly define the incremental paths where is the largest grid point in such that .it is easy to check that this is the natural choice , given the motivation ( [ einc1 ] ) and ( [ einc2 ] ) .the main theorem is as follows .[ thmmain ] let satisfy ( [ erecursion ] ) and let be cdl ` ag paths defined by ( [ epaths1 ] ) .suppose that where is a continuous semi - martingale and is of the form where the integral is defined in the stratonovich sense and .suppose that the pair satisfy the estimates \label{eestimates } \\[-8pt ] \nonumber \bigl(\mathbf{e}\bigl|\mathbb { x}^n \bigl(\tau ^n_j,\tau^n_k \bigr)\bigr|^{q/2 } \bigr)^{2/q } & \lesssim & \bigl|\tau^n_j - \tau ^n_k\bigr|^{2\gamma}\end{aligned}\ ] ] for all where , ] that we always have .[ rmkqsmall ] if the estimates ( [ eestimates ] ) hold for and all , then the condition can be relaxed to .this follows using the standard techniques of rough paths ( see and , chapter 12 ) .additional details will be given in remark [ rmkqlarge ] .[ rmkdrift1 ] the result naturally extends to the case with an additional `` drift '' vector field in this setting , the limiting sde is given by this is a more natural way to treat the problem introduced in example [ egfastslow ] . as with remark [ rmkqsmall ] , this extension is a standard application of rough paths .the next result is not so much a theorem as it is a guide for other theorems .it applies to situations where the noise driving the limiting equation is not a semi - martingale , such as the sub - diffusions encountered in example [ egsubdiff ] .[ thmmeta ] in the same context as above .suppose that where is some continuous stochastic process and where denotes some constructible method of integration .suppose moreover that satisfy the estimates ( [ eestimates ] ) .then where satisfies the stochastic equation theorem [ thmmain ] and meta theorem [ thmmeta ] will be proved in section [ sconvergence ] .the proof of the meta theorem indicates what we mean by a `` constructible method of integration '' . finally , the main tool used to derive the results above is the approximation theorem , which should be thought of as a pathwise version of backward error analysis ( or the method of modified equations ) .the rate estimate depends on the _ discrete _ -hlder norm which is the smallest number such that for all . 
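the discrete hoelder norm just introduced is simple to evaluate for a path observed on a partition; the sketch below computes it by comparing all pairs of mesh points and applies it to a sampled brownian path, for which exponents below 1/2 give norms that remain of moderate size as the mesh is refined while exponents above 1/2 do not. the uniform partition and the exponents are illustrative choices.

```python
# discrete gamma-Hoelder "norm" of a path observed on a partition: the smallest C
# with |x(t_k) - x(t_j)| <= C |t_k - t_j|^gamma over all pairs of mesh points
import numpy as np

def discrete_hoelder_norm(times, values, gamma):
    t = np.asarray(times, dtype=float)
    x = np.asarray(values, dtype=float)
    dt = np.abs(t[:, None] - t[None, :])
    dx = np.abs(x[:, None] - x[None, :])
    mask = dt > 0
    return float((dx[mask] / dt[mask] ** gamma).max())

# example: a Brownian path sampled on a uniform partition of [0, 1]
rng = np.random.default_rng(1)
for n in (100, 400, 1600):
    t = np.linspace(0.0, 1.0, n + 1)
    w = np.concatenate([[0.0], np.cumsum(rng.standard_normal(n)) * np.sqrt(1.0 / n)])
    print(n, discrete_hoelder_norm(t, w, gamma=0.4), discrete_hoelder_norm(t, w, gamma=0.6))
```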
sincethis number can be achieved by taking the maximum over a finite set , it is clear that each is finite , regardless of the path .we will always need some kind of asymptotic estimate on to make use of the approximation theorem .[ thmmod ] suppose that is the path defined by the recursion ( [ erecursion ] ) and that the pair defined by ( [ epaths1 ] ) satisfy the estimates ( [ eestimates ] ) for some as in theorem [ thmmain ] .then for each we can find a pair of piecewise smooth paths \to\mathbb{r } ^d\times\mathbb{r}^{d\times d} ] and the limit is taken as the maximum of tends to zero .the theory of good semi - martingales provides a class of semi - martingale sequences for which the limit of a sequence of it integrals is an it integral .since the partial sum process is clearly a martingale with respect to the filtration generated by the sequence , we can appeal to , theorem 2.2 .in particular , since the quadratic variation = n^{-1 } \sum_{i=0}^{\lfloor nt \rfloor-1 } \xi_i \otimes \xi_i,\ ] ] we have that = d\lfloor nt \rfloor / n ] is nonempty ,so we do indeed satisfy the requirements of theorem [ thmmain ] .it follows that where and we obtain the required expression by converting stratonovich to it . instead of showing how the tools can be used on each of the examples given in the, we concentrate on the fast slow systems , since it is the least understood . the tools of this article are applied to fast slow systems in a companion paper ( see also ) , to yield new results for fast slow systems .the dynamical system theory required is slightly too involved to be included in this paper , thus we will only sketch the ideas behind the result .we will restrict our attention to the fast slow system the general case is treated in . setting , and we see that the rough step function is defined by we will also introduce the sigma algebra which is whatever sigma algebra we chose to go with the measure space . under `` sufficient '' mixing conditions on ,the pair satisfy the assumptions of theorem [ thmmain ] with where and in particular , where where is defined precisely as , but in terms of .sketch of proof to identify the limit of the pair , we proceed similarly to the random walk recursion case , namely identify the limit of and then lift it to . to identify the limit of , we will use a martingale central limit theorem on the _ time reversal _ of the partial sum process . by applying the _ natural extension _ of a dynamical system, we can assume without loss of generality that the map is invertible .now , fix a time window ] , theorem [ thmmain ] reproduces the diffusion approximation ( [ ekplimit ] ) .it is quite possible to extend the theorem [ thmmain ] so that it only requires , for instance , -h + k ] . when this is of course the ( forward ) euler scheme , when this is the stratonovich mid - point scheme and when this is the backward euler scheme . in , one can find a plethora of schemes that also fit into the class of recursions defined by ( [ erecursion ] ) , using similar arguments to that given below . in the context of numerical schemes , we see two key areas where the ability to identify weak limits is beneficial . well - posedness of numerical schemes . when the noise is not a semi - martingale , it may not be clear whether a limit exists and if it does how it should be interpreted .theorem [ thmmeta ] provides a quick criterion for this situation .in particular , since one need only identify the limit of . 
if the limit of corresponds to a reasonable type of integral ( it should correspond to the method of integration used by the numerical scheme ) then the limiting equation can be interpreted in the sense of that integral . numerical schemes that depend on an approximation of the noise , rather than the exact distributionsuch situations arise if the noise is difficult to simulate and must instead be approximated , a common scenario when gaussianity is not present .one also encounters this situation in the context of _ stochastic climate modeling _ , where ocean atmosphere equations are driven by an under - resolved source of noise with persistent correlations in time and also in _ data assimilation _ , where a perturbation of a stochastic observation is fed into the numerical simulation of a forecast model .the article contains a brief overview of the latter idea . finally , the approximation theory above clearly has applications to determining the pathwise order of numerical schemes .for example , suppose that is defined by the euler scheme since does not depend on the weak limit is determined by the weak limit of using theorem [ thmmod ] as well as the tools from rough path theory ( lemma [ thmitomap ] ) it is easy to show that where only depends on through the discrete hlder norm of . if were brownian motion , then one can trivially calculate moments of exactly , thus obtaining a rate of convergence is simple .however , obtaining the _ optimal _ rate of convergence is slightly more subtle .the topic of convergence rates will not be discussed further in this article but is the subject of a future article .in this section , we will serve an appetizer in rough path theory .for the full course , we recommend , which is closely aligned with the exposition below .a rough path has two components to its definition , an algebraic one and an analytic one .the algebraic component ensures that the objects do indeed behave like the increments they hope to imitate .the analytic component describes the hlder condition that is required to construct solution maps . in the definition below , we _always _ require that the exponent .we use the notation for the step-2 tensor product algebra .we say that \times[0,t ] \to t^2(\mathbb{r}^d) ] .these are known as _chen s relations_. if moreover we have that for all ] .every rough path defines a path \to t^2(\mathbb{r}^d) ] and then simply taken chen s relations as a definition for .this identification between paths and increments will be used frequently throughout the article .we will make use of two metric spaces of rough paths .first , ; \mathbb{r}^d) ] .clearly , we have that for all ] .the second metric space we make use of is the set of continuous rough paths \to t^2(\mathbb{r}^d) ] and , there is a class of paths \to\mathbb{r}^e ] is a -hlder path . 
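chen's relations are easy to verify numerically for the lift used throughout the article, namely the left-point (ito-type) iterated sums of a discretized path. the sketch below builds the increments and the second-level iterated sums on a grid from a simulated brownian path and checks the relation between times s < t < u at grid points; for these discrete sums the identity holds exactly, not only in the limit. the path, dimension and grid indices are illustrative.

```python
# verify Chen's relations for the left-point iterated sums of a discretized path
import numpy as np

rng = np.random.default_rng(2)
n, d = 512, 2
dW = rng.standard_normal((n, d)) * np.sqrt(1.0 / n)
W = np.vstack([np.zeros(d), np.cumsum(dW, axis=0)])          # W[k] = path value at t_k = k/n

def increment(j, k):                     # first level X(t_j, t_k)
    return W[k] - W[j]

def iterated(j, k):                      # second level: sum_{j<=i<k} (W_i - W_j) (x) dW_i
    out = np.zeros((d, d))
    for i in range(j, k):
        out += np.outer(W[i] - W[j], dW[i])
    return out

s, t, u = 50, 200, 450                   # arbitrary grid indices with s < t < u
lhs = iterated(s, u)
rhs = iterated(s, t) + iterated(t, u) + np.outer(increment(s, t), increment(t, u))
print("Chen's relation holds exactly on the grid:", np.allclose(lhs, rhs))
```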
for a thorough treatment of controlled rough paths and their use in defining the above integrals ,see , section 4 .the integral is defined as a compensated riemann sum where \in\mathcal{p } } v^i \bigl(y(t_k ) \bigr)x(t_k , t_{k+1 } ) + \sum_{j=1}^e \bigl ( y'_j ( t_k ) \otimes\partial_j v^i \bigl(y(t_k ) \bigr ) \bigr ) \dvtx \mathbb { x}(t_k , t_{k+1})\ ] ] and denotes a partition of ] .a controlled rough path is said to solve the rde with initial condition if it solves the integral equation for all ] then for each initial condition and any , there exists a unique global solution .moreover , for all ] satisfying .then , on any time window ] .the proof is a standard modification of a similar statement found in .fix a sequence .use arzela ascoli to find a subsequence that converges in the sup - norm topology .use the interpolation ( [ einterpolation ] ) between to show that this subsequence also converges in . since is a metric space ,sequential compactness implies compactness .the final result , which is a direct corollary of , theorem 17.3 , allows us to translate rde solutions to stratonovich sdes . [ lemrpsde ] suppose that , ; \mathbb{r}^d) ] and let be a partition of ] , we will also use the notation to denote the largest mesh point with .it follows from ( [ eassp ] ) that as .we will now define rough step functions and rough path recursions rigorously . fix a partition of ] defined by where .we similarly define the incremental paths where and . we will often employ the shorthand and .we define the discrete -hlder `` norm '' by in particular , we see that for all mesh points .since it is a maximum over a finite set , the discrete hlder norm is finite for every fixed .it will only play a role in an asymptotic sense .[ drpr ] fix a sequence of partitions .a _ rough path recursion _ with \to\mathbb{r}^e ] be the space of paths \to\mathbb{r}^d ] , the signature is defined by where the integral is constructed in the riemann stieltjes sense .the carnot caratheodory ( cc ) norm is defined by ; \mathbb{r}^d \bigr ) \mbox { and } g = \mathbf{s}(\gamma ) ( 0,1 ) \biggr\}.\ ] ] the following result ( which is a refinement of chow s theorem ) shows that the norm is well defined ( see , theorem 7.32 , for a simple proof ) .[ lemgeodesic ] if then there exists a path \to \mathbb{r}^d ] .for each , there exists ; \mathbb { r}^d) ] with .first , note that since , we have the decomposition where and is defined by we will now define first , define by a simple linear interpolation now , to define , we know from lemma [ lemgeodesic ] that there exists a path \to\mathbb { r}^d ] with .suppose without loss of generality that where and . then we define using chen s relations and we will now check that satisfies the requirements of the theorem .first , we will show that chen s relations hold .it is easy to see that chen s relations hold when restricted to the interval ] with . then using the comparison ( [ ecceq ] ) and the construction of , we have that & \lesssim & \bigl{\vert}g(s , t ) \bigr{\vert}_{\mathrm{cc } } = \frac{(t - s)}{(\tau ^n_{j+1}-\tau^n_j ) } \bigl{\vert}g\bigl(\tau^n_j,\tau^n_{j+1 } \bigr ) \bigr{\vert}_{\mathrm{cc}}\end{aligned}\ ] ] and again by ( [ ecceq ] ) we have that \label{efeb9a } & \lesssim & \bigl|x^n_{j , j+1}\bigr| + \bigl(\bigl|\mathbb{x}^n_{j , j+1}\bigr| + \bigl|x^n_{j , j+1}\bigr|^2 \bigr)^{1/2 } \\[-2pt ] & \lesssim & c_n \bigl(\tau^n_{j+1 } - \tau^n_j \bigr)^\gamma.\nonumber\end{aligned}\ ] ] it follows that where in the last inequality we use the fact that . 
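as a concrete illustration of the compensated riemann sum defining the rough integral introduced above, the sketch below integrates y = w against the ito lift of a one-dimensional brownian path; here the gubinelli derivative is y' = 1, and on a coarse partition the compensated sum reproduces exactly the left-point riemann sum on the fine grid, which in turn approximates the ito integral (w_1^2 - 1)/2. the grid sizes are illustrative and the example is one-dimensional only for readability.

```python
# rough integral of y = W against the Ito lift of W via the compensated Riemann sum
import numpy as np

rng = np.random.default_rng(3)
n_fine, block = 4096, 64                      # fine grid and coarse blocks of 64 steps
dW = rng.standard_normal(n_fine) * np.sqrt(1.0 / n_fine)
W = np.concatenate([[0.0], np.cumsum(dW)])

# Ito-type second level over a coarse block [a, b): sum_{a<=i<b} (W_i - W_a) dW_i
def second_level(a, b):
    return float(np.sum((W[a:b] - W[a]) * dW[a:b]))

coarse = range(0, n_fine, block)
compensated = sum(W[a] * (W[a + block] - W[a]) + second_level(a, a + block)
                  for a in coarse)

fine_left_sum = float(np.sum(W[:-1] * dW))    # left-point (Ito) sum on the fine grid
ito_exact = 0.5 * (W[-1] ** 2 - 1.0)          # Ito formula: int_0^1 W dW = (W_1^2 - 1)/2

print("compensated coarse sum :", compensated)
print("fine-grid Ito sum      :", fine_left_sum)       # equal up to rounding
print("(W_1^2 - 1)/2          :", ito_exact)           # close for large n_fine
```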
by a similar argument , we can show that and hence now suppose ] , where the implied constant is uniform in .we will again use the shorthand . for any ] is derived in lemma [ thmbxtilde ] . by definition and by construction of have that where is a piecewise smooth path ( obtained from the signature realizing ) and where the integral is of riemann stieltjes type and where is constructed by concatenating the increments , in particular is piecewise lipschitz .by , theorem 12.14 , it follows that satisfies ( [ exz ] ) .note that , theorem 12.14 , is basically lemma [ lemrpsde ] but under the assumption that the driving path is piecewise smooth rather than a semi - martingale . in section [ sconvergence ], we will employ the standard method of lifting weak convergence in the sup - norm topology to weak convergence in some -hlder topology , using a tightness condition . in the _ continuous time _setting ( which we can not use ) , the kolmogorov lamperti criterion is the usual method for checking this tightness condition .the following is a slight modification of a version of the criterion found in corollary a12 .[ thmctskolm ] let define a sequence of rough paths .suppose that .then for any .in particular , we have that and moreover is tight in the topology for every . in the case of geometric rough paths [ where takes valued in ] ,the result is simply corollary a12 of . to extend the result to general rough paths ,one simply applies the garcia rumsey interpolation result to the components and individually .this argument can be found in , corollary 4 .obviously , this result can not be used directly on rough step functions , since step functions have no hope of satisfying the kolmogorov estimates .fortunately , a discrete version of the above result turns out to be equally as useful .we define the _ discrete tightness condition _ as this essentially says that the rough step functions are `` hlder continuous , '' provided we do not look at them too closely ( i.e. , near the jumps ) .we will now show that the discrete tightness criterion can likewise be checked using a discrete version of the continuous kolmogorov criterion .in particular , we need only check the estimate on the partition .[ thmkolm ] suppose that for each uniformly in , for some ] , uniformly in .assume without loss of generality that ] is essentially a sub - argument of the arguments below ) . from chen s relations , we know that but from ( [ efeb9a ] ) , we see that by assumption , we have that the remaining term in ( [ efeb9 ] ) can be bounded similarly . by chen s relations ( and hlder s inequality ) , we also have that but from ( [ efeb10a ] ) we have that as above , it follows that the other terms in ( [ efeb10 ] ) can be bounded similarly .this completes the proof .the discrete criterion differs from the continuous case in the assumption , which was not required in the continuous case .however , this assumption only becomes a restriction when the diffusion approximation is driven by a path with hlder exponent .of course , one can always resolve the problem by treating the path as having the weaker hlder exponent . 
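the discrete kolmogorov-type moment estimate above is straightforward to probe by monte carlo. the sketch below estimates the q-th moment of brownian increments on a uniform partition and checks that the ratio against |t_j - t_k|^{q/2} stays essentially constant over different gaps, which is the form of bound required by the discrete tightness criterion; the constants printed are empirical, and the choice q = 6 is only an example.

```python
# Monte Carlo check of the moment bound E|W(t_j)-W(t_k)|^q <= C |t_j - t_k|^{q/2}
import numpy as np

rng = np.random.default_rng(4)
n, n_paths, q = 256, 5000, 6.0
dW = rng.standard_normal((n_paths, n)) * np.sqrt(1.0 / n)
W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)

for gap in (1, 4, 16, 64):
    incs = W[:, gap:] - W[:, :-gap]            # increments over |t_j - t_k| = gap / n
    moment = np.mean(np.abs(incs) ** q)
    # for gaussian increments the ratio should hover around E|N(0,1)|^6 = 15
    print(f"gap {gap:3d}: E|inc|^q / |dt|^(q/2) = {moment / (gap / n) ** (q / 2):.3f}")
```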
on the other hand , in these higher regularity situationsthe iterated integrals become unnecessary and a much simpler theory of young integration ( with much weaker assumptions ) would suffice .we can now prove the main result of the article .[ thmweak ] suppose that and that satisfies the discrete tightness condition ( [ edtight ] ) for some ] satisfying the obvious condition now suppose that is defined by for each .then , using the theory of controlled rough paths , it can be shown that solves ( [ erdefinal ] ) if and only if is a fixed point of the equation the assumptions on are generic enough to include virtually any reasonable construction of an integration map ( for integrators with h " older exponent ) .david kelly would like to thank i. melbourne for introducing the motivating problem and p. friz for several constructive comments . | in this article , we consider diffusion approximations for a general class of stochastic recursions . such recursions arise as models for population growth , genetics , financial securities , multiplicative time series , numerical schemes and mcmc algorithms . we make no particular probabilistic assumptions on the type of noise appearing in these recursions . thus , our technique is well suited to recursions where the noise sequence is not a semi - martingale , even though the limiting noise may be . our main theorem assumes a weak limit theorem on the noise process appearing in the random recursions and lifts it to diffusion approximation for the recursion itself . to achieve this , we approximate the recursion ( pathwise ) by the solution to a stochastic equation driven by piecewise smooth paths ; this can be thought of as a pathwise version of backward error analysis for sdes . we then identify the limit of this stochastic equation , and hence the original recursion , using tools from rough path theory . we provide several examples of diffusion approximations , both new and old , to illustrate this technique . ./style / arxiv - general.cfg |
approximate bayesian computation (abc) represents an elaborate statistical approach to model-based inference in a bayesian setting in which model likelihoods are difficult to calculate (due to the complexity of the models considered). since its introduction in population genetics, the method has found an ever increasing range of applications covering diverse types of complex models in various scientific fields. the principle of abc is to conduct bayesian inference on a dataset through comparisons with numerous simulated datasets. however, it suffers from two major difficulties. first, to ensure reliability of the method, the number of simulations must be large; hence, it proves difficult to apply abc to large datasets (e.g., in population genomics, where tens to hundreds of thousands of markers are commonly genotyped). second, calibration has always been a critical step in abc implementation. more specifically, the major feature of this calibration process is the selection of a vector of summary statistics that quantifies the difference between the observed data and the simulated data. the construction of this vector is therefore paramount, and examples abound of poor performance of abc model choice algorithms related to specific choices of those statistics, even though there are also instances of successful implementations. we advocate a drastic modification in the way abc model selection is conducted: we propose both to step away from selecting the most probable model from estimated posterior probabilities, and to reconsider the very problem of constructing efficient summary statistics. first, given an arbitrary pool of available statistics, we now completely bypass selecting among those. this new perspective proceeds directly from machine learning methodology. second, we postpone the approximation of model posterior probabilities to a second stage, as we deem the standard numerical abc approximations of such probabilities fundamentally untrustworthy. we instead advocate selecting the a posteriori most probable model by constructing a (machine learning) classifier from simulations from the prior predictive distribution (or other distributions in more advanced versions of abc), known as the abc _reference table_. the statistical technique of random forests (rf) represents a trustworthy machine learning tool well adapted to the complex settings typical of abc treatments. once the classifier is constructed and applied to the actual data, an approximation of the posterior probability of the resulting model can be produced through a secondary random forest that regresses the selection error on the available summary statistics. we show here how rf improves upon existing classification methods by significantly reducing both the classification error and the computational expense.
after presenting theoretical arguments, we illustrate the power of the abc - rf methodology by analyzing controlled experiments as well as genuine population genetics datasets .bayesian model choice compares the fit of models to an observed dataset .it relies on a hierarchical modelling , setting first prior probabilities on model indices and then prior distributions on the parameter of each model , characterized by a likelihood function .inferences and decisions are based on the posterior probabilities of each model .while we can not cover in much details the principles of approximate bayesian computation ( abc ) , let us recall here that abc was introduced in and for solving intractable likelihood issues in population genetics .the reader is referred to , e.g. , , , , and for thorough reviews on this approximation method .the fundamental principle at work in abc is that the value of the intractable likelihood function at the observed data and for a current parameter can be evaluated by the proximity between and pseudo - data simulated from . in discrete settings ,the indicator is an unbiased estimator of . for realistic settings ,the equality constraint is replaced with a tolerance region , where is a measure of divergence between the two vectors and is a tolerance value .the implementation of this principle is straightforward : the abc algorithm produces a large number of pairs from the prior predictive , a collection called the _ reference table _ , and extracts from the table the pairs for which . to approximate posterior probabilities of competing models , abc methods observed data with a massive collection of pseudo - data , generated from the prior predictive distribution in the most standard versions of abc ; the comparison proceeds via a normalized euclidean distance on a vector of statistics computed for both observed and simulated data .standard abc estimates posterior probabilities at stage ( b ) of algorithm [ algo : general ] below as the frequencies of those models within the nearest - to- simulations , proximity being defined by the distance between and the simulated s . selecting a model means choosing the model with the highest frequency in the sample of size produced by abc , such frequencies being approximations to posterior probabilities of models . we stress that this solution means resorting to a -nearest neighbor ( -nn ) estimate of those probabilities , for a set of simulations drawn at stage ( a ) , whose records constitute the so - called _ reference table _ , see or .* generate a reference table including simulations from * learn from this set to infer about at selecting a set of summary statistics that are informative for model choice is an important issue .the abc approximation to the posterior probabilities will eventually produce a right ordering of the fit of competing models to the observed data and thus select the right model for a specific class of statistics on large datasets .this most recent theoretical abc model choice results indeed show that some statistics produce nonsensical decisions and that there exist sufficient conditions for statistics to produce consistent model prediction , albeit at the cost of an information loss due to summaries that may be substantial .the toy example comparing ma(1 ) and ma(2 ) models in appendix and figure [ fig : trueppvssummaries ] clearly exhibits this potential loss in using only the first two autocorrelations as summary statistics . 
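to fix ideas, here is a bare-bones version of the standard k-nearest-neighbour abc model choice estimate described above, on a deliberately simple toy problem (model 0: i.i.d. normal data; model 1: i.i.d. laplace data with matched variance), using the sample variance and the mean absolute deviation as summary statistics. the models, summaries and tuning values are illustrative choices, not those of the paper.

```python
# standard ABC model choice: k-NN frequencies among the simulations closest to the data
import numpy as np

rng = np.random.default_rng(5)
n_obs, n_ref, k = 100, 20000, 200

def summaries(data):
    return np.array([data.var(), np.mean(np.abs(data - data.mean()))])

def simulate(model):                          # model 0: N(0,1); model 1: Laplace, same variance
    if model == 0:
        return rng.standard_normal(n_obs)
    return rng.laplace(scale=1.0 / np.sqrt(2.0), size=n_obs)

# (a) reference table: model index drawn from its (uniform) prior, then data, then summaries
models = rng.integers(0, 2, size=n_ref)
table = np.array([summaries(simulate(m)) for m in models])

# (b) k-NN estimate of the posterior model probabilities at the observed summaries
x_obs = simulate(0)                           # pseudo-observed dataset from model 0
s_obs = summaries(x_obs)
scale = table.std(axis=0)                     # normalize each summary statistic
dist = np.linalg.norm((table - s_obs) / scale, axis=1)
nearest = models[np.argsort(dist)[:k]]
print("ABC (k-NN) posterior probability of model 0:", np.mean(nearest == 0))
```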
developed an interesting methodology to select the summary statistics, but with the requirement to aggregate estimation and model pseudo-sufficient statistics for every model under comparison. that induces a deeply inefficient dimension inflation and can be very time consuming. [figure fig:trueppvssummaries: posterior probabilities computed from the full dataset plotted against posterior probabilities based on only the first two autocorrelations.] it may seem tempting to collect the largest possible number of summary statistics to capture more information from the data. this brings closer to but increases the dimension of . abc algorithms, like -nn and other local methods, suffer from the curse of dimensionality (see, e.g., section 2.5 in ), so that the estimate of based on the simulations is poor when the dimension of is too large. selecting summary statistics correctly and sparsely is therefore paramount, as shown by the literature in recent years (see the reviews surveying abc parameter estimation). for abc model choice, two main projection techniques have been considered so far. first, show that the bayes factor itself is an acceptable summary (of dimension one) when comparing two models, but its practical evaluation via a pilot abc simulation induces a poor approximation of model evidences. the recourse to a regression layer like linear discriminant analysis (lda, ) is discussed below and in appendix section a. other projection techniques have been proposed in the context of parameter estimation: see, e.g., . given the fundamental difficulty in producing reliable tools for model choice based on summary statistics, we now propose to switch to a different approach based on an adapted classification method. we recall in the next section the most important features of the random forest (rf) algorithm. the classification and regression trees (cart) algorithm at the core of the rf scheme produces a binary tree that sets allocation rules for entries as labels of the internal nodes and classifications or predictions of as values of the tips (terminal nodes). at a given internal node, the binary rule compares a selected covariate with a bound, with a left-hand branch rising from that vertex defined by . predicting the value of given the covariate implies following a path from the tree root that is driven by applying these binary rules. the outcome of the prediction is the value found at the final leaf reached at the end of the path: majority rule for classification and average for regression. to find the best split and the best variable at each node of the tree, we minimize a criterion: for classification, the gini index and, for regression, the squared error loss.
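the split criterion used at each cart node can be spelled out in a few lines. the sketch below computes the gini index of a set of labels and scans one covariate for the threshold minimizing the weighted gini impurity of the two children, which is the elementary operation the randomized trees repeat at every internal node; this is a toy illustration, not the abcrf implementation.

```python
# Gini impurity and exhaustive search of the best split point for a single covariate
import numpy as np

def gini(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(x, y):
    """return (threshold, weighted child impurity) minimizing the Gini criterion."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    best = (None, np.inf)
    for i in range(1, len(x)):
        if x[i] == x[i - 1]:
            continue
        thr = 0.5 * (x[i] + x[i - 1])
        left, right = y[:i], y[i:]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if score < best[1]:
            best = (thr, score)
    return best

rng = np.random.default_rng(6)
y = np.repeat([0, 1], 200)                                            # two classes
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(1.5, 1, 200)])  # one covariate
print("parent Gini:", gini(y), " best split:", best_split(x, y))
```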
in the randomized version of the cart algorithm ( see algorithm a1 in the appendix ) , only a random subset of covariates of size is considered at each node of the tree .the rf algorithm consists in bagging ( which stands for bootstrap aggregating ) randomized cart .it produces randomized cart trained on samples or sub - samples of size produced by bootstrapping the original training database .each tree provides a classification or a regression rule that returns a class or a prediction .then , for classification we use the majority vote across all trees in the forest , and , for regression , the response values are averaged .three tuning parameters need be calibrated : the number of trees in the forest , the number of covariates that are sampled at a given node of the randomized cart , and the size of the bootstrap sub - sample .this point will be discussed in section [ sec : prac ] . for classification ,a very useful indicator is the _ out - of - bag _ error ( * ? ? ?* chapter 15 ) . without any recourse to a test set, it gives you some idea on how good is your rf classifier .for each element of the training set , we can define the out - of - bag classifier : the aggregation of votes over the trees not constructed using this element .the out - of - bag error is the error rate of the out - of - bag classifier on the training set .the out - of - bag error estimate is as accurate as using a test set of the same size as the training set . the above - mentioned difficulties in abc model choice drives us to a paradigm shift in the practice of model choice , namely to rely on a classification algorithm for model selection , rather than a poorly estimated vector of probabilities .as shown in the example described in section 3.1 , the standard abc approximations to posterior probabilities can significantly differ from the true .indeed , our version of stage ( b ) in algorithm [ algo : general ] relies on a rf classifier whose goal is to predict the suited model at each possible value of the summary statistics .the random forest is trained on the simulations produced by stage ( a ) of algorithm [ algo : general ] , which constitute the reference table .once the model is selected as , we opt to approximate by another random forest , obtained from regressing the probability of error on the ( same ) covariates , as explained below . a practical way to evaluate the performance of an abc model choice algorithm and to check whether both a given set of summary statistics and a given classifier is to check whether it provides a better answer than others .the aim is to come near the so - called _ bayesian classifier _ , which , for the observed , selects the model having the largest posterior probability .it is well known that the bayesian classifier ( which can not be derived ) minimizes the 01 integrated loss or error . in the abc framework , we call the integrated loss ( or risk ) the _ prior error rate _ , since it provides an indication of the global quality of a given classifier on the entire space weighted by the prior . 
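in practice the forest, its bootstrap resampling and the out-of-bag error are all available off the shelf. the sketch below, based on scikit-learn (a substitute for the randomforest r package behind abcrf), fits a forest on a synthetic classification problem and compares the out-of-bag error with the error on an independent test set of the same size, illustrating the point that the out-of-bag estimate comes at no extra simulation cost; the data-generating settings and forest size are illustrative.

```python
# random forest with out-of-bag error versus an independent test set (scikit-learn)
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=8000, n_features=30, n_informative=6,
                           n_classes=3, random_state=0)
X_train, y_train = X[:4000], y[:4000]
X_test, y_test = X[4000:], y[4000:]

rf = RandomForestClassifier(n_estimators=500, max_features="sqrt",
                            oob_score=True, random_state=0, n_jobs=-1)
rf.fit(X_train, y_train)

print("out-of-bag error :", 1.0 - rf.oob_score_)
print("test-set error   :", 1.0 - rf.score(X_test, y_test))
```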
this rate is the expected value of the misclassification error over the hierarchical prior. it can be evaluated from simulations drawn as in stage (a) of algorithm [algo:general], independently of the reference table, or with the out-of-bag error in rf, which, as explained above, requires no further simulation. both classifiers and sets of summary statistics can be compared via this error scale: the pair that minimizes the prior error rate achieves the best approximation of the ideal bayesian classifier. in that sense it stands closest to the decision we would take were we able to compute the true posterior probabilities. we seek a classifier in stage (b) of algorithm [algo:general] that can handle an arbitrary number of statistics and extract the maximal information from the reference table obtained at stage (a). as introduced above, random forest (rf) classifiers are perfectly suited for that purpose. the way we build both a rf classifier given a collection of statistical models and an associated rf regression function for predicting the allocation error is to start from a simulated abc _reference table_, namely a set of simulation records comprising model indices and summary statistics for the associated simulated data. this table then serves as training database for a rf that forecasts the model index based on the summary statistics. the resulting algorithm, presented in algorithm [algo:abc-rf] and called abc-rf, is implemented in the r package abcrf associated with this paper. * generate a reference table including simulations from * construct randomized cart which predict using : * draw a bootstrap (sub-)sample of size from the reference table * grow a randomized cart (algorithm a1 in the appendix) * determine the predicted indexes for and the trees * affect according to a majority vote among the predicted indexes the justification for choosing rf to conduct an abc model selection is that, both formally and experimentally, rf classification was shown to be mostly insensitive both to strong correlations between predictors (here the summary statistics) and to the presence of noisy variables, even in relatively large numbers, a characteristic that -nn classifiers miss. this type of robustness justifies adopting an rf strategy to learn from an abc reference table for bayesian model selection. within an arbitrary (and arbitrarily large) collection of summary statistics, some may exhibit strong correlations and others may be uninformative about the model index, with no terminal consequences on the rf performances. for model selection, rf thus competes with both local classifiers commonly implemented within abc: it provides a more non-parametric modelling than local logistic regression, which is implemented in the diyabc software and is extremely costly; see, e.g.
, which reduces the dimension using linear discriminant projection before resorting to local logistic regression .this software also includes a standard -nn selection procedure which suffers from the curse of dimensionality and thus forces selection among statistics .the outcome of rf computation applied to a given target dataset is a classification vote for each model which represents the number of times a model is selected in a forest of n trees .the model with the highest classification vote corresponds to the model best suited to the target dataset .it is worth stressing here that there is no direct connection between the frequencies of the model allocations of the data among the tree classifiers ( i.e. the classification vote ) and the posterior probabilities of the competing models .machine learning classifiers hence miss a distinct advantage of posterior probabilities , namely that the latter evaluate a confidence degree in the selected ( map ) model .an alternative to those probabilities is the prior error rate . aside from its use to select the best classifier and set of summary statistics , this indicator remains , however , poorly relevant since the only point of importance in the data space is the observed dataset .a first step addressing this issue is to obtain error rates conditional on the data as in .however , the statistical methodology considered therein suffers from the curse of dimensionality and we here consider a different approach to precisely estimate this error .we recall that the posterior probability of a model is the natural bayesian uncertainty quantification since it is the complement of the posterior error associated with the loss . while the proposal of for estimating the conditional error rate induced a classifier given involves non - parametric kernel regression , we suggest to rely instead on a rf regression to undertake this estimation .the curse of dimensionality is then felt much less acutely , given that random forests can accommodate large dimensional summary statistics .furthermore , the inclusion of many summary statistics does not induce a reduced efficiency in the rf predictors , while practically compensating for insufficiency . before describing in more details the implementation of this concept, we stress that the perspective of leads to effectively estimate the posterior probability that the true model is the map , thus providing us with a non - parametric estimation of this quantity , an alternative to the classical abc solutions we found we could not trust .indeed , the posterior expectation satisfies & = \sum_{i=1}^k \mathbb{e}[\mathbb{i}(\hat{m}(s({\mathbf{x}}^0))\ne m = i)|s({\mathbf{x}}^0)]\\ & = \sum_{i=1}^k \mathbb{p}[m = i)|s({\mathbf{x}}^0)]\times\mathbb{i}(\hat{m}(s({\mathbf{x}}^0))\ne i)\\ & = \mathbb{p}[m\ne \hat{m}(s({\mathbf{x}}^0))|s({\mathbf{x}}^0)]\\ & = 1-\mathbb{p}[m = \hat{m}(s({\mathbf{x}}^0))|s({\mathbf{x}}^0)]\,.\end{aligned}\ ] ] it therefore provides the complement of the posterior probability that the true model is the selected model . 
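A schematic sketch of the resulting two-stage procedure, selecting the model by majority vote of the forest and then regressing the out-of-bag misclassification indicator on the same summaries to recover an estimate of the posterior probability of the selected model, is given below. It is only a simplified Python analogue of the abcrf implementation: it assumes a reference table (S, m) as in the earlier sketch, every tuning value is illustrative, and it leans on scikit-learn behaviour (individual trees returning indices into classes_, availability of oob_decision_function_) rather than on the package described in this paper.

# Sketch of ABC-RF model choice plus posterior-probability approximation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

def abc_rf(S, m, s_obs, n_trees=500, seed=0):
    clf = RandomForestClassifier(n_estimators=n_trees, oob_score=True,
                                 n_jobs=-1, random_state=seed).fit(S, m)

    # classification votes of the individual trees for the observed summaries
    # (each fitted tree returns an index into clf.classes_)
    votes = np.stack([t.predict(s_obs[None, :]) for t in clf.estimators_]).ravel()
    m_hat = clf.classes_[np.bincount(votes.astype(int)).argmax()]

    # stage 2: regress the out-of-bag misclassification indicator on the summaries;
    # its prediction at s_obs approximates P(M != m_hat | s(x0)).
    # (assumes every sample is out-of-bag for at least one tree, true for large forests)
    oob_pred = clf.classes_[clf.oob_decision_function_.argmax(axis=1)]
    misclassified = (oob_pred != m).astype(float)
    reg = RandomForestRegressor(n_estimators=n_trees, n_jobs=-1,
                                random_state=seed).fit(S, misclassified)
    post_prob = 1.0 - reg.predict(s_obs[None, :])[0]
    return m_hat, post_prob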
to produce our estimate of the posterior probability ] ; 3 .we apply this rf function to the actual observations summarized as and return as our estimate of ]to illustrate the power of the abc - rf methodology , we now report several controlled experiments as well as two genuine population genetic examples .the appendix details controlled experiments on a toy problem , comparing ma(1 ) and ma(2 ) time - series models , and two controlled synthetic examples from population genetics , based on single nucleotide polymorphism ( snp ) and microsatellite data .the toy example is particularly revealing with regard to the discrepancy between the posterior probability of a model and the version conditioning on the summary statistics .figure [ fig : trueppvssummaries ] shows how far from the diagonal are realizations of the pairs , even though the autocorrelation statistic is quite informative .note in particular the vertical accumulation of points near .table [ tab : mama ] in the appendix demonstrates the further gap in predictive power for the full bayes solution with a true error rate of 12% versus the best solution ( rf ) based on the summaries barely achieving a 16% error rate . for both controlled genetics experiments in the appendix ,the computation of the true posterior probabilities of the three models is impossible .the predictive performances of the competing classifiers can nonetheless be compared on a test sample .results , summarized in tables [ tab : snp ] and [ tab : microsat ] in the appendix , legitimize the use of rf , as this method achieves the most efficient classification in all genetic experiments . note that that the prior error rate of any classifier is always bounded from below by the error rate associated with the ( ideal ) bayesian classifier .therefore , a mere gain of a few percents may well constitute an important improvement when the prior error rate is low . as an aside, we also stress that , since the prior error rate is an expectation over the entire sampling space , the reported gain may occult much better performances over some areas of this space .figure [ fig : mama.post ] in the appendix displays differences between the true posterior probability of the model selected by algorithm [ algo : abc - rf ] and its approximation with algorithm [ algo : posterior ] . the original challenge was to conduct inference about the introduction pathway of the invasive harlequin ladybird _( harmonia axyridis _ ) for the first recorded outbreak of this species in eastern north america .the dataset , first analyzed in and via abc , includes samples from three natural and two biocontrol populations genotyped at 18 microsatellite markers .the model selection requires the formalization and comparison of 10 complex competing scenarios corresponding to various possible routes of introduction ( see appendix for details and analysis 1 in ) .we now compare our results from the abc - rf algorithm with other classification methods for three sizes of the reference table and with the original solutions by and .we included all summary statistics computed by the diyabc software for microsatellite markers , namely 130 statistics , complemented by the nine lda axes as additional summary statistics ( see appendix section g ) . 
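The augmentation of the raw summaries with linear discriminant axes can be mimicked as in the sketch below; it uses scikit-learn's LinearDiscriminantAnalysis as a stand-in for the projection computed within DIYABC, and the number of retained axes is automatically one less than the number of competing scenarios (nine axes for the ten scenarios considered here).

# Sketch: append LDA axes to the summary statistics before growing the forest.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def add_lda_axes(S, m, S_obs):
    lda = LinearDiscriminantAnalysis().fit(S, m)     # at most (#models - 1) axes
    S_aug = np.hstack([S, lda.transform(S)])
    S_obs_aug = np.hstack([np.atleast_2d(S_obs), lda.transform(np.atleast_2d(S_obs))])
    return S_aug, S_obs_aug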
in this example, discriminating among models based on the observation of summary statistics is difficult .the overlapping groups of figure [ fig : cox_lda ] in the appendix reflect that difficulty , the source of which is the relatively low information carried by the 18 autosomal microsatellite loci considered here .prior error rates of learning methods on the whole reference table are given in table [ tab : asian ] .as expected in such a high dimension settings ( * ? ? ?* section 2.5 ) , -nn classifiers behind the standard abc methods are all defeated by rf for the three sizes of the reference table , even when -nn is trained on the much smaller set of covariates composed of the nine lda axes .the classifier and set of summary statistics showing the lowest prior error rate is rf trained on the 130 summaries and the nine lda axes .figure [ fig : cox_viss ] in the appendix shows that rfs are able to automatically determine the ( most ) relevant statistics for model comparison , including in particular some crude estimates of admixture rate defined in , some of them not selected by the experts in .we stress here that the level of information of the summary statistics displayed in figure [ fig : cox_viss ] in the appendix is relevant for model choice but not for parameter estimation issues . in other words ,the set of best summaries found with abc - rf should not be considered as an optimal set for further parameter estimations under a given model with standard abc techniques ..*harlequin ladybird data * : estimated prior error rates for various classification methods and sizes of the reference table.[tab : asian ] [ cols="^ , > , > , > " , ]we here illustrate the potential of our abc - rf algorithm for the statistical processing of massive single nucleotide polymorphism ( snp ) datasets , whose production is on the increase within the field of population genetics . to this aim, we analyzed a snp dataset obtained from individuals originating from four human populations ( 30 unrelated individuals per population ) using the freely accessible public 1000 genome databases ( i.e. , the vcf format files including variant calls available at ` http://www.1000genomes.org/data ` ) .the goal of the 1000 genomes project is to find most genetic variants that have frequencies of at least 1% in the studied populations by sequencing many individuals lightly ( i.e. , at a coverage ) .a major interest of using snp data from the 1000 genomes project is that such data does not suffer from any ascertainment bias ( i.e. 
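The ranking of summaries alluded to above can be reproduced in spirit with the forest's variable importances; the snippet below is a generic sketch based on scikit-learn's impurity-based importances, which need not coincide with the exact contribution measure plotted in the paper.

# Sketch: rank summary statistics by random-forest variable importance.
import numpy as np

def top_summaries(forest, names, k=10):
    # forest: a fitted RandomForestClassifier, names: labels of the summaries
    idx = np.argsort(forest.feature_importances_)[::-1][:k]
    return [(names[i], float(forest.feature_importances_[i])) for i in idx]

With the ladybird reference table, such a call would be expected to place the crude admixture-rate statistics near the top, in line with the figure discussed above.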
, the deviations from expected theoretical results due to the snp discovery process in which a small number of individuals from selected populations are used as discovery panel ) , which is a prerequisite when using the diyabc simulator of snp data .the four human populations included the yoruba population ( nigeria ) as representative of africa ( encoded yri in the 1000 genome database ) , the han chinese population ( china ) as representative of the east asia ( encoded chb ) , the british population ( england and scotland ) as representative of europe ( encoded gbr ) , and the population composed of americans of african ancestry in sw usa ( encoded asw ) .the snp loci were selected from the 22 autosomal chromosomes using the following criteria : ( i ) all analyzed individuals have a genotype characterized by a quality score ( on a phred scale ) , ( ii ) polymorphism is present in at least one of the individuals in order to fit the snp simulation algorithm of diyabc , ( iii ) the minimum distance between two consecutive snps is 1 kb in order to minimize linkage disequilibrium between snps , and ( iv ) snp loci showing significant deviation from hardy - weinberg equilibrium at a 1% threshold in at least one of the four populations have been removed ( 35 snp loci involved ) . after applying the above criteria, we obtained a dataset including 51,250 snp loci scattered over the 22 autosomes ( with a median distance between two consecutive snps equal to 7 kb ) among which 50,000 were randomly chosen for applying the proposed abc - rf methods . in this application , we compared six scenarios ( i.e. , models ) of evolution of the four human populations genotyped at the above mentioned 50,000 snps .the six scenarios differ from each other by one ancient and one recent historical event : _( i ) _ a single out - of - africa colonization event giving an ancestral out - of - africa population which secondarily splits into one european and one east asia population lineage , versus two independent out - of - africa colonization events , one giving the european lineage and the other one giving the east asia lineage .the possibility of a second ancient ( i.e. , years ) out - of - africa colonization event through the arabian peninsula toward southern asia has been suggested by archaeological studies , e.g. 
; _ ( ii ) _ the possibility ( or not ) of a recent genetic admixture of americans of african ancestry in sw usa between their african ancestors and individuals of european or east asia origins .the six different scenarios as well as the prior distributions of the time event and effective population size parameters used to simulate snp datasets using diyabc are detailed in figure [ fig : outofaf ] .we stress here that our intention is not to bring new insights into human population history , which has been and is still studied in greater details in a number of studies using genetic data , but to illustrate the potential of the proposed abc - rf methods for the statistical processing of large size snp datasets in the context of complex evolutionary histories .rf computations to discriminate among the six scenarios of figure [ fig : outofaf ] and evaluate error rates were processed on 10,000 , 20,000 , and 50,000 simulated datasets .we used all summary statistics offered by the diyabc software for snp markers ( see section g below ) , namely 130 summary statistics in this setting plus the five lda axes as additional summary statistics .for all illustrations based on genetic data , we used the program diyabc v2.0 to generate the abc reference tables including a set of simulation records made of model indices , parameter values and summary statistics for the associated simulated data .diyabc v2.0 is a multithreaded program which runs on three operating systems : gnu / linux , microsoft windows and apple mac os x. computational procedures are written in c++ and the graphical user interface is based on pyqt , a python binding of the qt framework .the program is freely available to academic users with a detailed notice document , example projects , and code sources ( linux ) from : ` http://www1.montpellier.inra.fr/cbgp/diyabc ` .the reference table generated this way then served as training database for the random forest constructions .for a given reference table , computations were performed using the r package ` randomforest ` .we have implemented all the proposed methodologies in the r package abcrf available on the cran .single population statistics + ` hp0_i ` : proportion of monomorphic loci for population i + ` hm1_i ` : mean gene diversity across polymorphic loci + ` hv1_i ` : variance of gene diversity across polymorphic loci + ` hmo_i ` : mean gene diversity across all loci two population statistics + ` fp0_i&j ` : proportion of loci with null fst distance between the two samples for populations i and j + ` fm1_i&j ` : mean across loci of non null fst distances + ` fv1_i&j ` : variance across loci of non null fst distances + ` fmo_i&j ` : mean across loci of fst distances + ` np0_i&j ` : proportion of 1 loci with null nei s distance ` nm1_i&j ` : mean across loci of non null nei s distances + ` nv1_i&j ` : variance across loci of non null nei s distances + ` nmo_i&j ` : mean across loci of nei s distances three population statistics + ` ap0_i_j&k ` : proportion of loci with null admixture estimate when pop .i comes from an admixture between j and k + ` am1_i_j&k ` : mean across loci of non null admixture estimate + ` av1_i_j&k ` : variance across loci of non null admixture estimated + ` amo_i_j&k ` : mean across all locus admixture estimates single population statistics + ` nal_i ` : mean number of alleles across loci for population i + ` het_i ` : mean gene diversity across loci + ` var_i ` : mean allele size variance across loci + ` mgw_i ` : mean m index across loci two population 
statistics + ` n2p_i&j ` : mean number of alleles across loci for populations i and j + ` h2p_i&j ` : mean gene diversity across loci + ` v2p_i&j ` : mean allele size variance across loci + ` fst_i&j ` : fst + ` lik_i&j ` : mean index of classification + ` das_i&j ` : shared allele distance + ` dm2_i&j ` : distance | approximate bayesian computation ( abc ) methods provide an elaborate approach to bayesian inference on complex models , including model choice . both theoretical arguments and simulation experiments indicate , however , that model posterior probabilities may be poorly evaluated by standard abc techniques . we propose a novel approach based on a machine learning tool named random forests to conduct selection among the highly complex models covered by abc algorithms . we thus modify the way bayesian model selection is both understood and operated , in that we rephrase the inferential goal as a classification problem , first predicting the model that best fits the data with random forests and postponing the approximation of the posterior probability of the predicted map for a second stage also relying on random forests . compared with earlier implementations of abc model choice , the abc random forest approach offers several potential improvements : _ ( i ) _ it often has a larger discriminative power among the competing models , _ ( ii ) _ it is more robust against the number and choice of statistics summarizing the data , _ ( iii ) _ the computing effort is drastically reduced ( with a gain in computation efficiency of at least fifty ) , and _ ( iv ) _ it includes an approximation of the posterior probability of the selected model . the call to random forests will undoubtedly extend the range of size of datasets and complexity of models that abc can handle . we illustrate the power of this novel methodology by analyzing controlled experiments as well as genuine population genetics datasets . the proposed methodologies are implemented in the r package abcrf available on the cran . * keywords : * approximate bayesian computation , model selection , summary statistics , -nearest neighbors , likelihood - free methods , random forests |
two way relaying communications have recently attracted considerable attentions due to their various applications . in this communication scenario ,two users attempt to communicate with each other with the help of a relay . to this end, physical layer network coding ( plnc ) [ 1 ] along with the conventional df or af relaying strategy has been commonly considered [ 2 - 4 ] .it has been shown that plnc can achieve within 1/2 bit of the capacity of a gaussian twrc ( two way relay channel ) and this is asymptotically optimal at high snrs [ 5 - 6 ] . in [ 2 - 3 ] , based on df startegy , two - step and three - step two - way relaying schemes are proposed . in the two - step scheme , in the first step , both users simultaneously transmit their messages , and the relay recovers both messages in turn , using a linear receiver structure like successive interference cancellation ( sic ) [ 7 ] . in the second step , the relay sends a combination of the recovered messages to the users .the problem with the scheme proposed is that , when recovering one of the messages , the other message is considered as noise , which results in a performance loss . as a solution , an optimum ml decoder can be utilized at the relay at the expense of a very high complexity [ 3 ] .the three - step df two - way relaying proposed in [ 2 ] requires three time slots that results in a throughput reduction . as an alternative, the relay can exploit af strategy to simply amplify the received signal from the users , and then forward it to the users .due to noise amplification in the relay , this scheme shows a poor performance [ 2 ] .the novel relaying strategy known as compute - and - forward ( cmf ) , proposed by nazer and gastpar [ 8 ] , is proved to be efficient for multiuser communication scenarios .the cmf strategy can exploit the interference to achieve a higher throughput .cmf strategy is also known as a reliable physical layer network coding [ 9 ] . in cmf strategy ,all sources transmit simultaneously .each relay , based on its received signal ( a noisy and channel weighted combination of the users codewords ) and its knowledge of the channel coefficients , decodes an equation , which is an integer - linear combination of the users transmitted messages .the integer coefficients of the equation are presented by a vector called an equation coefficient vector ( ecv ) .the relay has to find the ecv with the highest possible rate .the relay then transmits the decoded equation to the destination .the destination recovers the desired messages by receiving sufficient number of decoded equations from the relays . for codewords , lattice codesare commonly utilized , which can achieve the capacity of additive white gaussian noise ( awgn ) channels [ 10 ] . while cmf strategy has been considered in different scenarios in the literature , such as multi - antenna systems [ 11 ] , cooperative distributed antenna systems[ 12 ] , multi - access relay channels [ 13 ] , generalized multi - way relay channels [ 14 ] , two transmitter multi - relay systems [ 15 ] , and finally multi - source multi - relay network [ 16 ] ; however , to our best knowledge the application of cmf in two - way relaying hasnt been considered so far , just from information theory aspect in [ 17 ] . 
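For readers unfamiliar with the CMF machinery, the sketch below spells out the standard Nazer and Gastpar computation rate [8] for real-valued channels and a brute-force search over small integer coefficient vectors. It is a generic illustration of how a relay picks its ECV, written as a stand-in for the efficient search methods cited later; the coefficient range is an arbitrary illustrative choice.

# Sketch: computation rate of [8] and exhaustive ECV search (real channels).
# R(h, a) = 0.5 * log2( 1 / ( ||a||^2 - P*(h.a)^2 / (1 + P*||h||^2) ) ), clipped at 0.
import itertools
import numpy as np

def comp_rate(h, a, P):
    denom = np.dot(a, a) - P * np.dot(h, a) ** 2 / (1.0 + P * np.dot(h, h))
    return max(0.0, 0.5 * np.log2(1.0 / denom))    # denom > 0 for any nonzero a

def best_ecv(h, P, amax=8):
    best_a, best_r = None, -1.0
    for a in itertools.product(range(-amax, amax + 1), repeat=len(h)):
        a = np.array(a)
        if not a.any():
            continue                                # skip the all-zero vector
        r = comp_rate(h, a, P)
        if r > best_r:
            best_a, best_r = a, r
    return best_a, best_r

For instance, best_ecv(np.array([0.9, 1.3]), P=10.0) returns the integer pair that the relay would decode together with the corresponding computation rate.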
in this paper, we propose a new practical framework for two - way relaying based on cmf strategy , in which we use a linear receiver and a general lattice encoding previously proposed by nazer [ 8 ] .we consider this framework for two cases .first , we investigate the relays without the capability of sending any feedback to the users .we call the corresponding proposed scheme as max compute - and - forward ( m - cmf ). then we consider the relays that have feedback capability ; the related scheme is called as aligned compute - and - forward ( a - cmf ) . for the latter case , the power can be efficiently allocated to the users in a way to increase the computation rate through aligning the scaled channels to the integer coefficients , under a maximum power constraint for each user .the proposed schemes , in contrast to df and af based schemes , can handle both the interference and noise , and thus enhance the network throughput considerably . to achieve a higher order of diversity , multiple relays along with a simple relay selection technique are employed .we consider a block rayleigh fading channels between the users and the relays .the channels have phase variations in addition to the amplitude variations . while the proposed schemes have been considered and work quite well for general complex gaussian channels with variation in both phase and amplitude , for the sake of simplicity and tractability of the analytical performance evaluation , in this paper , we focus on addressing the amplitude variation of the channels and do not consider the carrier phase offset for the analytical performance analysis . in the other words , we assume that the phase offset between two received users signals has been compensated at the best relay .this makes the channels realized by the best relay be real - valued rayleigh channels .this assumption was commonly used in the literature when considering the performance analysis of cmf based strategies , for instance please see [ 15 ] and [ 18 ] .however for simulation evaluations , we consider general rayliegh fading channel ( complex gaussian coefficient ) with both phase and amplitude variations .we analitically derive , an upper bound on the computation rate of each relay . then using this bound, we derive a bound on the average sum rate and the outage probability of the proposed scheme .based on the bound obtained for the outage probability , we derive the system diversity order .numerical results show that a - cmf reaches the bound in all snrs , m - cmf is tight on the bound only in low snrs ; in the other words , a - cmf improves the compute rate at high snr . to have a fair comparison with m - cmf and the other schemes , a - cmf under a power constraint on the total powers of both users rather than on the power of each useris also considered . as expected , the numerical results verifies that a - cmf under the new constraint performs better . for the latter case , a - cmf and m - cmfare in fact compared under the same total system power .we also evaluate the symbol rate of the system , and compare the results with those of the conventional schemes , which indicates the substantial superiority of the proposed schemes .the remainder of this paper is organized as follows . in sectionii , the system model is described .section iii presents the proposed method and the performance analysis is given in section iv .numerical results are presented in section v. 
finally , section vi concludes the paper .we consider a two way relay channel with two users and relays , as shown in fig .1 . user , exploits a lattice encoder with power constraint to project its message to a length - n complex - valued codeword such that . is considered as the maximum power constraint of the user ( ) .we assume that each relay has a power constraint equal to that is more than or equal to .the channel coefficient from user , to relay denoted by and assumed to be equal to the reverse link coefficient , follows a real - valued rayleigh distribution with variance , here we only focus on the the real channel . for the numerical part , we also consider more general complex gaussian channel . ] .all channel coefficients for different and are assumed to be independent .we assume a block fading such that the coefficients remain constant during total transmission time slots required for the message exchanges .there is no direct link between two users .the noise received at the relay and at the user , denoted by and respectively , are i.i.d . according to the zero - mean gaussian distribution with variance 1 .in our proposed cmf based scheme , multiple access broadcast ( mabc ) protocol for two way transmission is used [ 19 ] . in the first time slot ( named multiple access phase ) , two users simultaneously transmit their codewords to the relays .each relay receives a noisy linear combination of users codewords , and using the cmf strategy [ 8 ] , the relay decodes an equation , i.e. , an integer linear combination of users messages ( see subsection a ) . then in the second time slot ( broadcast phase ) ,the best relay is selected ( see subsection c ) to transmit its decoded equation to the users .finally , by receiving the equation , each user recovers the other user s message ( see subsection b ) .the received signal in each relay in the multiple access phase can be written as + for each relay , vector and matrix are defined as \end{aligned}\ ] ] and , based on cmf strategy [ 8 ] , each relay has to decode the equation coefficient vector ( ecv ) , i.e. ^t } \in { z^2} ] and ^t} ] and $ ] .we define the variable as which with some straightforward simplifications and by defining and , results in without loss of generality , we assume that .then for the right side of the inequality , we can minimize with respect to as follows which leads to by substituting ( 39 ) in ( 36 ) and some straightforward simplifications , we obtain thus , from ( 37 ) we get since we have the right - hand side of ( 35 ) can be written as this proves ( 35 ) , and then ( 33 ) .now , we consider the case that or .it is clear that hence , the theorem is proved .the bound derived is tight , specially at low snrs .please note that at low snrs , the ecv as a solution of ( 4 ) is usually either or [ 15 ] . in this case , from ( 36 ) can be written as which certainly is lower than , ( please note that we have assumed as the solution ) . for lower than the bound , i.e. , we have that leads to and .hence , at low snrs , the rate is very close to the bound given in ( 32 ) with a high probability . according to this theorem, we have using ( 47 ) and with the assumation of we can easily rewrite ( 31 ) as now , the best relay , after the multiple access phase , is selected based on ( 48 ) , using the approach similar to [ 22 ] . that is the relay ,sets a timer with the value proportional to the inverse of its corresponding rate , i.e. . 
the first relay that its timer reaches zero ( which has the highest rate ) broadcasts a flag , to inform other relays , and is selected as the best relay for the broadcast phase .from ( 30 ) , the outage probability of the proposed scheme can be computed as where denotes the target rate . according to the theorem 1 and withthe assumation of , we have moreover , from theorem 1 , a lower bound for the outage probability is derived as follows with the defination of and since are independent exponential random variables with the following cdf : we can find the cdf of as hence , the outage probability lower bound can be easily computed as from taylor series expansion , in high snrs , when , we can approximate ( 54 ) as hence , the acheivable diversity order from the outage bound with the definition [ 23 ] is equal to m , i.e. the number of relays . according to the theorem1 and with the assumation of , an upperbound on the sum rate conditioned on each channel realizationscan be derived as the unconditional sum rate can be computed by taking the expectation of ( 56 ) as where is the pdf of , which can be easily obtained from its cdf given in ( 52)-(53 ) . with some straightforward simplifications , leads to where and is the upper incomplete gamma function defined in [ 24 ] .for numerical evaluation , target rate is considered .the rayleigh channel parameters equal to , are assumed .the parameter in algorithm 1 is setteled as . in fig .2 , the outage probability of the proposed schemes along with the derived lower bound given in ( 54 ) , versus snr , is plotted for relays and for equal maximum transmit powers for the users and the relay . as observed , for both m - cmf and a - cmf schemes ,the derived lower bound is quite tight especially at high snrs .moreover , as expected , by the increase of the number of relays , the outage performance as well as the diversity order improve significantly . it is observed that the proposed schemes provides a diversity order of , i.e. , the number of relays employed . in fig .3 , the average sum rates of the proposed schemes along with the derived upper bound in ( 58 ) are plotted for and for equal maximum powers for the users . as observed , the m - cmf reaches the bound only in low snrs , while the a - cmf approaches the bound in all snr values . 
in other words , a - cmf outperforms the m - cmf in high snr at the cost of using feedback transmission .4 compares the symbol error rate ( ser ) of the proposed schemes with the ones introduced in [ 2 ] , including af and two - step df , for with bpsk modulation and for equal maximum transmit powers for the users .our proposed scheme indiciates significantly better performance about 6db in ser equal to 0.02 .please note that the ser has been evaluated by simulation , as it is not easy at all to analytically derive the ser when using the cmf based strategy , due to an integer optimization problem being solved numerically within this strategy .5 compares the outage probability of the proposed schemes with the conventional strategies and also three - step df [ 2 ] , for and for equal maximum transmit powers for the users .the same relay selection strategy is used for all schemes .although all of the methods provide the same order of diversity , our proposed schemes demonstrates a better performance about 2db in high snr values .6 compares the average sum rate of the proposed schemes with the conventional strategies , for and for equal transmit powers .as it is observed , our proposed schemes perform significantly better than the conventional strategies in all snrs .for example , in sum rate 4 , a - cmf has 4db and m - cmf has 2db improvement in comparison with the best conventional relaying scheme .7 compares the average sum rate of the a - cmf scheme under two different power constraints , one in each user power and the other on the total power , for relays . for the first case ,maximum transmission power of each user is considered to be equal to , i.e. , while for the second case , the maximum total transmission powers of both users is considered to be equal to . as it is observed from this figure , in the latter case, the system has a better performance .the reason is that in the second case , the feasible region of the optimization in ( 14 ) is larger than the one of the first case , that results in a higher rate .when comparing with the other scheme , the latter constraint is more reasonable , as different schemes should be compared under the same total transmission powers .since our results above indicate that the a - cmf under the maximum power constraint on each user transmission performs better than the other schemes , specially in term of the average sum rate , we did nt bring the comparison results with the other schemes , when the a - cmf is designed under the constraint on the total transmission power . in fig .8 , we evalute the perfromance of proposed schemes along with the conventional strategies for when all link are modeled as complex gaussian channels with variance one .the figure shows that the outage probability of the peroposed schemes , specially a - cmf , are better than those of the conventional strategies . in fig .9 , the performance of proposed scheme has been evaluated when the channels variances are not identical .this fig . shows the outage probability of proposed schemes and two - step df versus snr for different values of , where .delta in fact indicates the difference between the two users channel variances . in this fig ., we have . for a fair camparison , the sum of the two channels variances is set equal to two , i.e. . as can be observed , the lower delta makes better perfromance , however diversity order does not change with delta . from this fig . 
, the perfromance of the proposed schemes are better than two - step df , which shows the best performance among the conventional schemes ( please see fig .as expected , the amount of the improvement decreases by the increase of the delta .for example , in outage , while at delta equal to 0.5 , the proposed schemes have 1.8db better perfromance than the two - step df , the improvement is 1.4db at delta equal to 1 .in this paper , based on cmf strategy , a novel two - way relaying scheme , for two cases of relays with and without capability of feedback transmission , is proposed that improves the network throughput significantly .furthermore , a relay selection scheme is exploited to achieve a higher order of diversity through employing multiple relays . by theoretical analysis , an upper bound on the computation rate of each relayis derived and based on that , a tight lower bound on the outage probability and an upper bound on the average sum rate of the system are presented .our numerical results showed that the proposed scheme , in both cases of with and without using feedback , performs significantly better than the af and df strategies in terms of the outage probability , average sum rate , and the symbol error rate , and also provides a diversity order equal to the number of relays employed . 1 zhang , s. , liew , s.c . ,lam . p.p . : hot topic : physical - layer network coding. mobicom 06 .proc . of international conference on mobile computing and networking , 2006 ,358 - 365 popovski , p. , yomo , h. : physical network coding in two - way wireless relay channels. icc 07 .ieee international conference on communications , 2007 , pp .707 - 712 zhou , q.f . ,yonghui li , lau , f.c.m . ,vucetic , b. : decode - and - forward two - way relaying with network coding and opportunistic relay selection , _ ieee trans .commun . _ , 2010 , , ( 11 ) , pp .3070 - 3076 tao cui , tracey ho , kliewer , j. : memoryless relay strategies for two - way relay channels , _ ieee trans . on commun . _ , 2009 , , ( 10 ) ,3132 - 3143 wooseok nam , sae - young chung , lee , yong h. : capacity of the gaussian two - way relay channel to within 1/2 bit , _ ieee trans .inf . theory _ , 2010 , , ( 11 ) ,5488 - 5494 wilson , m.p . ,narayanan , k. , pfister , h.d . ,sprintson , a. : joint physical layer coding and network coding for bidirectional relaying , _ ieee trans .inf . theory _, 2010 , , ( 11 ) , pp .5641 - 5654 varanasi , m.k . ,guess , t. : optimum decision feedback multiuser equalization with successive decoding achieves the total capacity of the gaussian multiple - access channel. proceedings of the 31st asilomar conference on signals , systems and computers , 1997 , pp .1405 - 1409 nazer , b. , gastpar , m. : compute - and - forward : harnessing interference through structured codes , _ ieee trans .inf . theory _ , 2011 , , ( 10 ) ,6463 - 6486 nazer , b. , gastpar , m. : reliable physical layer network coding , _ proceedings of the ieee _, 2011 , , ( 3 ) , pp .438 - 460 erez , u. , zamir , r. : achieving 1/2 log ( 1+snr ) on the awgn channel with lattice encoding and decoding , _ ieee trans .inf . theory _ , 2004 , , ( 10 ) ,2293 - 2314 jiening zhan , nazer , b. , erez , u. , gastpar , m. : integer - forcing linear receivers. ieee international symposium on information theory proceedings ( isit ) , 2010 , pp .1022 - 1026 song - nam hong , caire , g. : compute - and - forward strategies for cooperative distributed antenna systems , _ ieee trans .inf . 
theory _ , 2013 , , ( 9 ) , pp .5227 - 5243 el soussi , m. , zaidi , a. , vandendorpe , l. : compute - and - forward on a multiaccess relay channel : coding and symmetric - rate optimization , _ ieee trans . on wireless commun ._ , 2014 , , ( 4 ) , pp . 1932 - 1947gengkun wang , wei xiang , jinhong yuan : outage performance for compute - and - forward in generalized multi - way relay channels , _ ieee commun .letters _ , 2012 , , ( 12 ) , pp .2099 - 2102 hejazi , m. , nasiri - kenari , m. : simplified compute - and - forward and its performance analysis , _ iet commun ._ , 2013 , , ( 18 ) , pp .2054 - 2063 chen , z. , fan , p. , letaief , k.b .: compute - and - forward : optimization over multi - source - multi - relay networks , _ ieee trans . on veh. tech _ , 2014 , , ( 99 ) , pp. 1 hern , b. , narayanan , k. : an analysis of the joint compute - and - forward decoder for the binary - input two - way relay channel. 51st annual allerton conference on communication , control , and computing ( allerton ) , 2013 , pp .1314 - 1320 tao yang , collings , i.b . :asymptotically optimal error - rate performance of linear physical - layer network coding in rayleigh fading two - way relay channels , _ ieee commun .letters _ , 2012 , , ( 7 ) , pp .1068 - 1071 sang joon kim , mitran , p. , tarokh , v. : performance bounds for bi - directional coded cooperation protocols , _ ieee trans ., 2008 , , ( 11 ) , pp .5235 - 5241 lili wei , wen chen : efficient compute - and - forward network codes search for two - way relay channel , _ ieee commun . letter _ , 2012 , , ( 8) , pp .1204 - 1207 boyd , s. , vandenberghe , l. : convex optimization ( cambridge university press , 2004 ) bletsas , a. , khisti , a. , reed , d.p . ,lippman , a : a simple cooperative diversity method based on network path selection , _ ieee j. select .areas commun ._ , 2006 , , ( 3 ) , pp .659 - 672 jafarkhani , h. : space - time coding theory and practice ( cambridge university press , 2005 ) alouini , m .- s . ,goldsmith , a.j .: capacity of rayleigh fading channels under different adaptive transmission and diversity - combining techniques , _ ieee trans . veh .technol . _ , 1999 , , ( 4 ) , pp .1165 - 1181 | * in this paper , a new two - way relaying scheme based on compute - and - forward ( cmf ) framework and relay selection strategies is proposed , which provides a higher throughput than the conventional two - way relaying schemes . two cases of relays with or without feedback transmission capability are considered . an upper bound on the computation rate of each relay is derived , and based on that , a lower bound on the outage probability of the system is presented assuming block rayleigh fading channels . numerical results show that while the average sum rate of the system without feedback , named as max compute - and - forward ( m - cmf ) , reaches the derived upper bound only in low snrs , that of the system with feedback , named as aligned compute - and - forward ( a - cmf ) reaches the bound in all snrs . however , both schemes approach the derived lower bound on the outage probability in all snrs . for the a - cmf , another power assignment based on applying the constraint on the total powers of both users rather than on the power of each separately , is introduced . the result shows that the a - cmf performs better under the new constraint . 
moreover , the numerical results show that the outage performance , average sum rate , and symbol error rate of the proposed schemes are significantly better than those of two - step and three - step decode - and - forward ( df ) and amplify - and - forward ( af ) strategies for the examples considered . * * index terms- compute and forward , max compute - and - forward , aligned compute - and - forward , feedback , two - way relaying , relay selection , outage probability , average sum rate , symbol error rate . * |
the fractional brownian motion with hurst index is a centered gaussian process with covariance function =\tfrac{1}{2}(|s|^{2h}+|t|^{2h}-|t - s|^{2h}).\ ] ] it follows from ( [ def : fbm ] ) that is self - similar with index and has stationary increments .unless ( i.e. , is brownian motion ) , is not markovian .moreover , it is known that has long - range dependence if and short - range dependence if ( see samorodnitsky and taqqu ) .these properties have made not only important theoretically , but also very popular as stochastic models in many areas including telecommunications , biology , hydrology and finance .weak convergence to fractional brownian motion has been studied extensively since the works of davydov and taqqu . in recent yearsmany new results on approximations of fractional brownian motion have been established .for example , enriquez showed that fractional brownian motion can be approximated in law by appropriately normalized correlated random walks .meyer , sellan and taqqu proved that the law of can be approximated by those of a random wavelet series . by extending stroock , bardina _et al . _ and delgado and jolis have established approximations in law to fractional brownian motions by processes constructed using poisson processes .let be a standard poisson process , and for all , define the processes by stroock proved that as tends to zero , the laws of converge weakly in the banach space ] ) to the law of brownian motion .delgado and jolis proved that every gaussian process of the form where is a one - dimensional brownian motion and a sufficiently regular deterministic kernel , can be weakly approximated by the family of processes their result can be applied to fractional brownian motion .in addition , bardina and jolis proved that as tends to , the family of two - parameter random fields defined by converges in law in the space of continuous functions on ^{2}-y < x , y<-t-t - y < x\leq - y , y<-t-y < x , -t\leq y<0,0<x\leq - y , -t\leq y<0, ] during which the session is active . therefore , measures the length of the time interval contained in ] as follows : for . here , is a poisson process with intensity ( see definition [ d21 ] ) .the main purpose of this paper is to show that the law of converges to the law of for .we note that the kernel in ( [ intr-5 ] ) can not be separated by the arguments , unlike the kernel function in ( [ intr-11 ] ) .this difference is not trivial . as we will see in remark [ rem3.1 ] , it causes many real technical difficulties .the rest of the paper is organized as follows .section [ 2 ] is devoted to introducing the necessary definitions , notation and the main result . in section[ 3 ] we prove the family of processes given by ( [ intr-5 ] ) is tight in ] to the fractional brownian motion .we now give the definitions of the brownian sheet and poisson processes on .let be the borel algebra on . and denote a -finite measure and the lebesgue measure on , respectively .[ d21 ] given a positive constant , a random set function on the measure space is called the _ poisson random measure with density measure _ if it satisfies the following conditions : a. for every with , is a poisson random variable with parameter defined on the same probability space ; b. if are disjoint and all have finite measure , then the random variables are independent ; c. 
if are disjoint and , then a.s .if is a poisson random measure with density measure , then we define \times[0,t ] ) , & \quad \cr n([0,s]\times[t,0]),&\quad \cr n([s,0]\times[0,t]),&\quad \cr n([s,0]\times[t,0 ] ) , & \quad } \ ] ] and call the two - parameter poisson process with intensity in .it is not hard to see that is independent with , which is the ordinary two - parameter process in , and for any , has the same finite - dimensional distributions as those of consider a random set function on the measure space such that : a. for every with , is a centered gaussian random variable defined on the same probability space with variance ; b. if are disjoint and have finite measure , then the random variables are independent ; c. if are disjoint and , then a.s .we then call a _ gaussian random measure on with control measure _ . in particular , if is a gaussian random measure on with control measure , then we define \times[0,t ] ) , & \quad \cr w([0,s]\times[t,0]),&\quad \cr w([s,0]\times[0,t]),&\quad \cr w([s,0]\times[t,0]),&\quad } \ ] ] and call the _ two - parameter brownian sheet in _ . similarly , we have that is independent of , which is the ordinary brownian sheet in , and for any , we have hence , from ( [ intr-3 ] ) it is easy to check that let if and if .we have the following conclusion , which essentially parallels bardina and jolis , theorem 1.1 .the proof is omitted .[ pre - lem-1 ] suppose that is a two - parameter poisson process with intensity in .for any and , let for any , .the finite - dimensional distributions of then converge weakly to those of a two - parameter brownian sheet .naturally , ( [ pre-1 ] ) and ( [ pre-2 ] ) suggest that we consider the following approximation of for : for and ] given by ( [ pre-3 ] ) converges weakly to the law of \} ] .[ r21 ] if we define , then has a more natural physical interpretation .it measures the length of the time interval contained in ] , and , we let define a function as follows : }\,\mathrm{d}x_2\,\mathrm{d}y_2,\ ] ] where , and is a measurable function such that the integral is meaningful . obviously , if , then \} ] . to prove the proposition , we need the following lemmas . [ s3-lem1 ]let and .for a non - negative function , if , then for any , \nonumber \\[-8pt]\\[-8pt ] & & \quad \leq2\bigl(i_1(n , f)+i_2(n , f)\bigr ) , \nonumber\end{aligned}\ ] ] where }\,\mathrm{d}x_1\,\mathrm{d}y_1\,\mathrm{d}x_2\,\mathrm{d}y_2.\quad\end{aligned}\ ] ] let ] then completes the proof of proposition [ s3-prop2 ] . finally , we prove proposition [ s3-prop1 ] .proof of proposition [ s3-prop1 ] to prove the tightness of in ] , \\ & & \quad\leq\tilde { m}(t - t')^{\eta},\end{aligned}\ ] ] which follows from the criterion given by billingsley ( see , theorem 12.3 ) and the fact that our processes are null at the origin . without loss of generality ,let .note that from ( [ def : g ] ) , we have by proposition [ s3-prop2 ] , it is easy to check that the above inequality holds for , and ]in this section , we proceed with the identification of the limit law. we will prove the following proposition .[ s4-prop1 ] the finite - dimensional distributions of \} ] with hurst index . 
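To make the construction concrete, the sketch below simulates the two-parameter Poisson process of Definition [d21] on the rectangle [0,T] x [-T,0], forms the sign field (-1)^{N_n(x,y)}, and evaluates a crude Riemann-sum version of the double integral defining the approximating process. The kernel is left as a user-supplied function because it is specific to (intr-5)/(pre-3); the sqrt(x|y|) weight is the one appearing in the integrals of Lemma [s3-lem1], and the normalizing prefactor of (pre-3) is deliberately omitted and must be applied by the caller. Grid size, intensity and seed are illustrative only.

# Sketch: two-parameter Poisson sheet on [0,T] x [-T,0], its sign field (-1)^N,
# and a Riemann-sum version of the double integral against a user-supplied kernel.
import numpy as np

def poisson_sign_field(intensity, T, grid, seed=1):
    rng = np.random.default_rng(seed)
    n_pts = rng.poisson(intensity * T * T)          # points of N, density = intensity * Lebesgue
    px = rng.uniform(0.0, T, n_pts)
    py = rng.uniform(-T, 0.0, n_pts)
    xs = np.linspace(0.0, T, grid)
    ys = np.linspace(-T, 0.0, grid)
    # N(x, y) counts the points falling in [0, x] x [y, 0], as in (pre-2)
    counts = np.array([[np.sum((px <= x) & (py >= y)) for x in xs] for y in ys])
    return xs, ys, (-1.0) ** counts

def approx_integral(t, phi, intensity, T=1.0, grid=200):
    xs, ys, sign = poisson_sign_field(intensity, T, grid)
    dx, dy = xs[1] - xs[0], ys[1] - ys[0]
    X, Y = np.meshgrid(xs, ys)
    integrand = phi(t, X, Y) * np.sqrt(X * np.abs(Y)) * sign
    # multiply the returned value by the normalizing prefactor of (pre-3)
    return np.sum(integrand) * dx * dy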
for each , and , we define it suffices to prove that for any , as , -{\mathbb{e}}[\exp ( \mathrm{i}\xi u)]|\to0.\end{aligned}\ ] ] for , let and where is given by lemma [ pre - lem-1 ] .let below , we estimate , and , respectively .\(1 ) we estimate .noting that is a non - negative measurable function on , we can find a sequence of elementary functions such that then , by the dominated convergence theorem , it follows from the fact that ^ 2\,\mathrm{d}x\,\mathrm{d}y < \infty$ ] that as , for any , define by lemma [ pre - lem-1 ] , we can readily verify that for fixed , as , -{\mathbb{e}}\biggl[\exp\biggl(\mathrm{i}\xi \sum_{j=1}^{k}a_{j}b^{m , j}\biggr)\biggr]\biggr|\to0\end{aligned}\ ] ] because is essentially a linear combination of increments of defined by ( [ pre-2 ] ) , and is the same linear combination of the corresponding limits of increments of .we further define -{\mathbb{e}}\biggl[\exp\biggl(\mathrm{i}\xi\sum_{j=1}^{k}a_{j } y_{n}^{m , j}\biggr)\biggr]\biggr| , \\j_{13}(t , m ) & : = & \biggl|{\mathbb{e}}[\exp(\mathrm{i}\xi u(t))]-{\mathbb{e}}\biggl[\exp\biggl(\mathrm{i}\xi\sum_{j=1}^{k}a_{j}b^{m , j}\biggr)\biggr]\biggr|.\end{aligned}\ ] ] then , for any , below , we will show that for all fixed , there exists some such that for any as . to this end , let .define \times[-t_j - s , 0)}(x , y ) , \\\hat{f}_{m , j}(x , y)&:=&f_{m , j}(x , y)\mathbf{1}_{[0 , t]\times[-t , -t_j - s)}(x , y).\end{aligned}\ ] ] by ( [ s4-p1 - 3 ] ) , as , \times[-t , 0].\ ] ] define .\ ] ] then , by lemma [ s3-lem1 ] , ^ 2 & \leq & n^2e\biggl[\biggl(\int_{0}^{t}\!\!\!\int_{-t}^{0}\bigl(\hat{f}_{m , j}(x , y)+\tilde{f}_{m , j}(x , y)\bigr)\sqrt{x|y|}(-1)^{n_{n}(x , y)}\,\mathrm{d}x\,\mathrm{d}y\biggr)^2\biggr]\quad \nonumber \\ & \leq & 2\bigl(i(n , \hat{f}_{m , j})+i(n , \tilde{f}_{m , j})\bigr ) \\ & \leq & 4\bigl(i_1(n , \hat{f}_{m , j})+i_2(n , \hat{f}_{m , j})+i_1(n,\tilde{f}_{m , j})+i_2(n , \tilde{f}_{m , j})\bigr ) .\nonumber\end{aligned}\ ] ] lemma [ s3-lem2 ] and ( [ s4-p1 - 4 ] ) show that for any , as note that a.e . as . from lemma [ s3-lema ] we know that and that by the dominated convergence theorem , as , in a similar way , we know there are , such that for all , as , on the other hand , using the mean value theorem , we obtain that \nonumber \\[-6pt]\\[-6pt ] & = & k|\xi| \max_{1\leq j\leq k}(|a_j|r_j(n , m , t)).\nonumber\end{aligned}\ ] ] hence , ( [ s4-p1 - 8 ] ) follows from ( [ s4-p1 - 10])([s4-p1 - 14 ] ) with for , we apply the mean value theorem again . then , as , \nonumber \\[3pt ] & \leq & k\xi\max_{1\leq j\leq k}{\mathbb{e}}\biggl[\biggl|a_j\int_0^t\!\!\!\int_{-t}^0\bigl(\phi_{t_j}(x , y)-q^{m , j}(x , j)\bigr)b(\mathrm{d}x,\mathrm{d}y)\biggr|\biggr ] \\[3pt ] & \leq & k\xi\max_{1\leq j\leq k}\biggl\{\biggl[\int_{0}^{t}\!\!\!\int_{-t}^{0}\bigl(\phi_{t_j}(x , y)-q^{m , j}(x , y)\bigr)^{2}\,\mathrm{d}x\,\mathrm{d}y\biggr]^{1/2}\biggr\}\to0 . 
\nonumber\end{aligned}\ ] ] from ( [ s4-p1 - 5])([s4-p1 - 8 ] ) and ( [ s4-p1 - 15 ] ) , we obtain that for any fixed , as , \(2 ) in a similar way as was used to prove ( [ s4-p1 - 8 ] ) , there exists some such that for , as , \(3 ) in a similar way as was used to prove ( [ s4-p1 - 15 ] ) , we obtain that there exists some such that as , therefore , combining ( [ s4-p1 - 2 ] ) and ( [ s4-p1 - 16])([s4-p1 - 18 ] ) , we can obtain that as , completing the proof of proposition [ s4-prop1 ] .note that theorem [ s4-thhm1 ] is an immediate result of propositions [ s3-prop1 ] and [ s4-prop1 ] .therefore , the proof of theorem [ s4-thhm1 ] is complete .this work was done during our visit to the department of statistics and probability at michigan state university .the authors thank professor yimin xiao for stimulating discussions and the department for its good working conditions .this research was supported in part by the national natural science foundation of china ( no .10901054 ) .thanks is also due to the anonymous referees for their careful reading and detailed comments which improved the quality of the paper .kaj , i. and taqqu , m.s .convergence to fractional brownian motion and to the telecom process : the integral representation approach . in _ in and out of equilibrium 2_.probab . _ * 60 * 383427 .basel : birkhuser . | approximations of fractional brownian motion using poisson processes whose parameter sets have the same dimensions as the approximated processes have been studied in the literature . in this paper , a special approximation to the one - parameter fractional brownian motion is constructed using a two - parameter poisson process . the proof involves the tightness and identification of finite - dimensional distributions . |
listening to `` folklore versions '' of quantum mechanics one may consider it as a key statement of quantum theory that there is no measurement without disturbing the measured system .however , the fact that this is not true is well - known in modern quantum information theory and is , in some sense , the reason why classical information exists at all although our world is quantum . consider a two - level system , i.e. , a quantum system with hilbert space and denote its upper or lower state by and , respectively .assume that we know by prior information that the system is not in a quantum superposition but only in one of the two states .then the measurement with projections and show which state is present without disturbing it at all . herewe have used the two - level system as _ classical _ bit .the situation changes if the two - level system is used as _quantum _ bit ( `` qubit '' ) and is prepared in a quantum superposition where the complex coefficients and with are unknown to the person who measures. then any von neumann measurement with projections and will on the one hand only provide some information about and will on the other hand disturb the unknown state since it `` collapses '' to the state with probability and to the orthogonal state with probability . from the point of view of the person who has prepared the state anddoes not notice the measured result ( `` non - selective operation '' ) , the measurement process changes the density matrix of the system from to i.e. , the measurement causes an entropy increase of .the general condition under which information about unknown quantum states can be gained without disturbing them is well - known and reads as follows : let be the unknown density matrix of a system . by prior informationone knows that is an element of a set of possible states .then one can get some information on if and only if there is a projection commuting with all matrices in such that the value is not the same for all . as noted in , this can never be the case if the set is the orbit of a hamiltonian system evolving according to .this holds even if and act on an infinite dimensional hilbert space . in this sense ,timing information is always to some extent quantum information that can not be read out without state disturbance .it can only become classical information if either ( 1 ) prior information tells us that the time is an element of some discrete set ( see ) or ( 2 ) in the limit of infinite system energy . at first sightthe statement that classical timing information can only exist in one of these two cases seems to be disproved by the following dissipative `` quantum clock '' : let be the upper state of a two - level system .let the system s time evolution for positive be described by the bloch relaxation ( see e.g. ) since all the states commute with the projections and one can certainly gain some information about by the measurement with projections . however , this situation is actually the infinite energy limit since semi - group dynamics of this form is generated by coupling the system to a heat bath of infinite size and infinite energy spectrum .of course the fact that well - known derivations of relaxation dynamics require heat baths with infinite energy spectrum does not prove our claim that this is necessarily the case .this claim is rather an implication of theorem [ mainth ] in section [ main ] .the paper is organized as follows . 
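The entropy increase quoted above for the measured superposition can be checked numerically. The short sketch below computes the post-measurement state of an arbitrary qubit superposition, its von Neumann entropy gain, and, anticipating Lemma [kulllemma] of Section [main], the relative entropy between pre- and post-measurement states, which coincides with that gain; the amplitudes are an arbitrary illustrative choice.

# Sketch: entropy increase caused by measuring an unknown superposition a|1> + b|0>
# in the {|0>, |1>} basis (non-selective measurement, projection postulate).
import numpy as np

def vn_entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))                   # in bits

a, b = 0.6, 0.8                                               # |a|^2 + |b|^2 = 1, unknown to the observer
psi = np.array([b, a])                                        # basis ordered as |0>, |1>
rho = np.outer(psi, psi.conj())                               # pure state, entropy 0
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
rho_post = P0 @ rho @ P0 + P1 @ rho @ P1                      # outcome ignored

gain = vn_entropy(rho_post) - vn_entropy(rho)
# relative entropy D(rho || rho_post); rho_post is diagonal with full support here
rel_ent = -vn_entropy(rho) - float(np.diag(rho).real @ np.log2(np.diag(rho_post).real))
print(gain, rel_ent)                                          # both print H(|a|^2), about 0.94 bits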
in section[ main ] we show quantitatively to what extent information on the ( non - discrete ) time is always quantum information as long as the clock is a system with limited energy bandwidth .explicitly , we prove a lower bound on the entropy increase in the clock caused by von - neumann measurements , i.e. , measurements that are described by an orthogonal family of projections where the state change of the system is described by the projection postulate .generalized measurement procedures are considered in section [ gen ] .they do not necessarily increase the entropy of the measured clock since the process can include some kind of cooling mechanisms but will lead to an increase of entropy of the total system which includes the clock s environment .this has implications for low power computation since it shows that the distribution of timing information inherent in a microscopic clock produces necessarily some phenomenological entropy . in section [ tight ]we discuss whether our bound is tight . in section [ dec ]we shall show that the result of section [ gen ] can be applied to the situation that a clock with limited energy controls the switching process of a classical bit . here a classical bit is understood as a two state quantum system on which decoherence takes place on a time scale that is in the same order as the switching time or on a smaller scale . for low power computation, this proves which amount of dissipation is required whenever the autonomous dynamics of a microscopic device controls a classical output . in section [ group ]we apply the bound to other one - parameter groups .for the moment we will restrict our attention to von - neumann measurements .for technical reasons we will assume them to have a finite set of possible outcomes .hence the measurement is described by a family of mutual orthogonal projections on the system s hilbert space that may be infinite dimensional .the state of the system is described by a density matrix , i.e. , a positive operator with trace acting on . according to the projection postulate any state changes to the post - measurement state if the _ unselected _ state is considered , i.e. , the outcome is ignored .the post - measurement state coincides with if and only if commutes with all projections .now we will compare the von - neumann entropy of and .for any state the von - neumann entropy is defined as the following lemma shows that the measurement can never decrease the entropy : [ kulllemma ] let be an arbitrary density matrix on a hilbert space and be a family of orthogonal projections with .set then we have where is the kullback - leibler distance ( or relative entropy ) between and .it is defined by proof : according to the definition of entropy it is sufficient to show the equation .the second term on the right in eq . ( [ kull ] ) is equal to note that commutes with each .hence we get this completes the proof . since the entropy increase is the kullback - leibler distance between the pre- and the post - measurement state we can obtain a lower bound on in terms of the trace - norm distance between them : [ delta ] for the entropy increase we obtain the lower bound where is the trace - norm of an arbitrary matrix . the proof is immediate using ( see ) .now we consider the states on the orbit of the time evolution and show that the norm distance between and is large at every moment where the outcome probabilities of the measurement change quickly , i.e. 
, where the values \rho_t) ] be the smallest interval supporting this spectral measure .then is the energy bandwidth of the system in the state . by rescaling the hamiltonian it is easy to see that the time evolution of the state can equivalently be described by since .clearly , .the energy spread is decisive for our lower bound on the trace - norm distance between and .[ distancegeschw ]let be the probability of the measurement outcome at time .let be the energy bandwidth of .set then we have proof : for a specific moment define the operator by where if and if .note that we have \rho_t)\,.\ ] ] furthermore easy computation shows that \tilde{\rho}_t)=0 ] for arbitrary .let the prior probability for be the uniform distribution on ] there is a decision rule based on the measurement outcome that decides whether the state or is present with error probability at most .then the mean entropy increase ( averaged over the interval ] for all .therefore obviously we have since .we conclude hence therefore we find that the average of over the interval ] the average of over the whole interval ] .let be a von - neumann measurement acting on the hilbert space .let be the energy bandwidth of the left component of the composed system ( given by the state with hamiltonian ) .let the measurement have the time resolution in the sense of theorem [ mainth ] .then we have the following lower bound on the average entropy increase of the composed system caused by the measurement : proof : clearly , the hamiltonian is irrelevant for the evolution of the state due to hence we can treat the evolution as if it was implemented by the hamiltonian , which can assumed to be bounded as in section [ main ] . . note that our arguments do not require that the time evolution takes place on a smaller time scale than .the reason is that we have argued that may formally be considered as projections of a measurement performed _ before _ the interaction was switched on . our results in section [ main ] and [ gen ]may be given an additional interpretation as information - disturbance trade - off relation .lemma [ kulllemma ] shows that the entropy increase caused by the measurement is the kullback - leibler distance between pre- and post - measurement state .information - disturbance trade - off relations are an important part of quantum information theory .unfortunately , theorem [ mainth ] is restricted to von - neumann measurements .theorem [ mainthgen ] extends the statement to general measurements as far as the disturbance of the total state of the `` clock '' and the ancilla system is considered .however , this is interesting from the thermodynamical point of view taken in this article but not in the setting of information - disturbance trade - off . in the lattersetting , the disturbance of the ancilla state that is used to implement a non - von - neumann measurement is not of interest .therfore we admit , that our bound on the state disturbance does only hold for von - neumann measurements .the entropy increase predicted by our results can never be greater than .this can be seen by the following argument . using the definitions of the proof of lemma [ distancegeschw ] we have \rho_t ) \leq 2\|h\|\,\|r(t)\| = \delta e \|r(t)\|\\&= & \delta e\,.\end{aligned}\ ] ] note also the connection to the heisenberg uncertainty principle where is the energy spread of the system , i.e. 
, the standard deviation of the energy values and the standard deviation of the time estimation based on any measurement .however , using the symbols instead of , we emphasize that these definitions do not agree with ours .note that the energy spread is _ at most _ the energy bandwidth .but can exceed our time resolution by an arbitrary large value .this shows the following example : assume the clock to be a two - level system with period .set with given the prior information that the time is in an interval ] . therefore our results suggest the following interpretation : every measurement that allows to estimate the time up to an error that is not far away from the heisenberg limit produces a non - negligible amount of entropy .the following example suggests that the necessary entropy generation may even be much above our lower bound if `` time measurements '' are used that are extremely close to the heisenberg limit .consider the wave function of a free schrdinger particle moving on the real line .a natural way to measure the time would be to measure its position .we may use , for instance , von - neumann measurements that correspond to a partition of the real line into intervals of length .assume one wants to improve the time accuracy by decreasing arbitrarily .the advantage for the time estimation is small if is smaller than the actual position uncertainty of the particle .however , if the state is pure it is easy to see that the generated entropy goes to infinity for .the following example shows that the entropy generation in a time measurement can really go to zero when goes to infinity .consider the hilbert space of square sumable functions over the set of integers .let with be the canonical basis vectors .let the hamiltonian be the diagonal operator consider the state by fourier transformation is isomorphic to the set of square integrable functions on the unit circle and the dynamical evolution is the cyclic shift on .the period of the dynamics is . in this picture , is a wave package that has its maximum at the angle with an uncertainty in the order .we assume to know that the true time is in the interval and construct a measurement that allows to decide for each whether the true time is or with confidence that is increasing with .we use a measurement with projections projecting on the four sectors of the unit circle .if the angle uncertainty is considerably smaller than this measurement can clearly distinguish between and with high confidence .hence , for large enough , we get as time resolution of the measurement .note that we have only non - negligible entropy generation at that moments where the main part of the wave packet crosses the border between the sectors , i.e. , when there are two sectors containing a non - negligible part of the wave function .if , for instance , each sector contains about one half of the probability , we generate the entropy one bit , i.e. , the entropy in natural units . the probability that the measurement is performed at a time in which non - negligible entropy generation takes place is about .the average entropy generation decreases therefore for increasing , i.e. 
, for increasing .let for be the probabilities for the possible outcomes when the wave packet with energy spread is measured at the time instant .there are four moments where the maximum of the wave packet is exactly on the border of the sectors .the probability to meet these times is zero .for all the other times there is one such that tends to zero with by standard fourier analysis arguments .note that a measurement at time produces the entropy we conclude by elementary analysis that tends to zero with for all times except from irrelevant values and therefore the average entropy generation tends to zero with .note that the decrease of the lower bound of theorem [ mainth ] is asymptotically a little bit faster since it is ( due to ) .now we consider the system consisting of the clock , its environment and the classical bit . for the momentwe ignore the fact that the bit is quantum and claim that the two logical states and correspond to an orthogonal decomposition of subspaces of the hilbert space of the composed system . if and are non - isomorphic as hilbert spaces we extent the smaller space such that they are isomorphic .then we can assume without loss of generality to be of the form without assuming that the bit is physically realized by a two - level quantum system .the fact that the bit is classical will now be described by the fact that it is subjected to decoherence , i.e. , that all superpositions between and are destroyed and changed to mixtures on a time scale that is not larger than the switching time .decoherence keeps the diagonal values of the density matrix whereas its non - diagonal entries decay .if the process is a uniform time evolution , i.e. , given by a semi - group dynamics , decoherence of the two - level system is given by an exponential decay of both non - diagonal entries : where are the coefficients of the density matrix .note that it is assumed that the effect of the environment does not cause any bit - flips in the system but only destroys coherence .the parameter defines the decoherence rate .let and be the projections onto the states and , respectively .the decoherence process can be simulated by measurement processes that are performed at randomly chosen time instants .this analogy is explicitly given as follows : let be the effect of the measurement that distinguishes the two logical states .then we have the second expression provides the following intuitive approximation of the process : in each small time interval of length a measurement is performed with probability .let be the extension of to the total system .we assume that the dynamical evolution of the total system is generated by + ( g - id)\,,\ ] ] i.e. , the decoherence of the bit is the only contact of the system to its environment . define the switching time as the length of the time interval ] .since entropy is convex the entropy generated by the switching process is at least times the entropy that is generated if a measurement has occurred during the switching process . 
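A hedged reconstruction of the dephasing dynamics described in this passage (the displayed formulas were lost; the decoherence time \tau_{\mathrm{dec}}, the projections P_0, P_1 and the Lindblad-type generator below are assumed notation, not necessarily the authors') is
\[
\rho(t)=\begin{pmatrix}\rho_{00} & \rho_{01}\,e^{-t/\tau_{\mathrm{dec}}}\\ \rho_{10}\,e^{-t/\tau_{\mathrm{dec}}} & \rho_{11}\end{pmatrix},
\qquad
G(\rho)=P_0\rho P_0+P_1\rho P_1,
\qquad
\dot\rho=-i[H,\rho]+\frac{1}{\tau_{\mathrm{dec}}}\bigl(G-\mathrm{id}\bigr)(\rho).
\]
Since G-\mathrm{id} annihilates the diagonal entries and maps each off-diagonal entry to its negative, exponentiating this generator reproduces exactly the exponential decay of the coherences quoted above; equivalently, neglecting the Hamiltonian term, applying the measurement G at a single random time that is exponentially distributed with rate 1/\tau_{\mathrm{dec}} yields the same averaged evolution, which is the intuitive picture of "a measurement in each small time interval with probability dt/\tau_{\mathrm{dec}}" invoked in the text.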
also by convexity arguments ,we conclude that the entropy generated by the process `` perform a measurement at a randomly chosen ( unknown ) time instant in ] .let be the time evolution of the total system implemented by a possibly unbounded hamiltonian .the entropy generated by `` measurements '' performed on the state during this period due to the decoherence is the same as produced by performed on the state .for the family of states } ] .let be the particle s density matrix and the state translated by .then we conclude that every measurement that is suitable to distinguish between and in the sense of theorem [ mainth ] produces at least the entropy .note that the measurement is not necessarily a position measurement , it has not even to be compatible with the position operator . in this sense, our results may be interpreted as a kind of `` generalized uncertainty relation '' .another very natural application is to consider the group of rotations around a specific axis .consider a spin - k/2 particle and the group of rotations on its hilbert space , where is the operator of its angular momentum in -direction .we have and conclude the following : each measurement that distinguishes between and with error probability at most produces at least the entropy .our bound on the entropy that is generated when information about the actual time is extracted from quantum system holds only for hamiltonian time evolution .a simple counterexample in section [ in ] has shown that time readout without state disturbance is possible for some dissipative semi - group dynamics .this leads to an interesting question : in physical systems , each dissipative semi - group dynamics that is induced by weak coupling to a reservoir in thermal equilibrium is unavoidably accompanied by some loss of free energy .this shows that the loss of free energy caused by the time measurement can only be avoided by systems that loose energy during its autonomous evolution. it would be desirable to know whether there is a general lower bound ( including dissipative dynamics ) on the total amount of free energy that is lost as soon as timing information is converted to classical information .thanks to thomas decker for helpful comments . this work has been supported by grants of the dfg project `` komplexitt und energie '' of the `` schwerpunktprogramm verlustmarme informationsverarbeitung '' be 887/12 . | we consider hamiltonian quantum systems with energy bandwidth and show that each measurement that determines the time up to an error generates at least the entropy . our result describes quantitatively to what extent all timing information is quantum information in systems with limited energy . it provides a lower bound on the dissipated energy when timing information of microscopic systems is converted to classical information . this is relevant for low power computation since it shows the amount of heat generated whenever a band limited signal controls a classical bit switch . our result provides a general bound on the information - disturbance trade - off for von - neumann measurements that distinguish states on the orbits of continuous unitary one - parameter groups with bounded spectrum . in contrast , information gain without disturbance is possible for some completely positive semi - groups . this shows that readout of timing information can be possible without entropy generation if the autonomous dynamical evolution of the `` clock '' is dissipative itself . 2 |
we study a reaction - diffusion equation with a nonlocal reaction term : this problem , introduced by bnichou , calvez , meunier , and voituriez , models a population structured by a space variable and a motility trait .our analysis is focused on the case where the trait space is a bounded subset of .the parameters and are positive and represent , respectively , the rate of mutation and the net reproduction rate in the absence of competition .the problem ( [ n ] ) is of fisher - kpp type . the classical fisher - kpp equation, describes the growth and spread of a population structured by a space variable .the diffusion coefficient is a positive constant .the behavior of solutions to ( [ kpp ] ) has been widely studied , starting from its introduction in . the most important difference between ([ n ] ) and the fisher - kpp equation is that the reaction term in ( [ n ] ) is nonlocal in the trait variable .this is because competition for resources , which is represented by the reaction term , occurs between individuals of _ all _ traits that are present in a certain location .another key feature of ( [ n ] ) is that the trait affects how fast an individual moves this is why the coefficient of the spacial diffusion in ( [ n ] ) depends on .in addition , the trait is subject to mutation , which is modeled by the diffusion term in .thus , ( [ n ] ) describes the interaction between dispersion of a population and the evolution of the motility trait .we further discuss the biological motivation for ( [ n ] ) and review the relevant literature in more detail later on in the introduction . throughout our paper , we assume that the trait space is a bounded interval where and are positive constants .we study classical solutions of ( [ n ] ) with initial condition that is non - negative and regular enough . " we state these assumptions precisely as ( a[asump c2 ] ) , ( a[asm n0 ] ) and ( a[asm theta ] ) in section [ sec : asm ] .a significant challenge for us is that ( [ n ] ) does not enjoy the maximum principle .this is due to the presence of the nonlocal reaction term .nevertheless , we are able to establish a global upper bound for solutions of ( [ n ] ) .we prove : [ sup bd intro ] suppose is nonnegative , twice differentiable on and satisfies ( [ n ] ) in the classical sense , with initial condition that satisfies ( a[asm n0 ] ) .there exists a constant such that theorem [ sup bd intro ] is a key element in the proof of our second main result .previous work on the fisher - kpp equation and formal computations concerning ( [ n ] ) suggest that solutions to ( [ n ] ) should converge to a travelling front in and . in order to study the motion of the front, we perform the rescaling .this rescaling leads us to consider solutions of the -dependent problem , ( we point out that ( [ nep ] ) is _ not _ the rescaled version of the -independent problem ( [ n ] ) , as we consider initial data , instead of , in ( [ nep ] ) .please see remark [ remark : relationship ] for more about this . 
)we study the limit of the as .we find that there exists a set on which the sequence converges locally uniformly to zero , and another set on which a certain limit of stays strictly positive .these sets are determined by two viscosity solutions of the hamilton - jacobi equation , the function arises from the eigenvalue problem ( [ spectral ] ) and is determined by , , , and ( see proposition [ prop : spectral ] ) .one may view the function as encoding the effect of the motility trait on the limiting behavior of the .our main result , theorem ( [ result on n ] ) , says that the hamilton - jacobi equation ( [ hj ] ) describes the motion of an interface which separates areas with and without individuals .viscosity solutions of ( [ hj ] ) with infinite initial data play a key role in our analysis and we provide a short appendix where we discuss the relevant known results . for the purposes of the introduction, we state the following lemma : [ lem intro ] for any , there exists a unique continuous function that is a viscosity solution of ( [ hj ] ) in and satisfies the initial condition in addition , we have for all and . we are interested in for two sets determined by the initial data .we define these two sets , and , by we see that belongs to if initially there is at least _ some _ individual living at , and belongs to if individuals with _ all _ traits are present at .our main result says that the limiting behavior of the is determined by and : [ result on n ] assume ( a[asump c2 ] ) , ( a[asm n0 ] ) and ( a[asm theta ] ) .let and be the functions given by lemma [ lem intro ] .then , and let us remark on a special case of theorem [ result on n ] .suppose the initial data is such that the two sets and are equal ( this occurs if , for example , is independent of ) .in this case , and so , which means that theorem [ result on n ] gives information about the limiting behavior of _ almost everywhere _ on , for all times .we present the following corollary : [ corc*informal ] assume ( a[asump c2 ] ) , ( a[asm n0 ] ) and ( a[asm theta ] ) .let us also suppose that each of and is a bounded interval. there exists a positive constant , which depends only on , , and , such that * if , then for all ; and , * if , then .the proof of corollary [ corc*informal ] is in subsection [ subsect : pf of cors ] .it uses the work of majda and souganidis concerning a class of equations of the form ( [ hj ] ) but with more general hamiltonians .[ remarkbc ] corollary [ corc*informal ] directly connects our result to the main result of bouin and calvez .indeed , theorem 3 of says that there exists a travelling wave solution of ( [ n ] ) of speed , for the same as in corollary [ corc*informal ] . while it is not known whether solutions of ( [ n ] ) converge to a travelling wave , corollary [ corc*informal ] is a result in this direction it says that , in the limit as , the regions where the is positive and zero travel with speed .the biological question is , as time goes on , which territory will be occupied by the species and which will be left empty ? to answer this question , it is enough to determine where the functions and are zero . in fact , corollary [ corc*informal ] gives information about the limit of the simply in terms of the sets , , and a constant .indeed , we see that at time , if we stand a point that is far " from , then there are no individuals at . 
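For orientation, recall the classical benchmark that the discussion above is implicitly comparing against: for the Fisher-KPP equation with constant diffusion D and growth rate r, the Aronson-Weinberger spreading speed is
\[
c_{\mathrm{KPP}}(D)=2\sqrt{rD}.
\]
Read together with Corollary [cor:onn] stated just below and with the estimate (boundsonc*) proved later, the constant c^{*} of Corollary [corc*informal] is expected to satisfy (a hedged paraphrase, since the displayed bounds were lost in extraction)
\[
2\sqrt{r\,\theta_{\min}}\;\le\;c^{*}\;\le\;2\sqrt{r\,\theta_{\max}},
\]
that is, the structured population invades no slower than a KPP population whose motility is frozen at \theta_{\min} and no faster than one whose motility is frozen at \theta_{\max}.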
on the other hand , if we stand at a point that is pretty close " to , then there are some individuals living near .we formulate another corollary : [ cor : onn ] assume ( a[asump c2 ] ) , ( a[asm n0 ] ) , ( a[asm theta ] ) and that and are bounded intervals . then : * if , then for all ; and , * if , then .we remind the reader of the fact , due to aronson and weinberger , that is the asymptotic speed of propagation of fronts for ( [ kpp ] ) .thus , a consequence of corollary [ cor : onn ] is that , in the limit , the population we re considering spreads slower than one with constant motility and faster than one with constant motility .we give the proof of corollary [ cor : onn ] in subsection [ subsect : pf of cors ] .in addition , please see remark [ rem : interpret ] for further comments on the biological implications of our results .biologists are interested in the interplay between traits present in a species and how the species interacts with its environment in other words , between evolution and ecology .it has been observed , for example in butterflies in britain , that an expansion of the territory that a species occupies may coincide with changes in a certain trait the butterflies that spread to new territory were able to lay eggs on a larger variety of plants than the butterflies in previous generations .the phenotypical trait in this case is related to adaptation to a fragmented habitat .some biologists have focused specifically on the interaction between ecology and traits that affect motility .phillips et al recently discovered a species of cane toads whose territory has , over the past 70 years , spread with a speed that _ increases _ in time .this is very interesting because this is contrary to what is predicted by the fisher - kpp equation and has previously been observed empirically .spacial sorting was also observed the toads that arrive first in the new areas have longer legs than those in the areas that have been occupied for a long time .in addition , it was discovered that toads with longer legs are able to travel further than toads with shorter legs .it is hypothesised that the presence of this trait length of legs is responsible for both the front acceleration and the spacial sorting .similar phenomena were observed in crickets in britain over a shorter time period . 
in that case , the motility trait was wingspan .the cases we describe demonstrate the need to understand the influence of a trait in particular , a motility trait on the dynamics of a population .the fisher - kpp equation has been extensively studied , and we refer the reader to for an introduction .hamilton - jacobi equations similar to ( [ hj ] ) are known to arise in the analysis of the long - time and long - range behavior of ( [ kpp ] ) , other reaction - diffusion pde , and systems of such equations see , for example , friedlin , evans and souganidis and barles , evans and souganidis and fleming and souganidis .the methods of are a key part of our analysis of ( [ nep ] ) .as we previously mentioned , ( [ kpp ] ) describes populations structured by space alone , while there is a need to study the interaction of dispersion and phenotypical traits ( in particular , motility traits ) .most models of populations structured by space and trait either consider a trait that does not affect motility or do not consider the effect of mutations .champagnat and mlard start with an individual - based model of such a population and derive a pde that describes its dynamics .in the case that the trait affects only the growth rate and not the motility , alfaro , coville and raoul study this pde , which is a reaction - diffusion equation with constant diffusion coefficient : the population modeled by ( [ acr ] ) has a preferred trait that varies in space .berestycki , jin and silvestre analyze an equation similar to ( [ acr ] ) , but with a different kernel and growth term that represent the existence of a trait that is favorable for all individuals .the aims and methods of are quite different from those in this paper .the main result of is the existence of traveling wave solutions of ( [ acr ] ) for speeds above a critical threshold . in , the authors establish the existence and uniqueness of travelling wave solutions and prove an asymptotic speed of propagation result for the equation that they consider .we also mention that a local version of ( [ acr ] ) was investigated by berestycki and chapuisat .desvillettes , ferriere and prvost and arnold , desvillettes and prvost study a model in which the dispersal rate does depend on the trait and the trait is subject to mutation , but the mutations are represented by a nonlocal linear term , not a diffusive term .there has also been analysis of traveling waves and steady states for equations of the form where the reaction term is the convolution of with some kernel .we refer the reader to berestycki , nadin , perthame and ryzhik , hamel and ryzhik , fang and zhao , alfaro and coville and the references therein .an important difference between ( [ kernel ] ) and ( [ n ] ) is that the reaction term of ( [ n ] ) is local in the space variable and nonlocal in the trait variable , while the reaction term in ( [ kernel ] ) is fully nonlocal .the long time behavior of solutions to ( [ kernel ] ) is studied in .in addition , ( * ? ? ? 
* theorem 1.2 ) establishes a supremum bound for solutions of ( [ kernel ] ) .bouin and mirrahimi analyze the reaction - diffusion equation with neumann conditions on on the boundary of .the main difference between ( [ eqnbm ] ) and ( [ n ] ) is that the coefficient of spacial diffusion in ( [ eqnbm ] ) is constant , which means that ( [ eqnbm ] ) models a population where the trait does not affect motility .the methods we use here are similar to those of ( and , in turn , both ours and those of are similar to those used in to study ( [ kpp ] ) ) .however , in general it is easier to obtain certain bounds for solutions of ( [ eqnbm ] ) than for solutions of ( [ n ] ) .for example , because the coefficient of in ( [ eqnbm ] ) is constant , integrating ( [ eqnbm ] ) in implies that is a subsolution of a local equation in and that enjoys the maximum principle .this immediately implies that is globally bounded ( * ? ? ?* lemma 2 ) .this strategy does not work for ( [ n ] ) .indeed , a serious challenge in studying ( [ n ] ) , as opposed to ( [ kpp ] ) or nonlocal reaction diffusion equations with constant diffusion coefficient such as ( [ eqnbm ] ) , is obtaining a global supremum bound for solutions of ( [ n ] ) .another challenge that arises in our situation but not in is in establishing certain gradient estimates in ( see remark [ rem : gradbdbm ] ) . in addition, we compare our main result , theorem [ result on n ] , with that of in remark [ remrho ] .let us discuss the literature that directly concerns ( [ n ] ) and ( [ nep ] ) .the problem ( [ n ] ) was introduced in .the rescaling leading to ( [ nep ] ) was suggested by bouin , calvez , meunier , mirrahimi , perthame , raoul , and voituriez in .in addition , formal results about the asymptotic behavior of solutions to ( [ n ] ) were obtained in . in particular , part ( i ) of proposition [ main result ] of our paper was predicted in ( * ? ? ?* section 2 ) .( we briefly remark on the term motility . "it was used heavily in , which is where we first learned of the problem ( [ n ] ) . however , one of the referees of our paper pointed out that this term applies mainly to unicellular organisms . )bouin and calvez also study ( [ n ] ) .they prove that there exist traveling wave solutions to ( [ n ] ) but do not analyze whether solutions converge to a traveling wave .in fact , to our knowledge , there are no previous rigorous results about the asymptotic behavior of solutions of ( [ n ] ) or the limiting behavior of solutions to ( [ nep ] ) .the main difficulty is the lack of comparison principle for ( [ n ] ). please see corollary [ corc*informal ] , remark [ remarkbc ] , as well as the fourth point below , for further discussion of the connections between the results of our paper and those of .* to the best of our knowledge , theorem [ sup bd intro ] is the first global supremum bound for a fisher - kpp type equation with a nonlocal reaction term and non - constant diffusion . *theorem [ result on n ] completes the program that was proposed in for analyzing the asymptotic behavior of the model ( [ n ] ) in the case where the trait space is bounded .* we view our main result , theorem [ result on n ] , as evidence that the presence of a motility trait does affect the limiting behavior of populations .* corollary [ corc*informal ] provides a direct connection between our work and the main result of . indeed , ( * ? ? 
?* theorem 3 ) states that there exist travelling wave solutions to ( [ n ] ) of a certain speed , while corollary [ corc*informal ] shows that this exact speed characterizes the limiting behavior of the .* we hope that our work is a step towards analyzing ( [ n ] ) in the case where the trait space is unbounded .it is in this case that the phenomena of accelerating fronts is predicted to occur .the proof of theorem [ sup bd intro ] is quite involved .the difficulty comes from the combination of the nonlocal reaction term and non - constant diffusion .our proof of theorem [ sup bd intro ] uses regularity estimates for solutions of elliptic pde , a heat kernel estimate , and an averaging technique similar to that of ( * ? ? ?* theorem 1.2 ) .we believe this combination of methods is new and may be useful in other contexts .we include a detailed outline in subsection [ subsec : outline ] .to analyze the limit of the , we preform the transformation such a transformation is used in .we prove locally uniform estimates on : [ bd on utheta ] assume ( a[asump c2 ] ) , ( a[asm n0 ] ) and ( a[asm theta ] ) and let be given by ( [ def : up ] ) .suppose is compactly contained in .there exists a constant that depends on , , , and such that for all and for all such that and , we have and we define the half - relaxed limits and of by , the supremum estimates of proposition [ bd on utheta ] imply that and are finite everywhere on .moreover , the gradient estimate of proposition [ bd on utheta ] implies becomes independent of as approaches zero .thus , it is natural that and should be independent of .there is also a connection to homogenization theory we can think of as the fast " variable , which disappears in the limit .it is the hamilton - jacobi equation ( [ hj ] ) , which arises in the limit , that captures the effect of the fast " variable .we use a perturbed test function argument ( evans ) and techniques similar to the proofs of ( * ? ? ?* theorem 1.1 ) , ( * ? ? ?* propositions 3.1 and 3.2 ) , and ( * ? ? ?* proposition 1 ) to establish : [ main result ] assume ( a[asump c2 ] ) , ( a[asm n0 ] ) and ( a[asm theta ] ) and let and be given by ( [ halfrelaxed ] ) .then : is a viscosity subsolution and is a viscosity supersolution of ( [ hj ] ) in ; and we have and part ( i ) of proposition [ main result ] was predicted via formal arguments in ( * ? ? ?* section 2 ) .theorem [ result on n ] follows easily from proposition [ main result ] by arguments similar to those in the proofs of ( * ? ? ?* theorem 1.1 ) and ( * ? ? ? * theorem 1 ) .[ remrho ] an interesting question is whether theorem [ result on n ] can be refined to obtain better information about the limit of the in the interior of the set .for instance , is bounded from below in this set ?when the diffusion coefficient is constant , which is the situation studied in , the answer is yes . indeed , ( * ? ? ?* theorem 1 ) provides a lower bound on , where is a rescaling of the solution of ( [ eqnbm ] ) .this lower bound of ( * ? ? ?* theorem 1 ) is obtained using an argument that relies on the diffusion coefficient of being constant , and thus does not work in our case . in the next subsection, we state our assumptions , give notation , and provide the definition of .the rather lengthy section [ sec : sup bd ] is devoted to the proof of theorem [ sup bd intro ] .this section is self - contained . 
in section [ sec : bdutheta ]we prove proposition [ bd on utheta ] .the proof of proposition [ main result ] is in section [ sec : limitsuep ] .the proof of theorem [ result on n ] is given in section [ sec : result on n ] , and the proofs of corollaries [ corc*informal ] and [ cor : onn ] are in subsection [ subsect : pf of cors ] .we also provide an appendix with a discussion of results on existence , uniqueness , and comparison for hamilton - jacobi equations with infinite initial data .we have organized our paper so that a reader who is interested mainly in our proof of theorem [ sup bd intro ] may only read section [ sec : sup bd ] . on the other hand ,a reader who is interested in our results about the limit of the , and not in the proofs of the supremum bound on and the estimates on , may skip ahead to sections [ sec : limitsuep ] and [ sec : result on n ] after finishing the introduction .our results hold under the following assumptions : 1 .[ asump c2 ] is a non - negative classical solution of ( [ nep ] ) .[ asm n0 ] for some , where ] , where } ] .however , this example does not satisfy the regularity assumption ( a[asm n0 ] ) .we leave for future work the possibility of establishing a supremum bound on that does not require such regular initial data .let us also explain why we consider the problem ( [ nep ] ) with initial data independent of .a key element of our study is the transformation .we know that , but it is not necessarily true that .it is therefore possible that , , or take on the value " . in order for our analysis to make sense ,we need to eliminate this possibility for times . to do this , we can make one of two assumptions regarding the initial data .one option is to assume that holds everywhere ( this is essentially the assumption made in ( * ? ? ?* line ( 1.5 ) ) ) .another option is to assume that the initial data does not depend on .this is the assumption made in , as well as in this paper .we find that is bounded from below on compact sets ( see proposition [ bd on utheta ] ) .such a bound is proven by a barrier argument similar to those in , but this argument does not work when the initial data depends on .we will slightly abuse notation in the following way .if is a subset of , then we will use to denote the set of such that and .we record this as : next we state the spectral problem ( [ spectral ] ) of ( * ? ? ?* proposition 5 ) , which describes the speed and allows us to define the hamiltonian .[ prop : spectralbc ] for all , there exists a unique solution of the spectral problem the map is continuous for , and satisfies , for all , in addition , the function achieves its minimum , which we denote , at some . in the next proposition, we define the hamiltonian and list the properties of that we will use .[ prop : spectral ] we define the hamiltonian by for all , there exists a unique solution of the spectral problem moreover , we have that the map is continuous , convex , and satisfies , for all , we have , for all , where is as in proposition [ prop : spectralbc ] .finally , we have first , let us verify that is continuous .since the bound ( [ bdoncbc ] ) holds for all , we have and in particular this limit is finite , so that is continuous on all of .we also note that ( [ bdonc ] ) does indeed hold for all .we remark that is the even reflection of the map . for , we define to be , where the latter is the solution of ( [ spectralbc ] ) corresponding to .if , we see that solves ( [ spectral ] ) .hence , ( [ spectral ] ) holds for all . 
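For orientation, here is a hedged reconstruction of the objects whose displayed formulas were lost above: the model (n), the rescaled problem (nep), the logarithmic transformation, and the spectral problem of Propositions [prop:spectralbc] and [prop:spectral]. It assumes the cane-toads form of the equation studied by Bouin and Calvez, with Neumann conditions in the trait variable; the symbols u_\varepsilon, Q_p, H(p) and the numerical parameter values further down are assumptions made for illustration, not quotations of the paper.
\[
\partial_t n=\theta\,\partial_{xx}n+\alpha\,\partial_{\theta\theta}n+r\,n\,(1-\rho),
\qquad
\rho(x,t)=\int_{\Theta}n(x,\theta,t)\,d\theta,
\qquad
\Theta=[\theta_{\min},\theta_{\max}],
\]
and, after the hyperbolic rescaling n_\varepsilon(x,\theta,t)=n(x/\varepsilon,\theta,t/\varepsilon) and the transformation u_\varepsilon=\varepsilon\ln n_\varepsilon,
\[
\varepsilon\,\partial_t n_\varepsilon=\varepsilon^{2}\theta\,\partial_{xx}n_\varepsilon+\alpha\,\partial_{\theta\theta}n_\varepsilon+r\,n_\varepsilon(1-\rho_\varepsilon),
\qquad
\partial_t u_\varepsilon=\theta\,|\partial_x u_\varepsilon|^{2}+\varepsilon\theta\,\partial_{xx}u_\varepsilon
+\frac{\alpha}{\varepsilon^{2}}\,|\partial_\theta u_\varepsilon|^{2}+\frac{\alpha}{\varepsilon}\,\partial_{\theta\theta}u_\varepsilon+r\,(1-\rho_\varepsilon).
\]
The spectral problem defining the Hamiltonian should then read, for p\in\mathbb{R},
\[
\alpha\,Q_p''(\theta)+\bigl(\theta p^{2}+r\bigr)Q_p(\theta)=H(p)\,Q_p(\theta)\ \text{in }\Theta,
\qquad
Q_p'(\theta_{\min})=Q_p'(\theta_{\max})=0,
\qquad Q_p>0,
\]
with
\[
\theta_{\min}p^{2}+r\;\le\;H(p)\;\le\;\theta_{\max}p^{2}+r
\qquad\text{and}\qquad
c^{*}=\min_{p>0}\frac{H(p)}{p}\in\bigl[\,2\sqrt{r\theta_{\min}},\,2\sqrt{r\theta_{\max}}\,\bigr].
\]
A small numerical sketch of this reconstruction (hypothetical parameter values, not taken from the paper) discretizes the Neumann eigenvalue problem and evaluates c^{*}:

```python
import numpy as np

# Hypothetical parameters: mutation rate, growth rate, trait window.
alpha, r = 1.0, 1.0
theta_min, theta_max = 1.0, 4.0

M = 200                                   # grid points in the trait variable
theta = np.linspace(theta_min, theta_max, M)
h = theta[1] - theta[0]

# Finite-difference Laplacian in theta with homogeneous Neumann boundary conditions.
D2 = np.diag(-2.0 * np.ones(M)) + np.diag(np.ones(M - 1), 1) + np.diag(np.ones(M - 1), -1)
D2[0, 1] = 2.0          # ghost-point reflection at theta_min
D2[-1, -2] = 2.0        # ghost-point reflection at theta_max
D2 /= h ** 2

def H(p):
    """Principal eigenvalue of  alpha*Q'' + (theta*p^2 + r)*Q = H(p)*Q  on Theta."""
    A = alpha * D2 + np.diag(theta * p ** 2 + r)
    return np.linalg.eigvals(A).real.max()

ps = np.linspace(0.05, 5.0, 400)
c_star = min(H(p) / p for p in ps)

print("c* approx:", c_star)
print("KPP benchmarks:", 2 * np.sqrt(r * theta_min), 2 * np.sqrt(r * theta_max))
# The computed speed should fall between the two Fisher-KPP benchmarks,
# consistent with the bound (boundsonc*) on c*.
```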
in addition , according to the proof of proposition [ spectralbc ] , the map is given by , where is the eigenvalue of a certain eigenvalue problem with parameter . from this characterization , one can check that is convex in .hence is also convex in .the even reflection of a convex function is not in general convex , due to a possible issue at 0 .however , the fact that satisfies ( [ bdonc ] ) allows us to deduce that is indeed convex .finally , we verify ( [ hc * ] ) .according to the definition of we have , according to proposition [ prop : spectralbc ] , achieves its minimum , , at some .thus , the infimum in the previous line is achieved at .hence we have established ( [ hc * ] ) . to verify ( [ boundsonc * ] ) , we use ( [ bdoncbc ] ) to find, hence the minimum value of must be between the minimum values of the functions on the left - hand side and the right - hand side .these are exactly and , respectively .this establishes ( [ boundsonc * ] ) and completes the proof of the proposition .[ rem : interpret ] we give a further ( slightly informal ) interpretation of theorem [ result on n ] .according to ( * ? ? ?* theorem 1.1 ) , the behavior of solutions to ( [ kpp ] ) is characterized by a solution of the hamilton - jacobi equation ( we remark that this is the negative of the equation that appears in ; we write it this way to be consistent with the signs employed in the rest of this paper . )let us compare ( [ forkpp ] ) and ( [ hj ] ) .we see that both are of the form , but for different hamiltonians .it is this difference that captures the effect of the trait . indeed , according to proposition [ prop : spectral ] , the term satisfies ( [ bdonc ] ) .thus , we see that the term in ( [ hj ] ) is like a quadratic " , but of different size than the quadratic in ( [ forkpp ] ) . in addition , according to ( [ candgamma ] ) and the definition of , we have where is _ positive _ for all .hence the hamiltonian in ( [ hj ] ) is never exactly a quadratic , and hence must be different from that of ( [ forkpp ] ) .since ( [ hj ] ) and ( [ forkpp ] ) characterize the behavior of populations with and without a motility trait , respectively , we interpret our results as evidence that the presence of the trait does affect the asymptotic behavior of a population .this section is devoted to the proof of : [ prop : supbd ] suppose is nonnegative , twice differentiable on and satisfies ( [ n ] ) in the classical sense .assume that satisfies ( a[asm theta ] ) and the initial condition satisfies ( a[asm n0 ] ) .there exists a constant that depends only on , , , and such that we briefly explain notation for norms and seminorms , which we will be using only in this section . for denote : {\eta , u } } } = \sup_{(x , t ) , ( y , s)\in u}\frac{|u(x , t)-u(y , s)|}{(|x - y|+|s - t|^{1/2})^{\eta } } , \ \ \{ \ensuremath{|u|_{\eta , u}}}= { \ensuremath{||u||_{l^{\infty}(u)}}}+{\ensuremath{[u]_{\eta , u}}},\ ] ] and we present the supremum bound for solutions of ( [ nep ] ) : [ supbd : cor ] assume ( a[asump c2 ] ) , ( a[asm n0 ] ) and ( a[asm theta ] ) .there exists a constant that depends only on , , , , , and such that for all , let us explain how corollary [ supbd : cor ] follows from theorem [ prop : supbd ] .let us fix some and suppose satisfies ( [ nep ] ) in the classical sense and satisfies ( a[asm n0 ] ) .let us define . 
then satisfies ( [ n ] ) with initial data .we have that satisfies ( a[asm n0 ] ) .therefore , according to theorem [ prop : supbd ] , we have that the estimate ( [ supbdconclusion ] ) holds , with instead of on the right - hand side .since we have and , we obtain the conclusion of corollary [ supbd : cor ] .we introduce the following piecewise function ] .there exists a positive constant that depends on , , and so that } } } \leq\tilde { c } ( m^{2+\eta/2}+{\ensuremath{|n_0(x , a(\theta))|_{2+\eta , { \ensuremath{\mathbb{r}}}\times{\ensuremath{\mathbb{r}}}}}}).\ ] ] we apply proposition [ classical ] with right - hand side , initial condition , and the matrix of diffusion coefficients being the diagonal matrix with entries and .the assumption ( a[asm n0 ] ) on says exactly that the assumption of proposition [ classical ] on the initial condition is satisfied . by proposition [ classical ], there exists a unique solution of and we have the estimate }}}\leq c({\ensuremath{|r \bar{n}(1-\rho)|_{\eta , { \ensuremath{\mathbb{r}}}^2\times [ 0,t ] } } } + { \ensuremath{|{\ensuremath{\tilde{u}}}|_{\eta , { \ensuremath{\mathbb{r}}}^2\times [ 0,t]}}}+{\ensuremath{|{\ensuremath{\tilde{u}}}_0|_{2+\eta , { \ensuremath{\mathbb{r}}}^2}}}),\ ] ] where depends on , and .let us remark that the maps and satisfy together with the fact that the solution is _ unique _ , this implies has this symmetry as well .thus , for all and for all .therefore , with satisfies since also satisfies this equation , lemma [ lem : unique ] implies for all , all , and all ] . for the remainder of the proof of this proposition , we drop writing the domains in the semi - norms and norms ( it is always ] and .we use this to estimate the right - hand side of the previous line and find {\eta}}}+m+m^2.\end{aligned}\ ] ] similarly we estimate the second term on the right - hand side of ( [ firstneta ] ) : {\eta}}}+||{\ensuremath{\bar{n}}}||_{\infty}\leq { \ensuremath{[{\ensuremath{\bar{n}}}]_{\eta}}}+m.\ ] ] we use ( [ estsemifirstterm ] ) and the previous line to bound from above the first and second terms , respectively , on the right - hand side of ( [ firstneta ] ) and obtain , {\eta}}}+m+m^2)+{\ensuremath{[{\ensuremath{\bar{n}}}]_{\eta } } } + m+{\ensuremath{|n_0(x , a(\theta))|_{2+\eta } } } ) .\ ] ] we now use and obtain , {\eta } } } + m^2+{\ensuremath{|n_0(x , a(\theta))|_{2+\eta}}}).\ ] ] by interpolation estimates for the seminorms ] , we have } s(t)\leq e\sup_{x\in { \ensuremath{\mathbb{r}}},\theta\in \theta } n_0(x,\theta)\leq m_0<m,\ ] ] where the second inequality follows from the definition of . line ( [ bdbn ] ) implies .since is continuous and for , there exists a first time for which .so , we have } n = m \text { and } \sup_{{\ensuremath{\mathbb{r}}}\times\theta } n(\cdot , \cdot , t ) = m.\ ] ] we will now work with the extension defined in proposition [ deriv bd in terms of m ] . 
by the previous line , we have } { \ensuremath{\bar{n}}}=m \text { and } \sup_{{\ensuremath{\mathbb{r}}}\times{\ensuremath{\mathbb{r } } } } { \ensuremath{\bar{n}}}(\cdot , \cdot , t ) = m.\ ] ] we apply proposition [ deriv bd in terms of m ] to .part [ item : bneqn ] implies that satisfies equation ( [ eqnbn ] ) .part ( [ item : bdbn ] ) gives us the estimate } } } \leq \tilde{c } ( m^{2+\eta/2}+{\ensuremath{|n_0(x , a(\theta))|_{2+\eta , { \ensuremath{\mathbb{r}}}\times{\ensuremath{\mathbb{r}}}}}}).\ ] ] since we have , the second term on the right - hand side of the previous line is smaller than so we find , } } } \leq \tilde{c}(m^{2+\eta/2}+{\ensuremath{|n_0(x , a(\theta))|_{2+\eta , { \ensuremath{\mathbb{r}}}\times{\ensuremath{\mathbb{r}}}}}}m^{2+\eta/2}).\ ] ] we use our choice of in ( [ barc ] ) to bound the right - hand side from the previous line from above and obtain , } } } \leq \bar{c}m^{2+\eta/2}.\ ] ] let us take and define by * first step : * we will prove since satisfies ( [ n ] ) , we have that satisfies let us explain why we may assume , without loss of generality , that the supremum of on is achieved at some . since is periodic of period , and we are considering times in the bounded interval , we know that there exist , ] .hence , there exists a subsequence ( still denoted by ) and functions , , and such that , and converge locally uniformly to , and , respectively ; the derivatives of converge locally uniformly to those of ; and .moreover , satisfies on and we have , for all , all , and all , in other words , achieves its supremum on .we now drop the superscript . at the point where achieves its supremum , we have and we point out that does not appear in ( [ eqnv ] ) , the equation that satisfies .we will bound from above the corresponding term that does appear in ( [ eqnv ] ) .this term is : the inequality ( [ vxx ] ) implies that there exists with .in addition , let ] .we thus use the estimate ( [ seminm ] ) on the seminorm of and the fact that to estimate the right - hand side of the previous line from above and obtain since , we have . in addition , is lipschitz with lipschitz constant , so we have .we use these two inequalities to bound the right - hand side of the previous line from above and find the last inequality follows by an elementary calculus computation that relies on the fact that is contained in ] .the second step is to use this lower bound to build a barrier for on the remainder of ] , then , according to ( [ deftau ] ) , we have since and for all , a comparison principle argument similar to that of ( * ? ? ?* lemma 2.1 ) implies together with ( [ ugeqphi ] ) , the previous estimate implies that if , then there exists a constant that depends on , , , and such that for all and for all and all , we have according to corollary [ supbd : cor ] , the estimate holds for a constant depending only on , , and .the upper bound on follows since , by definition we have .the proof of the gradient bound proceeds via a bernstein argument in other words , we use that the derivatives of also satisfy certain pdes in order to obtain estimates on them .we are most interested in obtaining an estimate on the derivative in . to this end , let us denote and differentiate the equation ( [ eqn : uep ] ) for in to find that satisfies , we bring the reader s attention to the last term , which involves a _second _ derivative of in the space variable .this means that we can not ignore " the space variable and must try to estimate and at the same time . 
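As a point of orientation, the classical identity behind such Bernstein arguments, written here only in the simplest constant-coefficient setting (the proof below must additionally track the \theta-dependent coefficient and the anisotropic \varepsilon-scaling), is the following: if v solves \partial_t v-\Delta v=g, then w=|Dv|^{2} satisfies
\[
\partial_t w-\Delta w=2\,Dv\cdot Dg-2\,|D^{2}v|^{2}\ \le\ 2\,Dv\cdot Dg,
\]
so interior maxima of |Dv|^{2} are controlled through the maximum principle by g and its gradient; the cutoff \psi and the auxiliary function z introduced below serve to localize exactly this kind of computation.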
in particular , the essense of our strategy is to consider the pde satisfied not by but by , and obtain estimates on this quantity .our proof is similar to that of ( * ? ? ?* lemma 2.4 ) and ( * ? ? ?* lemma 2.1 ) , which use a bernstein argument to obtain gradient bounds for hamilton - jacobi equations with variable coefficients . however , our situation is more delicate because of the different scaling in the space and trait variable .[ rem : gradbdbm ] the term in ( [ eq : demo ] ) only arises because the diffusion coefficient is not constant in .let us compare our method to that of the proof of lemma 2 of .we recall that the authors of consider the pde ( [ eqnbm ] ) , which , after a rescaling and an exponential transformation , yields an equation similar to our equation ( [ eqn : uep ] ) for , but with constant diffusion coefficient .thus , they are able to carry out the strategy we first describe essentially , they differentiate their pde in and , because extraneous second derivative terms do not appear , they are able to use this equation to obtain the desired estimate .most of this subsection is devoted to the proof of : [ prop : gradbd ] fix .we denote by .suppose satisfies where the coefficients and are given by and is independent of and satisfies and given , there exists a constant that depends only on , , , , , , and such that and we remark that we only use the estimate ( [ eq : lemuy2 ] ) in the remainder of the paper .nevertheless , our proof automatically " yields estimate ( [ eq : lemuy1 ] ) as well , so we state it for the sake of completeness .the gradient bound of proposition [ bd on utheta ] follows from proposition [ prop : gradbd ] once we verify its hypotheses : we apply proposition [ prop : gradbd ] to with . according to the supremum bound of proposition [ bd on utheta ] , which we have just established ,we have that is bounded on , uniformly in . according to corollary [ supbd : cor ] we have that , and hence ,is uniformly bounded from above .in addition , is non - negative .hence the hypothesis ( [ linff ] ) is satisfied .now let us demonstrate that ( [ fyasump ] ) holds : since we have and , this amounts to establishing for some constant .let us define . since satisfies ( [ nep ] ) and satisfies ( a[asm n0 ] ) , we have that satisfies ( [ n ] ) with initial data that satisfies ( a[asm n0 ] ) .thus , according to proposition [ deriv bd in terms of m ] and theorem [ prop : supbd ] , is bounded in . therefore , there exists a constant so that , for all , integrating in yields the desired estimate for . for the proof of proposition [ prop : gradbd ] we will need the following elementary facts .we summarize them as a lemma and omit the proof .[ matrixlem ] for any diagonal positive definite matrix , any matrices , , and any , we have latexmath:[\[\label{matrixineqlem } in addition , if is symmetric then .let be a cutoff function supported on and identically inside .let us define and we record for future use the following facts : where we denote . throughout the remainder of the argument we will use to denote constants that may depend on , , , , , and and that may change from line to line .we also define the constant by we remark that is of the form , we define the function and we claim * why ( [ locmaxz ] ) implies the estimates ( [ eq : lemuy2 ] ) and ( [ eq : lemuy1 ] ) .* let us assume ( [ locmaxz ] ) holds , and we will establish ( [ eq : lemuy2 ] ) . 
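For the reader's convenience (the display in Lemma [matrixlem] did not survive extraction), the inequality being invoked is presumably the weighted Cauchy-Schwarz trace inequality: for a positive semidefinite matrix A, arbitrary matrices B and C of the same size, and any \delta>0,
\[
\operatorname{tr}(ABC)\ \le\ \frac{1}{2\delta}\operatorname{tr}\bigl(ABB^{T}\bigr)+\frac{\delta}{2}\operatorname{tr}\bigl(AC^{T}C\bigr),
\]
which follows from |\operatorname{tr}(ABC)|\le\|A^{1/2}B\|_{F}\,\|CA^{1/2}\|_{F} together with Young's inequality; when C is symmetric the last trace is \operatorname{tr}(AC^{2}). This appears to be the form applied below to split off the terms involving D^{2}u in the error estimates.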
* first step : * we will first prove the following upper bound on holds : to this end , we use ( [ defz ] ) and the fact that is the maximum of to obtain , next we consider two cases : the first is , and the second is .in the first case , we have , so we find .therefore , the estimate ( [ thzy ] ) says , since we have , this establishes the estimate ( [ thatmax ] ) in the case .now let us suppose , so that , according to ( [ locmaxz ] ) , we have we now use ( [ defz ] ) to express the right - hand side of ( [ thzy ] ) in terms of and : we use that and the estimate ( [ hy0t0 ] ) to bound the first term on the right - hand side of the previous line of the above by , and that to bound the last two terms .this yields ( [ thatmax ] ) .* second step : * next we will use ( [ thatmax ] ) to verify the following upper bound on the derivatives of in terms of and : to this end , we use the definition of and the fact that is identically inside to obtain , we use the estimate ( [ thatmax ] ) to bound the first term on the right - hand side of the previous line from above by and obtain the estimate ( [ eq : lemalmost ] ) . *third step : * there are two possible values for , and we will show that , together with the estimate ( [ eq : lemalmost ] ) that we just established , either one implies the desired estimate ( [ eq : lemuy2 ] ) . let us first suppose . upon substituting this on the right - hand side of ( [ eq : lemalmost ] ) we find , which yields the desired estimates ( [ eq : lemuy2 ] ) and ( [ eq : lemuy1 ] ) by multiplying both sides by .now let us suppose that takes on the other possible value , so that .we use that , the definition of , the line ( [ thatmax ] ) and the value of to obtain , for any , from this we deduce and hence we have ( and , since on , ( [ eq : lemuy1 ] ) holds ) .using this on the right - hand side of ( [ eq : lemalmost ] ) implies that ( [ eq : lemuy2 ] ) holds . *the proof of ( [ locmaxz ] ) .* let us suppose is the maximum of on and .there are two cases to consider :the first is that is in the interior of , and the second is that .let us tackle the first case , so that we have , we seek to establish as we have just shown , once we establish ( [ wanth ] ) the proof of the proposition will be complete . to this end , we compute , where is the sum of the left - over terms from and : .\ ] ] ( throughout , we use to denote the derivative in . )we use that satisfies ( [ eqnu ] ) to write the first term on the right - hand side of ( [ eqforz ] ) as where we have used ( [ linff ] ) to obtain the inequality .now let us look at the second term on the right - hand side of ( [ eqforz ] ) .we recognize that is almost " the derivative of , up to a term that involves a derivative of .in addition , rearranging the equation that solves implies .we find : multiplying by we obtain , so far ,our computations hold on all of .now we will specialize to and obtain an alternate expression for the first term on the right - hand side of the previous line .we recall the derivative of is zero at , so that , which , upon rearranging becomes , we substitute the right - hand side of the previous line for the first term of the right - hand side of ( [ secondterm ] ) to obtain , at , we take dot product with and use the previous line to find : where is the sum of the left - over terms : next , according to ( [ hpdu ] ) , we have .we use this on the right - hand side of ( [ secondtermuse ] ) and find , let us now consider ( [ eqforz ] ) evaluated at . 
according to ( [ atmaxz ] ) , the left - hand side of ( [ eqforz ] ) is non - negative. we use ( [ firstterm ] ) to estimate the first term on the right - hand side of ( [ eqforz ] ) , and we use ( [ secondtermagain ] ) for the second term , and find , we now claim that the sum of the leftover terms and is bounded : we point out that once the bound ( [ esterror ] ) is established , the proof of ( [ wanth ] ) , and hence of the proposition , will be complete . indeed , using ( [ esterror ] ) to estimate the right - hand side of ( [ handdph ] ) yields which , upon rearranging and dividing by yields ( [ wanth ] ) .* proof of bound ( [ esterror ] ) on error terms .* let us start with .we will prove , + \frac{\lambda}{4 } g(du , y).\ ] ] we use the expressions ( [ thpp ] ) for and to rewrite as - \frac{4\psi}{{\ensuremath{\varepsilon}}}\operatorname{tr}\left[a^2\cdot du\otimes d\psi d^2u \right ] -4\zeta\operatorname{tr}\left [ a w d^2u \right ] -\operatorname{tr}\left[a{\ensuremath{\tilde{g}}}_{yy}\right ] .\end{split}\ ] ] let us bound from above the third term in by applying the inequality of lemma [ matrixlem ] with , , and .we obtain : \leq \frac{8}{{\ensuremath{\varepsilon } } } \operatorname{tr}(a^2 ( du\otimes d\psi)(du\otimes d\psi)^t)+\frac{\psi^2}{2{\ensuremath{\varepsilon}}}\operatorname{tr}(a^2 d^2u d^2u).\ ] ] let us apply lemma [ matrixlem ] again , this time with , , and .we obtain the following upper bound for the fourth term in : \leq 8 \zeta \operatorname{tr}\left(ww^t\right)+\frac{\zeta}{2}\operatorname{tr}(a^2 d^2u d^2u ) = 8\zeta u_1 ^ 2 + \frac{\zeta}{2}\operatorname{tr}(a^2 d^2u d^2u),\ ] ] where we have also used the second statement in lemma [ matrixlem ] .we use ( [ estwithd2u ] ) and the previous line to bound from above the third and fourth terms , respectively , in on the right - hand side of ( [ traceinstar ] ) . notice that the terms involving in the previous line and in ( [ estwithd2u ] ) will be absorbed " by the second term in ( we are using that ) .we obtain : + \frac{8}{{\ensuremath{\varepsilon}}}\operatorname{tr}(a^2 ( du\otimes d\psi)(du\otimes d\psi)^t ) + 8\zeta u_{y_1}^2 -\operatorname{tr}\left[a{\ensuremath{\tilde{g}}}_{yy}(du , y)\right].\ ] ] the first term on the right - hand side of ( [ bdforrhstracealmost ] ) is simply , which is less than .in addition , we have .we use this to bound the right - hand side of ( [ bdforrhstracealmost ] ) from above and find , + \frac{8}{{\ensuremath{\varepsilon}}}\operatorname{tr}(a^2 ( du\otimes d\psi)^2 ) -\operatorname{tr}\left[a{\ensuremath{\tilde{g}}}_{yy}(du , y)\right].\ ] ] we will show now show that each of the last two terms on the right - hand side of the previous line is less than .once we show this , the estimate ( [ estonstar ] ) will be established . 
to this end, we use that is independent of , to compute multiplying by and using the definitions of and we obtain , let us estimate the last term of ( [ bdforrhstrace ] ) .we compute in terms of the derivatives of and and find , we multiply both sides of the previous line by and take trace .the first term gives zero .the second is simply .we summarize this as , thus we have proved ( [ estonstar ] ) .now for .we have since since , the first term in is simply for the second term in , we use that , the cauchy - schwarz inequality , the assumption ( [ fyasump ] ) , and the definitions of and to find , finally we will bound the last term in .we have , so the last term is simply where the first inequality follows by applying cauchy schwarz , the second from the definitions of and , and the third from our choice of .we therefore find , adding our upper bound ( [ estonstar ] ) on to the previous line we obtain , as desired .thus our analysis of the case that is in the interior of is complete ._ completing the proof of ( [ locmaxz ] ) : analysis of the second case ._ we now tackle the second case , and suppose .we do not have that ( [ atmaxz ] ) holds ; let us first determine the replacement " for ( [ atmaxz ] ) in this case .since the and coordinates of the maximum of are interior , we have , , and at .we now need to consider and . to this end , we use the definitions of and to compute , since in , we have on as well . hence , on , we find taking another derivative of in and using that on , we obtain putting everything together , we find that instead of ( [ atmaxz ] ) we have , we recall that we used ( [ atmaxz ] ) in two places .the first was to find that the left - hand side of ( [ eqforz ] ) is non - negative at , which allowed us to obtain the lower bound of for the left - hand side of ( [ handdph ] ) .we see that , since ( [ atmaxzsecond ] ) holds , we still have this lower bound . the second place where we used ( [ atmaxz ] ) was line ( [ dz0 ] ) , to say .now , we instead have .let us proceed as in the proof of the first case ( we will see that , because we will soon take dot product with a vector of the form , this extra term will not affect the argument ) . line ( [ dz0 ] ) no longer has on the left - hand side but becomes , we rearrange and find , the next step is to substitute the right - hand side of the previous line for the first term of the right - hand side of ( [ secondterm ] ) . doing this we obtain , at , we take dot product with .however , the key here is that is simpler than before : we use the definition of to compute , where the second inequality follows since , and on .hence , the term disappears upon taking dot product with , and therefore ( [ secondtermuse ] ) remains unchanged .thus the remainder of the argument goes through as in the first case , and the proof of proposition [ prop : gradbd ] is complete .this section is devoted to studying the half - relaxed limits and of the that we mentioned in the introduction . 
for the convenience of the reader , we recall their definitions here: and we also summarize the results of the previous sections .we have established corollary [ supbd : cor ] , which says that there exists a constant so that .we have also proved proposition [ bd on utheta ] , which says that if , then there exists a constant that depends on such that for and , we have and in particular , these results imply that and are finite everywhere on ( although they may be infinite at time , as we will demonstrate in the proof of proposition [ main result ] ) .in addition , we obtain that and are non - positive : indeed , since , we have for any . taking or as implies , for the reader s convenience , we recall what it means for to be a sub- or super- viscosity solution of the constrained hamilton - jacobi equation ( [ hj ] ) .we say is a sub- ( resp .super- ) viscosity solution of if , for any point and any smooth test function such that has a local maximum ( resp .minimum ) at , we have in subsection [ sec : pf main prop ] we prove that and are , respectively , a sub- and super- viscosity solution of the hamilton - jacobi equation ( [ hj ] ) .this is part part ( i ) of proposition [ main result ] .we use a perturbed test function argument and some techniques similar to the proofs of ( * ? ? ?* theorem 1.1 ) , ( * ? ? ?* propositions 3.1 and 3.2 ) , and ( * ? ? ?* proposition 1 ) .we will perturb our test functions by , where satisfies the spectral problem ( [ spectral ] ) for an appropriate . from first glancethis is slightly different from the perturbations in , say , , due to the presence of ; however , this is natural seeing as we have defined . in subsection[ sec : at time 0 ] we study the behavior of and at and establish part ( ii ) of proposition [ main result ] .we follow the strategy of ( * ? ? ?* propositions 3.1 , 3.2 ) . throughout this sectionwe employ the notational convention we mentioned in section [ notation ] : if , then we will use to denote , we formulate the following lemma , which is similar to ( * ? ? ?* ; * ? ? ?* lemma 6.1 ) .[ lem : u * ] let and be smooth functions .suppose ( resp . ) has a local minimum ( resp .maximum ) at .define .then there exists a subsequence such that 1 .[ itemlimit ] , 2 .[ itemmin ] has a local minimum ( resp .maximum ) at for some , and 3 .[ itemu ] ( resp . ) . we postpone the proof of the lemma until the end of subsection [ sec : pf main prop ] .we place one computation into a separate lemma .[ lem : for main prop ] suppose is a smooth function that satisfies , for some , we set and take to be the solution of the spectral problem ( [ spectral ] ) corresponding to this .we define there exist positive constants , such that for all , all and all we have we postpone the proof of the lemma and proceed with : let us fix a point such that . by proposition [ bd on utheta ] , is finite , so we denote .we aim to prove that is a supersolution of ( [ hj ] ) in the viscosity sense , so let us suppose has a local minimum at for some smooth function .we shall show we proceed by contradiction and assume that ( [ supersol ] ) does not hold .therefore , there exists such that satisfies ( [ v at x0t0 ] ) .we define the perturbed test function by ( [ def vep ] ) . according to lemma [ lem : for main prop ] ,there exist and such that satisfies ( [ eqn for vep ] ) on for all . 
by proposition[ bd on utheta ] , there exists some positive constants , , such that for all and for all , latexmath:[\[\label{utheta specific } let be the sequence of points given by lemma [ lem : u * ] .according to item ( [ itemu ] ) of lemma [ lem : u * ] , we have therefore , for all large enough , we find , together with the estimate ( [ utheta specific ] ) on , this implies that there exists such that for and for all , the previous estimate implies that is bounded from above , uniformly in .indeed , we use the definition of , the relationship and the previous line to obtain thus there exists such that for all , we have since has a local minimum at and we remark that ( [ thetaderivs ] ) holds both if is an interior point of and if is a boundary point of . indeed ,let us suppose that . by the definition of in ( [ def vep ] ) , we have , where satisfies ( [ spectral ] ) . since is positive and satisfies neumann boundary conditions , we have that is also zero on the boundary of . in particular, we find since has a local minimum at and the previous line holds , we also deduce that holds at , as desired . using ( [ txderivs ] ) ,( [ thetaderivs ] ) , and that satisfies ( [ eqn : uep ] ) , we obtain , at , according to lemma [ lem : for main prop ] , satisfies ( [ eqn for vep ] ) in . since , we have for large enough .hence , at the point , we have , subtracting the previous line from ( [ vepgeq ] ) yields , but this contradicts ( [ rho tiny ] ) .we have reached the desired contradiction and conclude that is indeed a supersolution of ( [ hj ] ) .the proof of lemma [ lem : for main prop ] is a simple computation .we include it for the sake of completeness .we observe so that we have , thus we have , for some small , and for all and all small enough , where the last equality follows since satisfies ( [ spectral ] ) .this proof is less involved than that for .according to ( [ sign ] ) we have .next , let us suppose that has a local maximum at .we proceed as in the proof for : for contradiction , we assume for some .we set and take to be the solution of the spectral problem ( [ spectral ] ) corresponding to this . as in the previous proof , we define by ( [ def vep ] ) and find that for some small , for all , and for all small enough , according to lemma [ lem : u * ] there exists a subsequence and points such that has a local maximum at , and .since satisfies ( [ eqn : uep ] ) , we obtain , at , ( the previous line holds even if is a boundary point of , by an argument similar to that in the previous part of the proof of this proposition ) . for large enough , , so both ( [ eqn : vepsuper ] ) and ( [ eqn : vepsuper2 ] ) hold at .subtracting ( [ eqn : vepsuper ] ) from ( [ eqn : vepsuper2 ] ) yields which is impossible since .now we present the proof of lemma [ lem : u * ]. we will give the proof of the case that has a local minimum at .the proof of the other case is similar . 
without loss of generalitywe assume that has a strict local minimum at in for some .let be a local minimum of in .by the definition of , there exists a sequence with and such that .we proceed with the proof of item ( [ itemlimit ] ) of the lemma .let be any subsequence of with .for large enough we have .since is a local minimum of on , we obtain , we take of both sides of ( [ eq : uepveplocmin ] ) .the definition of implies that the first term on the right - hand side converges to .in addition , since is continuous and the sequences and converge to and , respectively , we find , since , the definition of implies , we use the previous line to estimate the left - hand side of ( [ afterlimsup ] ) from below and obtain . since is a strict local maximum , we see .this completes the proof of items ( [ itemlimit ] ) and ( [ itemmin ] ) .next let us take of both sides of ( [ eq : uepveplocmin ] ) . since we now know and , the terms with are equal .in addition , the definition of implies that the first term on the right - hand side converges to .thus we find since , the definition of implies that equality holds in the above .this completes the proof of the lemma . in the previous subsection we showed that and are a supersolution and subsolution , respectively , of ( [ hj ] ) on . in this sectionwe study the behavior of and at time .first we show that if then . indeed ,if then there exists a point such that . hence and, according to our observation ( [ sign ] ) we have , so we obtain . in the remainder of the proof we will analyze the behavior of for . to this end , we observe that , by the definition of , we fix a constant and a cutoff function that satisfies * first step : * we claim that is a viscosity subsolution of by which we mean if has a local maximum at for some test function , then either or for , we have and , so we find ( [ ugeq0 ] ) holds . now let us suppose , ( [ ugeq0 ] ) does nt hold , and has a local maximum at for some smooth .we proceed as in the proof of part ( i ) of proposition [ main result ] : let us assume for contradiction that ( [ eqnvx0 ] ) also does not holds , so that for some .we set and take to be the solution of the spectral problem ( [ spectral ] ) corresponding to this .we define by ( [ def vep ] ) , and find that there exist points such that as , has a local maximum at , and . we claim if ( [ tep ] ) holds , then satisfies the equation ( [ eqn : uep ] ) at .hence the argument in part ( i ) of the proof of proposition [ main result ] applies in this situation as well and leads to a contradiction ( namely , we will find that ( [ eqn : vepsuper ] ) , ( [ eqn : vepsuper2 ] ) both hold , which again yields ( [ rhoimpossible ] ) , and the latter can not hold ) . thus, once we show ( [ tep ] ) , we will find that satisfies ( [ eqnvx0 ] ) and so we will have established ( [ var ineq at 0 ] ) .we proceed by contradiction and assume that ( [ tep ] ) does not hold .thus , there exists a subsequence of , also denoted , with . since , there exists such that .we have that for all large enough , and hence for all large enough .since , by definition , we have , we obtain , but we had assumed ( [ ugeq0 ] ) does nt hold , so that .we have reached the desired contradiction , hence ( [ tep ] ) must hold , and so we conclude that is a viscosity subsolution of ( [ var ineq at 0 ] ) . *second step : * we will now use that is a viscosity subsolution of ( [ var ineq at 0 ] ) to prove on . 
to this end , let us fix any and assume for contradiction for , let us define the test functions where we use to denote since is upper - semicontinuous , there exist such that has a maximum at in . in particular , we have where the equalities follow since and from ( [ um ] ) .we now use that and the definition of to estimate the left - hand side of the previous line from above by .thus we obtain an estimate on the distance between and : we will now establish the inequality indeed , according to proposition [ prop : spectral ] , we have for all .we apply this with and obtain , where the second inequality follows from ( [ estxdelx0 ] ) .let us add to both sides of the previous line .the left - hand side becomes exactly the left - hand side of ( [ estonquantity ] ) .the right - hand side becomes , due to our choice of in ( [ choicemu ] ) .thus we find ( [ estonquantity ] ) holds .we now recall that has a maximum at in .let us suppose that for some . according to part ( i ) of proposition [ main result ] , is a subsolution of ( [ hj ] ) in , which implies but this is impossible , as we have just established ( [ estonquantity ] ) .therefore , we must have for all .but we also know that is a subsolution of ( [ var ineq at 0 ] ) on .therefore , we have but , again according to ( [ estonquantity ] ) , we have that the first term inside the must be strictly positive .therefore , in order for the previous line to hold , the second term inside the must be non - positive .thus we have , since is a local maximum of , we have because and , we have that the left - hand side of the previous line is exactly . in addition , we use that is non - negative and the estimate ( [ bumuzeta ] ) to bound the right - hand side of the previous line from above .we find , since as and is continuous , we obtain , which is impossible since and is arbitrary .we have obtained the desired contradiction and thus ( [ um ] ) can not hold .we conclude , and hence the proof is complete .this proof is similar to the one for .first we show that if , then . indeed ,since , there exists with .therefore , we will now prove that for .* first step : * we prove that is a supersolution of to this end , let us suppose , and has a local minimum at for some test function .we proceed as in the proof of part ( i ) of proposition [ main result ] : we define by ( [ def vep ] ) , and find that there exist points such that as , has a local minimum at , and . we claim if ( [ tep ] ) holds , then the argument in the proof of part ( i ) of proposition [ main result ] applies in this situation as well .thus , once we show ( [ tep for uunder ] ) , we will find that satisfies at and so we will have established that ( [ var ineqn underu ] ) in the viscosity sense .we proceed by contradiction and assume that ( [ tep for uunder ] ) does not hold .thus , there exists a subsequence , also denoted , with . since , there exists such that . since , we have that for all large enough , . therefore , there exists so that for all large enough .hence we find , where the first equality follows from the definition of the sequence .but the previous line contradicts our assumption ( [ uux0 ] ) .therefore , ( [ tep for uunder ] ) must hold , and we conclude that is a viscosity supersolution of ( [ var ineqn underu ] ) . 
* second step : * let us fix .we will prove , which , together with ( [ sign ] ) , implies .let us suppose for contradiction we point out that is finite .indeed , if then has a minimum at for all , and since we know is a supersolution of ( [ var ineqn underu ] ) , we find at for all , which is of course impossible .since is lower - semicontinuous and finite at , there exists a neighborhood of and some finite such that if , then .we define , for , the test functions where we define by ( [ choicemu ] ) as in the previous proof .there exists a sequence such that has a local minimum at .we find , as in the previous proof , that the upper bound ( [ estxdelx0 ] ) on holds here as well .we use ( [ estxdelx0 ] ) , the definition of and the properties of given in proposition [ prop : spectral ] to obtain , for all , let us suppose .since has a minimum at and , according to part ( i ) of proposition [ main result ] , is a supersolution of ( [ hj ] ) in , we find that must hold in [ negubar ] ) .but this is impossible , since we have already established ( [ negubar ] ) . therefore , we find for all .according to ( [ var ineqn underu ] ) , we have that is a supersolution of ( [ var ineqn underu ] ) on , so we find according to ( [ negubar ] ) we see that the second term in the is strictly negative . hence we obtain for all .together with , the fact that is a minimum of , and , this implies , but this contradicts our assumption ( [ undux0 ] ) .thus we find and the proof is complete .in this short section we use the results that we ve established in the rest of the paper to give the proof of our main result , theorem [ result on n ] .the arguments are similar to those in the proofs of ( * ? ? ?* theorem 1.1 ) and ( * ? ? ? * theorem 1 ) .since ( [ hj ] ) satisfies the comparison principle ( see proposition [ prop : prelem ] ) , proposition [ main result ] and the definitions of and imply and on .let us fix some .since is continuous , there exists such that for all . since , we have for all as well. therefore , for all , for all and for all small enough .thus , for all and for all , we have so we find that uniformly at an exponential rate on .now let us suppose is a point in the interior of , so that for some . according to ( [ sign ] ) , we have on . since , we have on , so we see on .we define the test function since on , we find that has a local minimum at . therefore there exists a sequence with such that has a local minimum at . since is satisfies ( [ eqn : uep ] ) in the viscosity sense , we find , at , using the definition of and rearranging yields , taking the limit of the previous line yields , recalling the definition of completes the proof .corollary [ corc*informal ] follows easily from our main result , theorem [ result on n ] , and the following lemma , which is essentially a restatement of a result of majda and sougandis . before stating the lemma, we introduce one more bit of notation . given a bounded interval , we use to denote a bounded function that is positive on , negative on , with a maximum at some , strictly increasing to the left of and strictly decreasing to the right of .[ lemfornewcor ] assume is a bounded interval .let be the unique viscosity solution of let be given by lemma 1.1 .then : 1 .[ itemms ] and ; and , 2 .[ itemchar ] and . 
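the proof of item ( [ itemchar ] ) below rests on an elementary characteristics computation . because the displayed equations of this section did not survive extraction , the following is only a generic illustration for the model hamilton - jacobi equation \(w_t + c\,|w_x| = 0\) with a constant \(c>0\) and a smooth initial datum \(w_0\) ; it is not the paper's actual equation ( [ eqn : wom ] ) . away from points where \(w_x\) vanishes , the characteristic system reads

\[
\dot x(t) = c\,\operatorname{sgn}\big(p(t)\big), \qquad \dot p(t) = 0 , \qquad \dot w(t) = p(t)\,\dot x(t) - c\,|p(t)| = 0 , \qquad p(0)=w_0'(x_0),
\]

so each characteristic is a straight line of slope \(\pm c\) in the \((x,t)\) plane , the slope being determined by the sign of \(w_0'(x_0)\) , and \(w\) is constant along it ; in particular the sign of the initial datum is transported unchanged along characteristics , which is the mechanism used in the proof below .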
in ,the authors analyze the properties of the front , where solves an equation of the form ( [ hj ] ) .it turns out that , for a general hamiltonian that depends on space and time , this front may not be very regular .however , our situation is rather simple and no degeneracy appears indeed , item ( [ itemms ] ) says that our front is geometric . item ( [ itemms ] ) is a special case of propositions 2.3 and 2.4 of .indeed , what we denote , , are denoted , , in . and ,if we use to denote the hamiltonian of , reserving itself for our hamiltonian , then we have the correspondence between the two notations . in addition , the velocity field of is in our situation , so the hypotheses of both propositions are satisfied .finally , we may use ( [ hc * ] ) to find that the general nonlinearity in line ( 2.5 ) of is , in our situation , simply given by .( for the benefit of the reader , we also point out two typos in section 2.2 of . first , ( 1.6 ) " should read ( 1.9 ) " throughout that section .second , " should read " in the conclusion of proposition 2.3 . ) the proof of item ( [ itemchar ] ) is based on the method of characteristics .the characteristics for equations of the form are analyzed in detail in barles .taking in order to study ( [ eqn : wom ] ) yields that the characteristic emanating from is ( where if , if , and if ) , and , let us use the properties of to continue our analysis .we recall that has a maximum at , is strictly increasing to the left of and strictly decreasing to the right .hence , if is to the left of , then the characteristic has constant slope in the plane ; while , if is to the right of , then the slope is .we remark that the characteristics never cross . and, is constant along these characteristics , so , in particular , the sign of the initial condition is preserved along them .now , let be a point with . we shall show .let us suppose lies to the right of ( the other case is similar ). then must lie on a characteristic with slope , so that for some .we compute , where the equality follows since , and the second inequality since .since lies outside , is negative there . because is constant along characterisics , we conclude .we omit the similar arguments that establish the remainder of the proposition .we now proceed with : let and be as in lemma [ lem intro ] and let and be as in lemma [ lemfornewcor ] . combining the two parts of lemma [ lem intro ] yields , on the other hand , theorem [ result on n ] implies , * if , then for all . * if is in the interior of , then . combining this with the previous line yields the corollary .we now use corollary [ corc*informal ] and the bounds ( [ boundsonc * ] ) on to present : to establish the first statement of the corollary , let be such that . according to the second inequality in ( [ boundsonc * ] ) , we also have . using corollary [ corc*informal ] completes the proof of the first statement ; the proof of the second is analogous .[ [ appendix ] ] in this appendix , we state an existence result and a comparison principle for ( [ hj ] ) with infinite initial data ( proposition [ prop : prelem ] ) .these were established in ( * ? ? ?* section 4 ) but not explicitely stated there , and we could not locate another reference in the literature . because of this , we carefully explain how to obtain them from ( * ? ? 
?* section 4 ) .in addition to , we also refer the reader to crandall , lions and souganidis for more about hamilton - jacobi equations with infinite initial data .[ prop : prelem ] there exists a viscosity solution of ( [ hj ] ) on with with initial data ( [ infinite ] ) . moreover ,suppose and are , respectively , a viscosity subsolution and a viscosity supersolution of ( [ hj ] ) on , and both with the initial data ( [ infinite ] ) .then we have in particular , the solution to ( [ hj ] ) with initial data ( [ infinite ] ) is unique .we explain why proposition [ prop : prelem ] is exactly what was proven in ( * ? ? ?* section 4 ) . since both we and use as notation for the hamiltonian , for the purposes of this proof we use to denote the hamiltonian of .there is a difference in sign between our paper and ( * ? ? ?* section 4 ) , which we now address .we have that , and are , respectively , a viscosity solution , supersolution , and subsolution of with initial data thus we see that , if we take and , then we are exactly in the situation of ( * ? ? ?* section 4 ) .we may take , , and .thus we ve established the existence of a solution . plus , the conclusion of the comparison of the arguments in ( * ? ? ? * section 4 ) ( specifically , lines ( 4.2 ) and ( 4.5 ) ) is translating back to our notation , we see ( [ ws ] ) holds .the author thanks her thesis advisor , takis souganidis , for his guidance and encouragement .the author is grateful to vincent calvez and sepideh mirrahimi for reading earlier drafts of this paper very thoroughly .their remarks were invaluable .in addition , the author thanks benoit perthame for a stimulating discussion that led to the developement of corollary [ corc*informal ] .the author also thanks the anonymous referees .their comments have greatly helped improve the exposition and raised several interesting questions ; for example , remarks [ remark : relationship ] and [ remark : choice of nonlinearity ] were motivated by their reports .m. alfaro , j. coville , and g. raoul .traveling waves in a nonlocal equation as a model for a population structured by a space variable and a phenotypical trait . comm .partial differential equations ( cpde ) .volume 38 , issue 12 , 2013 a. arnold , l. desvillettes and c. prevost .existence of nontrivial steady states for populations structured with respect to space and a continuous trait .pure appl .11 ( 2012 ) , no . 1 , 83 - 96 .bouin , emeric ; calvez , vincent ; meunier , nicolas ; mirrahimi , sepideh ; perthame , benot ; raoul , gal ; voituriez , raphal . invasion fronts with variable motility : phenotype selection , spatial sorting and wave acceleration . c. r. math .paris 350 ( 2012 ) , no .15 - 16 , 761 - 766 .crandall , michael g. ; lions , pierre - louis ; souganidis , panagiotis e. maximal solutions and universal bounds for some partial differential equations of evolution . arch .105 ( 1989 ) , no . 2 , 163 - 190 .a. kolmogorov , i. petrovskii , and n. piscounov .a study of the diffusion equation with increase in the amount of substance , and its application to a biological problem . in v. m. tikhomirov ,editor , selected works of a. n. kolmogorov i , pages 248 - 270 .kluwer 1991 , isbn 90 - 277 - 2796 - 1 .translated by v. m. volosov from bull .moscow univ . , math .mech . 1 , 1 - 25 , 1937 thomas , c. d. , bodsworth , e. j. , wilson , r. j. , simmons , a. d. , davies , z. g. , musche , m. , conradt , l. ecological and evolutionary processes at expanding range margins .nature 411 , 577 - 581 ( 2001 ) . 
| we study a reaction - diffusion equation with a nonlocal reaction term that models a population with variable motility . we establish a global supremum bound for solutions of the equation . we investigate the asymptotic ( long - time and long - range ) behavior of the population . we perform a certain rescaling and prove that solutions of the rescaled problem converge locally uniformly to zero in a certain region and stay positive ( in some sense ) in another region . these regions are determined by two viscosity solutions of a related hamilton - jacobi equation . |
in this paper , we consider the following optimization problem : where is a linear map from to , is a proper closed function on and is twice continuously differentiable on with a bounded hessian .we also assume that the proximal ( set - valued ) mappings are well - defined and are simple to compute for all and for any . here, denotes the set of minimizers , and the simplicity is understood in the sense that _ at least one _ element of the set of minimizers can be obtained efficiently .concrete examples of such that arise in applications include functions listed in ( * ? ? ?* table 1 ) , the regularization , the regularization , and the indicator functions of the set of vectors with cardinality at most , matrices with rank at most and -sparse vectors in simplex , etc .moreover , for a large class of nonconvex functions , a general algorithm has been proposed recently in for computing the proximal mapping .the model problem with and satisfying the above assumptions encompasses many important applications in engineering and machine learning ; see , for example , .in particular , many sparse learning problems are in the form of with being a loss function , being the identity map and being a regularizer ; see , for example , for the use of the norm as a regularizer , for the use of the norm , for the use of the nuclear norm , and and the references therein for the use of various continuous difference - of - convex functions with simple proximal mappings . for the case when is not the identity map , an application in stochastic realization where is a least squares loss function , is the rank function and is the linear map that takes the variable into a block hankel matrix was discussed in ( * ? ? ?* section ii ) .when is the identity map , the proximal gradient algorithm ( also known as forward - backward splitting algorithm ) can be applied whose subproblem involves a computation of the proximal mapping of for some .it is known that when and are convex , the sequence generated from this algorithm is convergent to a globally optimal solution if the step - size is chosen from , where is any number larger than the lipschitz continuity modulus of . for nonconvex and , the step - size can be chosen from so that any cluster point of the sequence generated is stationary ( * ? ? ?* proposition 2.3 ) ( see section [ sec2 ] for the definition of stationary points ) , and convergence of the whole sequence is guaranteed if the sequence generated is bounded and satisfies the kurdyka - ojasiewicz ( kl ) property ( * ? ? ? * theorem 5.1 , remark 5.2(a ) ) . on the other hand , when is a general linear map so that the computation of the proximal mapping of , , is not necessarily simple , the proximal gradient algorithm can not be applied efficiently . 
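since the displayed definition of the proximal mapping did not survive extraction , we record the convention assumed in the sketches below , namely \(\operatorname{prox}_{\gamma g}(z) \in \arg\min_x \{ g(x) + \tfrac{1}{2\gamma}\|x - z\|^2 \}\) , and illustrate what "simple to compute" means with the two standard closed forms for the \(\ell_1\) norm and the ( nonconvex ) \(\ell_0\) penalty . the python code is only an illustration with generic parameters ; it is not the authors' implementation .

....
import numpy as np

# standard closed-form proximal mappings, with the convention
#   prox_{gamma*g}(z) = argmin_x { g(x) + ||x - z||^2 / (2*gamma) }.
# lam and gamma are generic positive parameters.

def prox_l1(z, lam, gamma):
    # prox of g(x) = lam*||x||_1 with parameter gamma: soft-thresholding at lam*gamma
    return np.sign(z) * np.maximum(np.abs(z) - lam * gamma, 0.0)

def prox_l0(z, lam, gamma):
    # prox of g(x) = lam*||x||_0 with parameter gamma: hard-thresholding;
    # a coordinate is kept iff lam*gamma < z_i**2 / 2, i.e. |z_i| > sqrt(2*lam*gamma)
    x = z.copy()
    x[np.abs(z) <= np.sqrt(2.0 * lam * gamma)] = 0.0
    return x

z = np.array([3.0, -0.2, 0.7, -2.5])
print(prox_l1(z, lam=1.0, gamma=0.5))   # [ 2.5  0.   0.2 -2. ]
print(prox_l0(z, lam=1.0, gamma=0.5))   # [ 3.   0.   0.  -2.5]
....

both maps act coordinatewise , which is what keeps the subproblems of the splitting methods below cheap even in high dimensions ; for genuinely set - valued cases ( e.g. ties in the \(\ell_0\) prox ) any minimizer may be returned , consistent with the "at least one element" convention above .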
in the case when and are both convex, one feasible approach is to apply the alternating direction method of multipliers ( admm ) .this has been widely used recently ; see , for example .while it is tempting to directly apply the admm to the nonconvex problem , convergence has only been shown under specific assumptions .in particular , in , the authors studied an application that can be modeled as with , being some risk measures and typically being an injective linear map coming from data .they showed that any cluster point gives a stationary point , assuming square summability of the successive changes in the dual iterates .more recently , in , the authors considered the case when is a nonconvex quadratic and is the sum of the norm and the indicator function of the euclidean norm ball .they showed that if the penalty parameter is chosen sufficiently large ( with an explicit lower bound ) and the dual iterates satisfy a particular assumption , then any cluster point gives a stationary point .in particular , their assumption is satisfied if is surjective . motivated by the findings in , in this paper , we focus on the case when is surjective and consider both the admm ( for a general surjective ) and the proximal gradient algorithm ( for being the identity ) .the contributions of this paper are as follows : * first , we characterize cluster points of the sequence generated from the admm .in particular , we show that if the ( fixed ) penalty parameter in the admm is chosen sufficiently large ( with a computable lower bound ) , and a cluster point of the sequence generated exists , then it gives a stationary point of problem .moreover , our analysis allows replacing in the admm subproblems by its local quadratic approximations so that in each iteration of this variant , the subproblems only involve computing the proximal mapping of for some and solving an unconstrained convex quadratic minimization problem .furthermore , we also give simple sufficient conditions to guarantee the boundedness of the sequence generated .these conditions are satisfied in a wide range of applications ; see examples [ examplenew:3 ] , [ examplenew:1 ] and [ examplenew:2 ] .* second , under the additional assumption that and are semi - algebraic functions , we show that if a cluster point of the sequence generated from the admm exists , it is actually convergent .our assumption on semi - algebraicity not only can be easily verified or recognized , but also covers a broad class of optimization problems such as problems involving quadratic functions , polyhedral norms and the cardinality function . 
*third , we give a concrete 2-dimensional counterexample in example [ ex7:nonconverge ] showing that the admm can be divergent when is assumed to be injective ( instead of surjective ) .* finally , for the particular case when equals the identity map , we show that the proximal gradient algorithm can be applied with a slightly more flexible step - size rule when is nonconvex ( see theorem [ prop : prox ] for the precise statement ) .the rest of the paper is organized as follows .we discuss notation and preliminary materials in the next section .convergence of the admm is analyzed in section [ sec : msur ] , and section [ sec : mi ] is devoted to the analysis of the proximal gradient algorithm .some numerical results are presented in section [ sec : num ] to illustrate the algorithms .we give concluding remarks and discuss future research directions in section [ sec : con ] .we denote the -dimensional euclidean space as , and use to denote the inner product and to denote the norm induced from the inner product .linear maps are denoted by scripted letters .the identity map is denoted by . for a linear map , denotes the adjoint linear map with respect to the inner product and is the induced operator norm of .a linear self - map is called symmetric if . for a symmetric linear self - map , we use to denote its induced quadratic form given by for all , and use ( resp . , ) to denote the maximum ( resp ., minimum ) eigenvalue of .a symmetric linear self - map is called positive semidefinite , denoted by ( resp . ,positive definite , ) if ( resp ., ) for all nonzero . for two symmetric linear self - maps and , we use ( resp . , ) to denote ( resp . , ) .an extended - real - valued function is called proper if it is finite somewhere and never equals .such a function is called closed if it is lower semicontinuous .given a proper function ] for all , then for any and in , we have dt\right\|^2\\ & \le \left(\int_0 ^ 1\left\|\nabla^2\phi(x_2 + t(x_1 - x_2))\cdot[x_1 - x_2]\right\| dt\right)^2\\ & = \left(\int_0 ^ 1\sqrt{\langle x_1-x_2,[\nabla^2\phi(x_2 + t(x_1 - x_2))]^2\cdot[x_1 - x_2]\rangle } dt\right)^2 \le \|x_1 - x_2\|^2_{\q}. \end{split}\ ] ] on the other hand , if there exists so that for all , then \rangle ds \\ge \frac12 \|x_1 - x_2\|^2_{\q } \end{split}\ ] ] for any and in .a semi - algebraic set is a finite union of sets of the form where and are polynomials with real coefficients in variables .in other words , is a union of finitely many sets , each defined by finitely many polynomial equalities and strict inequalities .a map is semi - algebraic if is a semi - algebraic set .semi - algebraic sets and semi - algebraic mappings enjoy many nice structural properties .one important property which we will use later on is the kurdyka - ojasiewicz ( kl ) property .[ def : kl ] * ( kl property & kl function ) * a proper function is said to have the kurdyka - ojasiewicz ( kl ) property at if there exist ] for all ; * for some ; * with ^ 2 ] and set and . then .consider the optimization problem this problem corresponds to with , where , and is the linear map so that ; the problem can be equivalently reformulated as and the admm can be applied .let and denote the multipliers corresponding to the first and second equality constraints , respectively .the iterates in ( with ) now take the form for concreteness , whenever ambiguity arises in updating via the projection onto the nonconvex ( discrete ) set , we choose the element in that is closest to the previous iterate . 
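the displayed iterate formulas in this example did not survive extraction . for orientation , here is a generic sketch of admm updates for the splitting \(\min f(x) + p(y)\) subject to \(ax = y\) , in one common ordering and sign convention with a fixed penalty parameter \(\beta\) ; the data , the convex regularizer used for the demonstration , and \(\beta\) are illustrative assumptions , and this is not the exact scheme or tie - breaking rule of the example above .

....
import numpy as np

# generic admm sketch for  min_x 0.5*||x - c||^2 + lam*||A x||_1,
# split as  min f(x) + P(y)  subject to  A x = y,  multiplier z, penalty beta.
# (one common ordering/sign convention; illustrative data and parameters.)

rng = np.random.default_rng(0)
m, n = 5, 8                        # m <= n so that A has full row rank (surjective) a.s.
A = rng.standard_normal((m, n))
c = rng.standard_normal(n)
lam, beta = 0.5, 2.0

x, y, z = np.zeros(n), np.zeros(m), np.zeros(m)
AtA = A.T @ A
for _ in range(200):
    # x-update: minimize 0.5*||x - c||^2 + <z, A x - y> + (beta/2)*||A x - y||^2
    x = np.linalg.solve(np.eye(n) + beta * AtA, c + A.T @ (beta * y - z))
    # y-update: prox of (lam/beta)*||.||_1 at A x + z/beta, i.e. soft-thresholding
    w = A @ x + z / beta
    y = np.sign(w) * np.maximum(np.abs(w) - lam / beta, 0.0)
    # multiplier update
    z = z + beta * (A @ x - y)

print(np.linalg.norm(A @ x - y))   # feasibility residual of the splitting
....

with a nonconvex regularizer ( for instance an \(\ell_0\) - type penalty or the indicator of a discrete set , as in the example above ) the y - update becomes a possibly set - valued hard - thresholding or projection step , which is exactly where the tie - breaking rule "choose the element closest to the previous iterate" is needed .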
for each , consider the initializations , and .then it is routine to show that the admm described in will exhibit a discrete limit cycle of length .specifically , for any and .moreover , in particular , the sequence is not convergent and the successive change of the -update does not converge to zero .in this section , we look at the model problem in the case where . since the objective is the sum of a smooth and a possibly nonsmooth part with a simple proximal mapping , it is natural to consider the proximal gradient algorithm ( also known as the forward - backward splitting algorithm ) . in this approach ,one considers the update from our assumption on , the update can be performed efficiently via a computation of the proximal mapping of .when , where , it is not hard to show that any cluster point of the sequence generated above is a stationary point of ; see , for example , . in what follows, we analyze the convergence under a slightly more flexible step - size rule .[ prop : prox ] suppose that there exists a twice continuously differentiable convex function and such that for all , let be generated from with . then the algorithm is a descent algorithm. moreover , any cluster point of , if exists , is a stationary point .for the algorithm to converge faster , intuitively , a larger step - size should be chosen ; see also table [ table3 ] .condition indicates that the concave " part of the smooth objective does not impose any restrictions on the choice of step - size .this could result in an smaller than the lipschitz continuity modulus of , and hence allow a choice of a larger . on the other hand , since the algorithm is a descent algorithm by theorem [ prop : prox ] , the sequence generated from would be bounded under standard coerciveness assumptions on the objective function .notice from assumption that is lipschitz continuous with lipschitz continuity modulus at most .hence from this we see further that where the first inequality follows from , the last inequality follows from the definition of and the subdifferential inequality applied to the function .since implies , shows that the algorithm is a descent algorithm .rearranging terms in and summing from to any , we see further that now , let be a cluster point and take any convergent subsequence that converges to . taking limit on both sides of the above inequality along the convergent subsequence , one can see that .finally , we wish to show that . to this end ,note first that since , we also have .then it follows from lower semicontinuity of that . on the other hand , from, we have which gives . hence , .now , using this , , and taking limit along the convergent subsequence in the following relation obtained from we see that the conclusion concerning stationary point holds .we illustrate the above theorem in the following examples .suppose that admits an explicit representation as a difference of two convex twice continuously differentiable functions , and that has a lipschitz continuous gradient with modulus at most . then holds with and .hence , the step - size can be chosen from .a concrete example of this kind is given by , where is a symmetric indefinite matrix . then holds with , where is the projection of onto the cone of nonpositive semidefinite matrices , and .the step - size can be chosen within the open interval . 
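as a small numerical companion to the update studied in this section , here is a minimal python instance of the proximal gradient iteration with a convex quadratic loss and the ( nonconvex ) cardinality - constraint indicator , whose proximal mapping keeps the entries of largest magnitude . the data , the sparsity level and the conservative step size below are illustrative assumptions ; in particular the step size is simply taken smaller than the reciprocal of the largest eigenvalue of the quadratic's hessian , not the more flexible rule of theorem [ prop : prox ] .

....
import numpy as np

# one concrete instance of  x_{k+1} in prox_{gamma*P}(x_k - gamma*grad f(x_k))
# with f(x) = 0.5*||C x - d||^2 and P the indicator of {x : ||x||_0 <= s};
# the prox of P is "keep the s entries of largest magnitude" (iterative hard
# thresholding).  the step size is taken conservatively below 1/||C^T C||.

rng = np.random.default_rng(1)
nrow, ncol, s = 20, 50, 3
C = rng.standard_normal((nrow, ncol))
x_true = np.zeros(ncol); x_true[:s] = rng.standard_normal(s)
d = C @ x_true

gamma = 0.9 / np.linalg.norm(C.T @ C, 2)
x = np.zeros(ncol)
for _ in range(500):
    z = x - gamma * (C.T @ (C @ x - d))        # gradient step on f
    x = np.zeros(ncol)
    keep = np.argsort(np.abs(z))[-s:]          # prox step: project onto ||x||_0 <= s
    x[keep] = z[keep]

print(0.5 * np.linalg.norm(C @ x - d) ** 2)    # objective value at the final iterate
....

for step sizes below the reciprocal of the gradient's lipschitz constant this is the usual descent scheme , so the printed objective value stabilizes ; theorem [ prop : prox ] is the statement that cluster points of such sequences are stationary , under the section's more flexible step - size rule .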
in the case when is a concave quadratic , say , for example , for some linear map , it is easy to see that holds with for _ any _ positive number .thus , step - size can be chosen to be any positive number .suppose that has a lipschitz continuous gradient and it is known that all the eigenvalues of , for any , lie in the interval $ ] with . if , it is clear that is lipschitz continuous with modulus bounded by , and hence the step - size for the proximal gradient algorithm can be chosen from . on the other hand , if , then it is easy to see that holds with and .hence , the step - size can be chosen from .we next comment on the convergence of the whole sequence .we consider the conditions * h1 * through * h3 * on ( * ? ? ?* page 99 ) .first , it is easy to see from that * h1 * is satisfied with .next , notice from that if , then .moreover , from the definition of , we have for any .this shows that the condition * h2 * is satisfied with .finally , ( * ? ? ?* remark 5.2 ) shows that * h3 * is satisfied . thus , we conclude from ( * ? ? ?* theorem 2.9 ) that if is a kl - function and a cluster point of the sequence exists , then the whole sequence converges to .a line - search strategy can also be incorporated to possibly speed up the above algorithm ; see for the case when is a continuous difference - of - convex function .the convergence analysis there can be directly adapted .the result of theorem [ prop : prox ] concerning the interval of viable step - sizes can be used in designing the initial step - size for backtracking in the line - search procedure .in this section , we perform numerical experiments to illustrate our algorithms .all codes are written in matlab .all experiments are performed on a 32-bit desktop machine with an intel i7 - 3770 cpu ( 3.40 ghz ) and a 4.00 gb ram , equipped with matlab 7.13 ( 2011b ) . [[ minimizing - constraints - violation . ] ] minimizing constraints violation .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we consider the problem of finding the closest point to a given that violates at most out of equations .the problem is presented as follows : where has full row rank , , .this can be seen as a special case of by taking and to be the indicator function of the set , which is a proper closed function ; here , is the norm that counts the number of nonzero entries in the vector .we apply the admm ( i.e. , proximal admm with ) with parameters specified as in example [ example:5 ] , and pick so that . from example[ examplenew:2 ] , the sequence generated from the admm is always bounded and hence convergence of the sequence is guaranteed by theorem [ th:2 ] .we compare our model against the standard convex model with the norm replaced by the norm .this latter model is solved by sdpt3 ( version 4.0 ) , called via cvx ( version 1.22 ) , using default settings . for the admm, we consider two initializations : setting all variables at the origin ( init . ) , or setting to be the approximate solution obtained from solving the convex model , and ( init . ) . as discussed in remark [ rem1 ] , when is feasible for , this latter initialization satisfies the conditions in theorem [ thm : main](ii ) .we terminate the admm when the sum of successive changes is small , i.e. , when in our experiments , we consider random instances . in particular , to guarantee that the problem is feasible for a fixed , we generate the matrix and the right hand side using the following matlab codes : .... 
m = randn(m , n ) ; x_orig = randn(n,1 ) ; j = randperm(m ) ; b = randn(m,1 ) ; b(j(1:m - r ) ) = m(j(1:m - r),:)*x_orig ; % subsystem has a solution .... we then generate with i.i.d .standard gaussian entries .we consider , , , and , , , and .we generate one random instance for each and solve and the corresponding relaxation .the computational results are shown in table [ table2 ] , where we report the number of violated constraints ( vio ) by the approximate solution obtained , defined as , and the distance from ( dist ) defined as .we also report the number of iterations the admm takes , as well as the cpu time of both the admm initialized at the origin and sdpt3 called using cvx .we see that the model allows an explicit control on the number of violated constraints .in addition , comparing with the model , the model solved using the admm always gives a solution closer to .finally , the solution obtained from the admm initialized from an approximate solution of the model can be slightly closer to than the solution obtained from the zero initialization , depending on the particular problem instance ..computational results for perturbation with bounded number of violated equalities . [ cols="^,^,^,^,^,^,^,^,^,^,^,^,^ " , ]in this paper , we study the proximal admm and the proximal gradient algorithm for solving problem with a general surjective and , respectively . we prove that any cluster point of the sequence generated from the algorithms gives a stationary point by assuming merely a specific choice of parameters and the existence of a cluster point .we also show that if the functions and are in addition semi - algebraic and the sequence generated by the admm ( i.e. , proximal admm with ) clusters , then the sequence is actually convergent .furthermore , we give simple sufficient conditions for the boundedness of the sequence generated from the proximal admm .one interesting future research direction would be to adapt other splitting methods for convex problems to solve , especially in the case when is injective , and study their convergence properties .the second author would like to thank ernie esser and gabriel goh for enlightening discussions .the authors would also like to thank the anonymous referees for suggestions that help improve the manuscript .b. p. w. ames and m. hong .alternating direction method of multipliers for sparse zero - variance discriminant analysis and principal component analysis .preprint , january 2014 .available at ` http://arxiv.org/abs/1401.5492 ` .h. attouch , j. bolte , p. redont and a. soubeyran .proximal alternating minimization and projection methods for nonconvex problems .an approach based on the kurdyka - lojasiewicz inequality .35 , pp . 438457 ( 2010 ) .h. attouch , j. bolte and b. f. svaiter .convergence of descent methods for semi - algebraic and tame problems : proximal algorithms , forward - backward splitting , and regularized gauss - seidel methods .137 , ser .a , pp . 91129 ( 2013 ) .m. fortin and r. glowinski .on decomposition - coordination methods using an augmented lagrangian . in m. fortin and r. glowinski , eds . , _ augmented lagrangion methods : applications to the solution of boundary problems ._ north - holland , amsterdam , 1983 .applications of the method of multipliers to variational inequalities . in m. fortin and r.glowinski , eds ., _ augmented lagrangion methods : applications to the solution of boundary problems . _ north - holland , amsterdam , 1983 . p. gong , c. zhang , z. lu , j. huang and j. ye . 
a general iterative shrinkage and thresholding algorithm for non - convex regularized optimization problems . the 30th international conference on machine learning ( icml 2013 ) . | we consider the problem of minimizing the sum of a smooth function with a bounded hessian , and a nonsmooth function . we assume that the latter function is a composition of a proper closed function and a surjective linear map , with the proximal mappings of , , simple to compute . this problem is nonconvex in general and encompasses many important applications in engineering and machine learning . in this paper , we examine two types of splitting methods for solving this nonconvex optimization problem : alternating direction method of multipliers and proximal gradient algorithm . for the direct adaptation of the alternating direction method of multipliers , we show that , if the penalty parameter is chosen sufficiently large and the sequence generated has a cluster point , then it gives a stationary point of the nonconvex problem . we also establish convergence of the whole sequence under an additional assumption that the functions and are semi - algebraic . furthermore , we give simple sufficient conditions to guarantee boundedness of the sequence generated . these conditions can be satisfied for a wide range of applications including the least squares problem with the regularization . finally , when is the identity so that the proximal gradient algorithm can be efficiently applied , we show that any cluster point is stationary under a slightly more flexible constant step - size rule than what is known in the literature for a nonconvex . |
in the early 1900s , bernstein asked the following question : given a collection of subsets of a set , is there a partition of into such that no subset is contained in either or ?if we think of the elements of as vertices and of each subset as a hyperedge , the question can be rephrased as whether a given hypergraph can be 2-colored so that no hyperedge is monochromatic . of particular interestis the setting where all hyperedges contain vertices , -uniform hypergraphs .this question was popularized by erds who dubbed it `` property b '' in honor of bernstein and has motivated some of the deepest advances in probabilistic combinatorics . indeed , determining the smallest number of hyperedges in a non-2-colorable -uniform hypergraph remains one of the most important problems in extremal graph theory , perhaps second only to the ramsey problem .a more modern problem , with a somewhat similar flavor , is boolean satisfiability : given a cnf formula , is it possible to assign truth values to the variables of so that it evaluates to true ?satisfiability has been the central problem of computational complexity since 1971 when cook proved that it is complete for the class np .the case where all clauses have the same size is known as -sat and is np - complete for all . for both -sat and property bit is common to generate random instances by selecting a corresponding structure at random . indeed ,random formulas and random hypergraphs have been studied extensively in probabilistic combinatorics in the last three decades .while there are a number of slightly different models for generating such structures uniformly at random " , we will see that results transfer readily between them . for the sake of concreteness , let denote a formula chosen uniformly among all formulas on variables with clauses .similarly , let denote a hypergraph chosen uniformly among all hypergraphs with vertices and hyperedges .we will say that a sequence of events occurs _ with high probability _( ) if = 1 ] . throughout the paper, will be arbitrarily large but fixed . in recent years , both problems have been understood to undergo a `` phase transition '' as the ratio of constraints to variables passes through a critical threshold .that is , for a given number of vertices ( variables ) , the probability that a random instance has a solution drops rapidly from 1 to 0 around a critical number of hyperedges ( clauses ) .this sharp threshold phenomenon was discovered in the early 1990s , when several researchers performed computational experiments on and found that while for almost all formulas are satisfiable , for almost all are unsatisfiable .moreover , as increases , this transition narrows around . along with similar results for other fixed has led to the following popular conjecture : * satisfiability threshold conjecture : * _ for each , there exists a constant such that _ = \begin{cases } 1 & \mbox{if }\\ 0 & \mbox{if } \enspace . \end{cases}\ ] ] in the last ten years , this conjecture has become an active area of interdisciplinary research , receiving attention in theoretical computer science , artificial intelligence , combinatorics and , more recently , statistical physics .much of the work on random -sat has focused on proving upper and lower bounds for , both for the smallest computationally hard case and for general . 
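the computational experiments mentioned above are easy to reproduce in miniature . the sketch below estimates the probability of satisfiability of random 3-cnf formulas at density \(r\) for a small \(n\) by exhaustive search ; the densities , sample sizes and the with - replacement sampling of clauses are illustrative choices ( one of the essentially equivalent models mentioned above ) , and at such small \(n\) the crossover is of course far more blurred than the sharp threshold described above .

....
import numpy as np

# miniature version of the experiments described above: estimate the probability
# that a random 3-cnf formula with n variables and r*n clauses is satisfiable,
# by exhaustive search over all 2^n assignments for a small n.
# clauses are drawn independently with replacement (an essentially equivalent model);
# densities and sample sizes are illustrative.

rng = np.random.default_rng(0)
n, k, trials = 15, 3, 30
assignments = (np.arange(2 ** n)[:, None] >> np.arange(n)) & 1     # all 2^n assignments

def satisfiable(m):
    alive = np.ones(2 ** n, dtype=bool)
    for _ in range(m):
        variables = rng.choice(n, size=k, replace=False)
        signs = rng.integers(0, 2, size=k)      # a literal is satisfied iff value == sign
        alive &= (assignments[:, variables] == signs).any(axis=1)
        if not alive.any():
            return False
    return True

for r in (3.0, 3.5, 4.0, 4.5, 5.0):
    p = np.mean([satisfiable(int(r * n)) for _ in range(trials)])
    print(f"r = {r:3.1f}   estimated P[sat] = {p:.2f}")
....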
at this pointthe existence of has not been established for any .nevertheless , we will take the liberty of writing to denote that for all , is satisfiable ; analogously , we will write to denote that for all , is unsatisfiable . as we will see , an elementary counting argument yields for all .lower bounds , on the other hand , have been exclusively algorithmic : to establish ones shows that for some specific algorithm finds a satisfying assignment with probability we will see that an extremely simple algorithm already yields .we will also see that while more sophisticated algorithms improve this bound slightly , to date no algorithm is known to find a satisfying truth assignment ( even ) when , for any .the threshold picture for hypergraph 2-colorability is completely analogous : for each , it is conjectured that there exists a constant such that = \begin{cases } 1 & \mbox{if }\\ 0 & \mbox{if } \enspace . \end{cases}\ ] ] the same counting argument here implies , while another simple algorithm yields .again , no algorithm is known to improve this bound asymptotically , leaving a multiplicative gap of order between the upper and lower bound for this problem as well . in this paper, we use the _ second moment method _ to show that random -cnf formulas are satisfiable , and random -uniform hypergraphs are 2-colorable , for density up to .thus , we determine the threshold for random -sat within a factor of two and the threshold for property b within a small additive constant .recall that is unsatisfiable if .our first main result is [ thm : ksat ] for all , is satisfiable if .our second main result determines the property b threshold within an additive .[ thm : hyp ] for all , is non-2-colorable if there exists a sequence such that for all , is 2-colorable if the upper bound in corresponds to the density for which the expected number of 2-colorings of is .our main contribution is inequality which we prove using the second moment method .in fact , our approach yields explicit bounds for the hypergraph 2-colorability threshold for each value of ( although ones that lack an attractive closed form ) .we give the first few of these bounds in table [ tab : val ] .we see that the gap between our upper and lower bounds converges to its limiting value of rather rapidly .[ tab : val ] unlike the algorithmic lower bounds for random -sat and hypergraph 2-colorability , our arguments are non - constructive : we establish that solutions exist for certain densities but do not offer any hint on how to find them .we believe that abandoning the algorithmic approach for proving such lower bounds is natural and , perhaps , necessary . at a minimum ,the algorithmic approach is limited to the small set of rather naive algorithms whose analysis is tractable using current techniques . 
perhaps more gravely , it could be that _ no _ polynomial algorithm can overcome the barrier .determining whether this is true even for certain limited classes of algorithms , random walk algorithms , is a very interesting open problem .in addition , by not seeking out some specific truth assignment , as algorithms do , the second moment method gives some first glimpses of the `` geometry '' of the set of solutions .deciphering these first glimpses , getting clearer ones , and exploring potential interactions between the geometry of the set of solutions and computational hardness are great challenges lying ahead .we note that recently , and independently , frieze and wormald applied the second moment method to random -sat in the case where is a moderately growing function of .specifically , they proved that when , is satisfiable if but unsatisfiable if , where and is such that .their result follows by a direct application of the second moment method to the number of satisfying assignments of . as we will see shortly , while this approach gives a very sharp bound when , it fails for any fixed and indeed for any .we also note that since this work first appeared , the line of attack we put forward has had several other successful applications .specifically , in , the lower bound for the random -sat threshold was improved to by building on the insights presented here . in , the method was successfully extended to random max -sat , while in it was applied to random graph coloring .we discuss these subsequent developments in the conclusions .the version of the second moment method we will use is given by lemma [ lem : sec ] below and follows from a direct application of the cauchy - schwarz inequality ( see remark 3.1 in ) .[ lem : sec ] for any non - negative random variable , \,\ge\ , \frac{\ex[x]^2}{\ex[x^2 ] } \enspace .\ ] ] it is natural to try to apply lemma [ lem : sec ] to random -sat by letting be the number of satisfying truth assignments of .unfortunately , as we will see , this naive " application of the second moment method fails rather dramatically : for all and every , > ( 1+\beta)^n \,\ex[x]^2 ] for . 
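for completeness , here are the one - line derivation of lemma [ lem : sec ] from the cauchy - schwarz inequality and the elementary first - moment computation behind the counting upper bounds quoted earlier ; the constants are the standard ones and are written out only because the displayed expressions did not survive extraction .

\[
\ex[x]^2 \;=\; \ex\!\left[x\,\mathbf{1}_{\{x>0\}}\right]^2 \;\le\; \ex[x^2]\,\Pr[x>0] \qquad\Longrightarrow\qquad \Pr[x>0] \;\ge\; \frac{\ex[x]^2}{\ex[x^2]}\ .
\]

for the first moment , if \(x\) is the number of satisfying assignments of a random \(k\)-cnf formula with \(n\) variables and \(rn\) clauses , then since a fixed assignment satisfies a fixed clause with probability \(1-2^{-k}\) ,

\[
\ex[x] \;=\; 2^n\left(1-2^{-k}\right)^{rn} \;\longrightarrow\; 0 \qquad\text{whenever } r \ge 2^k\ln 2 ,
\]

and markov's inequality then gives unsatisfiability ; the analogous count of 2 - colorings of a random \(k\)-uniform hypergraph with \(rn\) edges is \(2^n\left(1-2^{1-k}\right)^{rn}\) , which yields the upper bound of roughly \(2^{k-1}\ln 2\) for property b mentioned above .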
shortly afterwards , chao and franco complemented this result by proving that for all , if then the following linear - time algorithm , called unit clause ( uc ) , finds a satisfying truth assignment : if there exist unit clauses , pick one randomly and satisfy it ;else pick a random unset variable and give it a random value .note that since uc succeeds only ( rather than ) this does not imply a lower bound for .the satisfiability threshold conjecture gained a great deal of popularity in the early 1990s and has received an increasing amount of attention since then .the polynomial - time solvable case was settled early on : independently , chvtal and reed , fernandez de la vega , and goerdt proved that .chvtal and reed , in addition to proving , gave the first lower bound for , strengthening the positive - probability result of chao and franco by analyzing the following refinement of uc , called short clause ( sc ) : if there exist unit clauses , pick one randomly and satisfy it ; else if there exist binary clauses , pick one randomly and satisfy a random literal in it ; else pick a random unset variable and give it a random value .in , the authors showed that for all , sc finds a satisfying truth assignment for and raised the question of whether this lower bound for can be improved asymptotically .a large fraction of the work on the satisfiability threshold conjecture since then has been devoted to the first computationally hard case , , and a long series of results has narrowed the potential range of .currently this is pinned between by kaporis , kirousis and lalas and hajiaghayi and sorkin and by dubois and boufkhad .upper bounds for come from probabilistic counting arguments , refining the above calculation of the expected number of satisfying assignments .lower bounds , on the other hand , have come from analyzing progressively more sophisticated algorithms .unfortunately , neither of these approaches helps narrow the asymptotic gap between the upper and lower bounds for .the upper bounds only improve by a small additive constant ; the best algorithmic lower bound , due to frieze and suen , is where there are two more results that stand out in the study of random -cnf formulas . in a breakthrough paper , friedgut proved the existence of a _ non - uniform _ satisfiability threshold , of a sequence around which the probability of satisfiability goes from 1 to 0 .[ thm : frie ] for each , there exists a sequence such that for every , = \begin{cases } 1 & \mbox{if }\\ 0 & \mbox{if } \enspace . \end{cases}\ ] ] in , chvtal and szemerdi established a seminal result in proof complexity , by extending the work of haken and urquhart to random formulas .specifically , they proved that for all , if then is unsatisfiable but every resolution proof of its unsatisfiability contains at least clauses , for some . in , achlioptas , beame andmolloy extended the main result of to random cnf formulas that also contain as this is relevant for the behavior of davis - putnam ( dpll ) algorithms on random -cnf .( dpll algorithms proceed by setting variables sequentially , according to some heuristic , and backtracking whenever a contradiction is reached . 
) by combining the results in the present paper with the results in , it was recently shown that a number of dpll algorithms require exponential time _ significantly below _ the satisfiability threshold , for provably satisfiable random -cnf formulas .finally , we note that if one chooses to live unencumbered by the burden of mathematical proof , powerful non - rigorous techniques of statistical physics , such as the `` replica method '' , become available .indeed , several claims based on the replica method have been subsequently established rigorously , so it is frequently ( but definitely not always ) correct . using this technique , monasson and zecchina predicted . like most arguments based on the replica method ,their argument is mathematically sophisticated but far from rigorous . in particular, they argue that as grows large , the so - called _ annealed approximation _ should apply .this creates an analogy with the second moment method which we discuss in section [ sec : replica ] .while bernstein originally raised the 2-colorability question for certain classes of infinite set families , erds popularized the finite version of the problem and the hypergraph representation . recall that a 2-uniform hypergraph , a graph , is 2-colorable if and only if it has no odd cycle . in a random graph with edges this occurs with constant probability if and only if ( see for more on the evolution of cycles in random graphs ) . for all , on the other hand , hypergraph 2-colorability is np - complete and determining the 2-colorability threshold for -uniform hypergraphs remains open .analogously to random -sat , we will take the liberty of writing if is 2-colorable for all , and if is non-2-colorable for all .alon and spencer were the first to give bounds on the potential value of .specifically , they observed that , analogously to random -sat , the expected number of 2-colorings of is at most ^n c = ( 1-\epsilon)\ , c_k(n) c = ( 1+\epsilon)\ , c_k(n) ] , is for every and .similarly , for ] , the probability that a fixed pair of truth assignments satisfy the random clause , depends only on the number of variables to which and assign the same value .specifically , if the overlap is , we claim that this probability is our claim follows by inclusion - exclusion and observing that if is not satisfied by , the only way for it to also not be satisfied by is for all variables in to lie in the overlap of and .thus , quantifies the correlation between the events that and are satisfying as a function of their overlap .in particular , observe that truth assignments with overlap are uncorrelated since ^ 2 ] .therefore , if there exists some ]. put differently , unless the dominant contribution to ] .therefore , the derivative of is never 0 at 1/2 , instead becoming 0 at some where the benefit of positive correlation balances with the cost of decreased entropy .( indeed , this is true for all and constant . )[ fig_sat ] , ( top to bottom ) let us now repeat the above analysis but with being the number of nae - satisfying truth assignments of a formula . 
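before turning to the nae count , here is a small numerical check of the computation just described . the correlation function and the overlap entropy are written out explicitly in the comments because the displayed formulas did not survive extraction ; they are the standard expressions , stated here as assumptions , and the printout exhibits the failure described above : the maximizer lies strictly to the right of \(1/2\) , so the second moment is exponentially larger than the square of the first .

....
import numpy as np

# second-moment ratio for the plain count of satisfying assignments.
# standard expressions (assumed, since the displays were stripped):
#   probability that a random k-clause is satisfied by both members of a pair of
#   assignments agreeing on a fraction alpha of the variables:
#       f_S(alpha) = 1 - 2**(1-k) + 2**(-k) * alpha**k
#   contribution of overlap alpha to E[X^2], to exponential order in n:
#       Lambda(alpha) = 2 * f_S(alpha)**r / (alpha**alpha * (1-alpha)**(1-alpha)),
#   while E[X]^2 grows like Lambda(1/2)**n.

def lam_sat(alpha, k, r):
    f = 1.0 - 2.0 ** (1 - k) + 2.0 ** (-k) * alpha ** k
    entropy = alpha ** alpha * (1.0 - alpha) ** (1.0 - alpha)
    return 2.0 * f ** r / entropy

k, r = 5, 10.0                                   # illustrative values, well below 2^k ln 2
alphas = np.linspace(1e-6, 1.0 - 1e-6, 100001)
values = lam_sat(alphas, k, r)
print(alphas[np.argmax(values)])                 # maximizer is strictly above 0.5
print(values.max() / lam_sat(0.5, k, r))         # ratio > 1, so E[X^2] >> E[X]^2
....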
recall that is a nae - satisfying assignment iff under every clause has at least one satisfied literal _ and _ at least one unsatisfied literal .thus , for a -cnf formula with random clauses , proceeding as in , we get = 2^n ( 1 - 2^{1-k})^{m } \enspace , \ ] ] since the probability that nae - satisfies the random clause is for every and .regarding the second moment , proceeding exactly as in , we write ] has only a polynomial number of terms , we now get \leq 2^n \left(\max_{0 \leq \a \leq 1 } \left[\frac{f_n(\a)^r}{\a^\a(1-\a)^{1-\a}}\right ] \right)^n \times { \mathrm { poly}}(n ) \equiv \left(\max_{0 \leq \a \leq 1 } \lambda_n(\a)\right)^n \times { \mathrm { poly}}(n ) \enspace .\ ] ] as before , it is easy to see that ^ 2 = \lambda_n(1/2)^n ] and ^ 2 ] is bounded by a constant , implying that nae - satisfiability holds so , all in all , again we hope that the dominant contribution to ] and so there are no nae - satisfying assignments for such .it is worth noting that for , even though , the second moment is exponentially large ( since near and ) .[ fig_nae ] , ( top to bottom ) , the most interesting case is . here is a local maximum and greater than 1 , but the two global maxima occur at and where the function equals ... as a result , again , the second moment method only gives an exponentially small lower bound on ] is now exponentially large .indeed , the largest value for which the second moment succeeds for is when the two side peaks reach the same height as the peak at ( see the plot on the right in fig . 2 ) .so , the situation can be summarized as follows .by requiring that we only count nae - satisfying truth assignments we make it , roughly , twice as hard to satisfy each clause .this manifests itself in the additional factor of 2 in the middle term of compared to . on the other hand , now , the third term of , capturing joint " behavior , is symmetric around , making itself symmetric around .this enables the second moment method which , indeed , only breaks down when the density gets within an additive constant of the upper bound for the nae -sat threshold .given a truth assignment and an arbitrary cnf formula , let denote the total number of literal _ occurrences _ in satisfied by .so , for example , is maximized by those truth assignments that assign every variable its `` majority '' value . with this definition at hand , a potential explanation of how symmetry reduces the variance is suggested by considering the following trivial refinement of our generative model : first i ) draw i.i.d .uniformly random literals just as before and then ii ) partition the drawn literals randomly into -clauses ( rather than assuming that the first literals form the first clause , the next the second , etc . 
) .in particular , imagine that we have just finished performing the first generative step above and we are about to perform the second .observe that at this point the value of has already been determined for every .moreover , for each fixed the conditional probability of yielding a satisfying assignment corresponds to a balls - in - bins experiment : distribute balls in bins , each with capacity , so that every bin receives at least one ball .it is clear that those truth assignments for which is large at the end of the first step have a big advantage in the second .to get an idea of what typically looks like on we begin by observing that the number of occurrences of a fixed literal , , is distributed as .thus , = o(1) ] as a sum of contributions from pairs of assignments with different overlaps , we have in fact calculated the average of over all formulas , weighted by the number of pairs of satisfying assignments of each one .physicists call this weighted average the `` annealed approximation '' of , and denote it .it is worth pointing out that , while the annealed approximation clearly overemphasizes formulas with more satisfying assignments , monasson and zecchina conjectured in , based on the replica method , that it becomes asymptotically tight as . on a more rigorous footing, it is easy to see that in our case is proportional to .therefore , whenever is peaked at , is tightly peaked around , since vanishes for all other values of as .this is precisely what we prove occurs in random for densities up to . in other words , for densitiesalmost all the way to the random nae -sat threshold , in the annealed approximation , the nae - satisfying assignments are scattered throughout the hypercube _ as if they were independent . _note that even if is concentrated around 1/2 ( rather than just ) this still allows for a typical geometry where there are exponentially many , exponentially large clusters , each centered at a random assignment .indeed , this is precisely the picture suggested by some very recent , ground - breaking work of mezard , parisi , and zecchina , based on non - rigorous techniques of statistical physics .if this is indeed the true picture , establishing it rigorously would require considerations much more refined than the second moment of the number of solutions . more generally , getting a better understanding of the typical geometry and its potential implications for algorithms appears to us a very challenging and very important open problem .given a set of boolean variables , let denote the set of all proper _ -clauses _ on , the set of all disjunctions of literals involving distinct variables .similarly , given a set of vertices , let be the set of all -subsets of . as we saw , a random -cnf formula is formed by selecting uniformly a random -subset of , while a random -uniform hypergraph is formed by selecting uniformly a random -subset of .while and are perhaps the most natural models for generating random -cnf formulas and random -uniform hypergraphs , respectively , there are a number of slight variations of each model .those are largely motivated by amenability to certain calculations . 
to simplify the discussion we focus on models for random formulas in the rest of this subsection .all our comments transfer readily to models for random hypergraphs .for example , it is fairly common to consider the clauses as ordered -tuples ( rather than as -sets ) and/or to allow replacement in sampling the set .clearly , for properties such as satisfiability the issue of ordering is irrelevant .moreover , as long as , essentially the same is true for the issue of replacement . to see that ,observe that the number of repeated clauses is and the subset of distinct clauses is uniformly random .thus , if a monotone decreasing property ( such as satisfiability ) holds with probability for a given when replacement is allowed , it holds with probability for all when replacement is not allowed .the issue of selecting the literals of each clause with replacement ( which might result in some improper " clauses ) is completely analogous .that is , the probability that a variable appears more than once in a given clause is at most and hence there are improper clauses .finally , we note that by standard techniques our results also transfer to the model where every clause appears independently of all others with probability . for that it suffices to set such that .our plan is to consider random -cnf formulas formed by generating i.i.d .random literals , where , and proving that if is the number of nae - satisfying assignments then : [ lem : naepos ] for all , and , there exists some constant such that }< c \times { \ex[x]^2 } \enspace .\ ] ] by lemma [ lem : sec ] and our discussion in section [ sec : gene ] , this implies that is nae - satisfiable since a nae - satisfiable formula is also satisfiable , we have established that is satisfiable for all as in lemma [ lem : naepos ] . to boost this to a high probability result , thus establishing theorem [ thm : ksat ] , we employ the following immediate corollary of theorem [ thm : frie ] .[ cor : boost_nae ] if is satisfiable then is satisfiable for all .friedgut s arguments apply equally well to nae -sat , implying that is nae - satisfiable for as in lemma [ lem : naepos ] .thus , lemma [ lem : naepos ] readily yields below , while comes from noting that the expected number of nae - satisfying assignments is ^n ] and let be a fixed integer .let and let letting , define on ] we get that for all as in lemma [ gleas ] } < c \times { \ex[x]^2 } \enspace .\ ] ]just as for nae -sat , it will be easier to work with the model in which generating a random hypergraph corresponds to generating random vertices , each such vertex chosen uniformly at random with replacement , and letting the first vertices form the first hyperedge etc . in we proved of theorem [ thm : hyp ] by letting the set of all 2-colorings and using a convexity argument to show that ] even if is the number of all 2-colorings . of course , in order for balanced colorings to exist must be even and we will assume that in our calculations below . to get theorem [ thm : hyp ] for all sufficiently large , we observe that if for a given , is 2-colorable then for all , is 2-colorable since deleting a random vertex of removes edges . with this in mind , in the following we let be the number of balanced 2-colorings and assume that is even . 
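before carrying out these calculations , a quick sanity check of the first - moment formula is possible by brute force . the sketch below draws small random formulas in the i.i.d.-literal model described above , counts their nae - satisfying assignments exhaustively , and compares the empirical average with 2^n (1 - 2^(1-k))^m , which is exact in this model ; the instance size and the number of samples are illustrative choices , since exhaustive counting is only feasible for very small n .

```python
import itertools
import random

def count_nae(n, clauses):
    """Exhaustively count NAE-satisfying assignments: every clause must contain
    at least one satisfied and at least one unsatisfied literal."""
    total = 0
    for bits in itertools.product((False, True), repeat=n):
        sigma = dict(zip(range(1, n + 1), bits))
        if all(len({sigma[abs(l)] == (l > 0) for l in c}) == 2 for c in clauses):
            total += 1
    return total

rng = random.Random(2)
n, k, r, samples = 10, 3, 1.5, 200
m = int(r * n)
empirical = 0.0
for _ in range(samples):
    # m clauses of k i.i.d. uniformly random literals, as in the model above
    clauses = [[rng.choice([-1, 1]) * rng.randrange(1, n + 1) for _ in range(k)]
               for _ in range(m)]
    empirical += count_nae(n, clauses)
empirical /= samples
theory = 2 ** n * (1 - 2.0 ** (1 - k)) ** m
print(f"empirical E[X] ~ {empirical:.1f}   formula 2^n (1-2^(1-k))^m = {theory:.1f}")
```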
since the vertices in each hyperedge are chosen uniformly with replacement , thenthe probability that a random hyperedge is bichromatic in a fixed balanced partition is .since there are such partitions and the hyperedges are drawn independently , we have = \binom{n}{n/2 } \,\left(1 - 2^{1-k}\right)^{m } \enspace .\ ] ] to calculate the second moment , as we did for [ nae ] -sat , we write ] from below using stirling s approximation and get }{\ex[x]^2 } < c\times \frac{n^{-1/2 } \binom{n}{n/2 } \,g_c(1/2)^n}{{\binom{n}{n/2}}^2 \,(1 - 2^{1-k})^{2rn } } = c \times \frac{n^{-1/2}\,2^n}{\binom{n}{n/2 } } \to c \times \sqrt{\frac{\pi}{2 } } \enspace .\ ] ] to complete the proof , analogously to [ nae ] -sat , we use the following boosting " corollary of theorem [ thm : frie_hyp ] .[ cor : boost_hyp ] if is 2-colorable then is 2-colorable for all .we need to prove and for all .as is symmetric around , we can restrict to ] into two parts and handle them with two separate lemmata . the first lemma deals with ] . for all , if then and . the second lemma deals with ] . for every and all , if then . combining lemmata [lem : nearhalf ] and [ lem : far ] we see that for every and , if then for all and , establishing lemma [ gleas ] .we prove lemmata [ lem : nearhalf ] and [ lem : far ] below .the reader should keep in mind that we have made no attempt to optimize the value of in lemma [ lem : far ] , aiming instead for proof simplicity . for the lower bounds presented in table [ tab : val ] we computed numerically , for each , the largest value of for which the conclusions of lemma [ gleas ] hold . in each case , the condition was satisfied with room to spare , while establishing for all was greatly simplified by the fact that always has no more than three local extrema in ] , thus establishing .since is positive , to do this it suffices to prove that in this interval .in fact , since at , it will suffice to prove that for ] , so .moreover , for any since and , we thus see that it suffices to have now observe that for any and , since we can set , yielding since , we find that holds as long as where we are thus left to minimize in ] for all .setting to zero gives by `` bootstrapping ''we derive a tightening series of lower bounds on the solution for the l.h.s . of for .note first that we have an easy upper bound , at the same time , if then , implying if we write then becomes by inspection , if the r.h.s .of is greater than the l.h.s . for all , yielding a contradiction .therefore , for all .since for , we see that for , implies finally , observe that implies that as increases the denominator of approaches . to bootstrap, we note that since we have where relies on and .moreover , implies .thus , by using and the fact for all , gives for , for , .thus , by , we have .this , in turn , implies and so , by and , we have for plugging into to bootstrap again , we get that for since for and for , we see that for such plugging into the fact we get . using that for , we get the closely matching upper bound , thus , we see that for , is minimized at an which is within of , where .let be the interval ] .then yields where .in addition to , valid for ] then for every , there exists a constant such that for all .let and , and let we use to bound the terms in and to bound the remaining terms of . since , and since for any , we see that for every say that a twice - differentiable function is _ unimodal _ on an interval ] with , and furthermore . 
since for all and , we can take small enough so that is unimodal on .this implies that is also unimodal on and , for , that is unimodal also .since is unimodal , we evaluate this last integral using lemma [ lem : debruijn ] , the laplace method for asymptotic integrals .[ lem : debruijn] .let be unimodal on ] .then applying lemma [ lem : debruijn ] to with and , we see that , where . lemma [ lem : peak ] has the following obvious corollary , which is useful for a variety of second moment calculations .let and be defined as in lemma [ lem : peak ] .if there exists with and a constant such that for all , then there exists a constant such that this work , lower bounds on the thresholds of random constraint satisfaction problems were largely derived by analyzing very simple heuristics . here , instead , we derive such bounds by applying the second moment method to the number of solutions .in particular , for random nae -sat and random hypergraph 2-colorability we determine the location of the threshold within a small additive constant for all . as a corollary, we establish that the asymptotic order of the random -sat threshold is answering a long - standing open question .since this work first appeared , our methods have been extended and applied to other problems . for random -sat , achlioptas and peres confirmed our suspicion ( see section [ sec : boost ] ) that the main source of correlations in random -sat is the `` populist '' tendency of satisfying assignments towards the majority vote assignment . by considering a carefully constructed random variable which focuses on balanced solutions ,on satisfying assignments that satisfy roughly half of all literal occurrences , they showed , establishing . in , achlioptas , naor and peres extended the approach of balanced solutions to max -sat .let us say that a -cnf formula is -satisfiable if there exists a truth assignment which satisfies at least of all clauses ; note that every -cnf is 0-satisfiable .for ] showing in both and , controlling the variance crucially depends on focusing on an appropriate subset of solutions ( akin to our nae - assignments , but less heavy - handed ) . in , achlioptas and naor applied the naive second moment method to the canonical symmetric constraint satisfaction problem , to the number of -colorings of a random graph .bearing out our belief that the naive approach should work for symmetric problems they obtained asymptotically tight bounds for the -colorability threshold .the difficulty there is that the overlap parameter " is a matrix rather than a single real $ ] . since , this makes the asymptotic analysis dramatically harder and much closer to the realm of statistical mechanics calculations . 1 .does the second moment method give tight lower bounds on the threshold of all constraint satisfaction problem with a permutation symmetry ?does it perform well for problems that are symmetric on average " ?for example , does it perform well for _ regular _ random -sat where every literal appears an equal number of times ?3 . what rigorous connections can be made between the success of the second moment method and the notion of `` replica symmetry '' in statistical physics ? 4 . is there a polynomial - time algorithm that succeeds with uniformly positive probability close to the threshold , or at least for where ? 
we are grateful to paul beame , ehud friedgut , michael molloy , assaf naor , yuval peres , alistair sinclair , and chris umans for reading earlier versions and making many helpful suggestions , and to remi monasson for discussions on the replica method .we would like to thank henry cohn for bringing to our attention .is funded partly by the national science foundation under grant phy-0200909 , and thanks tracy conrad for her support .a. z. broder , a. m. frieze , and e. upfal . on the satisfiability and maximum satisfiability of random -cnf formulas .in _ proc .4th annual symposium on discrete algorithms _ , pages 322330 , 1993 .p. flajolet , d.e .knuth , and b. pittel .the first cycles in an evolving graph ., 1 - 3 ( 1989 ) , 167215 . j. franco and m. paull .probabilistic analysis of the davis putnam procedure for solving the satisfiability problem . , 5(1):7787 , 1983 .a. frieze and n. c. wormald .random -sat : a tight threshold for moderately growing . in _ proceedings of the 5th int. symp . on theory and applications of satisfiability testing _ , pages 16 , 2002 . | many np - complete constraint satisfaction problems appear to undergo a `` phase transition '' from solubility to insolubility when the constraint density passes through a critical threshold . in all such cases it is easy to derive upper bounds on the location of the threshold by showing that above a certain density the first moment ( expectation ) of the number of solutions tends to zero . we show that in the case of certain symmetric constraints , considering the second moment of the number of solutions yields nearly matching lower bounds for the location of the threshold . specifically , we prove that the threshold for both random hypergraph 2-colorability ( property b ) and random not - all - equal -sat is . as a corollary , we establish that the threshold for random -sat is of order , resolving a long - standing open problem . |
there is a wealth of literature available on the highly isotropic nature of cosmic - rays ( crs ) observed at the earth ( see e.g. the references given in thoudam 2007 , hereafter paper i ) .the cr anisotropy amplitude is only in the energy range of ( guillian et al .2007 and references therein ) with the phase ( direction ) mainly found in the outer galaxy , particularly in the second quadrant of the galaxy .the possible explanations for the anisotropy are generally beleived to be the global diffusion leakage of crs from the galaxy , the random nature of the cr sources in space - time and the effect of the local sources . in paperi , the effect of the known local supernova remnants ( snrs ) has been studied in detail by giving more emphasis to the particle release time .the study found that the observed anisotropy data favour the burst - like injection model if particles are released from the sources at an age of .the continuous injection model gives an anisotropy which is too large to explain the observed data .however , paper i considered the cr diffusion zone as an unbounded three - dimensional space which is actually too far from the real geometry of the galaxy .the present work is a continuation of the earlier work , but considers the diffusion region as a flat cylindrical disc having both radial and the vertical boundaries . in the present study ,the propagation of crs is assumed to follow the same diffusion equation given in paper i. the solution will be applied to local snrs and the results will be compared to those obtained in paper i for the burst - like model of particle injection .in the diffusion model , neglecting convection , energy losses and particle losses due to nuclear interactions , the propagation of cr protons in the galaxy is given by the equation where is the differential number density , is the proton kinetic energy , with constant ( positive ) is the diffusion coefficient which is assumed to be spatially uniform in the galaxy and is the proton production rate .the cr propagation region is assumed to be a cylindrical box bounded in both the radial and vertical directions , and our calculation takes into acount the exact location of the sources with respect to the earth .inspite of the fact that the actual spatial distribution of observed snrs extent as far as from the galactic plane ( stupar et al .2007 ) , most of the cr propagation studies assume the sources to be uniformly distributed in a thin disc of half - thickness .such an approximation is valid in the study of global properties of galactic crs since majority of the sources are confined within from the plane .but , in studies like the present one where the effects of nearby discrete sources are discussed , the actual position of the sources should be considered since , for example , for the same source distance we expect to see different cr fluxes at different source heights due to the presence of the vertical halo boundary .our calculation will also assume that the sun is located on the galactic plane since our solar system is only away from the plane ( cohen 1995 ) .the green s function of eq .( 1 ) , i.e. 
the solution for a -function source term can be found so that the general solution can be obtained as since the cr particles are assumed to be liberated at time , the equation for at becomes simply eq .( 3 ) is solved using the proper boundary conditions and the continuity equations .while solving , we consider the origin to be located at from the galactic center .note that later on this point will represent the actual position of the source with respect to the observer .then , the cr density at a point due to a point source [ which is positioned at from the galactic center ] with age , is obtained using eq .( 2 ) as \right\rbrace\nonumber\\ \times \displaystyle\sum_{k=1}^{\infty}\left\lbrace sin\left(\frac{k\pi(r - y_i)}{2r}\right)sin\left(\frac{k\pi(r - y_i-|y|)}{2r}\right)exp\left[-\frac{k^2\pi^2d(t - t_0)}{4r^2}\right]\right\rbrace\nonumber\\ \times \displaystyle\sum_{n=1}^{\infty}\left\lbrace sin\left(\frac{n\pi(h - z_i)}{2h}\right)sin\left(\frac{n\pi(h - z_i-|z|)}{2h}\right)exp\left[-\frac{n^2\pi^2d(t - t_0)}{4h^2}\right]\right\rbrace\end{aligned}\ ] ] where and represent the radial and the vertical boundaries of the galaxy respectively .the solution at is obtained by just replacing with in eq .the proton flux can be calculated using , where is the velocity of light and the source spectrum is taken as in which is the proton mass energy and is the normalization constant .the source spectral index is chosen such that , the observed proton spectral index ( haino et al .2004 ) . for very large radial boundary , the solution of eq .( 1 ) at can be written as \displaystyle\sum_{n=1}^{\infty}\left\lbrace sin\left(\frac{n\pi(h - z_i)}{2h}\right)sin\left(\frac{n\pi(h - z_i-|z|)}{2h}\right)exp\left[-\frac{n^2\pi^2d(t - t_0)}{4h^2}\right]\right\rbrace\ ] ] fig .1 compares the proton flux at the galactic center given by eq .( 4 ) with that of eq . ( 6 ) for an snr - like source located at away from the center with an age .the results of eq .( 4 ) at are shown by the thin solid , dashed and dotted lines respectively .the thick solid lines represent the unbounded solution given by eq .( 6 ) ( i.e. the solution for ). the calculations are done at and at assuming , represented by the left- and right - hand figures respectively .the diffusion coefficient is taken as for , where is in gev ( engelmann et al .1990 ) and the injected protons are assumed to carry percent of the total explosion energy of .the figures clearly show that , for sources near to the observer , the solution of eq .( 4 ) can be very well approximated by the much simpler unbounded solution for any value of if .for example , the results at exactly coincide with the lines .therefore , considering the fact that our solar system is positioned at a distance of from the galactic center and that the galactic radius extends as far as , the effect of the radial boundary on the observed crs should be negligible at least for those sources that can give appreciable density fluctuations at the earth , i.e. for those sources located within from the earth ( see thoudam 2006a ) . 
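the relative importance of the vertical boundary can also be illustrated with a reduced one - dimensional version of this comparison . the sketch below evaluates the slab green s function between absorbing boundaries at z = +/-H ( the analogue of the z - series in eq . ( 4 ) ) against the unbounded gaussian solution ( the analogue of eq . ( 6 ) ) ; the diffusion coefficient , source height and times used are illustrative values of our own choosing , not the parameters adopted in the paper . the ratio stays very close to unity whenever the diffusion length is small compared with H , which is the regime relevant for the nearby sources discussed below .

```python
import numpy as np

def slab_green(z, z_src, t, D, H, nmax=2000):
    """1-D diffusion Green's function between absorbing boundaries at z = +/-H,
    i.e. the eigenfunction series analogous to the z-sum in eq. (4)."""
    n = np.arange(1, nmax + 1)
    phi = lambda x: np.sin(n * np.pi * (x + H) / (2.0 * H))
    return float(np.sum(phi(z_src) * phi(z)
                        * np.exp(-D * (n * np.pi / (2.0 * H)) ** 2 * t)) / H)

def free_green(z, z_src, t, D):
    """Unbounded solution (H -> infinity), the 1-D analogue of eq. (6)."""
    return np.exp(-(z - z_src) ** 2 / (4.0 * D * t)) / np.sqrt(4.0 * np.pi * D * t)

D = 3e-8                 # kpc^2/yr; illustrative value, not the paper's D(E)
z_src, z_obs = 0.1, 0.0  # source height above the plane and observer, in kpc
for t in (1e5, 1e6, 1e7):
    free = free_green(z_obs, z_src, t, D)
    ratios = "  ".join(f"H={H:4.1f} kpc: {slab_green(z_obs, z_src, t, D, H) / free:.4f}"
                       for H in (2.0, 5.0, 10.0))
    print(f"t = {t:.0e} yr  (diffusion length {np.sqrt(4 * D * t):.2f} kpc)   "
          f"bounded/unbounded:  {ratios}")
```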
in the following sections where we study the effect of nearby snrs on the observed crs , we will therefore adopt the simpler eq . ( 6 ) instead of the complicated eq . ( 4 ) .

[ figure 1 : cr proton flux at the galactic center due to an snr - like source with age located at away from the center . the thin solid , dashed and dotted lines are the fluxes calculated using eq . ( 4 ) for , respectively , while the thick solid line represents the flux calculated for the boundaryless case ( ) using eq . ( 6 ) . the calculation assumes and . _ left _ : for . _ right _ : . from the figures , it can be seen that for any value of , eq . ( 4 ) can be well approximated by the much simpler boundaryless solution if . ]

knowing the cr density at a point away from a source of age , the single source anisotropy amplitude in the diffusion approximation can be calculated using ( mao shen 1972 ) where is given by eq . ( 6 ) for a point source located at from the earth . the total anisotropy parameter at the earth due to a number of nearby discrete sources in the presence of an isotropic cr background is given by ( paper i ) where the summation is over the nearby discrete sources . denotes the direction of the source giving a flux and denotes the direction of maximum intensity . represents the total observed flux of cr protons above ( haino et al . ) . the phase of the anisotropy is taken as the direction of maximum intensity . therefore , the anisotropy as well as the phase at an energy depends on the age and distance of the nearby sources , and may be determined by different sources at different energy intervals . however , in the case of a single source dominance , the total anisotropy is given by , where denotes the source giving the maximum flux at the earth .

[ figure 2 : cr anisotropy at the earth due to nearby known snrs assuming the burst - like injection model . panels ( a ) , ( b ) , ( c ) and ( d ) are the results obtained for , respectively . thin solid lines represent the results of eq . ( 8 ) for , the thin dashed lines for , the dotted lines for and the dot - dashed lines for ; the thick solid lines are the results of paper i , i.e. for . panels ( a ) , ( b ) and ( c ) show that different sources determine the anisotropy at different energy intervals ; these are marked by the source names along the lines . in panel ( d ) , the monogem snr solely determines the anisotropy in the whole energy range . the thick dashed line is the best - fit result , in the case of a single source dominance , calculated assuming infinite boundaries . data points are taken from the compilation of various results given in erlykin wolfendale 2006 . ]

in this section , we will try to investigate whether the presence of a halo boundary can affect the anisotropy at the earth due to nearby sources . for that , we consider the known snrs located within from the earth as listed in table 1 of paper i . the total anisotropy due to these snrs is calculated using eq . ( 8 ) for different values at different s . fig . 2 shows the comparison of the anisotropies calculated in the present work with those obtained in paper i for the burst - like particle injection model . the data points are taken from the compilation of various experiments given in erlykin wolfendale ( ew ) 2006 . figs 2(a ) , ( b ) , ( c ) and ( d ) are the results obtained for , respectively . the thin solid lines represent the results of eq . ( 8 ) for , the dashed lines are for , the dotted lines are for and the dot - dashed lines are for . the thick solid lines are the results of paper i which were obtained assuming [ eq . 11 of paper i ] .
in figs2(a)(c ) , different sources determine the anisotropy at different energy ranges .these are marked by the source names along the lines .it can be seen that the results for show a noticeable deviation from the lines , while those for show a very slight deviation .the results for other higher - values almost overlap with the lines and are not easily visible in the figures .this shows that , for the particle release time of , the halo height effect on the local snr contribution to the observed cr anisotropy is almost negligible if .however , the situation is somewhat different in fig .2(d ) where the calculations are performed at . note that this value of particle injection time is that at which the model calculated anisotropy values are close to the observed data ( see the results of paper i ) .the anisotropy here is determined solely by the monogem snr in the whole energy range considered here , and only those results for show considerable variation from the line .the results for show a negligible deviation . combining all the results of fig .2 , we can finally conclude that the effect of the halo boundary of our galaxy on the local snr contribution to the observed cr anisotropy is negligible as long as the boundary is greater than . in the next section, we will combine this result along with the halo heights obtained by several authors to discuss the importance of in the anisotropy study due to local sources .the effect of the nearby cr sources is considered as one of the important effects that can give rise to the observed cr anisotropy at the earth .however , the calculation of cr fluxes from any type of source in the galaxy essentially requires the use of the proper geometry of the galaxy as well as the actual position of the source with respect to the observer .since our galaxy has a cylindrical geometry with the radius much larger than the height , the radial boundary is found to have a negligible effect on the cr density and hence the geometry can be approximated by an infinite radius with a finite vertical height .furthermore , this study has found that the effect of the vertical halo boundary on the local snr contribution to the cr anisotropy is negligible if .2 shows the effect of the halo height on the cr anisotropy due to nearby known sources for different particle injection times . among the 13 snrs considered , only monogem , vela , g299.2 - 2.9 , sn185 and cygnus loopare found to determine the anisotropy at different energy intervals . 
also , all of them except sn185 ( with ) have distances .this shows that only the nearest sources mainly determine the anisotropy as expected , and hence this results in a negligible halo height effect for .it is also worth mentioning that the vertical heights of the dominant sources above the galactic plane are found to be less than which is much less than the halo heights considered here .the actual value of the halo height of our galaxy is not exactly known .its value is generally obtained along with other propagation parameters using the observed cr data like the secondary ratios , cr density distribution etc .but , the values obtained from the same experimental data are different for different cr propagation models .webber , lee gupta ( 1992 ) had obtained a value of using diffusion - convection model .lukasiak et al .( 1994 ) had obtained using the webber et al .( 1992 ) model without convection .webber soutoul ( 1998 ) obtained and using the diffusion and monte carlo models respectively .other results like those of freedman et al .( 1980 ) and ptuskin soutoul ( 1998 ) obtained and respectively . a completely numerical approach using more realistic physical conditions of the galaxy determined a value of for the diffusion - convection model and for the re - acceleration model ( strong moskalenko 1998 ) .these results are found to be consistent with the observations of galactic radio emission structure at 408 mhz which indicate the presence of a thick radio disk with full equivalent width of , and in the galactic radial range of , and respectively ( beuermann et al .1985 ) , but such a wide range of values makes the galactic halo height a very uncertain parameter in cr propagation studies .however , since most of the values obtained are found to have , the conclusion given in the previous section suggests that the study of local crs due to nearby snrs can be carried out without having much information on .this is because the effect of the nearest sources dominates over the influence of the other nearby sources and the cr fluxes from these sources are almost independent of the halo boundary for as discussed before .hence , the study of the effect of local sources on the cr anisotropy at the earth can be done using the much simpler three - dimensional unbounded solution .for the infinite boundary case , if a single source dominates the anisotropy in the whole energy range as in fig .2(d ) , the total anisotropy follows an energy dependence of the form in the high energy regime ( paper i ) , which for goes as .such a decrease with energy is in fact observed in the high energy anisotropy data somewhere above upto around .moreover , the increase in anisotropy from in the energy range can also be possibly explained by a proper choice of or rather for the single dominant source .we try to estimate the physical parameters of such a source that best fit the data .the best - fitting parameters are found to be and , and the best - fitting line is shown as the thick dashed line in fig .thus , for the source should have an age of . 
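this estimate can be reproduced schematically from the unbounded burst solution itself . for the spherically symmetric solution of eq . ( 6 ) type , the diffusion - approximation anisotropy of eq . ( 7 ) reduces to 3r/(2c(t - t0 ) ) , which is independent of energy , so the energy dependence of the observed anisotropy comes entirely from the flux weighting of eq . ( 8 ) . the sketch below evaluates this weighting for a single dominant source ; every number in it ( the diffusion coefficient and its energy dependence , the source spectral index , the distance and effective age , and the assumed 0.1 per cent flux share at 100 gev ) is an illustrative assumption and not the best - fitting combination quoted above .

```python
import numpy as np

c_light = 3.07e-4          # speed of light in kpc/yr
a, gamma = 0.6, 2.16       # D ~ E^a and source spectrum ~ E^-gamma (assumed values)
D0 = 3e-8                  # diffusion coefficient at 1 GeV in kpc^2/yr (illustrative)
r_src, t_diff = 0.3, 1e5   # source distance (kpc) and time since particle release (yr)

def source_density(E):
    """Shape of the unbounded burst-injection solution (arbitrary normalisation)."""
    D = D0 * E ** a
    return E ** (-gamma) * np.exp(-r_src ** 2 / (4 * D * t_diff)) / (4 * np.pi * D * t_diff) ** 1.5

E = np.logspace(1, 6, 6)                                  # 10 GeV .. 1 PeV
# background normalised so the source supplies 0.1 per cent of the flux at 100 GeV (assumption)
background = source_density(100.0) / 1e-3 * (E / 100.0) ** (-2.76)
delta_src = 3 * r_src / (2 * c_light * t_diff)            # eq. (7) for the spherical burst solution
delta_obs = delta_src * source_density(E) / (source_density(E) + background)
for e, d in zip(E, delta_obs):                            # single-source-dominance limit of eq. (8)
    print(f"E = {e:10.3e} GeV   anisotropy ~ {d:.3e}")
```

with such a weighting , once the exponential factor of the burst solution saturates the anisotropy falls roughly as E^(-a/2 ) , which is the high - energy behaviour quoted above .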
however , it is possible to obtain a number of combinations which equally fit the data , all of them giving the same value of and .therefore , the present study only gives an estimate of the distance to the single dominant source ; it does not give any precise information on the age and the particle release time of the source .it should be noted that it is not the individual values that determine the contribution of the source , but the propagation time of the particles after their release from the source .we can determine the best - fitting - value only if we know , but the value of is not exactly known .it may even be that is an energy - dependent parameter , i.e. particles with different energies emitted at different times .studies based on diffusive shock acceleration in snrs have shown that the highest energy particles start leaving the source region already at the beginning of the sedov phase ( berezhko et al .1996 ) , but the major fraction of accelerated crs remain confined for almost around for an interstellar medium ( ism ) hydrogen atom density of .this implies that for the local ism which has ( see e.g. thoudam 2006b and references therein ) , if a single source determines the whole anisotropy , the source should have a characteristic age of .unfortunately , there is no nearby known snr with such an age located at .however , it is quite possible that the single dominant source may be an _ undetected _ old snr .in fact , studies assuming adiabatic phase in snr evolution have shown that the surface brightness of an snr of age yrs lies below the detection limit of radio telescopes ( leahy xinji 1989 ) .the present result is further supported by the fact that almost all the nearby sources are quite young with estimated ages less than ( the generally accepted particle release time ) , and they might not have released the crs into the local ism . in addition , the possiblity that some of the observed features of crs may be due to _ undetected _ nearby sources can not be simply ignored .the single - source explanation of the observed cr properties can also be found in some earlier works ( e.g. ew 2000 and references therein ; ew 2006 , etc . ) , but in a somewhat different context .ew 2000 claimed that the knee in the cr spectrum at can be attributed to the presence of a single recent supernova ( as yet unidentified ) in the local region . on the other hand ,ew 2006 tried to explain the rise in the anisotropy amplitude as well as the change in its phase near the knee using a single source exploded in the direction from the sun downward of the main cr flux , which are predominantly coming from the inner galaxy .the latter study considered the source parameters as similar to those of the monogem snr .although the single source idea has not been readily accepted by the cr community , at the same time there is no reason why it should be just neglected .the present study even points out one more observed property of crs that can possibly be explained by the single source model .99 berezhko , e. g. , yelshin v.k . ksenofontov l.t .1996 , j. exp .phys . , 82 , 1 beuermann , k. , kanbach , g. , berkhuijsen , e. m. 1985 , a , 153 , 17 cohen , m. , 1995 , apj , 444 , 874 engelmann , j. j. , ferrando , p. , soutoul , a. , goret , p. , juliusson , e. 1990 , a , 233 , 96 erlykin , a. d. , wolfendale , a. w. 2000 , a , 356 , l63 erlykin , a. d. , wolfendale , a. w. 2006 , astropart .phys . , 25 , 183 freedman , i. , kearsey , s. , osborne , j. l. , giler , m. 1980 , a , 82 , 110 guillian , g. 
, et al .2007 , phys .d , 75 , 062003 haino , s. , et al .2004 , phys .b594 , 35 leahy , d. a. , xinji , w. 1989 , pasp , 101 , 607 lukasiak , a. , ferrando , p. , mcdonald f. b. , webber , w. r. 1994 , apj , 423 , 426 mao , c. y. , shen , c. s. 1972 , chinese j. phys ., 10 , 16 ptuskin , v. s. , soutoul , a. 1998 , a , 337 , 859 strong , a. w. , moskalenko , i. v. 1998 , apj , 509 212 stupar , m. , filipovi , m. d. , parker , q. a. , white , g. l. , pannuti , t. g. , jones , p. a. 2007 , ap , 307 , 423 thoudam , s. 2006a , mnras , 370 , 263 thoudam , s. 2006b , astropart .phys . , 25 , 328 thoudam , s. 2007 , mnras , 378 , 48 ( paper i ) webber , w. r. , lee , m. a. , gupta , m. 1992 , apj , 390 , 96 webber , w. r. , soutoul , a. 1998 , apj , 506 , 335 | in an earlier paper , the effect of the nearby known supernova remnants ( snrs ) on the local cosmic - rays ( crs ) was studied , considering different possible forms of the particle injection time . the present work is a continuation of the previous work , but assumes a more realistic model of cr propagation in the galaxy . the previous work assumed an unbounded three - dimensional diffusion region , whereas the present one considers a flat cylindrical disc bounded in both the radial and vertical directions . the study has found that the effect of the vertical halo boundary on the local snr contribution to the observed cr anisotropy is negligible as long as . considering the values of the halo height obtained by different authors , the present work suggests that the study of the effect of local sources on the cr anisotropy can be carried out without having much information on and hence , using the much simpler three - dimentional unbounded solution . finally , the present work discusses about the possibility of explaining the observed anisotropy below the knee by a single dominant source with properly chosen source parameters , and claims that the source may be an _ undetected _ old snr with a characteristic age of located at a distance of from the sun . [ firstpage ] cosmic rays remnants |
models for growing cells generally consist of substrates( ) and active proteins that catalyze their own synthesis and that of other components .for example , in the models developed by scott _et al._ and maitra _et al._ , the active proteins correspond to ribosomes .this class of models involving catalytic proteins can be used to accurately describe the exponential growth of a cell under a sufficient supply of substrates ; however , once the degradation rate of the active protein exceeds its rate of synthesis under a limited substrate supply , the cell s volume will shrink , leading to cell death .hence , a cell population either grows exponentially or dies out , and in this cellular state it is not possible to maintain the population without growth .+ to model a state with such suppressed growth , we consider two more chemical species , inhibitors( ) and active protein - inhibitor complexes( ) , in addition to the substrates( ) and active proteins( ) that are commonly adopted in models of cell growth . a schematic representation of the present model is shown in fig.[fig : fig1].(a ) . here, we focus on two classes of proteins that are essential to the description of cellular growth : an active protein and inhibitor .the active proteins are those that catalyze their own growth such as ribosomes , and can include metabolic enzymes , transporters , and growth - facilitating factors .inhibitory proteins form a complex with active proteins , thereby suppressing their catalytic synthesis function. they can be inhibitory factors such as yfia or hpf in _other candidates for such inhibitors are misfolded or mistranslated proteins that are produced erroneously during the replication of active proteins , which inhibit the catalytic activity of active proteins by trapping them into the aggregates of misfolded proteins . our model , then ,is given by where and represent the synthesis rate of the active protein and inhibitor , to be presented below . denotes the reaction of complex formation , given by , represents the external concentration of substrate , denotes the spontaneous degradation rate of macromolecules , and represents the specific growth rate of the cell , given by .+ in this model , the cell takes up substrates from the external environment from which active proteins and inhibitors are synthesized .these syntheses , and , as well as the uptake of substrates , take place with the aid of catalysis by the active proteins . then , by assuming that the synthesized components are used for growth in a sufficiently rapid period , the growth rate is set to be proportional to the rate of active protein synthesis .next , the catalytic activity of the active protein is inactivated due to the formation of an active protein - inhibitor complex , which , for example , corresponds to the interaction between a ribosome and yfia and hpf .all chemical components are diluted by the volume growth of a cell , although they are spontaneously degraded at a much lower rate .the complex has higher stability than active protein and inhibitor , alone ( is smaller than and ) .+ it has been well established that inhibitory factors are actively synthesized under a resource - limited condition . 
moreover , interpreting molecules as incorrect polymers , they are expected to increase with the decrease of supply substrates , since this situation will limit the proofreading mechanism to eliminate them .thus , with this interpretation of the inhibitors as incorrect polymers and also in consistency with increase in inhibitory factors under resource - limited condition , it naturally follows that the ratio of the synthesis of active protein to inhibitors is an increasing function of substrate concentration , i.e. , . in the model, we assume that this ratio increases with the concentration and becomes saturated at higher concentrations , as in michaelis - menenten s form , and choose and , for example .( see also the _ supplementary information _ for the derivation of such form in the case of a proofreading mechanism ) .+ note that by summing up and , we obtain if and are zero ( or negligible ) .it means that if the cell once reaches any steady state , the relationship is kept satisfied as long as and are not zero .we use the relationship and eliminate by substituting for analysis below .the steady state of the present model exhibits three distinct phases as a function of the external substrate concentration ( fig.[fig : fig1].(b ) ) , as computed by its steady - state solution .the three phases are distinguished by both the steady growth rate and the concentration of active protein , which are termed as the active , inactive , and death phases , as shown in the figure , whereas the growth rate shows a steep jump at the boundaries of the phases .the phases are characterized as follows . *( i ) * in the active phase , the highest growth rate is achieved , where abundant active proteins work freely as catalysts . *( ii ) * in the inactive phase , growth rate is not zero but is drastically reduced with orders of magnitude compared with the active phase . here ,almost all active proteins are arrested by complex formation with the inhibitor , and their catalytic activity is deactivated . *( iii ) * at the death phase , a cell can not grow , and all of the active proteins , inhibitors , and complexes go to zero . in this case, the cell goes beyond the so - called `` point of no return '' and can never grow again , regardless of the amount of increase in , since the catalysts are absent in any form .( as will be shown below , the active and inactive phases correspond to the classic log and stationary phases , but to emphasize the single - cell growth mode , we adopt these former terms for now ) . + the transition from the active to inactive phase is caused by the interaction between the active protein and inhibitor . in the substrate - poor condition, the amount of inhibitor greatly exceeds the total amount of catalytic proteins ( ) , and any free active protein remaining vanishes . below the transition point from the inactive to death phase , the spontaneous degradation rate surpasses the synthesis rate , at which point all of the components decrease . this transition pointis simply determined by the balance condition .hence , if is set to zero , the inactive - death transition does not occur .+ we now consider the time series of biomass ( the total amount of macromolecules ) that is almost proportional to the total cell number , under a condition with a given finite resource , for comparison with experimental data in the batch culture condition ( fig.[fig : fig1].(c and d ) ) . 
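before turning to the batch - culture time series , a minimal numerical illustration of this phase structure can be obtained by integrating a concrete instance of the model . in the sketch below the partitioning of the synthesis flux is taken as phi(S ) = S/(K+S ) , so that the ratio of active - protein to inhibitor synthesis grows with the internal substrate as assumed above , the growth rate is set proportional to the active - protein synthesis flux , and all rate constants are illustrative assumptions rather than the paper s calibrated values . scanning the long - time state over the external substrate level is then one way of mapping out the active , inactive and death phases of fig . 1(b ) , although how sharp and how wide each regime is depends on these parameter choices .

```python
import numpy as np
from scipy.integrate import solve_ivp

# illustrative rate constants -- assumptions, not the paper's calibrated values
k_up, k_syn, K = 1.0, 1.0, 1.0     # substrate uptake, synthesis, half-saturation of phi(S)
k_f, k_b = 100.0, 0.1              # complex formation / dissociation
d, d_C = 1e-4, 1e-6                # degradation rates (complex assumed more stable)

def rhs(t, y, s_ext):
    S, P, I, C = np.maximum(y, 0.0)
    phi = S / (K + S)              # fraction of the synthesis flux going to active protein
    syn = k_syn * P * S            # total catalysed synthesis flux
    mu = phi * syn                 # growth rate ~ rate of active-protein synthesis
    dS = k_up * P * s_ext - syn - mu * S
    dP = phi * syn - k_f * P * I + k_b * C - (mu + d) * P
    dI = (1 - phi) * syn - k_f * P * I + k_b * C - (mu + d) * I
    dC = k_f * P * I - k_b * C - (mu + d_C) * C
    return [dS, dP, dI, dC]

y0 = [0.1, 0.1, 0.0, 0.0]
print("  s_ext      growth rate      free P       complex C")
for s_ext in (10.0, 1.0, 0.1, 0.03, 0.01, 0.003, 0.001):
    sol = solve_ivp(rhs, (0.0, 5e4), y0, args=(s_ext,), method="LSODA",
                    rtol=1e-8, atol=1e-10)
    S, P, I, C = sol.y[:, -1]
    mu = (S / (K + S)) * k_syn * P * S
    print(f"{s_ext:8.3f}   {mu:13.3e} {P:12.3e} {C:12.3e}")
```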
in the numerical simulation , the condition with a given, finite amount of substrates corresponding to the increase of cell number is implemented by introducing the dynamics of external substrate concentration to the original model . here, is decreased as the substrates are replaced by the biomass , resulting in cell growth ( details are given in the _ supplementary information _ ) . at the beginning of the simulation , the amount of biomass ( i.e. , cell number ) stays almost constant , and then gradually starts increases exponentially . after the phase of exponential growth , substrates are consumed , and the biomass increase stops . then , over a long time span , the biomass stays at a nearly constant value , until it begins to slowly decrease .finally , the degradation dominates and the biomass ( cell number ) falls off dramatically .+ these successive transitions in the growth of biomass ( fig.[fig : fig1](c and d ) ) from initially inactive to the active , inactive , and death phases corresponds to those among the lag , log , stationary , and death phases . as the initial condition was chosen as the inactive phase under a condition of rich substrate availability , most of the active proteins are arrested in a complex at this point .therefore , at the initial stage , dissociation of the complex into active proteins and inhibitors progress , and biomass is barely synthesized , even though rich substrate is available .after the cell escapes this waiting mode , catalytic reactions from active proteins progress , leading to an exponential increase in biomass .subsequently , the external substrate is depleted , and cells experience another transition from the active to inactive phase . at this point , the biomass decreases only slowly owing to the remaining substrate and stability of the active protein - inhibitor complex . however , after the substrate is depleted and the active protein and inhibitor are dissociated from the complex , the biomass decreases at a much faster rate , ultimately entering the death phase .+ in the active phase with exponential growth , the present model exhibits classical growth laws , namely * ( i ) * monod s growth law , * ( ii ) * pirt s law , and * ( iii ) * growth rate vs. ribosome fraction ( see _ supplementary information _ fig .in this section , we uncover the quantitative relationships among the basic quantities characterizing the transition between the active and inactive phases ; i.e. , lag time , starvation time , and growth rates .we demonstrate that the theoretical predictions agree well with experimentally observed relationships .+ first , we compute the dependency of lag time on starvation time and the maximum specific growth rate . up to time , the model cell is set in a substrate - rich condition , and stays at a steady state with exponential growth .then , the external substrate is depleted to instantaneously .the cell is exposed to this starvation condition up to starvation time .subsequently , the substrate concentration instantaneously returns to .after the substrate level is recovered , it takes a certain length of time for a cell to return to its original growth rate ( fig .s2 ) , which is the lag time ( following the standard definition of introduced by penfold and pirt ) . 
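the starvation - and - recovery protocol just described is straightforward to automate on any concrete instantiation of the model . the sketch below uses the same illustrative rate forms and constants as above , withdraws the external substrate to an assumed poor level for a duration T , restores it , and records a crude operational lag time , namely the time for the growth rate to recover half of its pre - starvation value rather than the penfold - pirt construction used in the paper . because the rate forms and parameters are our assumptions , the measured scaling of the lag with T need not reproduce the square - root law derived below , but the protocol mirrors the one described above .

```python
import numpy as np
from scipy.integrate import solve_ivp

# same illustrative model and constants as in the sketch above (all values assumed)
k_up, k_syn, K, k_f, k_b, d, d_C = 1.0, 1.0, 1.0, 100.0, 0.1, 1e-4, 1e-6

def rhs(t, y, s_ext):
    S, P, I, C = np.maximum(y, 0.0)
    phi, syn = S / (K + S), k_syn * P * S
    mu = phi * syn
    return [k_up * P * s_ext - syn - mu * S,
            phi * syn - k_f * P * I + k_b * C - (mu + d) * P,
            (1 - phi) * syn - k_f * P * I + k_b * C - (mu + d) * I,
            k_f * P * I - k_b * C - (mu + d_C) * C]

def growth_rate(y):
    S, P = max(y[0], 0.0), max(y[1], 0.0)
    return (S / (K + S)) * k_syn * P * S

def run(y0, s_ext, T):
    return solve_ivp(rhs, (0.0, T), y0, args=(s_ext,), method="LSODA",
                     rtol=1e-8, atol=1e-10, dense_output=True)

s_rich, s_poor = 10.0, 0.01                      # assumed rich and poor substrate levels
steady = run([0.1, 0.1, 0.0, 0.0], s_rich, 5e4).y[:, -1]
mu_max = growth_rate(steady)
for T_starve in (1e2, 4e2, 1.6e3, 6.4e3):
    starved = run(steady, s_poor, T_starve).y[:, -1]        # starvation period
    recovery = run(starved, s_rich, 1e5)                    # substrate restored
    t = np.linspace(0.0, 1e5, 20001)
    mu_t = np.array([growth_rate(recovery.sol(ti)) for ti in t])
    hit = np.nonzero(mu_t > 0.5 * mu_max)[0]                # crude lag-time definition
    lag = t[hit[0]] if hit.size else float("inf")
    print(f"T_starve = {T_starve:8.0f}   lag = {lag:10.1f}   "
          f"lag/sqrt(T) = {lag / np.sqrt(T_starve):8.2f}")
```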
given this ,the dependency of on the starvation time and can be computed .+ we found that increases proportionately to , as shown in fig.[fig : fig2](a ) .the experimentally observed relationship between and is also plotted for comparison in fig.[fig : fig2](b ) , using reported data , which also exhibited dependency .although this empirical dependency has been previously discussed , its theoretical origin has thus far not been uncovered .+ indeed , the origin of is explained by noting the anomalous relaxation of inhibitor concentration , which is caused by the interaction between the active protein and inhibitor .the sketch of this explanation is given below , and the analytic derivation is given in the _supplementary information_. + first , consider the time course of chemical concentrations during starvation . in this condition ,cell growth is inhibited by two factors : substrate depletion and deactivation of catalytic activity of the active protein . following the decrease in uptake due to depletion of , the concentration of decreases , resulting in a change in the balance between and .( hereafter we adopt the notation such that , , and also denote the concentrations of corresponding chemicals ) . under the condition , the ratio of the synthesis of to increases . with an increase in , decreases due to the formation of a complex with . over time , more gets arrested , and the level of inactivation increases with the duration of starvation .+ in this scenario , the increase of concentration is slow . considering that the complex formation reaction rapidly approaches its equilibrium , i.e. , , is roughly proportional to the inverse of ( recall ) , if is sufficiently large .accordingly , the synthesis rate of , given by , is inversely proportional to its amount , i.e. , and thus hence , the inhibitor accumulation progresses with .( note that due to depletion , the dilution effect is negligible . ) + next , we consider the time course for the resurrection after recovery of the external substrate . during resurrection , is increased while is reduced . since is strongly deactivated after starvation , the dilution effect from cell growthis the only factor contributing to the reduction of .noting and , the dilution effect is given by at the early stage of resurrection .thus , the resurrection time course of is determined by the dynamics leading to the linear decrease of , i.e. , .+ let us briefly recapitulate the argument presented so far .the accumulated amount of component is proportional to , while during resurrection , the dilution of progresses linearly with time , which is required for the dissociation of and , leading to growth recovery . by combining these two estimates ,the lag time satisfies .second , the relationship is obtained by numerical simulation of our model , in line with experimental results ( fig.[fig : fig2](c and d ) ) .+ this relationship is also explained by the characteristics of the resurrection time course .the dilution rate of over time is given by , as mentioned above ; thus , at the early stage , .in the substrate - rich condition , the substrate abundances are assumed to be saturated , so that holds because is satisfied .thus , it follows that .+ we also obtained an analytic estimation of the lag time as where ( see _supplementary information _ for conditions and calculation ) . 
in this form, the two relationships and are integrated .+ the present theory also explains other experimental observations .first , in predictive microbiology , the lag time to return the log phase from the stationary phase is regarded as the time span required to consume the accumulated during the stationary phase with the rate .thus , the amount of is defined as the product of and . in our results ,accumulated inhibitor needs to be consumed during the lag time , so that is interpreted as , whose time course agrees well with that of obtained experimentally ( see supplementary figure fig.s4 ) .second , the tradeoff between the growth rate and tolerance for the starvation , experimentally observed is also derived from our theory ( see _ supplementary information _ ) .so far , we have considered the dependence of lag time on the starvation time . however, in addition to the starvation period , the starvation process itself , i.e. , the speed required to reduce the external substrate , has an influence on the lag time .+ for this investigation , instead of the instantaneous depletion of the external substrate , its concentration is instead gradually decreased over time in a linear manner over the span , in contrast to the previous simulation procedure , which corresponds to .then , the cell is placed under the substrate - poor condition for the duration , before the substrate is recovered , and the lag time is computed .+ the dependence of the lag time on and is shown in fig.[fig : fig3](a ) .while monotonically increases against for a given , it shows drastic dependence on .if the external concentration of the substrate is reduced quickly ( i.e. , small ) , the lag time is rather small .however , if the decrease in the external substrate concentration is slow ( i.e. , large ) , the lag time is much longer .in addition , this transition from a short to long lag time is quite steep .+ this transition against the timescale of the environmental change manifests itself in the time course of chemical concentrations ( see fig.[fig : fig3](b ) ) . with rapid environmental change, decreases first , whereas with slow environmental change , decreases first .in addition , the value of is quite different between the two cases , indicating that the speed of environmental change affects the degree of inhibition , i.e. , the extent to which active proteins are arrested by inhibitors to form a complex . + now , we provide an intuitive explanation for two distinct inhibition processes .when starts to decrease , a cell is in the active phase in which is abundant .if the environment changes sufficiently quickly , there is not enough time to synthesize the chemicals or , because of the lack of , and the concentrations of chemical species are frozen near the initial state with abundant . however ,if the environmental change is slower than the rate of the chemical reaction , the concentration of the inhibitor ( active protein ) increases ( decreases ) , respectively .hence , remains rich in the case of fast environmental change , whereas is rich for a slow environmental change . 
in the former case , when the substrate is increased again , the active proteins are ready to work , so that the lag time is short , which can be interpreted as a kind of `` freeze dry '' process .note that the difference in chemical concentration caused by different is maintained for log time because in slow ( fast ) environmental change , chemical reactions are almost halted due to the decrease of ( ) , respectively .thus , the difference of lag time remains even for large as fig.[fig : fig3](a ) ( the mechanism of this slow process is discussed in _ supplementary information_. ) .+ this lag time difference can also be explained from the perspective of dynamical systems . for a given , the temporal evolution of and given by the flow in the state space of .examples of the flow are given in fig.[fig : fig4 ] .the flow depicts , which determines the temporal evolution .the flow is characterized by and nullclines , which are given by the curves satisfying and , as plotted in fig.[fig : fig4 ]. + note that at a nullcline , the temporal change of one state variable ( either or ) vanishes .thus , if two nullclines approach each other , then the time evolution of both the concentrations and are slowed down , and the point where two nullclines intersect corresponds to the steady state . as shown in fig .[ fig : fig4 ] , nullclines come close together under the substrate - depleting condition , which gives a dynamical systems account of the slow process in the inactive phase discussed so far .+ for a fast change ( i.e. , small , fig.[fig : fig4](a ) ) , is quickly reduced at the point where the two nullclines come close together .then , the dynamics of follow the flow as shown in the figure .first , decreases to reach the -nullcline .then , the state changes along the almost coalesced nullclines when the dynamics are slowed down .thus , it takes a long time to decrease the concentration , so that at resumption of the substrate , sufficient can be utilized .+ in contrast , for a slow change ( i.e. , large ) , the flow in gradually changes as shown in fig.[fig : fig4](b - d ) .initially , the state stays at the substrate - rich steady state ( fig.[fig : fig4](b ) ) . due to the change in substrate concentration , two nullclines moderatelymove and interchange their vertical locations .since the movement of nullclines is slow , the decrease in progresses before the two nullclines come close together ( i.e. , before the process is slowed down ) .the temporal evolution of and is slowed down only after this decrease in (fig.[fig : fig4](c and d ) ) .hence , the difference between the cases with small and large is determined by whether the nullclines almost coalesce before or after the decrease , respectively .so far , we have considered the average change of chemical concentrations using the rate equation of chemical reactions .however , the biochemical reaction is inherently stochastic , and thus the lag time is accordingly distributed .this distribution was computed by carrying out a stochastic simulation of chemical kinetics using the gillespie algorithm .+ we found that the distribution of lag time has a standard gaussian form for the shorter lag - time side but has an tail for the longer side ( fig.[fig : fig5 ] ) .this exponential tail was also observed in experiments , as overlaid in fig.[fig : fig5 ] , which is adapted from reisman __ . 
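The lag-time distribution referred to above is obtained by repeating a stochastic simulation many times and recording, for each run, the first time the free active-protein count recovers. A minimal Gillespie-type sketch of such a sampling loop is given below; the reaction set, copy numbers and rate constants are deliberately simplified stand-ins for the paper's model, so only the simulation technique, not the resulting distribution, should be read off from it.

```python
import numpy as np

rng = np.random.default_rng(0)

def gillespie_lag(a0=1, b0=100, c0=100, kd=0.02, ka=0.0005, ks=0.05,
                  a_target=50, t_max=1e4):
    """Minimal stochastic sketch of growth resumption: active protein A is
    released from the complex C, can be recaptured by free inhibitor B, and
    makes more of itself autocatalytically once free.  The 'lag time' is the
    first time the free-A copy number reaches a_target."""
    a, b, c, t = a0, b0, c0, 0.0
    while t < t_max:
        rates = np.array([kd * c,       # C -> A + B   (dissociation)
                          ka * a * b,   # A + B -> C   (re-inhibition)
                          ks * a])      # A -> 2A      (autocatalytic synthesis)
        total = rates.sum()
        if total == 0.0:
            return np.inf
        t += rng.exponential(1.0 / total)
        r = rng.choice(3, p=rates / total)
        if r == 0:
            c -= 1; a += 1; b += 1
        elif r == 1:
            c += 1; a -= 1; b -= 1
        else:
            a += 1
        if a >= a_target:
            return t
    return np.inf

lags = np.array([gillespie_lag() for _ in range(300)])
print(lags.mean(), np.percentile(lags, [50, 90, 99]))  # histogram lags to inspect the tail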
in the present model ,once the number of active proteins becomes small , more time is needed to recover the growth , so that the distribution of initial active protein abundances is expanded to a long - tailed distribution .the agreement of the model with experimental data is relatively good for a short starvation time ( 24 hours and 48 hours ) but for longer times , the experimental data may suggest the existence of a much longer tail . +we developed a coarse - grained model consisting of a substrate , autocatalytic active protein , inhibitor of the active protein , and active protein - inhibitor complex . in the steady state ,the model shows distinct phases , i.e. , active , inactive , and death phases .in addition , the temporal evolution of total biomass shows bacterial growth curve - like behavior .the present model is not only consistent with the already - known growth laws in the active phase but also demonstrates two relationships , and , concerning the duration of the lag time .although these two relationships have also been observed experimentally , their origins and underlying mechanisms had not yet been elucidated .the present model can explain these relationships based on the formation of a complex between the active protein and inhibitor , whose increase in the starvation condition hinders the catalytic reaction .the inactive phase , which corresponds to the stationary phase , as well as the above two laws are generally derived as long as the ratio of the synthesis of the inhibitor to that of the active protein is increased along with a decrease in the external substrate concentration .this condition is also derived if the inhibitor is interpreted as a product of erroneous protein synthesis , where a proofreading mechanism to correct the error needing energy works inefficiently in a substrate - poor condition . + although the cell state with exponential growth has been extensively analyzed in previous theoretical models , the transition to the phase with suppressed growth has thus far not been theoretically explained .our model , albeit simple , provides an essential mechanism for this transition as complex formation of active and inhibitor proteins , which can be experimentally tested .+ moreover , the model predicts that the lag time differs depending on the rate of external depletion of the substrate , which can also be examined experimentally .recently , the bimodal distribution of growth resumption time from the stationary phase was reported in a batch culture experiment .the heterogeneous depletion of a substrate due to the spatial structure of a bacterial colony is thought to be a potent cause of this bimodality , while understanding of this concept is fairly underway .since the present model shows different lag times for different rates of environmental change , it can provide a possible scenario for explaining this bimodality .the authors would like to thank s.krishna , s.semsey , n.mitarai , a.kamimura , n.saito , and t. s. hatakeyama for useful discussions ; and i. l. reisman , n. balaban , and j.c .augustin for providing data .this research is partially supported by the platform for dynamic approaches to living system from japan agency for medical research and development ( amed ) , grant - in - aid for scientific research ( s ) ( 15h05746 from jsps ) , and the japan society for the promotion of science(16j10031 ) . | quantitative characterization of bacterial growth has gathered substantial attention since monod s pioneering study . 
theoretical and experimental work has uncovered several laws for describing the log growth phase , in which the number of cells grows exponentially . however , microorganism growth also exhibits lag , stationary , and death phases under starvation conditions , in which cell growth is highly suppressed , while quantitative laws or theories for such phases are underdeveloped . in fact , models commonly adopted for the log phase that consist of autocatalytic chemical components , including ribosomes , can only show exponential growth or decay in a population , and phases that halt growth are not realized . here , we propose a simple , coarse - grained cell model that includes inhibitor molecule species in addition to the autocatalytic active protein . the inhibitor forms a complex with active proteins to suppress the catalytic process . depending on the nutrient condition , the model exhibits the typical transition among the lag , log , stationary , and death phases . furthermore , the lag time needed for growth recovery after starvation follows the square root of the starvation time and is inverse to the maximal growth rate , in agreement with experimental observations . moreover , the distribution of lag time among cells shows an exponential tail , also consistent with experiments . our theory further predicts strong dependence of lag time upon the speed of substrate depletion , which should be examined experimentally . the present model and theoretical analysis provide universal growth laws beyond the log phase , offering insight into how cells halt growth without entering the death phase . quantitative characterization of a cellular state , in terms of cellular growth rate , concentration of external resources , as well as abundances of specific components , has long been one of the major topics in cell biology , ever since the pioneering study by monod . quantitative growth laws have been uncovered mainly by focusing on the microbial log phase in which the number of cells grows exponentially , including pirt s equation for yield and growth and the relationship between the fraction of ribosomal abundance and growth rate ( experimentally demonstrated by schaechter _ et al._ , and theoretically rationalized by scott _ et al . _ ) , among others , in which the constraint to maintain steady growth leads to general relationships . in spite of the importance of the discovery of these universal laws , cells under poor conditions exhibit different growth phases in which such relationships are violated . indeed , in addition to the death phase , cells undergo a stationary phase under conditions of resource limitation , in which growth is drastically suppressed . once cells enter the stationary phase , a certain time span is generally required to recover growth after resources are supplied , which is known as the lag phase . although several quantities have been measured to characterize these phases , such as the length of lag time for resurrection , and the tolerance time for starvation or antibiotics , there has been no theory put forward to characterize the phase changes , and no corresponding quantitative laws have been established . + to develop a theory for bacterial physiology beyond the log phase , we first constructed a simple mathematical model that exhibits the changes among the lag , log , stationary , and death phases . we then uncovered the quantitative characteristics of each of these phases in line with experimental observations . 
including the bacterial growth curve, quantitative relationships of the lag time with the starvation time and with the maximal growth rate, the exponentially-tailed distribution of lag times, and the trade-off between growth rate and tolerance to starvation. these are formulated in terms of changes in the inhibitor (or mistranslated protein) species, in addition to changes in ribosomal proteins (ribosomes). the proposed model also allowed us to reach several experimentally testable predictions, including the dependence of lag time on the speed of the starvation process. |
in this work , we study the stability of the jump - diffusion it s stochastic differential equations ( sdes ) of the form ,\,\ , t>0.\end{aligned}\ ] ] here is a -dimensional brownian motion , , satisfies the one - sided lipschitz condition and the polynomial growth condition , the functions and satisfy the globally lipschitz condition , and is a one dimensional poisson process with parameter .the one - sided lipschitz function can be decomposed as , where the function is the global lipschitz continuous part and is the non - global lipschitz continuous part . using this decomposition , we can rewrite the jump - diffusion sdes in the following equivalent form equations of type arise in a range of scientific , engineering and financial applications ( see and references therein ) .the standard explicit methods for approximating sdes of type is the euler - maruyama method and implicit schemes .their numerical analysis have been studied in with implicit and explicit schemes .recently it has been proved ( see ) that the euler - maruyama method often fails to converge strongly to the exact solution of nonlinear sdes of the form without jump term when at least one of the functions and grows superlinearly . to overcome this drawback of the euler - maruyama method , numerical approximation which computational cost is close to that of the euler - maruyama method and which converge strongly even in the case the function is superlinearly growingwas first introduced in . in our accompanied paper , the work in has been extended to sdes of type and the strong convergence of the following numerical schemes has been investigated and where is the time step - size , is the number of time subdivisions , and .the scheme is called the non compensated tamed scheme ( ncts ) , while scheme is called the semi - tamed scheme .strong and weak convergences are not the only features of numerical techniques .stability for sdes is also a good feature as the information about step size for which does a particular numerical method replicate the stability properties of the exact solution is valuable .the linear stability is an extension of the deterministic a - stability while exponential stability can guarantee that errors introduced in one time step will decay exponentially in future time steps , exponential stability also implies asymptotic stability . by the chebyshev inequality and the borel cantelli lemma , it is well known that exponential mean square stability implies almost sure stability . the stability of classical implicit and explicit methods forare well understood .although the strong convergence of the ncts and sts schemes given respectively by and have been provided in , a rigorous stability properties have not yet investigated to the best of our knowledge .the goal of this paper is to study the linear stability and the exponential stability of and for sdes driven by both brownian motion and poisson jump .our study will also provide the rigorous study of linear stabilities of schemes and for sdes without jump , which have not yet studied to the best of our knowledge .the paper is organised as follows . 
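Before the stability analysis, a hedged sketch of the two schemes may help fix notation. The displayed scheme formulas are not reproduced in the text above; the steps below use the standard tamed construction (drift increment divided by one plus the step size times the drift magnitude) and its semi-tamed variant in which only the non-globally-Lipschitz part v of the drift f = u + v is tamed, which matches the verbal description of the ncts and sts schemes; the coefficients in the usage example are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def tamed_step(x, dt, f, g, h, lam):
    """One step of a (non-compensated) tamed Euler sketch for
    dX = f(X) dt + g(X) dW + h(X) dN, taming the whole drift f."""
    dw = rng.normal(0.0, np.sqrt(dt))
    dn = rng.poisson(lam * dt)
    return x + dt * f(x) / (1.0 + dt * abs(f(x))) + g(x) * dw + h(x) * dn

def semi_tamed_step(x, dt, u, v, g, h, lam):
    """Semi-tamed variant: the drift is split as f = u + v, only the
    non-globally-Lipschitz part v is tamed, u is left untamed."""
    dw = rng.normal(0.0, np.sqrt(dt))
    dn = rng.poisson(lam * dt)
    return x + dt * u(x) + dt * v(x) / (1.0 + dt * abs(v(x))) + g(x) * dw + h(x) * dn

# Illustrative use: scalar SDE with drift -x - x**3 split into u(x) = -x, v(x) = -x**3.
x, dt = 1.0, 0.01
for _ in range(1000):
    x = semi_tamed_step(x, dt, lambda y: -y, lambda y: -y**3,
                        lambda y: 0.3 * y, lambda y: -0.2 * y, lam=1.0)
print(x)
```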
the linear mean - square stability and the exponential mean - square stability of the tamed and semi - tamed schemes are investigated respectively in section [ linearsta ] and section [ nlinearsta ] .section [ simulations ] presents numerical simulations to sustain the theoretical results .we also compare the stability behaviors of tamed and semi - tamed schemes with those of backward euler and split - step backward euler , this comparison shows the good behavior of the semi - tamed scheme and therefore confirms the previous study in for sdes without jump .throughout this work , denotes a complete probability space with a filtration . for all ,we denote by , and , for all .the goal of this section is to find the time step - size limit for which the tamed euler scheme and the semi - tamed euler scheme are stable in the linear mean - square sense . for the scalar linear test problem, the concept of a - stability of a numerical method may be interpreted as problem stable method stable for all .we consider the following linear test equation with real and scalar coefficients . where satisfied .it is proved in that the exact solution of is mean - square stable if and only if using the discrete form of , the numerical schemes and will be therefore mean - square stable if and the following result provides the time step - size limit for which the semi - tamed scheme ( sts ) is is mean - square stable .assume that , then and the semi - tamed scheme is mean - square stable if and only if applying the semi - tamed euler scheme to leads to squaring both sides of leads to taking expectation in both sides of and using the relations , and with the fact that and are independents leads to so , the semi - tamed scheme is stable if and only if that is .the following result provide the time step - size limit for which the non compensated tamed scheme ( ncts ) is stable .[ thmncts ] assume that , the tamed euler scheme is mean - square stable if one of the following conditions is satisfied , and .+ and . applying the tamed euler scheme to equation leads to by squaring both sides of leads to using the inequality ,the previous equality becomes taking expectation in both sides of the previous equality and using independence and the fact that , , , leads to : \mathbb{e}|x_n|^2\nonumber\\ & + & \mathbb{e}\left(\dfrac{2ax^2_n\delta t(1+\lambda c\delta t)}{1+\deltat|ax_n|}\right ) . \label{ch5eq2}\end{aligned}\ ] ] if , it follows from that \delta t\}\mathbb{e}|x_n|^2.\end{aligned}\ ] ] therefore , the numerical solution is stable if \delta t<1.\end{aligned}\ ] ] that is . if , using the fact that , inequality becomes \mathbb{e}|x_n|^2 .\label{ch5eq3}\end{aligned}\ ] ] therefore , it follows from that the numerical solution is stable if .that is . in theorem [ thmncts ], we can easily check that if , we have : , c\geq 0,\\ \delta t < \dfrac{2a - l}{a^2+\lambda^2 c^2}\\ \end{array } \right . \bigcup \left\lbrace \begin{array}{l} a \in ( l/2,0 ) , c<0,\\ \delta t < \dfrac{2a - l}{a^2+\lambda^2 c^2 } \\ \delta t\leq \dfrac{-1 } { \lambda c } \\ \end{array } \right.\\ \bigcup \left\lbrace \begin{array}{l } a>0 , c<0 \\\delta t < \dfrac{2a - l}{a^2+\lambda^2 c^2 } \\\delta t \geq \dfrac{-1 } { \lambda c } \end{array } \right . 
\end{aligned}\ ] ] this section , we focus on the exponential mean - square stability of the approximation .we follow closely and assume that and .it is proved in that under the following conditions , for all , where , and are constants , the exact solution of sde is nonlinear mean - square stable if .indeed under the above assumptions , we have ( * ? ? ?* theorem 4 ) so , if we have and the exact solution is exponentially mean - square stable . in the sequel of this section , we will use some weak assumptions , which of courses imply that the conditions - hold . in order to study the nonlinear stability of the semi - tamed scheme ( sts ) ,we make also the following assumptions [ ch5assumption2 ] there exist some positive constants , , , , , and such that we denote by and we will always assume that to ensure the stability of the exact solution . the nonlinear stability of sts scheme is given in the following theorem .[ nt1 ] under assumptions [ ch5assumption2 ] and the further hypothesis , for any stepsize \overline{\beta}}\wedge\dfrac{2\beta-\overline{\beta}}{2(k+\lambda c)\overline{\beta}} ] , which is equivalent to , becomes finally , from the discussion above on and , it follows that on , if \overline{\beta}}\wedge\dfrac{2\beta-\overline{\beta}}{2(k+\lambda c)\overline{\beta}} ] . the stability occurs if and only if , so we should also have that is }{(k+\lambda c)^2 } = \dfrac{-\alpha_1}{(k+\lambda c)^2 } , \end{aligned}\ ] ] and there exists a constant such that ^n\mathbb{e}\| x_0\|^2\leq \mathbb{e}\|x_0\|^2 e^{-\gamma t_n } , \end{aligned}\ ] ] by the taylor expansion , as we obviously have . in order to analyse the nonlinear mean - square stability of the tamed euler scheme ( ncts ) , we use the following assumption . [ ch5assumption3 ] there exist some positive constant , , , , , , and such that : apart from , assumption [ ch5assumption3 ] is a consequence of assumption [ ch5assumption2 ] . using assumption [ ch5assumption3 ], we can easily check that the exact solution of is exponentiallly mean - square stable if . under assumption [ ch5assumption3 ] ,if , , and , for any stepsize there exists a constant such that and the numerical solution is exponentiallly mean - square stable . from equation , we have using assumption [ ch5assumption3 ] , it follows that so from assumption [ ch5assumption3 ] , we have let us define . on , using assumption [ ch5assumption3 ] we have : therefore using and in yields \|x_n\|^{a+1}}{1+\delta t\|f(x_n)\|}. \label{ch5meantamed6 } \end{aligned}\ ] ] since , becomes on , using assumption [ ch5assumption3 ] and the inequality we have therefore , using and , becomes \|x_n\|^{a+1}}{1+\delta t \|f(x_n)\|}. \label{ch5meantamed9 } \end{aligned}\ ] ] for , which is equivalent to , becomes from the above discussion on and , it follows that on , if and then we have taking the expectation in both sides of , using the relation , , and leads to \mathbb{e}\|y_n\|^2 .\end{aligned}\ ] ] iterating the last inequality leads to ^n\mathbb{e}\|x_0\|^2 .\end{aligned}\ ] ] to have the stability of the ncts scheme , we should also have that is }{2k^2+\lambda^2c^2 } , \end{aligned}\ ] ] and there exists a constant such that as in the proof of theorem [ nt1 ] , we obviously have . note that from the studies above , we can deduce the linear stabilities of schemes and for sdes without jump by setting . 
however , by setting we obtain the nonlinear stability of semi - tamed scheme without jump performed in .the goal of this section is to provide some practical examples to sustain our theoretical results in the previous section .we compare the stability behaviors of the tamed scheme and the semi - tamed scheme with those of numerical schemes presented in .more precisely , we test the stability of the semi - tamed scheme , tamed scheme , backward euler and split - step backward euler schemes with different stepsizes .we denote by all the approximated solutions from those schemes . herewe consider the following linear stochastic differential equation ,\quad x_0=1 , \end{aligned}\ ] ] where the intensity of the poisson process is taken to be .so , which ensures the linear mean - square stable of the exact solution .we can easily check from the theoretical results in the previous section that for the semi - tamed and the tamed euler scheme reproduce the linear mean - square property of the exact solution . in figure [ fig02 ], we illustrate the mean - square stability of the semi - tamed scheme , the tamed scheme , the backward and the spli - step backward euler scheme for different stepsizes .we take and , and generate paths for each numerical method .we can observe from figure [ fig02 ] that the semi - tamed scheme works well with the backward and spli - step backward euler schemes .we can also observe that the semi - tamed scheme works better than the tamed euler scheme , and in some case overcomes the backward and spli - step backward euler schemes . for nonlinear stability , we consider the following nonlinear stochastic differential equation ,\quad \quad x_0=1.\end{aligned}\ ] ] the poisson process intensity is , and . we take and . indeed , we obviously have , and .it follows that the exact solution is exponentially mean - square stable .one can easily check that for from theoretical results the semi - tamed and the tamed euler reproduce the exponentially mean - square stability property of the exact solution .figure [ fig03 ] illustrates the stability of the tamed scheme , the semi - tamed scheme , the backward euler and the split - step backward euler for different stepsizes .we take and and generate samples for each numerical method. we can observe that all the schemes have the same stability behavior , although tamed and semi - tamed schemes are very efficient and backward euler and the split - step backward euler less efficient as nonlinear equations are solved for each time step .this project was supported by the robert bosch stiftung through the aims arete chair programme .0.01 0.01 0.01 0.01 0.01 0.0100 m. hutzenthaler , a.jentzen and p. e. kloeden .divergence of the multilevel of the multilevel monte carlo euler euler method for nonlinear stochastic differential equations . , 23(5)(2013 ) , 19131966 .chengming huang .exponential mean square stability of numerical methods for systems of stochastic differential equations ., 236 ( 2012 ) 40164026 .higham , and p.e .umerical methods for nonlinear stochastic differential equations with jumps . ,101 ( 2005 ) 101119. f. c klebaner .introduction to stochastic calculus with applications ., london wc2h 9he , 2004 .x. zong , f. wu and c.huang . convergence and stability of the semi - tamed euler scheme for stochastic differential equations with non - lipschitz continous coefficients . , 228 ( 2014 ) , 240250 .x. wang , and s. gan . compensated stochastic theta methods for stochastic differential equations with jumps . 
, 60(2010 ) , 877887 .d. j. higham and p. e. kloeden .convergence and stability of implicit methods for jump - diffusion systems . , 3(2006 ) , pp .e. platen and n. bruti - liberati .solution of stochastic differential equations with jumps in finance . ,berlin , 2010 . | under non - global lipschitz condition , euler explicit method fails to converge strongly to the exact solution , while euler implicit method converges but requires much computational efforts . tamed scheme was first introduced in to overcome this failure of the standard explicit method . this technique is extended to sdes driven by poisson jump in where several schemes were analyzed . in this work , we investigate their nonlinear stability under non - global lipschitz and their linear stability . numerical simulations to sustain the theoretical results are also provided . stochastic differential equation , linear stability , exponential stability , jump processes , one - sided lipschitz . |
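As a complement to the numerical experiments described above, mean-square stability can be probed by a direct Monte Carlo estimate of the second moment of the numerical solution. The test problem below is an illustrative cubic-drift jump-diffusion, not the paper's exact example, and the coefficients and step sizes are placeholders; decay of the printed second moments toward zero is the working criterion of (exponential) mean-square stability used in the text.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative nonlinear jump-diffusion test problem (not the paper's example):
#   dX = (-X - X**3) dt + 0.5 X dW - 0.5 X dN,  X(0) = 1,  Poisson intensity lam = 1,
# with the drift split as u(x) = -x (globally Lipschitz) and v(x) = -x**3.
lam = 1.0
u = lambda x: -x
v = lambda x: -x**3
g = lambda x: 0.5 * x
h = lambda x: -0.5 * x

def second_moment(tame_full_drift, dt, n_steps=400, n_paths=10000):
    """Monte Carlo estimate of E|X_n|^2 after n_steps."""
    x = np.ones(n_paths)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        dn = rng.poisson(lam * dt, n_paths)
        if tame_full_drift:            # tamed scheme: tame u + v together
            f = u(x) + v(x)
            drift = dt * f / (1.0 + dt * np.abs(f))
        else:                          # semi-tamed scheme: tame only v
            drift = dt * u(x) + dt * v(x) / (1.0 + dt * np.abs(v(x)))
        x = x + drift + g(x) * dw + h(x) * dn
    return np.mean(x**2)

for dt in [0.5, 0.1, 0.02]:
    print(dt, second_moment(True, dt), second_moment(False, dt))
```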
purpose and scope of this paper is to introduce and discuss a simple hamiltonian dynamical system describing the motion of particles in the ( _ complex _ ) plane .this problem is the prototype of a class of models that feature a transition from _ very simple _ ( even _ isochronous _ ) to _ _ quite complicated _ _ motions characterized by a _ sensitive dependence _ both on the initial data and the parameters ( `` coupling constants '' ) of the model .this transition can be explained as travel on riemann surfaces .the interest of this phenomenology illustrating the _ onset _ in a _deterministic _ context of _ irregular _ motions is underlined by its generality , suggesting its eventual relevance to understand natural phenomena and experimental investigations .the novelty of the model treated herein is that it allows a quite explicit mathematical treatment . here only some of our main findings are reported , without detailing their proofs : a more complete presentation will be published elsewhere .the idea that the _ integrable _ or _ nonintegrable _ character of a dynamical system is closely related to the _ analytic _ structure of its solutions as functions of the independent variable ( `` time '' , but considered as a _ complex _ variable ) goes back to such eminent mathematicians as carl jacobi , sophia kowalewskaya , henri poincar , paul painlev and his school .some of us heard illuminating discussions of this notion by martin kruskal , whose main ideas a synthetic if overly terse rendition of which might be the statement that _ integrability is compatible with the presence of multivaluedness but only provided this is not excessive _ can be gleaned from some papers written by himself and some of his collaborators , or by others who performed theoretical and numerical investigations motivated by his ideas .the results presented below constitute progress along this line of thinking . for a more detailed analysiswe refer the interested reader to the papers where more complete versions are presented of our findings .the model we introduce and discuss in this paper is characterized by the following equations of motion: _ _ : here and hereafter indices such as range from to and are defined superimposed dots indicate differentiations with respect to the _ real _ independent time variable ; the dependent variables _ _ _ _ are _ complex _ , and indicate the positions of point `` particles '' moving in the _ complex _ -plane ; is the _ imaginary _ unit ; the parameter is _ positive _ , and it sets the time scale via the basic period quantities are arbitrary coupling constants , but in this paper we restrict consideration to the case in which they are all _ real _ and moreover satisfy the `` semisymmetrical '' restriction that the two particles with labels and are _ equal _ , while particle is _different_. 
more special cases are the `` fully symmetrical '' , or``integrable '' , one characterized by the equality of _ all _ coupling constants, the `` two - body '' one , with only one nonvanishing coupling constant , in this latter case clearly the remaining _ two - body _ problem is easily solvable, \nonumber \\ \fl \qquad\qquad\quad -(-)^{s}\,\big\ { \frac{1}{4}\ [ z_{1}(0)-z_{2}(0 ) ] ^{2}+f\ \frac{\exp ( 2\,\rmi\,\omega \,t)-1}{2\,\rmi\,\omega } \big\ } ^{\,1/2}% \bigg ] ~ , \quad s = 1,2~.\label{twobodyc}\end{aligned}\]]the justification for labelling the fully symmetrical case ( [ integr ] ) as `` integrable '' will be clear from the following ( or see section 2.3.4.1 of ) .the treatment of the more general case with different coupling constants is outlined in .note that the equations of motion ( [ eqmot ] ) are of `` archimedian '' , rather than `` newtonian '' , type , inasmuch as they imply that the `` velocities '' , rather than the `` accelerations '' , are determined by the `` forces '' .these equations of motion are hamiltonian , indeed they follow in the standard manner from the hamiltonian function ~. \label{h}\]]and they can be reformulated as , still hamiltonian , _ real _ ( and _ covariant _ , even _rotation - invariant _ ) equations describing the motion of three point particles in the ( _ real _ ) horizontal plane . the following _ qualitative _ analysis ( confirmed by our _ quantitative _ findings ,see below ) is useful to get a first idea of the nature of the motions entailed by our model . for _ large _ values of ( the modulus of ) `` two - body forces '' represented by the last two terms in the right - hand side of ( [ eqmot ] ) become _ negligible _ with respect to the `` one - body ( linear ) force '' represented by the first term , hence in this regime entailing one thereby infers that , when a particle strays far away from the origin in the complex -plane , it tends to rotate ( clockwise , with period ) on a circle : hence the first _ qualitative _ conclusion that _ all motions are confined_. secondly , the _ two - body _ forces cause a _singularity _ whenever there is a _ collision _ of _ two _ ( or all _ three _ ) of the particles , and become dominant whenever _ two _ particles get very close to each other , namely in the case of _ near misses_. but if the three particles move _ aperiodically _ in a _ confined _ region ( near the origin ) of the _ complex _-plane , an _ infinity _ of _ near misses _ shall indeed occur . andsince the outcome of a _ near miss _ is generally quite _ different _( whenever the two particles involved in it are _ different _ ) depending on which side the particles slide past each other and this , especially in the case of _ very close _ near misses , depends _ sensitively _ on the initial data of the trajectories under consideration we see here a mechanism causing a _ sensitive dependence _ of the time evolution on its initial data .this suggests that our model ( [ eqmot ] ) , in spite of its simplicity , might also support quite complicated motions , possibly even displaying an `` unpredictable '' evolution in spite of its _ deterministic _ character .this hunch is confirmed by the results reported below . 
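A direct numerical integration of the model is straightforward away from near misses. The sketch below assumes equations of motion of the typical form ż_n = −iω z_n + Σ_{m≠n} g_{nm}/(z_n − z_m), which matches the qualitative description above (a one-body rotation term that dominates far from the origin, plus two-body terms that become singular at collisions); the paper's displayed equations and coupling values are not reproduced here, so the couplings and initial data below are illustrative only.

```python
import numpy as np

def rhs(z, omega, g):
    """Right-hand side: one-body rotation term plus pairwise terms that
    blow up at two-particle collisions."""
    dz = -1j * omega * z
    n = len(z)
    for a in range(n):
        for b in range(n):
            if a != b:
                dz[a] += g[a, b] / (z[a] - z[b])
    return dz

def integrate(z0, omega, g, dt=5e-4, t_end=3.0):
    """Fixed-step RK4 in the complex plane (a sketch: near close approaches a
    much smaller step, or an adaptive integrator, would be needed)."""
    z = np.array(z0, dtype=complex)
    traj = [z.copy()]
    for _ in range(int(round(t_end / dt))):
        k1 = rhs(z, omega, g)
        k2 = rhs(z + 0.5 * dt * k1, omega, g)
        k3 = rhs(z + 0.5 * dt * k2, omega, g)
        k4 = rhs(z + dt * k3, omega, g)
        z = z + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        traj.append(z.copy())
    return np.array(traj)

omega = 2.0 * np.pi                     # illustrative time scale
f_c, g_c = 1.0, 0.5                     # "semisymmetrical" couplings: pair (1,2) equal
g = np.array([[0.0, g_c, f_c],
              [g_c, 0.0, f_c],
              [f_c, f_c, 0.0]])
traj = integrate([1.0 + 0.0j, -0.5 + 0.3j, 0.2 - 0.8j], omega, g)
print(traj[-1])                         # final positions in the complex plane
```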
to investigate the dynamics of our `` physical '' model ( [ eqmot ] )it is convenient to introduce an `` auxiliary '' model , obtained from it via the following change of dependent and independent variables: that initially the coordinates and coincide: equations of motion of the auxiliary model follow immediately from ( eqmot ) via ( [ zita ] ) ( or , even more directly , by noting that , for and : of course the appended prime denotes differentiation with respect to the ( _ complex _ ) variable .the definition of implies that as the ( _ real _ ) time variable evolves onwards from the _ complex _variable travels round and round , making a full tour ( counterclockwise ) in every time interval on the circle the diameter of which , of length lies on the imaginary axis in the _ complex _-plane , with one end at the origin , and the other at ( draw this circle ! ) .hence these relations , ( [ zita ] ) , entail that if is _ holomorphic _ as a function of the _ complex _variable in the closed disk encircled by the circle , the corresponding function is _ periodic _ in the _ real _ variable with period ( indeed _ antiperiodic _ with period : it is easy to prove that the solution of ( [ eqzita ] ) is _ holomorphic _ ( at least ) in the circular disk centered at the origin of the _ complex _ -plane and having the radius may therefore conclude that our physical system ( [ eqmot ] ) is _ isochronous with period _ see ( [ t ] ) . indeed an _system is characterized by the property to possess one or more _ open _ sectors of its phase space , each having of course _ full dimensionality _ , such that _ all _ motions in each of them are _ completely periodic _ with the same _ fixed period _ ( the periods may be different in these different sectors of phase space , but must be fixed , i e. independent of the initial data , within each of these sectors ) : and in our case clearly ( at least ) _ all _ the motions characterized by initial data such that _ completely periodic _ with period , see ( [ periodic ] ) , since this inequality , implying ( via ( [ zitazero ] ) and ( [ dzero ] ) ) entails that is _ holomorphic _ ( at least ) in a disk that includes , in the _ complex _-plane , the disk .this argument is a first demonstration of the usefulness of the `` trick '' ( zita ) , associating the auxiliary system ( [ eqzita ] ) to our physical system ( [ eqmot ] ) .more generally , this relationship ( [ zita ] ) allows to infer the main characteristics of the _ time evolution _ of the solutions of our physical system ( [ eqmot ] ) from the _ analyticity properties _ of the corresponding solutions of the auxiliary system ( [ eqzita ] ) : indeed the evolution of as the time increases from the initial value is generally related via ( [ zita ] ) to the values taken by when rotates ( counterclockwise , with period on the circle in the _ complex _ -plane and correspondingly travels on the riemann surface associated to its _analytic _ structure as a function of the _ complex _variable .suppose for instance that the _ only _ singularities of in the _ finite _ part of the _ complex _ -plane are _ square - root branch points _ , as it is indeed the case for our model ( [ eqmot ] ) at least for a range of values of the ratio of the coupling constants and see ( [ symm ] ) and below . ) , while the number and especially the locations of these singularities depend on the specific solution under consideration and their identification requires a more detailed knowledge than can be obtained by a local analysis _ la painlev_. 
] then the _ isochronous _ regime corresponds to initial data such that the corresponding solution has _ no branch points _ inside the circle on the main sheet of its riemann surface ( i. e. that characterized by the initial data ) . moreover , if there is a _( _ nonvanishing _ ) number of branch points inside the circle on the main sheet of the riemann surface of and a _ finite _ number of branch points inside the circle on all the sheets that are accessed by traveling on the riemann surface round and round on the circle , then clearly the corresponding solution is still a _ completely periodic _ function of the time but now its period is a _ finite integer multiple _ of the basic period the value of depending of course on the number of sheets that get visited along this travel before returning to the main sheet .hence , in particular , whenever the _ total _ number of ( _ square - root _ ) branch points of the solution of the auxiliary problem ( [ eqzita ] ) is _ finite _ , the corresponding solution of our physical model ( [ eqmot ] ) is _ completely periodic _ , although possibly with a _ very large _ period ( if is _ very large _) the value of which may depend , possibly quite sensitively , on the initial data . on the other hand if the number of ( _ square - root _ ) branch points possessed by the generic solution is _ infinite _ , and the riemann surface associated with the function has an _ infinite _ number of sheets ( as it can happen in our case , see below ) , then it is possible that , as goes round and round on the circle the corresponding value of travels on this riemann surface without ever returning to its main sheet , entailing that the time evolution of the corresponding function __ _ _ is _ aperiodic _ , and that it depends _ sensitively _ on the initial data inasmuch as these data characterize the positions of the branch points hence the structure of the riemann surface .this terse analysis entails an important distinction among all these ( _ square - root _ ) _ _ _ _ branch points : the `` active '' branch - points are those located _ inside _ the circle on sheets of the riemann surface accessed when starting from the main sheet by traveling round and round on that circle , so that they do affect the subsequent sequence of sheets that get visited ; while the `` inactive '' branch points are , of course , those that fall _ outside _ the circle as well as those that are located _ inside _ the circle but on sheets of the riemann surface that do not get visited while traveling round and round on that circle ( starting from the main sheet ) and that therefore do _ not _ influence the time - evolution of the corresponding solution of our physical system ( eqmot ) .this distinction is of course influenced by the initial data of the problem , that characterize the initial pattern of branch points ; clearly it is not just a `` local '' characteristic of each branch point depending only on its position ( for instance , _ inside _ or _ outside _ the circle : it depends on the overall structure of the riemann surface , for instance if there is no branch point on its main sheet that containing the point of departure of the travel round and round on the circle then clearly _ all _ the other branch points are _ inactive _ , irrespective of their location .let us also emphasize that , whenever an _ active _ branch point is _ quite close _ to the circle it corresponds to a _ near miss _ involving _ two _ particles of our physical model ( [ eqmot ] ) , at which these two particles 
scatter against each other almost at right angles ( corresponding to the _ square - root _ nature of the branch point ) .the difference between the cases in which such a branch point falls _ just inside _ respectively _ just outside _ the circle corresponds to a _ near miss _ in which the two particles slide past each other on one side respectively on the other ( see figure [ fig1 ] ) , and this makes a substantial difference as regards the subsequent evolution of our -body system ( unless the two particles are equal ) .the closer the _ near miss _ , the more significant this effect is , and the more _ sensitive _ it is on the _ initial data _ ,a tiny change of which can move the relevant branch point from one side to the other of the circumference of the circle and correspondingly drastically affect the outcome of the _ near miss_. this is the mechanism that accounts for the fact that , when the initial data are in certain sectors of their phase space ( of course quite different from that characterized by the inequalities ( [ condiso ] ) ) , the resulting motion of the physical -body problem ( [ eqmot ] ) is _ aperiodic _ , indeed nontrivially so : in such cases ( as we show below ) the _ aperiodicity _ is indeed associated with the coming into play of an _ infinite _ number of ( _ square - root _ ) branch points of the corresponding solution of the auxiliary problem ( [ eqzita ] ) and _ _ _ _ correspondingly with an _ infinite _ number of _ near _ _ misses _ experienced by the particles throughout their time evolution , this phenomenology being clearly characterized by a _ sensitive dependence _ on the initial data .this mechanism to explain the transition from _ regular _ to _ irregular _ motions and in particular from an _ isochronous _ regime to one featuring _unpredictable _ aspects was already discussed in the context of certain many - body models somewhat analogous to that studied herein .but those treatments were limited to providing a _ qualitative _ analysis such as that presented above and to ascertaining its congruence with _ numerical solutions _ of these models .the interest of the simpler model introduced and discussed herein is to allow a detailed , _ quantitative _ understanding of this phenomenology .this is based on the following explicit solution of our model ( [ eqmot ] ) , obtained via the auxiliary problem ( [ eqzita ] ) : [ prueba ] ^{\,1/2}\cdot & & \nonumber \\\quad \cdot \left\ { \left [ \check{w}\left ( t\right ) \right ] ^{\,1/2}-\left ( -\right ) ^{s}\,\left [ 12\,\mu -3\,\check{w}\left ( t\right ) \right ] ^{\,1/2}\right\ } ~,\qquad\quad s=1,2~ , & & \label{zsolution}\\ \label{zsolutionb } \fl \quad z_{3}\left ( t\right ) = z\,\exp \left ( -\rmi\,\omega \,t\right ) + \left ( \frac{% f+8\,g}{6\,\rmi\,\omega } \right ) ^{\,1/2}\,\left [ 1+\eta \,\exp \left ( -2\,\rmi\,\omega \,t\right ) \right ] ^{\,1/2}\,\left [ \check{w}\left ( t\right ) % \right ] ^{\,1/2}~.\end{aligned}\]]here the function is defined via the relation , \label{wtildew}\]]with = \bar{\xi% } + r\,\exp \left ( 2\,\rmi\,\omega \,t\right ) ~ , \label{ksi}\]]and implicitly defined by the _parameter is defined in terms of the coupling constants and see ( [ symm ] ) , as follows: in ( 13)-([ksi ] ) the _ three _ constants and ( or are defined in terms of the _ _ _ _ initial data as follows : \fl\qquad\quad\quad r=\frac{3\,\left ( f+8\,g\right ) } { 2\,\rmi\,\omega \,\left [ 2% \,z_{3}(0)-z_{1}(0)-z_{2}(0)\right ] ^{\,2}}\,\left [ 1-\frac{1}{\check{w}(0)}% \right ] ^{\,\mu -1}~ , \label{parb}\\[4pt 
] \fl\qquad\quad\quad \bar{\xi}=r\,\eta ~ , \label{parc}\\[4pt ] \fl\qquad\quad\quad\eta = \frac{\rmi\,\omega \,\left\ { \left [ z_{1}(0)-z_{2}(0)\right ] ^{\,2}+\left [ z_{2}(0)-z_{3}(0)\right ] ^{\,2}+\left [ z_{3}(0)-z_{1}(0)\right ] ^{\,2}\right\ } } { 3\,\left ( f+2\,g\right ) } -1~ , \label{pard}\\[4pt ] \fl\qquad\quad\quad\check{w}(0)=\frac{2\,\mu \,\left [ 2\,z_{3}(0)-z_{1}(0)-z_{2}(0)\right ] ^{\,2}}{\left [ z_{1}(0)-z_{2}(0)\right ] ^{\,2}+\left [ z_{2}(0)-z_{3}(0)% \right ] ^{\,2}+\left [ z_{3}(0)-z_{1}(0)\right ] ^{\,2}}~. \label{pare}\end{aligned}\ ] ] note that the constant is the initial value of the center of mass of the system , and indeed the first term in the right - hand side of the solution ( 13 ) represents the motion of the center of mass of the system : just a circular motion around the origin , with a constant velocity entailing a period .since the rest of the motion is independent of the behavior of the center of mass , in the study of this model attention can be restricted without significant loss of generality to the case when the center of mass does not move , the nontrivial aspects of the motion are encoded in the time evolution of the function , see ( 13 ) and ( [ wtildew ] ) : let us emphasize in this connection that the dependent variable is that solution of the _ nondifferential _ equation ( [ eqwtilde ] ) uniquely identified by continuity , as the time unfolds , hence as the variable goes round and round , in the _ complex _-plane , on the circle with center and radius ( see ( [ ksi ] ) ) , from the initial datum assigned at , = w\left ( \bar{\xi}+r\right ) = \check{w}% \left ( 0\right ) ~ , \label{win}\]]see ( [ pare ] ) .this specification of the initial value is relevant , because generally the _ nondifferential _ equation ( eqwtilde ) has more than a single solution , in fact possibly an _ infinity _ of solutions , see below .it is clear from ( 13 ) that the time evolution of the solution of our model ( [ eqmot ] ) is _ mainly _ determined by the time evolution of the function . indeed , 1 .the factor ^{\,1/2} ] appearing in the right - hand side of the solution formulas ( 13 ) , is clearly as well _ periodic _ with period or _antiperiodic _ with period hence _ periodic _ with period depending whether the closed trajectory of in the complex -plane does not or does enclose the ( branch ) point 3 .likewise the square root ^{\,1/2} ] of the _ nondifferential _ equation ( [ eqwtilde ] ) .moreover , we consider only _ generic _ solutions of ( [ eqmot ] ) , namely those characterized by initial data that exclude one of the following special outcomes : 1 . takes , at some ( _ real _ ) time , the value entailing a _ pair collision _ of the equal particles occurring at this time , .2 . takes , at some ( _ real _ ) time , the value entailing a _ pair collision _ of the different particle with one of the equal particles occurring at this time , or .3 . the constant has unit modulus , , i. e. with _ real _ ( and of course defined ) which entails a _ triple collision _ of the particles occurring at the time , .4 . vanishes at some ( _ real _ ) time , ( but , as our notation suggests , this case ( c ) is just a subcase of ( c ) , although this is not immediately obvious from ( zsolution ) but requires using also ( [ ksi ] ) and ( [ eqwtilde ] ) ) . 
the initial data that give rise to solutions having one of these singularities form a set of null measure .it can be easily seen that these singular solutions of our physical problem ( [ eqmot ] ) correspond via ( [ zita ] ) to special solutions of our auxiliary problem ( [ eqzita ] ) possessing a _ branch point _ that sits _exactly _ on the circle in the complex -plane : more precisely , 1 . a _ square - root _ branch point featured by and but not by , 2 .a _ square - root _ branch point featured by all functions , 3 . a branch point featured by all functions the nature of which depends on the parameter .as mentioned above , in this paper we confine our treatment to discussing the time evolution of a _ generic _ root of the _ nondifferential _ equation ( [ eqwtilde ] ) with ( [ ksi ] ) , and in particular to identifying for which initial data its time evolution is _ periodic _ , and in such a case what the period is .remarkably we find out that , for ( _ arbitrarily _ ) given initial data , _ all _ these roots have at most three different periods ( one of which might be _ infinite _ , signifying an _ aperiodic _ motion ) ; periods which we are able to determine explicitly ( although the relevant formulas have some nontrivial , even chaotic " , aspects , in a sense that is made explicit below ) .the question of identifying , among _ all _ the roots of this _ nondifferential _ equation ( [ eqwtilde ] ) , the `` physical '' one i. e. the one that evolves from the initial datum ( [ pare ] ) , and in particular of specifying the character of its time evolution among the ( at most ) alternatives discussed below , is a technically demanding job the solution of which shall be reported in cgss .let us re - emphasize that the time evolution of ] . to begin with , we consider the case in which the parameter is _ rational _ , and coprime integers and _ positive _ , .the extension of the results to the case of _ irrational _ is made subsequently ; although , to avoid repetitions , we present below some results in a manner already appropriate to include also the more general case with _real_. 
in the _ rational _ case ( [ mupq ] ) the _ nondifferential _ equation that determines the `` dependent variable '' in terms of the `` independent variable '' becomes _ polynomial _ , and takes one of the following forms depending on the value of the parameter see ( [ mupq ] ) : ^{\,q}\,\tilde{w}^{\,p},\qquad&\mbox{if } \mu > 1 , \label{poleqa}\\ \left [ \bar{\xi}+r\,\exp \left ( 2\,\rmi\,\omega \,t\right ) \right ] ^{\,q}\,\left ( \tilde{w}-1\right ) ^{\,q - p}\,\tilde{w}^{\,p}=1,\qquad&\mbox{if } % 0<\mu < 1 , \label{poleqb}\\ \left [ \bar{\xi}+r\,\exp \left ( 2\,\rmi\,\omega \,t\right ) \right ] ^{\,q}\,\left ( \tilde{w}-1\right ) ^{\,q+\left\vert p\right\vert } = \tilde{w}% ^{\,\left\vert p\right\vert } , \qquad&\mbox{if } \mu < 0 .\label{poleqc}\end{aligned}\]]the above expressions are polynomials ( in the dependent variable ) of degree : as for the boundaries of these cases , let us recall that corresponds , via ( [ mu ] ) , to namely , see ( [ symm ] ) , to the trivially solvable _ two - body _ case , see ( 5 ) , while respectively correspond , via ( [ mu ] ) , to respectively to and require a separate treatment , for which the interested reader is referred to cgss .clearly the third case ( ) becomes identical to the first ( ) via the replacement without modifying ; therefore in the following , without loss of generality , we often forsake a separate discussion of this third case .clearly the factor ^{\,q}, ] as the point travels round and round on the circle brings it back to its point of departure after a single round ; equivalently , in this case the root ] ; equivalently , such a root $ ] belongs to a set ( including an _ infinity _ of roots ) that does get permuted as the point travels round and round on the circle , with both mechanisms the pairwise exchange of some roots , and the cyclic permutation of an _ infinite _ number of roots playing a role at each round .the identification of which sheets get thereby accessed , and in which order namely the specific shape of the trajectory when looked at , as it were stroboscopically , at the discrete sequence of instants is discussed in .the extent to which this regime yields _ irregular _ motions is discussed further below , also to illuminate the distinction in these regimes between the time evolution entailed by our model with a given _ irrational _ value of and that of the analogous models with _ rational _ values of providing more and more accurate approximations of the given _ irrational _ value of . 
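The counting of rounds described above can be reproduced numerically by following a root of the polynomial by continuity as ξ travels on its circle. The sketch below uses the displayed equation for the case 0 < μ < 1, namely ξ(t)^q (w̃−1)^{q−p} w̃^p = 1 with ξ(t) = ξ̄ + r exp(2iωt); the values of p, q, ξ̄, r and ω are illustrative and are chosen, for illustration, so that one square-root branch point of the roots falls inside the circle, so each of the two roots is expected to return only after two rounds.

```python
import numpy as np

p, q = 1, 2                       # mu = p/q = 1/2 (illustrative)
omega = np.pi                     # the xi-circle is traversed once per unit time here
xi_bar, r = 1.5j, 1.0             # centre and radius of the circle C in the xi-plane

def roots_at(t):
    xi = xi_bar + r * np.exp(2j * omega * t)
    # Coefficients (highest degree first) of xi^q (w-1)^(q-p) w^p - 1.
    poly = xi**q * np.convolve(np.poly(np.ones(q - p)), np.r_[1.0, np.zeros(p)])
    poly[-1] -= 1.0
    return np.roots(poly)

def follow_root(i0, steps_per_round=2000, max_rounds=50, tol=1e-6):
    """Follow root number i0 by nearest-neighbour continuation as xi goes
    round and round on C, and report after how many rounds it first returns."""
    w_start = roots_at(0.0)[i0]
    w = w_start
    dt = (np.pi / omega) / steps_per_round     # one full tour of xi per "round"
    for n in range(1, max_rounds * steps_per_round + 1):
        cand = roots_at(n * dt)
        w = cand[np.argmin(np.abs(cand - w))]  # continuity: pick the nearest root
        if n % steps_per_round == 0 and abs(w - w_start) < tol:
            return n // steps_per_round
    return None                                # did not return within max_rounds

print([follow_root(i) for i in range(q)])      # rounds before each root returns
```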
in case ( iii )the time evolution is still _ isochronous _ , inasmuch as the results reported above entail that , for _ any _ given initial data ( excluding , of course , the _ special _ ones leading to a collision ; which are _ special _ in the same sense as a _ rational _ number is _ special _ in the context of _ real _ numbers ) , the motion of every root is _ periodic _ with one of the periods entailed by ( [ jtilc ] ) ( and note that the value of the integer provided by the _third _ of these formulas is just the sum of the values for provided by the first of these formulas ) .it is indeed clear that the initial data yielding such an outcome are included in an _ open _ set of such data , having of course _ full dimensionality _ in the space of initial data , _ all _ yielding the _ same _ outcome : since the periods do _ not _ change , see ( jtilc ) , if the change of the initial data , hence the change in the ratio is sufficiently tiny .however the measures of these sets of data yielding the _ same _ outcome gets progressively _ smaller _ as the predicted periods get _ larger _ , and moreover the corresponding predictions involve more and more terms in the ( never ending ) _ continued fraction _ expansion of the irrational number , see ( [ confuna ] ) , displaying thereby , as increases towards unity , a _progressively more sensitive _ dependence of the periodicity of our system on the initial data and moreover on the parameters ( the coupling constants , that determine the value of , see ( [ mu ] ) ) of our physical model ( [ eqmot]). 0.3cm**example**. let us display here a specific example with the following ( conveniently chosen ) _ irrational _ value of in the interval ( hence corresponding to case ( iii ) of proposition 5 ) : of course is the _ golden ratio_. this assignment entails that _ all _ the coefficients , see ( [ confuna ] ) , are in this case unity , , hence the quantities see ( confune ) , coincide with the fibonacci numbers , moreover one easily finds that corresponding formula for the periods obtains inserting these values in ( [ nukb ] ) and ( [ jtilc ] ) . 
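Since the period formulas referred to above ((nukb), (jtilc)) are not reproduced in this text, the sketch below only builds their continued-fraction ingredients: the partial quotients of μ and the convergents p_k/q_k. The value mu_example is a hypothetical stand-in consistent with the stated property of the example (all partial quotients equal to one), for which the denominators of the convergents are indeed the Fibonacci numbers.

```python
from fractions import Fraction
import math

def continued_fraction(x, n_terms=12):
    """Partial quotients a_0, a_1, ... of the continued-fraction expansion of x."""
    coeffs = []
    for _ in range(n_terms):
        a = math.floor(x)
        coeffs.append(a)
        if x == a:
            break
        x = 1.0 / (x - a)
    return coeffs

def convergents(coeffs):
    """Successive rational approximants p_k / q_k built from the partial quotients."""
    approx, p0, q0, p1, q1 = [], 0, 1, 1, 0
    for a in coeffs:
        p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0
        approx.append(Fraction(p1, q1))
    return approx

mu_example = (math.sqrt(5) - 1) / 2           # 1/phi = 0.618..., stand-in value
print(continued_fraction(mu_example, 10))     # all partial quotients equal 1 after a_0 = 0
print(convergents(continued_fraction(mu_example, 10)))  # denominators 1, 1, 2, 3, 5, 8, ...
```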
from it , with some labor ,one can obtain the following controlled estimate for the possible values of : inequalities are valid for all values of ( in the interval ) ; they clearly entail that the integer diverges proportionally to as and that for with only possible values of are and this concludes our presentation of the results , and of some related observations , detailing the periodicity ( if any ) of the time evolution of a _ generic _ root of ( [ eqwtilde ] ) with ( [ ksi ] ) .the identification of analogous , but of course more definite , results for the _ physical _ root and the consequential information on the periodicity ( if any ) of the solution of the physical problem ( eqmot ) as well as some additional information on the corresponding trajectories of the coordinates are provided in .last but not least let us elaborate on the character of the _ aperiodic _ time evolution indicated under item ( ii ) of proposition 5 , including the extent it is _ irregular _ and it depends _ sensitively _ on its initial data .it is illuminating to relate this question with the finding reported under item ( ii ) of proposition 4 , also in order to provide a better understanding of the relationship among the _ aperiodic _ time evolution that can emerge when is _ irrational _ ( see item ( ii ) of proposition 5 ) and the corresponding behavior say , with the same initial data for a sequence of models with _ rational _ values of ( see ( [ mupq ] ) ) that provide better and better approximations to that _ irrational _ value of ; keeping in mind the _ qualitative _ difference among the _ aperiodic _ time evolution emerging when is _ irrational _ , and the _ periodic _ indeed , even _ isochronous _ time evolutions prevailing whenever is _ rational _ , albeit with the qualifications indicated under item ( ii ) of remark 4 .note that we are now discussing the case ( an analogous discussion in the case can be forsaken ) , with initial data such that the two circles and in the _ complex _-plane _ do intersect _ and moreover the origin falls _ inside _ the circle ( i. e. and let us then consider a given _ irrational _ value of and let the _ rational _ number ( with ) provide a _ very good _ approximation to which of course entails that the _ positive integers _ and are both _very large_. consider then the _ difference _ (see the second entry in ( [ jtilab ] ) ) between the _ two _ positive integers that characterize the _ two _ periods of the _ two _ time evolutions of corresponding to _ two _ sets of initial data that differ _ very little_. 
here clearly the quantity is the _ difference _ between the number of branch points that are enclosed inside the circle for these _ two _ different sets of initial data .since the number of branch points on _ very large _ , it stands to reason that the _ quite small _ ( _ positive _ ) _ _ _ _ number provides a ( dimensionless ) measure of the difference between the _ two _ sets of initial data ( see for instance ( [ nu ] ) , which clearly becomes approximately applicable when is _ very large _ ) .the _ floor _ symbol has been introduced in the right - hand side of this formula to account for the integer character of the numbers hence of their difference while the _ order of magnitude _ symbol indicates that the difference is proportional ( in fact equal , given the latitude left by our definition of the quantity to the quantity up to _ corrections _ which become _ negligibly small _ when is _ very small _ and is _ very large _ , but irrespective of the value of the quantity itself which , as the product of the _ large _ number by the _small _ number , is required to be neither _ small _ nor _large_. the relation by this argument indicates that , for any given one can always choose ( finitely different ) initial data which differ by such a tiny amount that the corresponding periods are identical , confirming our previous statement about the _ isochronous _character of our model whenever the parameter is _ rational_. but conversely this finding also implies that , for any set of initial data in the sector under present consideration ( i. e. that characterized by the inequalities and and by an additional specification to identify the physical root , see ) , if our physical model ( [ eqmot ] ) is characterized by an _irrational _ value of see ( [ mu ] ) , and one replaces this value by a more and more accurate _ rational _ approximation of it , see ( mupq ) as it would for instance be inevitable in any numerical simulation corresponding to larger and larger values of and then one shall have to choose the two different assignments of initial data closer and closer to avoid a drastic change of period and for these very close sets of data the motion is indeed _ periodic _ with a period ( which we are able to predict , see ( [ jtilab ] ) and , but ) which becomes larger and larger the better one approximates the actual , _ irrational _ value of moreover in any numerical simulation the accuracy of the computation , in order to get the correct period , shall also have to increase more and more ( with no limit ) , because of the occurrence of closer and closer _ near misses _ through the time evolution ( associated with the coming into play of _ active _ branch points sitting on the circle closer and closer to the points of intersection with the circle ) . 
andfinally , if one insists in treating the problem with a truly _ irrational _ then , no matter how close the initial data are , the change in the periods becomes _ infinite _ because the difference in the number of _ active _ _ square - roots _ branch points on the circle included _ inside _ the circle is _ infinite _ ( see ( [ deltata ] ) ) , signifying that the motion is _ aperiodic _ , and that its evolution is indeed characterized by an _ infinite _ number of _ near misses _ , making it truly _ irregular ._ this phenomenology , together with that of the _ near misses _ as described above , illustrates rather clearly the _ irregular _ character of the motions of our physical model when the coupling constants have appropriate values ( such as to produce an irrational value of outside the interval ) and the initial data are in the sector identified above .note that the lyapunov coefficients associated with the corresponding trajectories vanish , because these coefficients as usually defined compare the difference ( after an _ infinitely long _ time ) of two trajectories that , to begin with , differ _ infinitesimally ; _ whereas our mechanism causing the _ irregular _ character of the motion requires , to come into play , an _ arbitrarily small but finite _ difference among the initial data .the difference between these two notions corresponds to the fact that inside the interval between two _ _ differen__t real numbers however close they may be there always is an _ infinity _ of _ rational _ numbers ; while this is _ not _ the case between two real numbers that differ only _ infinitesimally _ !this observation suggests that , in an _ applicative _ context , the mechanism causing a _ sensitive dependence _ on the initial data manifested by our model may be _ phenomenologically _ relevant even when no lyapunov coefficient , defined in the usual manner , is _positive_. as already observed previously , this mechanism is in some sense analogous to that yielding _ aperiodic _ trajectories in a triangular billiard with _ irrational_ angles ; although in that case in contrast to ours this outcome is mainly attributable to the essentially singular character of the corners , and moreover no truly _ irregular _ motions emerge .in this paper we have introduced and discussed a -body problem in the plane suitable to illustrate a mechanism of transition from _ regular _ to _ irregular _ motions .this model is the simplest one we managed to manufacture for this purpose .its simplicity permitted us to discuss in considerable detail the mathematical structure underlining this phenomenology : this machinery can not however be too simple since it must capture ( at least some of ) the subtleties associated with the _ onset _ of an _ irregular _ behavior . therefore in this short paper we were only able to report our main findings without detailing their proofs , and we also omitted several other relevant aspects of our treatment ( including a fuller discussion of previous work by others in related areas ) : this material shall be presented in a separate , much longer , paper , and probably as well via an electronic version of our findings so as to supplement their presentation with various animations illustrating these results and their derivation . 
our main motivation to undertake this research project is the hunch that this mechanism of transition _ _ _ _ have a fairly general validity and be relevant in interesting applicative contexts .hence we plan to pursue this study by focussing on other cases where this mechanism is known to play a key role , including examples ( see , for instance , and ) featuring a pattern of branch points covering densely an _ area _ of the _ complex _ plane of the independent variable rather than being confined just to reside densely on a _ line _ as is the case in the model treated herein ; and eventually to extend the application of this approach to problems of direct applicative interest . in this connectionthe following final observation is perhaps relevant . in this paper as well as in others the main focushas been on models featuring a transition from an _ isochronous _ to an _ irregular _ regime , and in this context much emphasis was put on the `` trick '' ( [ zita ] ) and in particular on the relationship it entails between the _ periodicity _ of the ( `` physical '' ) dependent variables as functions of the _ real _ independent variable ( `` time '' ) and the _ analyticity _ of other , related ( `` auxiliary '' ) dependent variables as functions of a _ complex _ independent variable .but our findings can also be interpreted _ directly _ in terms of the _ analytic properties _ of the physical dependent variables as functions of the independent variable considered itself as a _ complex _ variable. then the time evolution , which corresponded to a uniform travel round and round on the circle in the _ complex _ -plane or equivalently on the circle in the _ complex _-plane , is represented as a uniform travel to the right along the _ real _ axis in the _ complex _-plane , while , via the relations ( see ( [ zita ] ) and ( [ ksi ] ) ) pattern of branch points in the _ complex _ -plane or equivalently in the _ complex _ -plane gets mapped into a somewhat analogous pattern in the _ complex _-plane , _ repeated periodically _ in the _ real _ direction with period see ( [ t ] ) .in particular to mention the main features relevant to our treatment , see above the circle on which the _ square - root _ branch points in the _ complex _-plane sit , gets mapped in the _ complex _-plane into a curve on which sit the _ square - root _ branch points in the _ complex _ -plane ; note that this curve ( in contrast to the circle ) _ does _ now _ depend _ on the initial data .this curve is of course repeated periodically ; it is _ closed _ and contained in each vertical slab of width ( see figure [ fig2]b ) if the point is _ outside _ , otherwise it is _ open _ , starting in one slab and ending in the adjoining slab at a point shifted by the amount ; and it does not or does cross ( of course twice in each period ) the _ real _ axis in the _ complex _ -plane depending whether , in the _ complex _-plane , the two circles and do not or do intersect each other ( see figure [ fig2]a ) . 
likewise ,depending whether it is _ inside _ or _ outside _ the circle the point which , as entailed by our analysis , is a highly relevant branch point in the _ complex _-plane ( unless ) gets mapped into an analogous branch point located in each vertical slab _ above _ or _ below _ the _ real _ axis in the _ complex _ -plane ; while the other branch point , at in the _ complex _-plane , gets mapped into an analogous branch point located at infinity in the _ lower half _ of the _ complex _ -plane .clearly the physical mechanism of _ near misses _ , which is the main cause of the eventual _ irregularity _ of the motion , becomes relevant only for initial data such that the curve crosses the real axis , thereby causing ( if is _ irrational _ ) an infinity of _ square - root _ branch points of the functions to occur _ arbitrarily close _ to the _ real _ axis in the __ complex__-plane branch points which are however _ active _ ( namely , they actually cause a _ near miss _ in the physical evolution ) in only some ( yet still an infinity ) of the infinite number of vertical slabs in which the _ complex _ -plane gets now naturally partitioned .the _ near miss _ implies that the two particles involved in it slide past each other from one side or the other depending whether the corresponding branch point is just above or just below the real axis in the _ complex _ -plane .the _ sensitive _ dependence on the initial data is due to the fact that any tiny change of them causes some _ active _ branch point in the _ complex _-plane which is very close to real axis to cross over from one side of it to the other , thereby drastically changing the outcome of the corresponding _ near miss_. this terse discussion shows clearly that the explanation of the _ irregular _ behavior of a dynamical system in terms of travel on a riemann surface is by no means restricted to _isochronous _ systems .we found it convenient to illustrate in detail this paradigm by focussing in this paper on a simple _ isochronous _ model and by using firstly and then as independent _ complex _ variables but , as outlined just above , our analysis can also be done albeit less neatly by using directly the independent _ complex _ variable ; and the occurrence of a kind of periodic partition of the _ complex _-plane into an infinite sequence of vertical slabs characteristic of our _ isochronous _ model does not play an essential role to explain the _ irregular _ character of the motion when such a phenomenology does indeed emerge .the essential point is the possibility to reinterpret the time evolution as travel on a riemann surface , the structure of which is sufficiently complicated to cause an _irregular _ motion featuring a _ sensitive dependence _ on its initial data .the essential feature causing such an outcome is the presence of an _ infinity _ of branch points _ arbitrarily close _ to the _ real _ axis in the _ complex _-plane , the positions of which , as well as the identification of which of them are _ active _ , depends on the _ initial data _ nontrivially .the model treated in this paper shows that such a structure can be complicated enough to cause an _ irregular _motion , yet amenable to a simple mathematical description yielding a rather detailed understanding of this motion ; this suggests the efficacy also in more general contexts of this paradigm to understand ( certain ) _ irregular _ motions featuring a _ sensitive dependence _ on their initial data and possibly even to _ predict _ their behavior to the extent such a 
paradoxical achievement ( predicting the unpredictable ! ) can at all be feasible .9 f. calogero , a class of integrable hamiltonian systems whose solutions are ( perhaps ) all completely periodic , _ j. math .phys . _ * 38 * , 5711 - 5719 ( 1997 ) ; differential equations featuring many periodic solutions , in : l. mason and y. nutku ( eds ) , _ geometry and integrability _ , london mathematical society lecture notes , vol . * 295 * , cambridge university press , cambridge , 2003 , pp .9 - 21 ; periodic solutions of a system of complex odes , _ phys .lett . _ * a293 * , 146 - 150 ( 2002 ) ; on a modified version of a solvable ode due to painlev , _ j. phys .gen . _ * 35 * , 985 - 992 ( 2002 ) ; on modified versions of some solvable odes due to chazy , _ j. phys .gen . _ * 35 * , 4249 - 4256 ( 2002 ) ; solvable three - body problem and painlev conjectures , _ theor .phys . _ * 133 * , 1443 - 1452 ( 2002 ) ; erratum * 134 * , 139 ( 2003 ) ; a complex deformation of the classical gravitational many - body problem that features a lot of completely periodic motions , _ j. phys . a : math . gen . _ * 35 * , 3619 - 3627 ( 2002 ) ; partially superintegrable ( indeed isochronous ) systems are not rare , in : _ new trends in integrability and partial solvability _ , edited by a. b. shabat , a. gonzalez - lopez , m. maas , l. martinez alonso and m. a. rodriguez , nato science series , ii .mathematics , physics and chemistry , vol .* 132 * , proceedings of the nato advanced research workshop held in cadiz , spain , 2 - 16 june 2002 , kluwer , 2004 , pp .49 - 77 ; general solution of a three - body problem in the plane , _j_. _ phys .a : math . gen . _ * 36 * , 7291 - 7299 ( 2003 ) ; solution of the goldfish n - body problem in the plane with ( only ) nearest - neighbor coupling constants all equal to minus one half , _ _ j .nonlinear math .phys.__**11 * * , 1 - 11 ( 2004 ) ; two new classes of isochronous hamiltonian systems , _ j. nonlinear math .phys_. * 11 * , 208 - 222 ( 2004 ) ; isochronous dynamical systems , _applicable anal_. ( in press ) ; a technique to identify solvable dynamical systems , and a solvable generalization of the goldfish many - body problem , _ j. math . phys_. * 45 * , 2266 - 2279 ( 2004 ) ; a technique to identify solvable dynamical systems , and another solvable extension of the goldfish many - body problem , _ j. math .phys_. * 45 * , 4661 - 4678 ( 2004 ) ; isochronous systems , _ proceedings _ of the conference on geometry , integrability and physics , varna , june 2004 ( in press ) ; isochronous systems , in : _ encyclopedia of mathematical physics _ , edited by j .-p franoise , g. naber and tsou sheung tsun ( in press ) .f. calogero and a. degasperis , novel solution of the integrable system describing the resonant interaction of three waves , _ physica d _ * 200 * , 242 - 256 ( 2005 ) .f. calogero and j .- p .franoise , periodic solutions of a many - rotator problem in the plane , _ inverse problems _ * 17 * , 1 - 8 ( 2001 ) ; periodic motions galore : how to modify nonlinear evolution equations so that they feature a lot of periodic solutions , _ j. nonlinear math .phys . _ * 9 * , 99 - 125 ( 2002 ) ; nonlinear evolution odes featuring many periodic solutions , _ theorphys . 
_ * 137 * , 1663 - 1675 ( 2003 ) ; isochronous motions galore : nonlinearly coupled oscillators with lots of isochronous solutions , in : _superintegrability in classical and quantum systems _ , proceedings of the workshop on superintegrability in classical and quantum systems , centre de recherches mathmatiques ( crm ) , universit de montral , september 16 - 21 ( 2003 ) , crm proceedings & lecture notes , vol . *37 * , american mathematical society , 2004 , pp . 15 - 27 ; new solvable many - body problems in the plane , _ annales henri poincar _ ( submitted to ) .f. calogero , j .-franoise and a. guillot , a further solvable three - body problem in the plane , _ j. math .* 10 * , 157 - 214 ( 2003 ) .f. calogero and v. i. inozemtsev , nonlinear harmonic oscillators , _ j. phys . a : math .gen . _ * 35 * , 10365 - 10375 ( 2002 ) .s. iona and f. calogero , integrable systems of quartic oscillators in ordinary ( three - dimensional ) space , _ j. phys . a : math .gen . _ * 35 * , 3091 - 3098 ( 2002 ) . m. mariani and f. calogero , isochronous pdes , _ yadernaya fizika _( russian journal of nuclear physics ) * 68 * , 958 - 968 ( 2005 ) ; a modified schwarzian korteweg de vries equation in 2 + 1 dimensions with lots of periodic solutions , _ yadernaya fizika_(in press ) .m. sommacal , `` studio di problemi a molti corpi nel piano con tecniche numeriche ed analitiche '' , dissertaion for the `` laurea in fisica '' , universit di roma `` la sapienza '' , 26 september 2002 .f. calogero and m. sommacal , periodic solutions of a system of complex odes .higher periods , _ j. nonlinear math .phys . _ * 9 * , 1 - 33 ( 2002 ) .f. calogero , j .-franoise and m. sommacal , periodic solutions of a many - rotator problem in the plane .ii . analysis of various motions , _ j. nonlinear math .* 10 * , 157 - 214 ( 2003 ) .m. d. kruskal and p. a. clarkson , the painlev - kowalewski and poly - painlev tests for integrability , _ studies appl . math ._ * 86 * , 87 - 165 ( 1992 ) .r. d. costin and m. d. kruskal , nonintegrability criteria for a class of differential equations with two regular singular points , _ nonlinearity _ * 16 * , 1295 - 1317 ( 2003 ) .r. d. costin , integrability properties of a generalized lam equation ; applications to the hnon - heiles system , methods appl. anal . * 4 * , 113 - 123 ( 1997 ) .t. bountis , l. drossos and i. c. percival , non - integrable systems with algebraic singularities in complex time , _phys . a : math ._ _ * * 23 * * , 3217 - 3236 ( 1991 ) .t. bountis , investigating non - integrability and chaos in complex time , in : _ nato asi conf .como , september 1993 & _ phys .d _ * 86 * , 256 - 267 ( 1995 ) ; investigating non - integrability and chaos in complex time , _ physica _ * d86 * , 256 - 267 ( 1995 ) .a. s. fokas and t. bountis , order and the ubiquitous occurrence of chaos , _ physica _ * a228 * , 236 - 244 ( 1996).s .abenda , v. marinakis and t. bountis , on the connection between hyperelliptic separability and painlev integrability , j. phys .a : math . gen . * 34 * , 3521 - 3539 ( 2001 ) .e. induti , studio del moto nel piano complesso di particelle attirate verso l origine da una forza lineare ed interagenti a coppie con una forza proporzionale ad una potenza inversa dispari della loro mutua distanza " , dissertation for the laurea in fisica " , universit di roma la sapienza " , 26 may 2005 . 
| we introduce and discuss a simple hamiltonian dynamical system , interpretable as a -body problem in the ( _ complex _ ) plane and providing the prototype of a mechanism explaining the transition from _ regular _ to _ irregular _ motions as travel on riemann surfaces . the interest of this phenomenology illustrating the _ onset _ in a _ deterministic _ context of _ irregular _ motions is underlined by its generality , suggesting its eventual relevance to understand natural phenomena and experimental investigations . here only some of our main findings are reported , without detailing their proofs : a more complete presentation will be published elsewhere . |
let be a hilbert space .a set of elements in ( counting multiplicity ) is called a _ frame _ if there exist two positive constants and such that for any we have the constants and are called the _ lower frame bound _ and the _ upper frame bound _ , respectively . a frameis called a _ tight frame _ if . in this paperwe focus mostly on real finite dimensional hilbert spaces with and , although we shall also discuss the extendability of the results to the complex case .let ] could be so poor that the reconstruction is numerically unstable against the presence of additive noise in the data .thus robustness against data loss and erasures is a highly desirable property for a frame .there have been a number of studies that aim to address this important issue . among the first studies of erasure - robust frameswas given in .it was shown in subsequent studies that that unit norm tight frames are optimally robust against one erasure while grassmannian frames are optimally robust against two erasures .the literature on erasure robustness for frames is quite extensive , see e.g. also . in general , the robustness of a frame against -erasures , where , is measured by the maximum of the condition numbers of all submatrices of .more precisely , let and let denote the submatrix of with columns for ( in its natural order , although the order of the columns is irrelevant ) .then the robustness against -erasures of is measured by of course , the smaller is the more robust is against -erasures . in , fickus andmixon coined the term _ numerically erasure robust frame ( nerf)_. a frame is -nerf if thus in this case .note that for any full spark frame matrix and any there always exist such that is -nerf .the main goal is to find classes of frames where the bounds , and more importantly , , are independent of the dimension while allowing the proportion of erasures as large as possible .the authors studied in the erasure robustness of , where the entries of are independent random variables of the standard normal distribution .it was shown that with high probability such a matrix can be good nerfs provided that is no less than approximately of .the authors also proved that equiangular frame in with vectors is a good nerf against up to about % erasures .as far as the proportion of erasures is concerned this was the best known result for nerfs .however , the frame requires almost vectors .the authors posed as an open question whether there exist nerfs with .a more recent paper explored a deterministic construction based on certain group theoretic techniques .the approach offers more flexibility in the frame design than the far more restrictive equiangular frames . 
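the following small python / numpy sketch ( our own illustration , with toy sizes chosen by us ) makes the robustness measure above concrete : for a random gaussian frame it enumerates all submatrices obtained by keeping a fixed number of frame vectors and reports the worst condition number and the worst inverse smallest singular value ; the exhaustive enumeration is of course only feasible for very small numbers of vectors .

import itertools
import numpy as np

rng = np.random.default_rng(0)

d, N_vec = 4, 10      # toy dimension and number of frame vectors (assumed values)
n_keep = 6            # number of vectors remaining after erasures
F = rng.standard_normal((d, N_vec)) / np.sqrt(d)   # random frame, columns = frame vectors

worst_cond, worst_inv_smin = 0.0, 0.0
for T in itertools.combinations(range(N_vec), n_keep):
    sub = F[:, list(T)]                       # submatrix after erasing the other columns
    s = np.linalg.svd(sub, compute_uv=False)
    worst_cond = max(worst_cond, s[0] / s[-1])
    worst_inv_smin = max(worst_inv_smin, 1.0 / s[-1])

print(f"worst condition number over all {n_keep}-subsets: {worst_cond:.2f}")
print(f"worst 1/sigma_min over all {n_keep}-subsets:      {worst_inv_smin:.2f}")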
in this paperwe revisit the robustness of random frames .we provide a much stronger result for random frames , showing that for any , with very high probability , the frame is a -nerf where depend only on and the aspect ratio .one version of our result is given by the following theorem .[ theo-1.1 ] let where is whose entries are independent gaussian random variables of distribution .let .then for any and there exist depending only on and such that for any , the frame is a -nerf with probability at least .later in the paper we shall provide more implicit estimates for that will allow us to easily compute them numerically .note that our result is essentially the best possible , as we can not go to .a corollary of the theorem is that for random gaussian frames the proportion of erasures can be made arbitrary large while the frames still maintain robustness with overwhelming probability .our theorem depends crucially on a refined estimate on the smallest singular value of a random gaussian matrix .there is a wealth of literature on random matrices .the study of singular values of random matrices has been particularly intense in recent years due to their applications in compressive sensing for the construction of matrices with the so - called _ restricted isometry property _( see e.g. ) .random matrices have also been employed for phase retrieval , which aims to reconstruct a signal from the magnitudes of its samples . for a very informative and comprehensive survey of the subjectwe refer the readers to , which also contains an extensive list of references ( among the notable ones ) . for the gaussian random matrix the expected value of and are asymptotically and , respectively .many important results , such as the nerf analysis of random matrices in as well as results on the restricted isometry property in compressive sensing , often utilize known estimates of and based on hoeffding - type inequalities . one good such estimate is see .the problem with this estimate is that even by taking we only get a bound of even though the probability in this case is 0 .thus estimates such as ( [ 1.2 ] ) that cap the decay rate are often inadequate . when applied to the erasure robustness problem for frames they usually put a cap on the proportion of erasures . to go further we must prove an estimate that will allow the exponent of decay to be much larger .we achieve this goal by proving the following theorem : [ theo-1.2 ] let be whose entries are independent random variables of standard normal distribution .let . then for any there exist constants depending only on and such that furthermore, we may take and where * acknowledgement .* the author would like to thank radu balan and dustin mixon for very helpful discussions .we begin with estimates on the extremal singular values of a ranodm matrix whose entries are independent standard normal random variables. we shall assume throughout the section that is where .one of the very important estimates is see .our main goal of this section is to prove the estimates for smallest singular value stated in theorem [ theo-1.2 ] . an equivalent formulation of ( [ 2.1 ] ) is observe that where denotes the unit sphere in .[ lem-2.1 ] let . for any probability is independent of the choice of .we have for any .* proof .* the fact that is independent of the choice of is a well know fact , which stems from the fact that the entries of are again independent standard normal random variables for any orthogonal matrix . 
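as a quick numerical sanity check of the lower tail of the smallest singular value , the sketch below ( ours ; the sizes and sample counts are arbitrary toy choices ) estimates the probability that sigma_min of an m x n gaussian matrix falls below eps * sqrt(m) for a few values of eps ; the empirical tail collapses very fast as eps decreases , which is the behaviour the refined estimate above quantifies with explicit constants .

import numpy as np

rng = np.random.default_rng(1)

m, n = 60, 20          # toy sizes, aspect ratio n/m fixed (assumption)
n_trials = 2000

smin = np.empty(n_trials)
for i in range(n_trials):
    B = rng.standard_normal((m, n))
    smin[i] = np.linalg.svd(B, compute_uv=False)[-1]

for eps in (0.6, 0.4, 0.2, 0.1):
    p = np.mean(smin <= eps * np.sqrt(m))
    print(f"eps = {eps:>4}: empirical P(sigma_min <= eps*sqrt(m)) ~ {p:.4f}")
# the tail probability drops off rapidly as eps -> 0, consistent with a bound of the
# form (c*eps)^k with a large exponent k for tall matrices.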
in particular, one can always find an orthogonal such that .thus we may without loss of generality take . in this case ]. then is an matrix whose entries are independent real standard normal random variables .it is easy to check that .thus by taking we have via ( [ 2.1 ] ) that the estimate for follows from the same strategy as in the real case .first of all , just like the real case for any unitary matrix the entries of are still independed complex standard normal random variables . as a result the probability ) where is a unit vector does not depend on the choice of . by taking see that has the distribution ( as opposed to the distribution in the real case ) .applying lemma [ lem-2.1 ] we obtain the equivalent result for the complex case in next for the -net , we observe that the unit sphere in is precisely the unit sphere in if we identify as .thus we can find an -net of cardinality no more than .the proof of theorem [ theo-1.2 ] now goes through with some minor modifications .the most important one is that with ( [ 2.16 ] ) and ( [ 2.17 ] ) the inequality condition ( [ 2.7 ] ) now becomes where the constant is changed to . substituting this and for we prove the theorem . ' '' ''our goal in this section is to establish the robustness of random frames against erasures by proving theorem [ theo-1.1 ] .here we restate theorem [ theo-1.1 ] in a a different form for the benefit of simpler notation in the proof .[ theo-3.1 ] let where is whose entries are drawn independently from the standard normal distribution .let and where .for any there exist constants depending only on , and such that is a -nerf with probability at least .* proof .* there exists exactly subsets of cardinality .it is well known that which can be shown easily by stirling s formula or induction on .set , which has .we have then now we set .let and where is given in ( [ 1.4 ] ) .let the columns of be .for any we denote by the submatrix of whose columns are .then for we have by theorem [ theo-1.2 ] .it follows that it follows that this implies that , by setting and , is a -nerf with probability at least . theorems [ 1.1 ] and [ theo-3.1 ] states that random gaussian frames can be robust with overwhelming probability against erasures of an arbitrary proportion of data from the original data , at least in theory , as long as the number of remaining vectors is at least for some . in practice one may ask how good the condition numbers are if the erasures reach a high proportion , say , 90% of the data .we show some numerical results below. * example 1 .* let where is whose entries are independent standard normal random variables .set . in this experimentwe fix and , respectively , and let vary .as increases from to the proportion of erasure increases from 0 to 99% .we shall use as a measure of robustness since it is an upper bound for the condition number . clearly , as increases we should expect to increase . the left plot in figure [ fig-1 ]shows against for both ( top curve ) and ( bottom curve ) .because the frame is normalized so that each column is on average a unit norm vector , it also makes sense to use the smallest singular value as a measurement of robustness .the right plot in figure [ fig-1 ] shows against also for both ( top curve ) and ( bottom curve ) .our numerical results show that in the case , with probability at least , the condition number is no more than for % erasures and no more than for 90% erasures . 
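since enumerating all subsets is hopeless at realistic sizes , a numerical experiment in the spirit of example 1 can instead sample random erasure patterns ; the sketch below ( our own , with toy sizes , and a sampled rather than exhaustive search that therefore only lower - bounds the true worst case ) tracks the largest observed inverse smallest singular value as the proportion of erasures grows .

import numpy as np

rng = np.random.default_rng(2)

m, N_vec = 40, 2000                 # toy frame size (assumption), columns scaled by 1/sqrt(m)
F = rng.standard_normal((m, N_vec)) / np.sqrt(m)
n_samples = 200                     # random erasure patterns tried per erasure level

for frac_erased in (0.0, 0.5, 0.9, 0.95):
    n_keep = int(round((1.0 - frac_erased) * N_vec))
    worst = 0.0
    for _ in range(n_samples):
        kept = rng.choice(N_vec, size=n_keep, replace=False)
        s = np.linalg.svd(F[:, kept], compute_uv=False)
        worst = max(worst, 1.0 / s[-1])
    print(f"erasures: {frac_erased:4.0%}   kept: {n_keep:5d}   worst sampled 1/sigma_min: {worst:6.2f}")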
in the case , the corresponding numbers are 139.88 and 1862.1 , respectively . in fact , even with 99% erasures the condition number is no more than 42716 . ( caption of figure [ fig-1 ] ) left : against the proportion of erasures when varies from to while is fixed at ( top curve ) and ( bottom curve ) . right : same as in the left figure , but for . * example 2 . * again we let where is whose entries are independent standard normal random variables , and let . in this experiment we fix and , respectively , and let vary so the proportion of erasures varies from 0 to 99% ( and 0 to 97% ( ) , respectively . again we should expect the robustness to go down as we increase . the left plot in figure [ fig-2 ] shows against for ( top curve ) and ( bottom curve ) . the right plot in figure [ fig-2 ] shows against also for both ( top curve ) and ( bottom curve ) . our numerical results show that in the case , with probability at least , the condition number is no more than for % erasures and for 90% erasures . in the case , the corresponding numbers are 23.48 and 315.12 , respectively . even with 95% erasures the condition number is no more than 1312.4 . ( caption of figure [ fig-2 ] ) left : against the proportion of erasures when varies while is fixed at ( top curve ) and ( bottom curve ) . right : same as in the left figure , but for . | data erasure can often occur in communication . guarding against erasures involves redundancy in data representation . mathematically this may be achieved by redundancy through the use of frames . one way to measure the robustness of a frame against erasures is to examine the worst case condition number of the frame with a certain number of vectors erased from the frame . the term _ numerically erasure - robust frames ( nerfs ) _ was introduced in to give a more precise characterization of erasure robustness of frames . in the paper the authors established that random frames whose entries are drawn independently from the standard normal distribution can be robust against up to approximately 15% erasures , and asked whether there exist frames that are robust against erasures of more than 50% . in this paper we show that with very high probability random frames are , independent of the dimension , robust against any amount of erasures as long as the number of remaining vectors is at least times the dimension for some . this is the best possible result , and it also implies that the proportion of erasures can be arbitrarily close to 1 while still maintaining robustness . our result depends crucially on a new estimate for the smallest singular value of a rectangular random matrix with independent standard normal entries .
the liquid - vapor fractionation ratio , , is defined as where is the mole fraction of isotope , denotes the liquid phase , and denotes the vapor phase . in the second equality , is the helmholtz free energy corresponding to the process in this work we consider the dilute deuterium ( d ) limit which reflects the situation found in the earth s atmosphere where it is 6000 times less common than h. in this limit , we consider the free energy of exchanging a single d atom in a vapor water molecule with an h atom in a liquid water molecule , with all other molecules being h .the free energy difference can be calculated from the thermodynamic integration expression where and are the kinetic energy expectation values for a hydrogen isotope of mass in a water molecule ho in the liquid and vapor phases respectively .the kinetic energy can be calculated exactly for a given potential energy model of water using a path integral molecular dynamics ( pimd ) simulation .these simulations exploit the exact isomorphism between a system of quantum mechanical particles and that of a set of classical ring polymers in which the spread of a polymer is directly related to that quantum particle s position uncertainty .the kinetic energy for the particle in the molecule ho can be calculated from these simulations using the centroid virial estimator where t is the temperature and k is the boltzmann constant . here , is vector from the bead representing particle to the center of the ring polymer and is the force on that bead as shown in fig .[ fig : waterbeads ] .the first term in this expression is the classical kinetic energy and is independent of the surrounding environment , thus it is identical in the vapor and liquid phase .the second term is the kinetic energy associated with confinement of a quantum particle .this confinement depends on the forces exerted on the particle by the surrounding molecules .equilibrium fractionation is thus an entirely quantum mechanical phenomenon .pimd simulations were performed using 1000 water molecules with =32 ring polymer beads used for the imaginary time discretization . previously described evolution and thermostatting procedures were used .the computational cost of these calculations was reduced by using the ring polymer contraction technique with a cut - off of 5 , which for this system size leads to more than an order of magnitude speed - up compared to a standard pimd implementation . the integral in eq .[ eq : deltaa ] was performed using the midpoint rule with 11 masses evenly spaced between and .calculations were performed at the experimental coexistence densities . to model the interactions within and between water molecules, we used the q - spcfw and q - tip4p / f models which have previously been shown to accurately reproduce many of water s properties in pimd simulations of liquid water .both models are flexible , use point charges , and have a harmonic description of the bending mode .however , while q - spcfw uses a purely harmonic description of the oh stretch q - tip4p / f contains anharmonicity by modeling the stretch as a morse expansion truncated at fourth order .\label{eq : aharm}\ ] ] here is the distance between the oxygen and hydrogen atom and the parameters are given in refs . and . the anharmonicity in the q - tip4p / f model makes the observed quantum mechanical effects much smaller than previously predicted from harmonic or rigid models and gave rise to the idea of `` competing quantum effects '' in water . 
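the quadrature step of the thermodynamic integration described above is simple enough to sketch in a few lines of python ; the kinetic - energy values below are made up ( in a real calculation they come from one pimd run per mass on the grid ) , and the sign convention linking the free energy difference to the fractionation factor is only assumed here , so it should be checked against eq . [ eq : deltaa ] .

import numpy as np

kB = 0.0019872041          # boltzmann constant in kcal/(mol K)
temp = 300.0               # temperature in K
m_H, m_D = 1.008, 2.014    # isotope masses in amu

# midpoint-rule mass grid with 11 points, as described in the text
n_pts = 11
dm = (m_D - m_H) / n_pts
masses = m_H + dm * (np.arange(n_pts) + 0.5)

# placeholder centroid-virial kinetic energies <T_liq(m)> and <T_vap(m)> in kcal/mol (made up)
T_liq = 10.5 / np.sqrt(masses)
T_vap = 10.4 / np.sqrt(masses)

# thermodynamic integration over the isotope mass; sign convention assumed such that
# a positive 10^3 ln(alpha) means D prefers the liquid, as in the experimental data
dA = np.sum((T_vap - T_liq) / masses) * dm
ln_alpha = -dA / (kB * temp)
print(f"Delta A = {dA:.4f} kcal/mol ,  10^3 ln(alpha) = {1e3 * ln_alpha:.1f}")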
both models have previously been shown to accurately reproduce many of water s properties in pimd simulations of liquid water . due to their simple potential form such models are generally less transferable to other phases than more sophisticated polarizable or _ ab initio _ descriptions .however partially adiabatic centroid molecular dynamics simulations have shown that the anharmonic stretch ( eq . [ eq : aharm ] ) allows reasonable agreement to be obtained in the observed frequency shifts in the infrared spectrum in going from liquid to gaseous water as well as from pure light to pure heavy water .these models were chosen to assess the importance of anharmonicity in the oh stretch using a `` zeroth order '' description of liquid water .hence , as we show below , they offer a straightforward way to assess competing quantum effects in water .figure [ fig : graph ] shows the fractionation factors calculated from our pimd simulations compared to the experimental data of ref .for consistency with the experimental data , we plot which is simply . since is generally small , thus an experimental value at 280 k of 100 corresponds to 10% more d residing in the liquid than the vapor . above 500 k ,the experimental data shows a well characterized region of inverse fractionation where d becomes more favored in the vapor than in the liquid , as shown in the inset of fig .[ fig : graph ] . turning to the simulated data , we observe that the harmonic q - spc / fw model over - predicts the magnitude of fractionation at 300 k by a factor of 3 , and does not fall to the value observed experimentally at 300 k until the temperture is raised to 450 k. in contrast , the q - tip4p / f model is in error by only 25% at the lowest temperature and approaches the experimental values more closely at higher temperatures .it also correctly shows inverse fractionation above 540 k. a previous study using the rigid spc / e model and perturbation theory , found a h / d fractionation of 450 at 300 k which is times higher than that seen experimentally .however , it is not clear whether this was purely due to the use of a rigid model or whether the approximate perturbation technique used to obtain the fractionation ratios was also at fault . while fig.[fig : graph ] demonstrates that the q - tip4p / f model provides much better agreement with the experimental data than q - spcfw , it is not immediately clear what aspect of the parameterization causes this . to better understand the origins of this effect , we constructed two models which we denote aq - spc / fw and hq - tip4p / f .in the former , the q - spc / fw water model has its harmonic oh stretch replaced by a fourth order morse expansion using the parameters of q - tip4p / f ; in the latter , the morse potential of q - tip4p / f is truncated at the harmonic term ( see eqs .[ eq : harm ] and [ eq : aharm ] ) . the anharmonic variant of q - spc / fw gives results as good as q - tip4p / f , while the harmonic version of q - tip4p / f fares as poorly as q - spcfw . in other words , the accurate prediction of the fractionation ratios in liquid wateris tied to the anharmonicity in the oh direction and is rather insensitive to the other parameters .this sheds light on a previous study where a sophisticated rigid polarizable model gave identical predictions for h / d fractionation to the simple fixed - charge rigid spc / e model , i.e. 
varying the intermolecular potential alone does not give the flexibility required to accurately reproduce the experimental fractionation ratios . to determine the reason for the inversion observed in fractionation above 500 k in both experiment and the q - tip4p / f model , we decompose the contributions to the fractionation ratio by noting that the quantum contribution to the kinetic energy in eq .[ eq : kinetic ] is the dot product of two vectors .the overall kinetic energy is invariant to the coordinate system used to evaluate it , thus when the kinetic energy is calculated in the standard cartesian basis all three components will average to the same number due to the isotropy of the liquid . to gain further insight , we instead use the internal coordinates of the water molecule and determine the contribution to arising from the oh bond vector , a vector in the plane of the molecule , and the vector perpendicular to the molecular plane as shown in fig .[ fig : waterbeads ] .this is similar to the approach taken by lin _ et al . _ in the different context of investigating the proton momentum distribution in ice .the results of this decomposition are shown in table [ ta : decompose_300 ] for 300 k where d is experimentally seen to favor the liquid , and table [ ta : decompose_620 ] for 620 k where d is experimentally seen to favor the vapor . from table [ ta : decompose_300 ] , we see that all of the models have largely similar contributions from the two directions orthogonal to the oh bond and that both are positive and therefore favor the d excess in the liquid .the contribution perpendicular to the plane is noticeably larger than the contribution in the plane . as shown in eq .[ eq : deltaa ] , the values depend on the change in the quantum kinetic energy between h and d in the liquid and the vapor , which in turn is determined by how much the h or d atom s position uncertainty is restricted by interacting with the other water molecules in the liquid or vapor . since in the vapor there is little confinement in the plane orthogonal to the water , a larger contribution from that direction is expected ; in the liquid , other molecules are present which restrict expansion in that coordinate . in all cases the oh contribution is negative , indicating that there is less confinement in the position of the h atom in that direction in the liquid than in the vapor because the hydrogen atom participates in hydrogen bonding , allowing the oh chemical bond to stretch more easily .however , comparing the anharmonic models ( aq - spcfw and q - tip4p / f ) with their harmonic counterparts ( q - spcfw and hq - tip4p / f ) , we observe a 10-fold increase in the values arising from the oh contribution , which gives rise to a larger cancellation of the positive contributions from the two orthogonal vectors .it is this cancellation that leads to the much better agreement with the experimental data at 300 k.
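the geometric part of this decomposition is easy to make explicit : one builds , for each h atom , an orthonormal triad ( oh direction , in - plane direction perpendicular to oh , out - of - plane normal ) and projects the bead displacements and forces of the centroid virial estimator onto each axis . the sketch below ( ours ) shows only that projection step ; the sign and prefactor conventions , and the classical term which is direction independent , follow eq . [ eq : kinetic ] and are not reproduced here .

import numpy as np

def internal_axes(r_O, r_H_this, r_H_other):
    # orthonormal molecular axes for one H atom: OH direction, in-plane, out-of-plane
    e_oh = r_H_this - r_O
    e_oh = e_oh / np.linalg.norm(e_oh)
    normal = np.cross(r_H_this - r_O, r_H_other - r_O)   # perpendicular to the molecular plane
    normal = normal / np.linalg.norm(normal)
    in_plane = np.cross(normal, e_oh)                    # in the plane, orthogonal to OH
    return e_oh, in_plane, normal

def virial_components(dr_beads, f_beads, axes):
    # directional confinement terms: bead average of (dr . e)(f . e) for each axis e,
    # i.e. the decomposition of the dot product appearing in the centroid virial estimator
    P = dr_beads.shape[0]
    return [float(np.sum((dr_beads @ e) * (f_beads @ e)) / P) for e in axes]

# toy usage with random bead data for one H atom (P = 32 beads, arbitrary geometry)
rng = np.random.default_rng(0)
r_O = np.array([0.0, 0.0, 0.0])
r_H1 = np.array([0.96, 0.0, 0.0])
r_H2 = np.array([-0.24, 0.93, 0.0])
axes = internal_axes(r_O, r_H1, r_H2)
dr = 0.1 * rng.standard_normal((32, 3))
f = rng.standard_normal((32, 3))
t_oh, t_plane, t_orth = virial_components(dr, f, axes)
print(t_oh, t_plane, t_orth)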
we now turn to table [ ta : decompose_620 ] , which shows the contributions to the fractionation ratio at 620 k , a regime where experimentally d is preferred in the lighter phase .this is closer to the classical limit where the fractionation would be zero , thus , as expected , each component is reduced in magnitude compared to the lower temperature data in table [ ta : decompose_300 ] .however , the relative decrease in each component varies .the contributions arising from the in - plane and out - of - plane directions orthogonal to oh decrease by a factor of 7 - 8 whereas the oh contribution falls by a factor of only .for the anharmonic models , the negative oh contribution outweighs the positive components in the other two directions leading to an inversion of the fractionation compared to that seen at 300 k in agreement with the experimental observation of -2 .the reason the oh component falls off more slowly is that this direction is dominated by stretching of the oh chemical bond , which is a high frequency coordinate , so even at high temperatures , quantum mechanics plays a noticeable role .in contrast , the two directions orthogonal to the oh direction are lower frequency and so approach the classical limit more rapidly as the temperature is increased .thus , the reason the h / d fractionation is low around 600 k is not due to the fact that all contributions are individually low but rather that they nearly exactly cancel at this point due to the different rates at which the components approach the classical limit . finally , we computed the fractionation ratio for the ttm3-f water model , which is known to have a very large cancellation of its quantum mechanical effects at 300 k .this model was fit to _ ab initio _ calculations using a potential form incorporating anharmonic flexibility , geometry dependent charges , and polarizability on the oxygen site .as such , it represents the current `` gold standard '' of parameterized water models and has been used extensively in recent studies probing the effects of quantum mechanical fluctuations on water .
to see if the large cancellation of quantum effects predicted by this model is consistent with experimental fractionation ratios , we calculated the value at 300 k which yielded a value of .this is in qualitative and quantitative disagreement with the experimental results , since it predicts d to be favored in the vapor at all temperatures .this model therefore over - predicts competing quantum effects in water and hence care should be taken concerning its predictions on the effects of quantum fluctuations on water s structure and dynamics .however , based on our discussion above it is likely that reparameterization of the oh bond anharmonicity could correct this discrepancy .in conclusion , we have shown that including anharmonicity in the oh bond when modeling water is essential to obtain agreement with the experimentally observed h / d fractionation ratios and that these ratios provide an excellent method to assess the accuracy of the quantum effects predicted by models of water .since it has recently been shown that the competition between quantum mechanical effects applies to other hydrogen bonded systems , it is likely that many of our conclusions will be relevant to understanding isotopic fractionation in these systems .additionally , while we only considered equilibrium fractionation in this work , which can be calculated exactly for a given potential energy model using pimd simulations , many water processes occurring in the world s atmosphere are non - equilibrium ones . while including the effects of quantum fluctuations on dynamics is a much more challenging feat , the recent development of efficient condensed phase quantum dynamics approaches should allow insights to be gained into kinetic fractionation processes .these directions will form the basis of future work .the authors gratefully thank joseph morrone and david selassie for helpful comments and a critical reading of this manuscript .this research was supported by a grant to b.j.b . from the national science foundation ( nsf - che-0910943 ) .vanicek j , miller wh ( 2007 ) efficient estimators for quantum instanton evaluation of the kinetic isotope effects : application to the intramolecular hydrogen transfer in pentadiene ._ j. chem . phys . _ 127:114309 .paesani f , voth ga ( 2010 ) a quantitative assessment of the accuracy of centroid molecular dynamics for the calculation of the infrared spectrum of liquid water . _ j. chem . phys . _ 132:014105 .fanourgakis gs , xantheas ss ( 2008 ) development of transferable interaction potentials for water .v. extension of the flexible , polarizable , thole - type model potential ( ttm3-f , v. 3.0 ) to describe the vibrational spectra of water clusters and liquid water ._ j. chem . phys . _ 128:074506 .liu j , et al .( 2011 ) insights in quantum dynamical effects in the infrared spectroscopy of liquid water from a semiclassical study with an ab initio - based flexible and polarizable force field ._ j. chem . phys . _ 135:244503 .
table [ ta : decompose_300 ] ( decomposition of the contributions at 300 k ) :
model & stretch & oh & plane & orth & total
q - spcfw & harmonic & -17 & 92 & 141 & 216
hq - tip4p / f & harmonic & -15 & 84 & 146 & 215
q - tip4p / f & anharmonic & -152 & 92 & 164 & 104
aq - spcfw & anharmonic & -149 & 98 & 151 & 100
table [ ta : decompose_620 ] ( decomposition of the contributions at 620 k ) :
model & stretch & oh & plane & orth & total
q - spcfw & harmonic & -3 & 13 & 20 & 30
hq - tip4p / f & harmonic & -3 & 11 & 20 & 28
q - tip4p / f & anharmonic & -40 & 13 & 22 & -5
aq - spcfw & anharmonic & -40 & 13 & 21 & -6
| when two phases of water are at equilibrium , the ratio of hydrogen isotopes in each is slightly altered due to their different phase affinities . this isotopic fractionation process can be utilized to analyze water s movement in the world s climate . here we show that equilibrium fractionation ratios , an entirely quantum mechanical property , also provide a sensitive probe to assess the magnitude of nuclear quantum fluctuations in water . by comparing the predictions of a series of water models , we show that those describing the oh chemical bond as rigid or harmonic greatly over - predict the magnitude of isotope fractionation . models that account for anharmonicity in this coordinate are shown to provide much more accurate results due to their ability to give partial cancellation between inter and intra - molecular quantum effects . these results give evidence of the existence of competing quantum effects in water and allow us to identify how this cancellation varies across a wide range of temperatures . in addition , this work demonstrates that simulation can provide accurate predictions and insights into hydrogen fractionation . water within earth s atmosphere is naturally composed of the stable hydrogen isotopes hydrogen ( h ) and deuterium ( d ) . during cycles of evaporation , condensation and precipitation , these isotopes naturally undergo partial separation due to their differing masses thereby leading to different h / d ratios in the two phases . this process of fractionation has a number of fortuitous consequences which are utilized in hydrology and geology . for instance , by comparing the ratio of h to d , one can estimate the origins of a water sample , the temperature at which it was formed , and the altitude at which precipitation occurred . equilibrium fractionation , where the two phases are allowed to equilibrate their h / d ratio , is entirely a consequence of the effects of quantum mechanical fluctuations on water s hydrogen bond network . quantum mechanical effects such as zero - point energy and tunneling are larger for h due to its lower mass . despite numerous studies , the extent to which quantum fluctuations affect water s structure and dynamics remains a subject of considerable debate . it has long been appreciated that one effect of quantum fluctuations in water is the disruption of hydrogen bonding , leading to de - structuring of the liquid and faster dynamics . however , more recent work has suggested that a competing quantum effect may exist in water , namely that the quantum kinetic energy in the oh covalent bond allows it to stretch and form shorter and stronger hydrogen bonds , which partially cancels the disruptive effect . this hydrogen bond strengthening has only been recently appreciated , as many original studies drew their conclusions based on models with rigid or harmonic bonds which are unable to describe this behavior . the degree of quantum effect cancellation depends sensitively on the anharmonicity of the oh stretch and the temperature .
these parameters tune the balance between the lower frequency hydrogen bonding disruption , which will dominate at lower temperatures , and the higher frequency hydrogen bond strengthening effect , which will dominate at higher temperatures when rotations become essentially classical . if such a large degree of cancellation existed at ambient temperature , it would be highly fortuitous both in terms of the biological effects of heavy water , which is only mildy toxic to humans , as well as the ability to use heavy solvents in 2d - ir and nmr spectroscopies , where deuteration is assumed not to dramatically alter the structure or dynamics observed . however , the size of this cancellation remains elusive since empirical quantum models of water are typically fit to reproduce its properties when used in path integral simulations and the two _ ab initio _ path integral studies performed have not produced a consistent picture . in addition , many of these simulation studies compare the properties of water to those of its classical counterpart , but `` classical '' water is physically unrealizable even at relatively high temperatures , since water still has significant quantum effects present in its vibrations . in this paper , we use equilibrium fractionation ratios as a sensitive probe to assess the magnitude of quantum mechanical effects in water . fractionation ratios can be directly related to quantum kinetic energy differences between h and d in liquid water and its vapor and can be calculated exactly for a given water potential energy model using path integral simulations . the large number of accurate experimental measurements of these ratios allows for sensitive comparisons of theory and experiment over a wide range of temperatures . in the present work , we show what features are needed in a water model to accurately predict these ratios by decomposing the contributions to the free energy difference leading to fractionation . this in turn leads to a simple explanation of the inversion of the fractionation ratios seen experimentally at high temperatures , where d is favored over h in the vapor phase . |
challenges posed by the constantly growing urbanisation are complex and difficult to handle .they range from the increasing dependence on energy , to serious environmental and sustainability issues , and socio - spatial inequalities .in particular , we observe the appearance of socially homogeneous zones and dynamical phenomena such as urban decay and gentrification that reinforce the heterogeneity of the spatial distribution of social classes in cities . such a segregation characterized by an important social differentiation of the urban space has significant social , economic and even health costs which justify the attention it has attracted in academic studies over the past century . despite the abundant literature in sociology and economics , however , there is no consensus on the adequate way to quantify and describe patterns of segregation . in particular, the identification of neighbourhoods where the different groups gather is still in its infancy . as stated many times , and at different periods in the sociology literature , the study of segregation is cursed by its intuitive appeal . the perceived familiarity with the concept favours what duncan and duncan called ` naive operationalism ' : the tendency to force a sociological interpretation on measures that are at odds with the conceptual understanding of segregation . as a matter of fact , segregation is a complex notion , and the literature distinguishes several conceptually different dimensions .massey first proposed a list of dimensions ( and related existing measures ) , which was recently reduced to by reardon .( i ) _ exposure _ which measures the extent to which different populations share the same residential areas ; ( ii ) the _ evenness _ ( and _ clustering _ ) to which extent populations are evenly spread in the metropolitan area ; ( iii ) _ concentration _ to which extent populations concentrate in the areal units they occupy ; and ( iv ) _ centralization _ to which extent populations concentrate in the center of the city .we identify several problems with this picture .the first fundamental issue lies in the lack of a general conceptual framework in which all existing measures can be interpreted .instead , we have a patchwork of seemingly unrelated measures that are labelled with either of the aforementioned dimensions .although segregation can indeed manifest itself in different ways , it is relatively straightforward to define what is _ not _ segregation : a spatial distribution of different categories that is undistinguishable from a uniform random situation ( with the same percentages of different categories ) .therefore , we can define segregation as _ any pattern in the spatial distribution of categories that deviates significantly from a random distribution _the different dimensions of then correspond to particular aspects of how a multi - dimensional pattern can deviate from its randomized counterpart .the measures we propose here are all rooted in this general definition of segregation .the other issues are technical in nature .first , several difficulties are tied to the existence of many categories in the underlying data .historically , measurements of racial segregation were limited to measures between population groups . 
however , most measures generalise poorly to a situation with many groups , and the others do not necessarily have a clear interpretation .worse , in the case of groups based on a continuum ( such as income ) , the thresholds chosen to define classes are usually arbitrary .we propose in the following to solve this issue by defining classes in an unambiguous and non - arbitrary way through their pattern of spatial interaction .applied to the distribution of income categories in us cities , we find emergent categories , which are naturally interpreted as the lower- , middle- and higher - income classes .second , most authors systematically design a single index of segregation for territories that can be very large , up to thousands of square kilometers . in order to mitigate segregation , more local , spatial information is however needed : local authorities need to locate where the poorest and richest concentrate if they want to design efficient policies to curb , or compensate for , the existing segregation . in other words , we need to provide clear _ spatial _ information on the pattern of segregation .previous studies were interested in the characterisation of intra - urban segregation patterns , but they suffer from the limitations of the indicators they use .in particular , the values they map come with no indication as to when a high value of the index indicates high segregation levels . as a result , the maps are not necessarily easy to read .furthermore , all the descriptions are cartographic in nature and while maps are a powerful way to highlight patterns , we would like to provide further , quantitative , information about the spatial distribution that goes beyond cartographic representation .the lack of a clear characterization of the spatial distribution of individuals is not tied to the problem of segregation in particular , but pertains to the field of spatial statistics .many studies avoided this spatial problem by assuming implicitly that cities are monocentric and circular , and rely on either an arbitrary definition of the city center boundaries , or on indices computed as a function of the distance to the center ( whatever this may be ) .however , most if not all cities are anisotropic , and the large ones , polycentric ( see and references therein ) . many empirical studies and models in economics aim to explain the difference between central cities and suburbs . yet , the sole stylized fact upon which they rely , namely that city centers tend to be poorer than suburbs ( in the us ) , lacks a solid empirical basis .in the first part of the paper , we define a null model , the unsegregated city , and define the representation , a measure that identifies significant local departures from this null case .we further introduce a measure of exposure that allows us to quantify the extent to which the different categories attract or repel one another .this exposure is the starting point for the non - parametric identification of the different social classes .
in the second part , we define neighbourhoods by clustering adjacent areal units where classes are overrepresented and show that there is an increased spatial isolation of classes as the population size of cities grows .we also show that larger cities are richer in the sense that the wealthiest households tend to be overrepresented and the low - income underrepresented in large cities .finally , we discuss how density is connected to the spatial distribution of income , and how to go beyond the traditional picture of a poor center and rich suburbs .we focus here on the income distribution , using the data for the 2014 core - based statistical areas. however , the methods presented in this paper are very general , and can be applied to different geographical levels , to an arbitrary number of population categories , and to different variables such as ethnicity , education level , etc .most studies exploring the question of spatial segregation define measures before comparing their value for different cities .knowing that two quantities are different is however not enough : we also have to know whether this difference is significant . in order to assess the significance of a result , we have to compare it to what is obtained for a reasonable null model .we assume that we have areal units dividing the city and that individuals can belong to different categories .the elementary quantity is which represents the number of individuals of category in the unit .the total number of individuals belonging to a category is and the total number of individuals in the city is given by . in the context of residential segregation , a natural null model is the _ unsegregated city _ , where all households are distributed at random in the city with the constraints that * the total number of households living in the areal unit is fixed ( from data ) . * the numbers are given by the data ; the problem of finding the numbers in this unsegregated city is reminiscent of the traditional occupancy problem in combinatorics .if we assume that for all categories , we have , they are then distributed according to the multinomial denoted by , and the number of people of category in the areal unit is distributed according to a binomial distribution .therefore , in an unsegregated city , we have
\[
\begin{aligned}
\mathrm{E}\left[ n_\alpha(t) \right] & = N_\alpha\,\frac{n(t)}{N} \\
\mathrm{Var}\left[ n_\alpha(t) \right] & = N_\alpha\,\frac{n(t)}{N}\left( 1 - \frac{n(t)}{N} \right)
\end{aligned}
\]
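a minimal python / numpy sketch of this null model ( with made - up occupancies and category totals ) is given below : it evaluates the expectation and variance quoted above and draws one realisation of the unsegregated city by shuffling the category labels of the households over the fixed unit occupancies , which preserves both constraints exactly and reproduces the binomial statistics for large numbers .

import numpy as np

rng = np.random.default_rng(0)

# toy inputs (assumed, not real census data): households per areal unit n(t)
# and totals per category N_alpha, with the same grand total N
n_t = np.array([120, 80, 300, 50, 450])
N_alpha = np.array([400, 350, 250])
N = n_t.sum()
assert N == N_alpha.sum()

# null-model moments for n_alpha(t)
p_t = n_t / N
expected = np.outer(N_alpha, p_t)                 # E[n_alpha(t)] = N_alpha n(t)/N
variance = np.outer(N_alpha, p_t * (1.0 - p_t))   # Var[n_alpha(t)]

# one realisation of the unsegregated city: random assignment with both margins fixed
labels = np.repeat(np.arange(len(N_alpha)), N_alpha)   # one category label per household
rng.shuffle(labels)
unit_of = np.repeat(np.arange(len(n_t)), n_t)          # fixed number of slots per unit
counts = np.zeros((len(N_alpha), len(n_t)), dtype=int)
np.add.at(counts, (labels, unit_of), 1)

print(np.round(expected, 1))
print(counts)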
a category $\alpha$ is overrepresented ( with a given confidence ) in the areal unit $t$ if $r_\alpha(t) > 1 + n_c\,\sigma_\alpha(t)$ , where the number $n_c$ of standard deviations sets the confidence level . a category is underrepresented ( with the same confidence ) if $r_\alpha(t) < 1 - n_c\,\sigma_\alpha(t)$ . if the value falls in between the two previous limits , the representation of the category is not statistically different ( at this confidence level ) from what would be obtained if individuals were distributed at random . existing measures output levels of segregation ( typically a number between 0 and 1 ) but do not indicate whether these levels are _ abnormally _ high . in this respect , the representation is a significant improvement over previous measures . note that the above null model is reminiscent of the ` counterfactuals ' used in the empirical literature on agglomeration economies . also , the expression of the representation ( eq . [ eq : repre ] ) is very similar to the formula used in economics to compute comparative advantages , or to the localisation quotient used in various contexts . to our knowledge , however , this formula has never been justified by a null model in the context of residential location . the representation allows one to assess the significance of the deviation of population distributions from the unsegregated city . as we will show below , it is also the building block for measuring the level of repulsion or attraction between categories , allowing us to uncover the different classes and to identify the neighbourhoods where the different categories concentrate . last , but not least , the representation defined here does not depend on the category structure at the city scale , but only on the spatial repartition of individuals belonging to each category . this is essential in order to be able to compare different cities where the group compositions or inequality might differ . inequality and segregation are indeed two separate concepts , and the way they are measured should be distinct from one another . finally , we would like to mention that using the uniform distribution as a null model can have implications broader than the study of residential segregation . indeed , from a very abstract perspective , the study of residential segregation is the study of labelled objects in space . the methods presented here can therefore be applied to the study of the distribution of any object in space .
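as a practical illustration , the representation and its significance test can be computed in a few lines ; the python sketch below is a minimal illustration ( not the code actually used in this work ) : it takes the table of counts $n_\alpha(t)$ as a numpy array and returns the representation together with over- / under - representation flags , and the default threshold n_c = 2.57 ( a 99% two - sided gaussian level ) is our choice .

    import numpy as np

    def representation(counts):
        # counts: array of shape (T, C), counts[t, a] = n_a(t)
        counts = np.asarray(counts, dtype=float)
        n_t = counts.sum(axis=1, keepdims=True)         # n(t), population of unit t
        n_a = counts.sum(axis=0, keepdims=True)         # n_alpha, total of category a
        n = counts.sum()                                # n, total population of the city
        r = (counts / n_t) / (n_a / n)                  # representation r_a(t)
        sigma = np.sqrt((1.0 / n_a) * (n / n_t - 1.0))  # std of r_a(t) in the unsegregated city
        return r, sigma

    def over_under(r, sigma, n_c=2.57):
        # n_c = 2.57 corresponds to a 99% two-sided gaussian confidence level (our choice)
        over = r > 1.0 + n_c * sigma
        under = r < 1.0 - n_c * sigma
        return over, under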
in particular , it can be used to identify the locations in a territory where populations with different characteristics ( not necessarily socio - economic ) concentrate .another shortcoming of the literature about segregation is the lack of indicator to quantify to what extent different populations attract or repel one another .such a measure of attraction or repulsion is however important to understand the dynamics and scale ( intensity of attraction / repulsion ) of residential segregation .our indicator is inspired by the m - value first introduced by marcon and puech in the economics literature to measure the concentration of industries and used as a measure of interaction between retail store categories in .these authors were interested in measuring the geographic concentration of different type of industries .while previous measures ( such as ripley s k - value ) allow to identify departures from a random ( poisson ) distribution , the m - value s interest resides in the possibility to evaluate different industries tendency to co - locate .the idea , in the context of segregation is simple : we consider two categories and and we would like to measure to which extent they are co - located in the same areal unit . to quantify the tendency of households to co - locate , we measure the representation of the category as witnessed on average by individuals in category , and obtain the following quantity although it is not obvious with this formulation , this measure is symmetric : ( see supplementary information below .effectively , this ` e - value ' in this context is a measure of exposure , according to the typology of segregation measures proposed in . however , unlike the other measures of exposure found in the literature , we are able to distinguish between situations where categories attract ( ) or repel ( ) one another . in the case of an unsegregated city , every household in sees on average and we have .if populations and attract each other , that is if they tend to be overrepresented in the same areal units , every household sees and we have at the city scale . on the other hand ,if they repel each other , every household sees and we have at the city scale .the minimum of the exposure for two classes and is obtained when these two categories are never present together in the same areal unit and then the maximum is obtained when the two classes are alone in the system ( see supplementary information below for more details ) and in this case we get in the case , the previous measure represents the ` isolation ' defined as and measures to which extent individuals from the same category interact which each other . in the unsegregated city , where individuals are indifferent to others when chosing their residence , we have . 
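a corresponding sketch for the exposure ( whose diagonal terms give the isolation ) follows ; we write it as the population - weighted average of $r_\alpha(t)\,r_\beta(t)$ over the areal units , which is our reading of the definition above and makes the symmetry explicit , and we reuse the representation ( ) helper defined earlier . in the unsegregated city , all entries of the resulting matrix are equal to 1 within statistical fluctuations .

    def exposure(counts):
        # E_ab = (1/n) sum_t n(t) r_a(t) r_b(t); the diagonal E_aa is the isolation of a
        counts = np.asarray(counts, dtype=float)
        r, _ = representation(counts)
        n_t = counts.sum(axis=1)
        n = counts.sum()
        return (r * n_t[:, None]).T @ r / n             # symmetric (C, C) matrix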
in contrast , in the extreme situation where individuals belonging to the class live isolated from the others , the isolation reaches its maximum value of course , in order to discuss the significance of the values of exposure and isolation , one needs to compute the variance of the exposure in the unsegregated situation defined earlier .the calculations for the variance as well as for the extrema are presented in the supplementary information below .finally , we note that co - location is not necessarily synonymous with interaction , as pointed out by chamboredon , and we should rigorously talk about _ potential _ interactions .nevertheless , in the absence of large scale data about direct interactions between individuals , co - location is the best proxy available .studies that focus on the definition of a single segregation index for cities as a whole can avoid the problem of defining classes , either by measuring the between - neighbourhood variation of the average income ( examples are the standard deviation of incomes , the variance of logged incomes and jargowsky s neighbourhood sorting index ) , or by integrating over the entire income distribution ( for instance the rank - order information theory index defined in ) .however , when they investigate the behaviour of households with different income and their spatial distribution , studies of segregation must be rooted in a particular definition of categories ( or classes ) .unfortunately , there is no consensus in the literature about how to separate households in different classes according to their income , and studies generally rely on more or less arbitrary divisions .while in some particular cases grouping the original categories in pre - defined classes is justified , most authors do so for mere convenience reasons . however , as some sociologists have already pointed out , imposing the existence of absolute , artificial entities is necessarily going to bias our reading of the data .furthermore , in the absence of recognized standards , different authors will likely have different definitions of classes , making the comparisons between different results in the literature difficult . from a theoretical point of view, entities such as social classes do not have an existence of their own .grouping the individuals into arbitrary classes when studying segregation is thus a logical fallacy : it amounts to imposing a class structure on the society before assessing the existence of this structure ( which manifests itself by the differentiated spatial repartition of individuals with different income ) . 
here ,instead of imposing an arbitrary class structure , we let the class structure emerge from the data themselves .our starting hypothesis is the following : _ if there is such a thing as a social stratification based on income , it should be reflected in the households behaviours _ :households belonging to the same class should tend to live together , while households belonging to different classes should tend to avoid one another .in other words , we aim to define classes using the way they manifest themselves through the spatial repartition of the different categories .we choose as a starting point the finest income subdivision given by the us census bureau ( subdivisions ) and compute the matrix of values for all cities .we then perform a hierarchical clustering on this matrix , succesively aggregating the subdivisions with the highest values .the process , that we implemented in the python library marble , goes as follows : 1 .check whether there exists a pair , such that ( i.e. two categories that attract one another with at least 99% confidence according to the chebyshev inequality ) .if not , stop the agregation and return the classes ; 2 .if there is at least one couple satisfying ( 1 ) , normalize all values by their respective maximum values .find then the pair , whose normalized exposure is the maximum ; 3 . aggregate the two categories and ; 4 . repeat the process until it stops . in order to aggregate the categories at step 3 ,we need to compute the exposure between and any category , as well as its variance .the corresponding calculations are presented in the supplementary information ( below ) .we stress that the obtained classification does not rely on any arbitrary threshold .indeed , we stop the aggregation process when the only classes left are indifferent ( with confidence ) or repel each other ( with confidence ) . strikingly , the outcome of this method on us data is the emergence of 3 distinct classes ( fig .[ fig : classes_alluvial ] ) : the higher - income ( of the us population ) and the lower - income classes ( of the us population ) which repel each other strongly while being respectively very coherent and a somewhat small middle - income class ( of the population ) that is relatively indifferent to the other classes .this result implies that there is some truth in the conventional way of dividing populations into income classes , and that what we casually perceive as the social stratification in our cities actually emerges from the spatial interaction of people .our method has several advantages over a casual definition : it is not arbitrary in the sense that it does not depend on a tunable parameter ( besides the significance threshold ) and on who performs the analysis .its origins are tractable , and can be argued on a quantitative basis .because it is quantitative , it allows comparison of the stratification over different points in time , or between different countries .it can also be compared to other class divisions that would be obtained using a different medium for interaction , for instance mobile phone communications . in the following, we will systematically use the classes obtained with this method . 
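the aggregation procedure just described can be sketched as follows ; for clarity , this minimal version recomputes the exposure matrix and its null - model variance after each merge , neglects the small inter - unit covariance correction given in the supplementary information , and merges the most attracting significant pair directly rather than first normalising the exposures by their maximum attainable values as done in marble . the default n_c = 10 corresponds to the 99% chebyshev bound mentioned above .

    def exposure_variance(counts):
        # var[E_ab] in the unsegregated city, inter-unit covariance neglected:
        # var[E_ab(t)] = f(t)^2/(n_a n_b) + f(t)/n_a + f(t)/n_b, with f(t) = n/n(t) - 1
        counts = np.asarray(counts, dtype=float)
        n_t = counts.sum(axis=1)
        n_a = counts.sum(axis=0)
        n = counts.sum()
        f = n / n_t - 1.0
        inv = 1.0 / n_a
        var_t = (f**2)[:, None, None] * inv[None, :, None] * inv[None, None, :] \
              + f[:, None, None] * (inv[None, :, None] + inv[None, None, :])
        return ((n_t**2)[:, None, None] * var_t).sum(axis=0) / n**2

    def emergent_classes(counts, n_c=10.0):
        # greedy aggregation of the original income categories into classes
        counts = np.asarray(counts, dtype=float).copy()
        classes = [[j] for j in range(counts.shape[1])]
        while counts.shape[1] > 1:
            e = exposure(counts)
            sig = np.sqrt(exposure_variance(counts))
            np.fill_diagonal(e, -np.inf)                # isolation is not used for merging
            attract = e > 1.0 + n_c * sig               # pairs that significantly attract
            if not attract.any():
                break                                   # only indifferent / repelling classes left
            i, j = np.unravel_index(np.where(attract, e, -np.inf).argmax(), e.shape)
            i, j = min(i, j), max(i, j)
            counts[:, i] += counts[:, j]                # aggregate categories i and j
            counts = np.delete(counts, j, axis=1)
            classes[i] += classes[j]
            del classes[j]
        return classes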
at the scale of an entire country , segregation can manifest itself in the unequal representation of the different income classes across the urban areas . we plot on fig . [ fig : inter - urban_representation ] the ratio of the number of cities of population greater than a given size for which the class is overrepresented to the total number of cities of population greater than that size . a decreasing curve indicates that the class tends to be underrepresented in larger urban areas , while an increasing curve shows that the category tends to be overrepresented in larger urban areas ( the representation is here measured with respect to the total population at the us level ) . these results challenge sassen's thesis on social polarization , according to which world ( very large ) cities host proportionally more higher - income and lower - income individuals than smaller cities . if this thesis were correct , we should observe an overrepresentation of both higher - income and lower - income households in larger cities . instead , as shown on fig . [ fig : inter - urban_representation ] , higher - income households are overrepresented in larger cities , while lower - income households tend to be underrepresented ( see supplementary information for a detailed discussion ) . these results support the previous critique of the social polarisation thesis by hamnett . the representation measure introduced at the beginning of this article allows one to draw maps of overrepresentation and thus to identify the areas of the city where categories are overrepresented . in the following , we propose to characterise the spatial arrangement of these areas for the different categories . in many studies , the question of the spatial pattern of segregation is limited to the study of the center versus the suburbs and is usually addressed in two different ways . in the first case , a central area is defined by arbitrary boundaries , measures are performed at the scale of this central area , and the rest is labelled as ` suburbs ' . the issue with this approach is that the conclusions depend on the chosen boundaries , and there is no unique , unambiguous definition of the city center : while some consider the central business district , others choose the urban core ( urbanized area ) where the population density is higher . the second approach , in an attempt to get rid of arbitrary boundaries , consists in plotting indicators of wealth as a function of the distance to the center . this approach , inspired by the monocentric and isotropic city of many economic studies such as the von thünen or the alonso - muth - mills model , has however a serious flaw : cities are not isotropic and are spread unevenly in space , leading to very irregular shapes . representing any quantity versus the distance to a center thus amounts to averaging over very different areas and , in polycentric cases ( as is the case for large cities ) , is necessarily misleading . as we show below , this method mixes together areas that are otherwise very different . we propose here a different approach that does not require drawing boundaries between the center and the suburbs . in fact , it does not even require defining and locating the ` center ' at all . in the case of a monocentric and isotropic city , our method gives results similar to those given by the other measures .
in the more general case where cities are not necessarily monocentric neither isotropic, our method allows to compare regions of equivalent densities .the center of a city is usually defined as the region which has the highest population ( or employment ) density .we therefore propose the density as a proxy to measure of how ` central ' an area is .we thus plot quantities computed over all areal units ( blockgroups in this dataset ) that have a density population in a given interval ] . assuming that and are independent ( which is rigorously not true for tracts with a fixed capacity ) , it follows & = { \mathrm{e}}[r_{\alpha}(t ) ] { \mathrm{e}}[r_{\beta}(t)]=1\\ { \mathrm{var } } [ e_{\alpha \beta}(t ) ] & = \frac{1}{n_\alpha n_\beta } \left ( \frac{n}{n(t)}-1 \right)^2 + \frac{1}{n_\alpha } \left ( \frac{n}{n(t ) } - 1 \right ) + \frac{1}{n_\beta } \left ( \frac{n}{n(t ) } - 1 \right)\end{aligned}\ ] ] thus & = 1\\ { \mathrm{var } } [ e_{\alpha \beta } ] & = \frac{1}{n^2 } \sum_t n(t)^2\ , { \mathrm{var}}[e_{\alpha \beta}(t ) ] + \frac{2}{n^2 } \sum_{s < t } n(s)\ , n(t)\ , { \mathrm{cov } } [ e_{\alpha \beta}(s ) , e_{\alpha \beta}(t)]\end{aligned}\ ] ] the covariance is non - zero because the of two different tracts and are not independent , and we have = \left(1-\frac{1}{n_\alpha}\right ) \left(1-\frac{1}{n_\beta}\right ) - 1\ ] ] in order to be able to make sense of the values of exposure ( ) and isolation ( ) , and compare different cities , we need to know their respective maximum and minimum values .we will consider the following cases : maximum isolation : : situation where each areal unit contains households from one and only one category .this situation corresponds to the minimum of and the maximum of .the unsegregated city : : when the distribution of households in the different areal units can not be distinguished from a random distribution .this is what we call the ` unsegregated city ' and gives a point of reference .it corresponds to the minimum of . in the unsegregated city case, there is no way to tell the difference between the distribution of the different categories in the different tracts and a random distribution . in this situation ,isolation indices reach their minimum value when .in the maximum isolation case , all categories are alone in their own tract . in other words , and we have iff .we thus obtain for the isolation where is the set of areal units where the category is present . in these unit , .therefore in the maximum isolation case , all categories are alone in their own tract . in other words , and we have iff .in this situation , we trivially have the maximum of the exposure is however more difficult to obtain in general .we fix and and we denote by a category all the rest . 
by definitionwe have , , and .we will look for the ` global ' maximum by keeping the only constraint that in each unit we have .we obtain for the exposure the maximization of the exposure with respect to thus gives \ ] ] which leads to the exposure for these values reads the quantity is in the compact set $ ] and the maximization is not necessarily given by taking the derivative equal to zero .indeed , in this case the maximum of is obtained for for all ( while the derivative equal to zero would lead to the minimum obtained for for all t ) and reads this maximum is the global one , obtained when there are no constraints .one can easily add the constraint by using a lagrange multiplier and we have then to maximize the function the derivative with respect to leads to where we expressed the constraint in order to eliminate the lagrange multiplier .we can then express the maximum obtained for these values of and as above the maximum is obtained for for all t and reads \ ] ] which is obviously smaller than the global maximum .these maxima were obtained when there are no constraints on the total number . when there is such a constraint , the construction of the maximum of eq .( [ eq : expomax ] ) is not trivial .very likely , when is fixed , we have to fill the smallest tracts with this class and we are then left with the classes and only .it seems difficult to obtain an analytical derivation of this maximum and we will keep as a reference in our calculations the global maximum .the study of segregation must be rooted in a particular definition of class .however , the income is a continuous variable , and there is no clear definition of incomes classes in the litterature : a class means different things to different people .we thus start by finding out the class structure as it manifests itself in the spatial arrangement of people .we take as a starting point the finest income subdivision given by the census bureau ( subdivisions ) and compute the matrix of values at the scale of each cities .we then perform hierarchical clustering on this matrix , successively aggregating the subdivisions with the highest values .the process , implemented in the library marble , goes as follows : 1 .check whether there exists a pair , such that ( i.e. two categories that attract one another with at least 99% confidence according to the chebyshev inequality ) .if not , stop the agregation and return the classes ; 2 .if there are some couples satisfying ( 1 ) , normalize all values by their respective maximum values .find then the pair , whose normalized exposure is the maximum ; 3 .aggregate the two categories and ; 4 .restart the process until it stops . in order to aggregate the categories at step 3 , we need to compute the distance betweence and any category once and have been aggregated . using the definition of , it is easy to show that the variance is also easily calculated as : = \frac{1}{\left ( n_\beta + n_\gamma \right)^2 } \left ( n_\beta^2\,{\mathrm{var}}\left [ e_{\alpha \beta } \right ] + n_\gamma^2\ , { \mathrm{var}}\left [ e_{\alpha \gamma } \right ] \right)\ ] ] we computed the class structure at the scale of the whole us .we assume that the country is a juxtaposition of the different cities , with independent values of .we then compute the average over the whole country and obtain where is the population of the city , and the urban population of the us .the sum runs over all msas in the us . 
the variance is then given by = \frac{1}{n_{us}^2 } \sum_{c}^{}(n_c)^2\ , { \mathrm{var}}\left [ e_{\alpha \beta}^c \right]\ ] ] starting from categories ( the poorest ) to ( the wealthiest ) , our methods finds the following classes for the us with in parenthesis the percentage of the total us population that is included in the corresponding classes .although intuitively appealing , the idea that larger metropolitan areas are richer is not as straightforward as it seems . the first question one can ask is if people are richer on average in large cities ? as shown in fig .[ fig : scaling_income ] , the total income in a city scales ( slightly ) superlineary with population size which suggests that the income _ per household _ is on average higher in larger cities than in smaller ones . in other words , there are proportionally more households belonging the to the wealthiest categories in large cities . in other words ,the income inequality is higher in large cities than in small ones .( ) .[ fig : scaling_income],scaledwidth=50.0% ] in order to measure levels of income inequality , we compute the gini coefficient of the income distribution for every core - based statistical area using the formula proposed in msa in versus the number of households in the city . no clear trend can be observed here.[fig : gini],scaledwidth=50.0% ] the results , shown in fig .[ fig : gini ] do not show any dependence of the gini coefficient on the metropolitan population .this example shows that the gini coefficient is not always a good measure of inequality and can be too aggregated to detect finer details . in order to confirm the consequence of the superlinear scaling of income in terms of larger cities having proportionally more higher - income households, we plot the number of households belonging to the different classes as a function of the total number of households on fig .[ fig : scaling_class ] .we find that for three classes , the data are well approximated by a power - law relationship the problem with writing scaling relationships in this case is that it the constraint is hidden ( ie .the numbers of households belonging to each category must sum to the total number of households ) .we therefore write where is the fraction of households in the city that belong to the class .the constraint that the numbers of households in each class should sum to is equivalent to we plot these ratios on fig .[ fig : ratio_class ] and we indeed see that the number of households belonging to the higher - income class is proportionally larger in larger cities ( for ) , while the number of households belonging to the lower - income class is proportionally smaller .the proportion of middle - income class households stays essentially the same across all metropolitan areas . in this work, we take a different approach and ask if the different classes are more or less represented in a given msa , compared to the average us result . in this context , a city is richer if the higher - income class is over - represented in this city , while the lower - income class is under - represented .the measure stems from the realisation that rich and poor are not absolute concepts , but must be related to the environment . 
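the power - law exponents quoted above can be estimated by an ordinary least - squares fit in log - log space ; the following sketch only illustrates that step ( the symbol delta for the exponent and the unweighted fit are our choices , not necessarily those used for the figures ) .

    def scaling_exponent(h_total, h_class):
        # fit n_class ~ a * h_total**delta by least squares on the log-log values
        x = np.log(np.asarray(h_total, dtype=float))
        y = np.log(np.asarray(h_class, dtype=float))
        delta, log_a = np.polyfit(x, y, 1)
        return delta, np.exp(log_a)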
in this case , it makes sense to compare the representation of the different income classes between metropolitan areas .neighbourhoods identify the areas in the city where the categories are overrepresented , but this does not necessarily mean that most households belonging to a category live in either of the corresponding neighourhood .we plot the distribution of the proportion of households belonging to the lower- , middle- and higher- income classes that also live in a corresponding neighbourhood on fig .[ fig : ratio ] .one can see that higher - income households tend to be more concentrated in the regions where they are represented , with an average of . followed by the lower - income households , with and average of .the middle - income households are equally evenly spread across the city , with an average of ., scaledwidth=50.0% ]in the main text , we find that the number of neighbourhoods for the classes grows sublinearly with the size of a city , with a behaviour that is well approximated by a power - law with ( ) for all classes together .we claim this shows the tendency of classes to cluster more in larger cities than in smaller ones .this is only true , however , if the number of areal units in which each class is overrepresented does not itself vary sublinearly with population size .we plot on fig .[ fig : overrepresented ] these numbers for each class and each city as a function of the size of the city .we find that the behaviour of the number of overrepresented units is consistent with a linear behaviour for all three classes massey , douglas s. , mary j. fischer , william t. dickens , and frank levy .the geography of inequality in the united states , 1950 - 2000 [ with comments ] .brookings - wharton papers on urban affairs . 2003;140 . | the spatial distribution of income shapes the structure and organisation of cities and its understanding has broad societal implications . despite an abundant literature , many issues remain unclear . in particular , all definitions of segregation are implicitely tied to a single indicator , usually rely on an ambiguous definition of income classes , without any consensus on how to define neighbourhoods and to deal with the polycentric organization of large cities . in this paper , we address all these questions within a unique conceptual framework . we avoid the challenge of providing a direct definition of segregation and instead start from a definition of what segregation is not . this naturally leads to the measure of representation that is able to identify locations where categories are over- or underrepresented . from there , we provide a new measure of exposure that discriminates between situations where categories co - locate or repel one another . we then use this feature to provide an unambiguous , parameter - free method to find meaningful breaks in the income distribution , thus defining classes . applied to the 2014 american community survey , we find 3 emerging classes low , middle and higher income out of the original 16 income categories . the higher - income households are proportionally more present in larger cities , while lower - income households are not , invalidating the idea of an increased social polarisation . finally , using the density and not the distance to a center which is meaningless in polycentric cities we find that the richer class is overrepresented in high density zones , especially for larger cities . 
this suggests that density is a relevant factor for understanding the income structure of cities and might explain some of the differences observed between us and european cities . |
high - energy muon beams have been proposed as uniquely powerful and incisive sources for neutrino scattering and oscillation studies .they may also enable energy - frontier lepton - antilepton colliders and may have unique advantages for studying the physics of electroweak symmetry breaking . the production of high - energy muon beams at the intensities needed for these applications will require muon - beam cooling .to accelerate a secondary or tertiary beam it is desirable first to decrease its size so that a reasonable fraction of the produced particles will fit inside the apertures of the beamline .it is well known that a focusing element ( _ e.g. _ a pair of quadrupole magnets with opposed field gradients ) can decrease the area of a charged - particle beam while increasing its spread in transverse momentum and , consequently , its divergence .this relationship is an example of liouville s theorem : conservative forces can not increase or decrease the volume occupied by a beam in six - dimensional phase space .focusing alone does not suffice for efficient acceleration of a secondary or tertiary beam , since the resulting increase in divergence means the beam will exceed some other aperture further downstream .what is needed instead is a process by which _ both _ the beam size and divergence can be reduced . by analogy with refrigeration , which decreases the random relative motions of the molecules of a gas ,this is known as beam _cooling_. it is convenient to represent the volume of phase space occupied by a beam by the beam s _ emittance_.the emittance in a given coordinate can be expressed as where designates root - mean - square , , and the factor is introduced so as to express emittance in units of length ( is the mass of the beam particle and the speed of light ) . neglecting possible correlations among the coordinates and momenta , we then have for the six - dimensional emittance .the subscript distinguishes these _ normalized _ emittances from the frequently used _unnormalized _ emittance where and are the usual relativistic factors . in terms of ( unnormalized ) emittance , the transverse beam sizes are given by where are the transverse amplitude functions of the focusing lattice in the and directions , which characterize the focusing strength along the lattice ( low corresponds to strong focusing in the direction ) . since liouville s theorem tells us that normalized emittance is a constant of the motion , beam cooling requires a violation " of liouville s theorem .this is possible by means of dissipative forces such as ionization energy loss , as described in more detail below .cooling of the transverse phase - space coordinates of a muon beam can be accomplished by passing the beam through energy - absorbing material and accelerating structures , both embedded within a focusing magnetic lattice ; this is known as ionization cooling .other cooling techniques ( electron , stochastic , and laser cooling ) are far too slow to yield a significant degree of phase - space compression within the muon lifetime .ionization of the absorbing material by the muons decreases the muon momentum while ( to first order ) not affecting the beam size ; by eq . [ eq : eps - n ] , this constitutes cooling . at the same time, multiple coulomb scattering of the muons in the absorber increases the beam divergence , heating the beam . 
differentiating eq .[ eq : eps - n ] with respect to path length , we find that the rate of change of normalized transverse emittance within the absorber is given approximately by where angle brackets denote mean value , muon energy is in gev , is evaluated at the location of the absorber , and is the radiation length of the absorber medium .( this is the expression appropriate to the cylindrically - symmetric case of solenoidal focusing , where . ) the first term in eq .[ eq : cool ] is the cooling term and the second is the heating term . to minimize the heating term , which is proportional to the function and inversely proportional to radiation length , it has been proposed to use hydrogen as the energy - absorbing medium , giving / m and m , with superconducting - solenoid focusing to give small cm .key issues in absorber r&d include coping with the large heat deposition by the intense ( ) muon beam and minimizing scattering in absorber - vessel windows .an additional technical requirement is high - gradient reacceleration of the muons between absorbers to replace the lost energy , so that the ionization - cooling process can be repeated many times . even though it is the absorbers that actually cool the beam , for typical radio - frequency ( rf ) accelerating - cavity gradients ( / m ) , it is the rf cavities that dominate the length of the cooling channel ( see _ e.g. _ fig .[ fig : sfofo ] ) , and the achievable rf gradient determines how much cooling is practical before an appreciable fraction of the muons have decayed .we see from eq .[ eq : cool ] that the percentage decrease in normalized emittance is proportional to the percentage energy loss , thus cooling in one transverse dimension by a factor 1/ requires % energy loss and replacement .the expense of rf power favors low beam energy for cost - effective reacceleration , as do also the increase of and the decreased width of the distribution at low momentum , and most muon - cooling simulations to date have used / .we are pursuing r&d on high - gradient normal - conducting rf cavities suitable for insertion into a solenoidal - focusing lattice .transverse ionization cooling causes the longitudinal emittance to grow .several effects contribute to this growth : fluctuations in energy loss in the absorbers ( energy - loss straggling , or the landau tail " ) cause growth in the energy spread of the beam , as does the negative slope of the momentum dependence in the beam - momentum regime ( below the ionization minimum ) that we are considering .moreover , these low - momentum , large - divergence beams have a considerable spread in propagation velocity through the cooling lattice , causing bunch lengthening .these effects result in gradual loss of particles out of the rf bucket .they could be alleviated by longitudinal cooling .longitudinal ionization cooling is possible in principle , but it appears to be impractical .its realization would call for operation _ above _ the ionization minimum , where the slope with momentum is positive , but that slope is small and the resulting weak cooling effect is overcome by energy - loss straggling .instead what is envisioned is emittance _ exchange _ between the longitudinal and transverse degrees of freedom , decreasing the longitudinal emittance while at the same time increasing the transverse .conceptually , such emittance exchange can be accomplished by placing a suitably shaped absorber in a lattice location where there is dispersion , _i.e. 
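for reference , the transverse - cooling equation discussed above has the familiar form ( see , e.g. , neuffer 1983 ) , and setting the right - hand side to zero gives the equilibrium emittance at which ionization cooling and multiple - scattering heating balance ; we quote the standard expressions , which match the description in the text : \[ \frac{d\epsilon_n}{ds} \;\simeq\; -\,\frac{1}{\beta^2}\,\frac{dE_\mu}{ds}\,\frac{\epsilon_n}{E_\mu} \;+\; \frac{\beta_\perp\,E_s^2}{2\,\beta^3\,E_\mu\,m_\mu c^2\,L_R}\,, \qquad \epsilon_{n,\mathrm{eq}} \;\simeq\; \frac{\beta_\perp\,E_s^2}{2\,\beta\,m_\mu c^2\,L_R\,\langle dE_\mu/ds\rangle}\,, \] where $E_s \simeq 13.6$ mev is the multiple - scattering constant , $\beta_\perp$ is the transverse amplitude function at the absorber , and $L_R$ is the radiation length of the absorber medium .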
_ , using a bending magnetic field to spread the muons out in space according to their momenta , and shaping the absorber so as to absorb more energy from the higher - momentum muons and less from the lower - momentum ones .( one can see that this is emittance _ exchange _ rather than longitudinal cooling _per se _ , since to the extent that the muon momentum spread has been reduced by the shaped absorber , the beam can no longer be reconverged to a small spot by a subsequent bend . )this is followed by transverse ionization cooling , the combined process being effectively equivalent to longitudinal cooling .a variety of focusing - lattice designs for transverse muon cooling have been studied , most using solenoids as focusing elements . especially for the large ( m ) aperture required at the beginning of a muon cooling channel , stronger focusing gradients are possible using solenoids than using quadrupoles , and unlike quadrupoles , solenoids have the virtue of focusing both transverse dimensions simultaneously , giving a more compact lattice . while a high - field solenoid can produce a small ( and constant ) , it is straightforward to see that a single such solenoid is not sufficient for muon cooling . a charged particle entering a solenoid off - axis receives a transverse magnetic kick from the fringe field , such that the particle s straight - line motion in the field - free region becomes helical motion within the solenoid .the exit fringe field must thus impart an equal and opposite kick so that the particle resumes its straight - line motion in the subsequent field - free region .if within the solenoid the particle loses energy in an absorbing medium , the angular momentum of its helical motion must decrease , resulting in an imbalance between the entrance and exit kicks .the particle then exits the magnet with a net angular momentum , implying that a parallel beam entering an absorber - filled solenoid will diverge upon exiting . to cancel this net angular momentum ,the field direction must alternate periodically .the simplest case conceptually is focusing by a constant solenoidal field , but with one field flip " halfway along the cooling channel .the length of a uniform section can be of order m .better performance can be achieved by adding a second field flip . at an opposite extreme, the solenoidal - field direction can be flipped every meter or so , leading to a variety of solenoidal - focusing lattices dubbed alternating solenoid , fofo , dfofo , sfofo ( see fig . [fig : sfofo ] ) , etc .detailed six - dimensional simulations show that enough transverse cooling can be achieved to build a high - performance neutrino factory , using either a double - flip or sfofo cooling lattice .for example , in palmer s recent sfofo design an initial transverse normalized emittance of is cooled in a 400-m - long cooling system to a final emittance of with % muon loss .such a facility would produce neutrinos per year aimed at a far detector that could be thousands of km from the source , giving oscillation sensitivity at least two orders of magnitude beyond that of long - baseline experiments now under construction . without longitudinal - transverse emittance exchange , transverse cooling reaches a point of diminishing returns as emittance growth in the longitudinal phase plane causes muons to be lost from the rf bucket . 
while emittance exchange would be helpful but not essential for a neutrino factory , to achieve the considerably smaller emittances required in a muon collider , it is mandatory .r&d on emittance exchange is ongoing , and several promising ideas are being actively explored .however , _ nonlinear _ conservative forces can cause phase - space filamentation , " the practical effect of which is essentially growth of the beam s occupied phase - space volume , and stochastic cooling can be thought of as the opposite effect , in which particles are moved in phase space to make occupied regions of phase space more contiguous and move empty regions of phase space outside of the beam envelope .g. k. oneill , phys .rev . * 102 * , 1418 ( 1956 ) ; a. a. kolomensky , sov . atomic energy * 19 * , 1511 ( 1965 ) ; g. i. budker , a. n. skrinsky , sov .* 21 * , 277 ( 1978 ) ; d. neuffer , fnal report fn-319 ( 1979 ) ; a. n. skrinsky , v. v. parkhomchuk , sov .j. nucl .* 12 * , 223 ( 1981 ) ; d. neuffer , part .* 14 * , 75 ( 1983 ) ; e. a. perevedentsev , a. n. skrinsky , in _ proc .12th int .conf . on high energy accelerators _ ,f. t. cole , r. donaldson , eds .( fermilab , 1984 ) , p. 485 .however , at sufficiently high energy , a proposed technique known as optical stochastic cooling " may become practical ; see a. zholents , m. zolotorev , w. wan , phys .st accel .beams * 4 * , 031001 ( 2001 ) .another possibility is liquid - lithium lenses , which could serve as both the focusing element and the absorbing material .prototype studies in novosibirsk have so far been unsuccessful .a third possibility is high - pressure gaseous absorbers .a. moretti _ et al ._ , proc .20th international linac conference ( linac 2000 ) , monterey , california , 21 - 25 aug 2000 , p. 896( econf c000821:thc18 , 2000 ) ; j. corlett _et al . _ , to appear in proc .pac2001 ( 2001 ) .the numbers quoted above include the minicooling " section , located upstream of the buncher , " which precedes the sfofo channel . in the -m - long sfofo channel itself the emittance is cooled from to 2.8 with % muon loss . | starting from elementary concepts , muon - beam cooling is defined , and the techniques by which it can be accomplished introduced and briefly discussed . |
to shape the beam so that it matches the characteristics of the specific treatment in radiation therapy ( thus achieving the delivery of the prescribed dose to the target ( tumour ) and maximal protection of the surrounding healthy tissue and vital organs ) , beam - limiting and beam - shaping devices ( bl / bsds ) are routinely used .generally speaking , the beam is first restricted ( in size ) by the primary collimator , a beam - limiting device giving it a rectangular shape .the beam may subsequently encounter the multi - leaf collimator ( mlc ) , which may be static or dynamic ( i.e. , undergoing software - controlled motion during the treatment session ) .more frequently than not , the desirable beam shaping is achieved by inserting a metallic piece ( with the appropriate aperture and thickness ) into the beamline , directly in front of the patient ; this last beam - shaping device is called a patient collimator or simply a block .being positioned close to the patient , the block achieves efficient fall - off of the dose ( sharp penumbra ) outside the target area .( the simultaneous use of mlc and block is not common . ) to provide efficient attenuation of the beam outside the irradiated volume , the bl / bsds are made of high- materials .the primary collimator and the mlc are fixed parts of the treatment machine , whereas the block not only depends on the particular patient , but also on the direction from which the target is irradiated .therefore , there may be several blocks in one treatment plan , not necessarily corresponding to the same thickness or ( perhaps ) material .the presence of bl / bsds in treatment plans induces three types of physical effects : * \a ) confinement of the beam to the area corresponding to full transmission ( i.e. , to the aperture of the device ) . *\b ) effects associated with the nonzero thickness of the device ( geometrical effects ) . *\c ) effects relating to the scattering of the beam off the material of the device .type-(a ) effects ( direct blocking of the beam ) are dominant and have always been taken into account .the standard way to do this is by reducing the bl / bsd into a two - dimensional ( 2d ) object ( i.e. , by disregarding its thickness ) and assuming no transmission of the beam outside its aperture .type-(b ) and type-(c ) effects induce corrections which , albeit at a few - percent level of the prescribed dose , may represent a sizable fraction of the _ local _ dose ; due to their complexity and to time restrictions during the planning phase , these corrections have ( so far ) been omitted in clinical applications .the subject of the slit scattering in beam collimation was first addressed by courant ( 1951 ) .courant extracted analytical solutions for the effective increase in the slit width ( attributable to scattering ) by solving the diffusion equation inside the collimator . to fulfill the boundary conditions , he introduced the negative - image technique , which was later criticised ( e.g. , by burge and smith ( 1962 ) ) . 
despite the debatable usefulness of the practical use of courant s work , that article set forth definitions which were employed in future research ; for example , by categorising the scattered particles as : * those impinging upon the upstream face of the collimator and emerging from its inner surface ( bore ) , * those entering the bore and scattering out of it , and * those entering the bore and leaving the downstream face of the collimator .in the present paper , courant s type-1 particles will correspond to our ` outer tracks ' ( ots ) , type-2 particles to our ` bore - scattered inner tracks ' ( bsits ) , and type-3 particles to our ` going - through inner tracks ' ( gtits ) , see fig . [fig : model1 ] . the tracks which do not hit the block will comprise the ` pristine ' beam .aiming at the determination of the optimal material for proton - beam collimation and intending to provide data for experiments at the linear accelerator of the national institute for research in nuclear science ( nirns ) at harwell , burge and smith ( 1962 ) re - addressed the slit - scattering problem and obtained a solution via the numerical integration of the diffusion equation .burge and smith reported considerable differences to courant s results . in the last section of their article, the authors discussed alternative approaches to the slit - scattering problem .a monte - carlo ( mc ) method , as a means to study the collimator - scattering effects , was introduced by paganetti ( 1998 ) . using the geant code to simulate a proton beamline at the hahn - meitner institut ( hmi ) in berlin , paganetti introduced simple parameterisations to account for the changes in the energy and angular divergence of the beam as it traverses the various beamline elements . judging from fig .6 of that paper , one expects the scattering effects to be around the level .in the last section of his article , paganetti correctly predicted that ` monte - carlo methods will become important for providing proton phase - space distributions for input to treatment - planning routines , though the calculation of the target dose will still be done analytically . 'our strategy is similar to that of paganetti : the dose , delivered by the pristine beam , will be corrected for scattering effects on the basis of results obtained via mc runs prior to planning ( actually , when the particular proton - treatment machine is configured ) .block scattering was also investigated , along the general lines of paganetti s paper , in the work of van luijk ( 2001 ) .the ( same version of the ) geant code was used , to simulate a proton beamline at kernfysisch versneller instituut ( kvi ) in groningen , and the characteristics of the scattered protons were studied . to validate their approach ,the authors obtained dose measurements for several field sizes and at several distances from the block . 
unlike other works, the effect of scattering in air was also included in that study , and turned out to be more significant than previously thought ( its contribution to the angular divergence of the beam exceeded mrad per m of air ) .one of the interesting conclusions of that paper was that the penumbra of the dose distribution is mostly accounted for by the lateral spread of the undisturbed beam ; that conclusion somewhat allayed former fears that the extraction of the effective - source size ( ) and of the effective source - axis distance ( sad ) , both obtained from measurements conducted in the presence of a block , might be seriously affected by block - scattering contributions .finally , the paper addressed the asymmetries induced by a misalignment of the block , concluding that they might be sizable . in their recent work , kimstrand ( 2008 ) put the emphasis on the scattering off the multi - leaf collimator ( mlc ) . in their approach , they also obtain the values of the parameters ( involved in their corrections ) by using a mc geant - based method . in their section ` discussion and conclusions ' , the authors , setting forth the future perspectives , mention that ` a challenge is to implement collimator scatter for a pencil beam kernel dose calculation engine . 'we agree on this being ` a challenge ' .the authors then advance to pre - empt that ` the methods presented can straightforwardly be applied to arbitrary shaped collimators of different materials , such as moulded patient - specific collimators used in passively scattered proton beams . 'one should at least remark that their paper does not contain adequate information supporting the thesis that the proposed approach is of practical use in a domain where the execution time is a important factor .in the present paper , the standard icru ( 1987 ) coordinate system will be used ; in beam s eye view , the axis points to the right , upward , and toward the source . the origin of the coordinate system is the isocentre . assuming vanishing transmission outside the aperture of the bl / bsd , all relevant contributions to the fluence involve 2d integrals over its area .the evaluation of such integrals is time - consuming ; at a time when serious efforts are made to reduce the overall time allotted to each patient , the generous allocation of resources in the evaluation and application of _ corrections _ to the primary dose is unacceptable . to expedite the extraction of these corrections, one must find a fast way to decompose the effects of a 2d object ( e.g. , of the aperture of the bl / bsd ) into one - dimensional , easily - calculable , easily - applicable contributions .an example of one such 2d object is shown in fig .[ fig : miniblock1 ] ; the area b represents the aperture ( corresponding to full transmission of the beam ) and is separated from the area a , representing the ( high- ) material , via the contour c. full attenuation of the beam is assumed in the area a. the contour c and the ` outer ' contour of the area a ( which is not shown ) may have any arbitrary shape ( e.g. , rectangular , circular , etc . ) ; the only requirement is that the contour c be contained within the contour of the area a. let us assume that the aim is to derive the influence of the bl / bsd at one point q ( fig . [fig : miniblock2 ] ) ; the point q is projected to the point p , on the bl / bsd plane .the point p may be contained in the interior or the exterior of the area b. 
to enable the evaluation of the effects of the bl / bsd at the point q , a set of directions on the bl / bsd plane , intersecting at the point p , are chosen ; an example with is shown in fig .[ fig : miniblock3 ] .the effects of the bl / bsd at the point q will be evaluated from the elementary contributions of the line segments corresponding to the intersection of straight lines with the contour c. these line segments , which are contained within the area b and are bound by the contour c , will be called _ miniblocks_. the effect of the bl / bsd at the point q will be evaluated by averaging the elementary contributions of the miniblocks created around the point p. in fig .[ fig : miniblock3 ] , one of these miniblocks is represented by the line segment .obviously , the number of miniblocks in each direction depends on the number of intersections of the straight line ( drawn through the point p , parallel to the chosen direction ) and the contour c. as in the previous section , a set of directions , intersecting at the point p , are chosen ; an example with is shown in fig .[ fig : miniblock4 ] .again , the effects of the bl / bsd at the point q will be evaluated on the basis of the elementary contributions in these directions . in fig .[ fig : miniblock4 ] , one of the miniblocks is denoted by .evidently , the number of directions is arbitrary . for fixed ,the accuracy of the evaluation depends on the details of the contour c and on the proximity of the point p to it .the reliability of the estimation is expected to increase with .( the evaluation is exact for . )there is one difference between the cases described in sections [ sec : within ] and [ sec : without ] . in casethat the point p lies within the area b , there will always be at least one miniblock per direction ; if the point p lies outside the area b , there might be no intersections with the contour c in some directions ( in which case , the corresponding elementary contributions vanish ) .let us assume that the physical characteristics of the beam ( lateral spread , angular divergence , energy , etc . ) and the entire geometry ( also involving the bl / bsd ) are fixed .apart from the original point q , the elementary contributions will involve ( the coordinates of ) three additional points : the two end points of the particular miniblock and the point p. due to the fact that the directions are created around the point p , the two end points of every miniblock and the point p will always lie on one straight line . if the point p lies within the area b , it may lie within or outside a miniblock . on the other hand ,if the point p lies outside the area b , it will always lie outside any corresponding miniblock . in any case, the elementary contribution of a miniblock will generally be a function of the distance of its two end points to the point p. additionally , the elementary contribution of a miniblock to the point q will also involve the distance of the point q to the bl / bsd plane , denoted as in fig . [fig : miniblock2 ] .a few remarks are worth making .* the angle between consecutive directions may be constant or variable .if the angle is not constant , weights ( to the elementary contributions ) have to be applied . * the values of in sections [ sec : within ] and [ sec : without ] do not have to be the same .furthermore , different values of might be used for the points which are projected onto the interior ( or exterior ) of the area b , e.g. , depending on the proximity of the point p to the contour c. 
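to make the construction concrete , the following python sketch builds the miniblocks for a polygonal contour c and a projected point p ; it is only an illustration of the geometry , and it assumes the line through p is in generic position ( no crossing falls exactly on a vertex ) , in which case consecutive boundary crossings alternate between entering and leaving the area b .

    import numpy as np

    def line_polygon_crossings(p, d, poly):
        # parameters t where the line p + t*d crosses the boundary of the polygon
        ts = []
        m = len(poly)
        for k in range(m):
            a, b = poly[k], poly[(k + 1) % m]
            e = b - a
            mat = np.array([[d[0], -e[0]], [d[1], -e[1]]])
            if abs(np.linalg.det(mat)) < 1e-12:     # edge (nearly) parallel to the line
                continue
            t, s = np.linalg.solve(mat, a - p)
            if 0.0 <= s < 1.0:                      # crossing lies within this edge
                ts.append(t)
        return np.sort(np.array(ts))

    def miniblocks(p, poly, n_dirs=8):
        # returns, for each of n_dirs directions through p, the list of segments
        # (miniblocks) cut out of the aperture b by the contour c
        p = np.asarray(p, dtype=float)
        poly = np.asarray(poly, dtype=float)
        out = []
        for ang in np.pi * np.arange(n_dirs) / n_dirs:
            d = np.array([np.cos(ang), np.sin(ang)])
            ts = line_polygon_crossings(p, d, poly)
            # segments (t0, t1), (t2, t3), ... lie inside the aperture
            segs = [(p + t0 * d, p + t1 * d) for t0, t1 in zip(ts[0::2], ts[1::2])]
            out.append(segs)
        return out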
at present , the method of this paper applies only to bl / bsds with a constant aperture profile throughout their thickness ; to derive the corrections in case of other shapes ( e.g. , for bl / bsds with a tapered aperture profile ) , substantial modifications are required .the approach is applicable to any type of bl / bsd , be it the primary collimator , the mlc , or the block ; to derive the corrections , the only input pieces are the physical characteristics of the material ( of which the bl / bsd is made ) and , naturally , geometrical details . from now on ,however , we will restrict ourselves to the effects induced by the block , which ( given its proximity to the patient ) are expected to be of greater interest and importance in clinical applications .this choice , meant to emphasise importance , should not be seen as a restriction of the method . a method for the evaluation of the thickness correctionswas recently proposed by slopsema and kooy ( 2006 ) ; we will follow their terminology . thus , ` downstream ( upstream ) projection ' will indicate the projection of the downstream ( upstream ) face of the block on the ( , ) plane at the specified position ( depth ) . similarly , the ` extension ' of the block will correspond to the physical block translated in space ( from its actual position to the specified depth , parallel to the central beam axis ) . in medical applications , the formal beam - shaper object ( e.g. , the so - called dicom block ) , which is created at planning time to represent the physical block , comprises the projection of the _ downstream face _ ( of the block ) to a plane ( perpendicular to the beam ) at isocentre depth . the physical block ( extension )is therefore obtained via a simple scaling involving the source - axis and block - isocentre ( ibd ) distances .having retrieved the extension of the block , it is straightforward to obtain the downstream projection at any ; in order to obtain the upstream projections , one has to use , in addition to the aforementioned quantities , the block thickness ( ) .the relation between the dicom block , the extension , and the downstream projection at a specified position is shown in fig .[ fig : geometry1 ] .the upstream projection is obtained by projecting the extension from depth .the thin - block approximation , which is currently used in clinical applications when evaluating the dose , corresponds to mm ( the downstream and upstream projections coincide at all values ) . assuming that the coordinates of the projected ( downstream- or upstream - face , as the case might be ) ends of a miniblock are denoted as and , the contribution of the miniblock to the fluence at a point ( lying on the straight line defined by and ) on the calculation plane is given by the formula where denotes the error function and stand for the rms ( lateral ) spreads of the beam at the specified depth .the quantities are obtained via source mirroring according to the method of slopsema and kooy ( 2006 ) ; as points on the downstream or upstream faces of the block are used , the resulting values in eq .( [ eq:1dcontributions ] ) are equal only when one block face is involved in the mirroring process . one simple example of the projections on the calculation plane is shown in fig . 
[ fig : geometry2 ] ; the block extension is contained within the downstream projection , which ( in turn ) is contained within the upstream projection .in reality , depending on the complexity of the shape of the block aperture and on the relative position of the central beam axis , these three contours might intersect one another .the central beam axis intersects the calculation plane at the origin of the ( , ) coordinate system .one miniblock is shown ( the line segment contained within the block extension ) , along with two points , one within the miniblock ( p ) , another outside the miniblock ( p ) .the fluence contributions to both points may be evaluated by using eq .( [ eq:1dcontributions ] ) with the appropriate and values .the essentials for the evaluation of the contribution of a miniblock to the fluence at a specified point on the calculation plane are to be found in fig .[ fig : geometry3 ] .although , in the general case , the miniblock does not contain the intersection of the and axes of fig .[ fig : geometry2 ] , all lengths ( which are important to our purpose ) scale by the same factor , thus enabling the simplified picture of fig .[ fig : geometry3 ] . according to slopsema and kooy ( 2006 ) ,the source is mirrored onto the calculation plane by using points either on the downstream or on the upstream face of the block .the values of the mirrored - source size , corresponding to these two options , are given by and the coordinates of the projected points , , , and are obtained on the basis of fig . [ fig : geometry3 ] via simple operations .the last item needed to derive the contributions to the fluence may be found on page 5444 of the paper of slopsema and kooy ( 2006 ) : ` protons whose tracks project inside the aperture extension onto the plane of interest see the upstream face as the limiting boundary . only for protonswhose tracks end up outside the aperture on the plane of interest is the downstream face the limiting aperture boundary . ' in fact , this statement applies to the case of a half - block ( one - sided block ) .the modification , however , in case of a miniblock is straightforward .* points within the extension of the aperture see the upstream face of the miniblock as the limiting boundary .* points outside the extension of the miniblock see one upstream _ and one downstream _ edge as limiting boundaries . obviously , in order to evaluate the fluence at the point p ( fig .[ fig : geometry3 ] ) , one has to use the coordinates of the projected points and , along with in eq .( [ eq:1dcontributions ] ) . on the other hand , to evaluate the fluence at p , one has to use along with ( contribution of the ` left ' part of the miniblock ) and along with ( contribution of the ` right ' part of the miniblock ) .this simplified picture , featuring what a point ` perceives ' as limiting boundaries , suffices in obtaining the appropriate fluence contributions . finally , the number of contributions which a point will receive depends on the geometry and on the shape of the block .one example of a point receiving two contributions in a given direction is shown in fig .[ fig : geometry4 ] ; the point p lies within the extension of the miniblock on the left and outside the extension of the miniblock on the right .additionally , one might have to deal with a block with more than one apertures ( hence contours ) . 
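in practice , the elementary fluence contribution of a miniblock can be written as the difference of two error functions , one per projected edge , each with its own mirrored - source size ; this is our reading of eq . [ eq:1dcontributions ] , and the choice of which face ( upstream or downstream ) supplies the projected edge and the corresponding rms spread follows the rules stated above .

    from math import erf, sqrt

    def miniblock_fluence(x, x1, sigma1, x2, sigma2):
        # relative fluence at lateral position x behind a miniblock whose limiting
        # (projected) edges are at x1 < x2, with mirrored-source sizes sigma1, sigma2
        return 0.5 * (erf((x - x1) / (sqrt(2.0) * sigma1))
                      - erf((x - x2) / (sqrt(2.0) * sigma2)))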
to determine in the present work the appropriate thickness corrections for arbitrary block - aperture shape and beamline geometry , dedicated software was developed . before entering the details of the derivation of the scattering corrections, it is worth providing a concise outline of our approach .first of all , our aim is to obtain a _ fast and reliable _ solution to the scattering problem ; exact analytical solutions are welcome as long as they fulfill this requirement .second , the solution has to be general enough for direct application to all proton - treatment machines . to expedite the derivation and application of the corrections at planning time, we will introduce a two - step approach .* all the parameters which are independent of the specific details of plans will be evaluated during the beam - configuration phase ; given a fixed ( hardware ) setup , every treatment machine is configured only once . * the corrections for each particular planwill be derived ( at planning time ) from the existing results ( i.e. , those obtained at beam - configuration phase ) via simple interpolations .given that the physics of multiple scattering is known , it is possible to obtain the exact solution for the relevant beam properties ( lateral spread , angular divergence ) from the beamline characteristics of each treatment machine .however , it is unrealistic to introduce a dedicated process for each supported machine , especially in order to derive corrections to the delivered dose .additionally , if a dedicated ( per - case ) approach is implemented in a software product which is intended to support a variety of machine manufacturers , one has to be prepared to allot the necessary resources whenever a new product appears . to avoid these problems and to retain the generality of the approach , one has no other choice but to introduce a simple , adjustable model to account for the beam optics .the model of the present paper has only one parameter , which will be fixed from half - block fluence measurements .the parameters which achieve the description of the various distributions of the scattered protons are determined via mc runs .these runs take account of the variability in the block material , block thickness , incident energy ( energy at nozzle entrance ) , and nozzle - equivalent thickness ( net ) for all the options ( combinations of the hardware components of the beamline , leading to ranges of available energies and of nets , as well as imposing restrictions on the field size ) for which a machine is configured . to enable the easy use of the results ,the output is put in the form of expansion parameters in two geometrical quantities which are involved in the description of the scattering effects .the scattering corrections for all the blocks in a plan are determined ( at planning time ) from the aforementioned results via simple interpolations .the application of the corrections involves the concept of miniblocks , as they have been introduced in section [ sec : miniblocks ] of this paper .one model which is frequently used in beam optics features the bivariate gaussian distribution in the lateral direction ( distance to the central beam axis ) and the ( small ) angle ( with respect to that axis ) .( rotational symmetry is assumed here . 
) with the parameters and represent twice the variance in and , respectively .the ( , ) correlation is defined as the quantity ( which is bound between and ) is a measure of the focusing in the beam .positive values indicate a defocusing system , negative a focusing one ; this becomes obvious after one puts eq .( [ eq : bivariate ] ) in the form we now touch on the variation of , , and along the beam - propagation direction . assuming that the quantities , , and denote the corresponding values at isocentre depth and that the beam propagates in air ( without scattering ) , , , and at distance from the isocentre ( see fig . [ fig : geometry1 ] ) are given by the expressions with these transformations , the joint probability distribution of eq .( [ eq : bivariate ] ) is invariant under translations in . in case that the beam traverses some material , eqs .( [ eq : abcz ] ) have to be modified accordingly , to take account of the beam broadening due to multiple scattering .the accurate modelling of the beam may be obtained on the basis of formulae such as those given in the previous section . in section [ sec : scattering ] , however , we reasoned that a simplified parameterisation of the beamline is desirable ; one additional argument may be put forth . currently , as far as proton therapy is concerned , four dose - delivery techniques are in use : single - scattering , double - scattering , uniform - scanning ( formerly known as wobbling ) , and modulated - scanning ( formerly simply known as scanning ) . in the modulated - scanning technique ,magnets deflect a narrow beam onto a sequence of pre - established points ( spots ) on the patient ( for pre - determined optimal times ) , thus ` scanning ' the ( cross section of the ) region of interest .uniform scanning involves the spread - out of the beam using fast magnetic switching .the broadening of the beam in the single - scattering technique is achieved by one scatterer , made of a high- material and placed close to the entrance of the nozzle .currently , the most ` popular ' technique involves a double - scattering system . in a double - scattering system ,a second scatterer is placed downstream of the first scatterer in order to achieve efficient broadening of the beam ; studies of the effects of the second scatterer may be found in the literature , e.g. , see takada ( 2002 ) and more recent reports by the same author and gottschalk .the second scatterer is usually made of two materials : a high- ( such as lead ) material at the centre ( i.e. , close to the central beam axis ) , surrounded by a low- ( such as aluminium , lexan , etc . )material ( which is frequently , but not necessarily , shaped as a concentric ring ) .the arrangement produces more scattering at the centre than the periphery , leading ( after sophisticated fine - tuning ) to the creation of a broad flat field at isocentre . to simulate the effect of the second scatterer in the present work , ( , )events are generated at as follows .the variable is sampled from the gaussian distribution with mean and variance ( the source - size calibration must precede this step ) ; depends on the incident energy and net . to account for the lateral limits of the beam , is restricted within the interval $ ] , where the characteristic length is taken herein to be the radius of the second scatterer .the variable is first sampled from the gaussian distribution with mean and a -dependent variance according to the formula where denotes the lateral position as a fraction of ; . 
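as an illustration of the event generation just described , the following sketch draws one ( position , angle ) pair at the second scatterer , with a truncated gaussian for the lateral position and a position - dependent gaussian for the angle ; the linear centre - to - rim interpolation of the angular spread is an assumption of this sketch only , and the bias transformation introduced next is not applied here .

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_scatterer_event(sigma_src, radius, sigma_rim, lam):
    """Draw one (x, theta) pair at the second scatterer.
    sigma_src : rms source size (from the source-size calibration)
    radius    : radius of the second scatterer (lateral limit of the beam)
    sigma_rim : rms scattering angle at the rim of the scatterer
    lam       : the single free parameter of the beamline model (centre/rim ratio)
    The angular spread is interpolated linearly between lam*sigma_rim at the
    centre and sigma_rim at the rim -- an assumption of this sketch, not the
    exact dependence used in the text."""
    x = rng.normal(0.0, sigma_src)
    while abs(x) > radius:                  # lateral limits of the beam
        x = rng.normal(0.0, sigma_src)
    frac = abs(x) / radius                  # lateral position as a fraction of the radius
    theta = rng.normal(0.0, sigma_rim * (lam - (lam - 1.0) * frac))
    return x, theta
```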
to account for the -dependent bias in , we then use the transformation , where stands for the distance between the first scatterer and the source .obviously , denotes the ratio of two values , i.e. , the value at the centre of the second scatterer over the one at the rim ; is the only free parameter of the model . the value of is obtained from the incident energy and net ; the angular divergence of the beam at nozzle entrance is ( currently ) assumed to be .it is not difficult to prove that , given the aforementioned rules for generating the ( , ) events , the open - field fluence at a lateral position at depth is given by where .it has to be emphasised that this definition of the fluence does not involve an overall -factor ( being the distance between the calculation plane and the source ) ; therefore , this formula is compatible with the format in which the lateral fluence measurements , used during the beam - configuration phase , appear . it can be shown that the only modification in case that a half - block is inserted into the beamline ( e.g. , as shown in fig .[ fig : model1 ] ) involves the upper limit of integration in eq .( [ eq : fluenceopen ] ) ; instead of , one must now use if , or otherwise .although the method of the present paper was originally developed for the double - scattering technique , it is also applicable in single scattering and uniform scanning ; in both cases , one simply has to fix the parameter to . as the method is applicable only in case of broad fields , it has no bearing on modulated scanning .the elements needed for the description of the passage of particles through matter may be found ( in a concise form ) in yao ( 2006 ) , starting on page 258 ; most of the deflection of a charged particle traversing a medium is due to coulomb scattering off nuclei . despite its incompleteness ( e.g. ,see the discussion in the geant4 physics - reference manual , section on ` multiple scattering ' , starting on page 71 ) , the multiple - scattering model of molire is used here .the large - angle scattering is not taken into account ; the angular distribution of the traversing beam is assumed gaussian .highland s logarithmic term , appearing in the expression of , will be approximated by a constant factor involving the block thickness ; a similar strategy was followed in gottschalk ( 1993 ) .the lynch - and - dahl ( 1991 ) values will be used in the expression for : where denotes the depth along the original trajectory , the velocity and the momentum of the proton , and the radiation length in the material of the block ( for a convenient parameterisation of , see yao ( 2006 ) , page 263 ) .equation ( [ eq : theta0 ] ) applies to ` thin ' targets . for ` thick ' targets, the dependence of the proton momentum on the depth must be taken into account . 
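for reference , the familiar thin - target form of the rms projected scattering angle , with the lynch - and - dahl constants referred to above , is written out below for a single slab at fixed momentum and velocity ; the depth - dependent expression of eq . ( [ eq : theta0 ] ) reduces to it when the momentum - velocity product is constant across the slab .

```python
import math

def theta0_thin_target(p_mev_c, beta, thickness_over_X0, charge=1.0):
    """rms projected multiple-scattering angle (radians) for a 'thin' target of the
    given thickness in radiation lengths, at fixed momentum p (MeV/c) and velocity
    beta; the 13.6 MeV constant and the 0.038*ln(x/X0) logarithmic term are the
    lynch-and-dahl values mentioned in the text."""
    x = thickness_over_X0
    return (13.6 / (beta * p_mev_c)) * charge * math.sqrt(x) * (1.0 + 0.038 * math.log(x))

# e.g. a 230 MeV proton (p ~ 696 MeV/c, beta ~ 0.60) crossing 0.1 radiation lengths
print(theta0_thin_target(696.0, 0.596, 0.1))
```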
omitting the logarithmic term on the right - hand side, one may put eq .( [ eq : theta0 ] ) in the form where on the practical side , eq .( [ eq : theta0simple ] ) with a constant ( or at least ` not too complicated ' ) factor would be attractive as one would then be able to obtain fast analytical solutions for the propagation of a simulated track inside the material of the block .therefore , it is worth determining the range of thickness values within which the constancy of the factor remains a reasonable assumption .the direct comparison with the data of gottschalk ( 1993 ) revealed that the ` thick - target ' corrections are unfortunately indispensable at depths exceeding about of the range of -mev protons , incident on a variety of materials .to abide by the original goal of obtaining a fast solution , we had to follow an alternative approach ( to that of using eq .( [ eq : fq ] ) ) , by parameterising the -dependence of the factor in a simple manner ; at present , the best choice seems to be to make use of the empirical formula where denotes the ( energy - dependent ) range of the incident proton in the material of the block and is now a constant , depending only on the initial value of .the validation of eqs .( [ eq : theta0new ] ) and ( [ eq : f ] ) was made on the basis of a comparison with the experimental data of gottschalk ( 1993 ) , namely the measured values of that paper .good agreement with the data was obtained for four materials which are of interest in the context of the present study ( carbon , lexan , aluminum , and lead ) . only at one entry ( one of the largest depth values in lead , i.e. , the measurement at ) , was a significant difference ( of slightly less than ) found ;the origin of that difference was not sought .figure [ fig : model2 ] shows an example of a trajectory of one proton inside a block .the incident proton , an ot in this figure , hits the upstream face of the block at angle with respect to the beam axis .two new _ in - plane _ variables are introduced to describe the kinematics at depth ( along the direction of the original trajectory ) : the deflection ( off the original - trajectory course ) and the angle ( with respect to the direction of the original trajectory ) .although the proton moves in an irregular path inside the block , the ` history ' of the actual motion will be replaced by a smooth movement leading to the same value of at .this ` smooth - deflection ' approximation will enable the association of the energy loss in the material of the block with the doublet of ( , ) values .since the path length , which is calculated in this approximation , is an underestimate of the actual path length , a constant conversion factor ( true - path correction ) of has been used ; there is some arbitrariness concerning this choice , yet it appears to be reasonable .it needs to be stressed that , in fig .[ fig : model2 ] , denotes the direction of the beam propagation ; the auxiliary coordinate system introduced in this figure should not be confused with the coordinate system of fig .[ fig : geometry1 ] , which is the formal one in medical applications . 
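the bookkeeping implied by the smooth - deflection approximation can be summarised in a couple of lines ; the straight - segment path and the numerical value of the true - path factor below are assumptions of this sketch ( the constant actually adopted is not quoted in the text ) , and the conversion of the residual range back to an exit energy is left out .

```python
import math

def residual_range_after_block(entry_range, depth, deflection, true_path_factor=1.02):
    """Replace the irregular trajectory inside the block by a straight segment from
    the entry point to the point at the given depth (along the original trajectory)
    displaced laterally by 'deflection'; a constant true-path factor (1.02 is an
    assumed value) converts this underestimate into an effective path length, which
    is subtracted from the proton's range in the block material."""
    smooth_path = math.hypot(depth, deflection)
    return entry_range - true_path_factor * smooth_path
```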
in the generation of the mc events ,the suggestion of yao ( 2006 ) , page 262 , for the quantities and has been followed .two independent gaussian random variables ( , ) with mean and variance are first created in each event ( track hitting the block ) .the quantities and are expressed in terms of ( , ) as follows : and where is taken from eq .( [ eq : theta0new ] ) .the values of the doublet ( , ) fix the dependence of and on in each generated event .it may then be determined ( either analytically or numerically ) whether the particular track leaves the block . finally , for those tracks leaving the block , a simple rotation yields the coordinates of the exit point in the ( , ) coordinate system of fig .[ fig : model2 ] .the energy of the leaving proton is determined from its residual range ( original range minus the actual path length inside the material of the block ) .the definitions of the three types of scattered protons have been given in section [ sec : introduction ] ; in fig .[ fig : model1 ] , one obtains a rough schematic view of the corresponding contributions to the fluence . in this section , we will introduce convenient forms to parameterise the lateral fluence distributions of the scattered protons . as far as these distributions are concerned , the uninteresting offset ( lateral displacement of the block ) in fig .[ fig : model1 ] will be omitted .therefore , for the needs of this section , mm at the extension of the block in fig .[ fig : model1 ] , that is , not at the position where the central beam axis intersects the calculation plane . to obtain the lateral fluence distributions , literally corresponding to fig .[ fig : model1 ] , one has to take the offset into account . in the formulae below ,the placement of the half - block is assumed as shown in fig .[ fig : model1 ] ( i.e. , extending to ) . the empirical formula for the description of the lateral fluence distribution of the ots reads as where .four conditions for the parameters , , and must be fulfilled : , , , and .the same empirical formula is used in the parameterisation of the lateral fluence distribution of the bsits ; the four aforementioned conditions also apply .( the resulting optimal parameter values are , of course , different . ) the optimal description of the ( broader distribution of the ) gtits is achieved on the basis of a lorentzian multiplied by the asymmetry factor to account for the observed skewness of the lateral fluence distribution ( toward positive values ) .three conditions must be fulfilled : , , and .the lateral fluence distribution of the pristine tracks is fitted by using the standard formula where and ; is the cumulative gaussian distribution function . only the factor be retained from the fits to the pristine - beam data , to be used in the _ normalisation _ of the fluence corresponding to each of the three types of scattered protons .expressing the contributions of the scattered protons as fractions of the pristine - beam fluence enables the efficient application of the corrections at planning time . from all the above ,it is evident that the description of the contribution to the fluence ( at fixed ) of any of the three types of the scattered protons is achieved on the basis of three parameters ; hence , there are nine parameters in total . 
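for concreteness , the generation of the ( deflection , angle ) pair described at the beginning of this section reduces to two lines of code ; z1 and z2 are the two independent standard gaussians of the text .

```python
import numpy as np

rng = np.random.default_rng(1)

def scatter_step(depth, theta0):
    """Correlated plane deflection y and plane angle theta after traversing a slab
    of the given depth, following the pdg prescription referred to above:
        y     = z1 * depth * theta0 / sqrt(12) + z2 * depth * theta0 / 2
        theta = z2 * theta0
    with z1, z2 independent standard gaussians."""
    z1, z2 = rng.normal(size=2)
    return (z1 * depth * theta0 / np.sqrt(12.0) + z2 * depth * theta0 / 2.0,
            z2 * theta0)
```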
at the end of each cycle ( comprising a set of mc runs for a number of values , for a given energy - net combination ) , each of these nine parameters is expanded in terms of and , using the quadratic model the final results are the coefficients , , , obtained at several energy - net combinations for the option of the machine which is under calibration . at planning time , the values of the parameters are reconstructed from the existing results ( via simple 2d interpolations in the incident energy and net ) , the values of ( corresponding to the particular miniblock which is processed ) , and ( corresponding to the calculation plane which is processed ) .let us assume that one option of a selected machine has been chosen for calibration ( at beam - configuration phase ) .all half - block data ( which is contained in that option ) is used in the determination of the parameter on the basis of an optimisation scheme ( featuring the c++ implementation of the standard optimisation package minuit of the cern library ) , along with eq .( [ eq : fluenceopen ] ) with the value of eq .( [ eq : t0 ] ) , one could also use the measurements of the open - field fluence , along with eq .( [ eq : fluenceopen ] ) with .however , what is habitually called ` open - field measurements ' in the field of radiation therapy corresponds to beams which have already been restricted in size by the primary collimator . ] .representative incident - energy and net values are chosen from the ranges of values associated with the option which is under calibration .for each acceptable energy - net combination , a number of mc runs are performed , each corresponding to one value . in each of these mc runs , events (i.e. , ( , ) pairs , each corresponding to one proton track ) are generated according to the formalism developed in section [ sec : parameterisation ] .the value of the parameter , obtained from the half - block data ( for the option in question ) at the previous step , comprises ( i.e. , apart from geometrical characteristics of the machine which is configured ) the only input to these mc runs .the resulting tracks are followed until they either hit the block or pass through it .the tracking of the protons inside the material of the block is done according to section [ sec : mcruns ] ; finally , these tracks either vanish ( being absorbed in the material of the block ) or emerge from it ( bore , downstream face ) and deliver dose . the tracks which emerge from the block are properly flagged ( ots , bsits , or gtits ) and their contributions to the fluence at a number of depths are stored ( histogrammed ). fits to these distributions , using the empirical formulae of section [ sec : latdist ] , lead to the extraction of the parameters achieving the optimal description of the stored data . 
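a least - squares sketch of this expansion and of its use at planning time is given below ; the full six - term quadratic in the two geometrical quantities ( here called s and r ) is an assumption , since the exact terms retained in eq . ( [ eq : sr ] ) are not shown in the text , and the 2d interpolation in incident energy and net is only indicated by a comment .

```python
import numpy as np

def fit_quadratic_surface(s, r, values):
    """Least-squares fit of one distribution parameter to a quadratic in the two
    geometrical quantities s and r; done once per parameter and per energy-net
    combination at beam-configuration time."""
    s, r, values = (np.asarray(a, float) for a in (s, r, values))
    design = np.column_stack([np.ones_like(s), s, r, s * s, s * r, r * r])
    coeffs, *_ = np.linalg.lstsq(design, values, rcond=None)
    return coeffs

def eval_quadratic_surface(coeffs, s, r):
    """Reconstruct the parameter at planning time for the (s, r) of one miniblock
    and one calculation plane; in the full scheme the stored coefficients would
    first be interpolated (2d, in incident energy and net) to the plan's values."""
    return float(np.dot(coeffs, [1.0, s, r, s * s, s * r, r * r]))
```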
after the completion of all the runs for all the chosen values of , the entire set of the parameter valuesis subjected to fits by using eq .( [ eq : sr ] ) for each parameter separately .finally , the coefficients , , , appearing in eq .( [ eq : sr ] ) , are stored in files along with the values of the incident energy and net corresponding to the particular mc run ; these output files will comprise the only input when the corrections to a particular plan will be derived ( planning time ) .the procedure above is repeated until all options of the given proton - treatment machine have been calibrated .the variability in the material and in the thickness of the block is also taken into account in the current implementation ( by looping over those combinations requested by the user ) .finally , it is worth repeating that this time - consuming part of obtaining the files containing the coefficients ( a question of a few hours per option ) has to be performed only once , when the proton - treatment machine is configured . at planning time, the pristine - beam fluence is calculated first .the beam - scattering corrections are then obtained ( i.e. , if they have been requested ) after employing a number of elements developed in the course of the present section : * the concept of miniblocks introduced in section [ sec : miniblocks ] . * the reconstruction of each of the nine parameters , used in the description of the scattered protons , at a few values , on the basis of eq .( [ eq : sr ] ) from the results pertaining to the option selected in the treatment plan . the appropriate file ( corresponding to the block material and thickness in the plan )is used as input .the final results for the various parameters are obtained via simple 2d interpolations in the incident energy and net . * the empirical formulae of section [ sec : latdist ] . * a simple ( linear ) interpolation to obtain the corrections at all depths in the plan .the final step involves the application of the corrections to the pristine - beam fluence .the break - up of the task of determining the scattering corrections into two steps , as described in this section of the paper , enables the minimisation of the time consumption during the evaluation and application of these corrections in proton planning ; of central importance , in this respect , is the concept of miniblocks .the measurements , which are analysed in this paper , have been obtained at the proton therapy centre of the national cancer centre ( ncc ) , south korea .the first report on the clinical commissioning and the quality assurance for proton treatment at the ncc appeared slightly more than one year ago , see lee ( 2007 ) .the ncc proton - treatment machine has been manufactured by ion beam applications ( iba ) , louvain - la - neuve , belgium .its nominal sad is mm and the distance between the first scatterer and the isocentre is mm .the double - scattering technique currently supports eight options with incident energies ranging from to mev .all half - block measurements have been taken in air , using mm .the lateral displacement of the - thick brass block , used in these measurements , was mm ; the block was positioned opposite to what is shown in fig .[ fig : model1 ] , i.e. , blocking to beam from mm to ( theoretically ) . 
to apply eq .( [ eq : fluenceopen ] ) with the value of eq .( [ eq : t0 ] ) , the axis was inverted , as a result of which the value of mm was finally used in eqs .( [ eq : fluenceopen ] ) and ( [ eq : t0 ] ) .each option of the double - scattering technique at the ncc comprises energy - net combinations ; in each of these combinations , the lateral fluence distributions have been obtained at four positions , namely at , , , and mm .one example of these profiles is given in fig .[ fig : halfblockprofiles ] .it is worth mentioning that each profile had been separately normalised ( during the data taking ) by setting the corresponding average value of the fluence ` close ' to the central beam axis to ( the arbitrary value of ) of the corresponding maximal fluence ( of each set , separately ) be brought to mm ; thus , an important offset is irretrievably lost . ] ; unfortunately , the individual normalisation factors are not available .the ` ears ' of the distribution for the data set at mm , which are presumably due to block scattering , have been removed via software cuts . to avoid fitting the noise , fluence values below of the maximal value in each data set were not processed .the block - thickness effects were removed from the data prior to processing . before advancing to the results ,one important remark is prompt .a significant deterioration of the description of the data in the last of the options ( option ) was discovered during the analysis ; at present , it is not entirely clear what causes this problem .it seems that our beamline model leads to a systematic overestimation of the penumbra in option .it has to be mentioned that the data in that option yields an unusually small spot size ( ) , about times smaller than the typical values extracted in the other seven options .option is the only one in which the iba second scatterer ( which is admittedly of peculiar design , is smaller than in the cases of the and scatterers which are used in options - .furthermore , the thickness of lexan _ abruptly _ increases close to the rim of this scatterer .evidently , the amount of material used in the can not provide efficient broadening of the beam ; adding material would imply smaller beam energy at nozzle exit , at a time when the emphasis in this option is obviously put on the high end of the energy spectrum . to be compatible with the requirement for flatness of the resulting field at isocentre , a smaller maximal field diameter ( versus to mm of the other options ) in the clinical application of option had to be imposed . ] ) is used . in all probability ,the problems seen in option originate from the shape of the second scatterer , namely from the fact that the energy loss in its material is not kept constant radially ( actually , it is ` discontinuous ' at ) . in order to investigate the sensitivity of our conclusions to the treatment of option , we will perform the analysis both including in and excluding from the database the measurements of that option .the extracted values of the parameter for all the energy - net combinations of all double - scattering options of the ncc machine are shown in fig .[ fig : lambda ] .the uncertainties in the case of option are ( on average ) larger than in the other options .the variability of the values within each option is due to the fact that , as a result of the numerous assumptions made to simplify the problem , ( which should , in principle , characterise only the second scatterer ) was finally turned into an effective parameter . 
to decrease the ` noise ' seen in fig .[ fig : lambda ] , the weighted average of the extracted values within each option was finally used in the ensuing mc runs for all energy - net combinations in that option ( see table [ tab : lambda ] ) . million events have been generated in each energy - net combination , at each of three values ( , , and mm ) .the lateral fluence distributions have been obtained at positions in depth , from to mm .the parameters of these distributions for the three types of scattered protons have been extracted using the formulae of section [ sec : latdist ] . in all cases ( i.e. , including option ) , the description of the data was good ;the reduced values ( , ndf being the number of degrees of freedom in the fit ) came out reasonably close to .figures [ fig : outer ] and [ fig : inner ] show typical lateral fluence distributions for outer and inner tracks , respectively . figures [ fig : correctionsat100mm]-[fig : correctionsatm100 mm ] , obtained with a lateral block displacement of mm , show the scattering corrections ( to be applied to the lateral fluence distributions of the pristine tracks ) at three positions around the isocentre ( , , and mm , respectively ) .as expected , the distributions broaden when receding from the block ; the mode of the fluence contribution of the ots moves about mm away from the block extension for every mm of depth , thus indicating an ` average ' exit angle ( to the bore ) of about .concerning their magnitude , the corrections generally amount to a few percent of the corresponding pristine fluence for the typical distances involved in clinical applications .it is now time to enter the subject of the energy loss of the scattered protons . to a good approximation, one may assume that the energy distributions of the scattered protons depend only on the ratio , where is the energy of the scattered proton and denotes the energy of the pristine beam ( i.e. , the energy at nozzle exit ) . in their study, kimstrand ( 2008 ) made the same observation .the energy distributions of the scattered protons were investigated in the case of three energy - net combinations of the ncc machine : ( mev , mm ) , ( mev , mm ) , and ( mev , mm ) .these combinations have been selected from the data sets of options , , and , respectively , in which different second scatterers are employed ; the corresponding energies are : , , and mev .the results have been obtained using a -mm thick brass block .figures [ fig : energyouter ] and [ fig : energyinner ] show the energy distributions of the scattered protons as functions of the ratio ; instead of referring explicitly to each energy - net combination , we will simply use the option number ( or ) to identify the results .the following remarks are worth making .* at low and moderate values of the exit energy , the energy distributions of the two types of inner tracks are similar , following the probability distribution . at the high end of the energies used in the ncc machine ,the energy distribution of the bsits departs from this simplified picture , attaining a peak close to ; this is due to bsits which almost ` brush ' the surface of the bore .the energy distribution of the gtits remains unchanged .nevertheless , to retain simplicity , we will assume that the energy distribution of all inner tracks follows the formula .* the energy distribution of the ots is strongly smeared toward low values .the ots lose a significant amount of their energy when traversing the material of the block ; as shown in fig . 
[fig : energyouter ] , their energy distributions peak around to . to fit the energy distribution of the ots, we used the empirical formula where the parameter turns out to be around to ; the constant of proportionality is obtained from the normalisation of the probability distribution .courant s effective - size corrections , corresponding to the three aforementioned cases , are : , , and mm . at small aperture sizes ,the dominant contribution to the fluence ( of the scattered protons ) originates from ots ; the inner tracks dominate at high aperture values .the crossover is energy - dependent , ranging from about ( option- result ) to mm ( option- result ) .the absolute yields of the different types of the scattered protons are given in table [ tab : yields ] for an aperture size ( i.e. , the diameter , assuming circular shape ) of mm .we observe that the absolute yield of the ots increases by a factor of about between and mev , and by more than between and mev . at mev ,the bsits account for more than of the total yield of the scattered protons .however , the importance of the bsits diminishes with increasing energy , reaching the level of out of ots at mev . on the contrary, the yield of the gtits flattens out at about per ot .one might conclude that the bsits seem to be more important at low energies and the ots at high energies .the ratio of the yields of the gtits and ots is almost constant , varying only from to as the energy increases from to mev .the verification of the method of the present paper should obviously involve the reproduction of dedicated dose measurements obtained in some material which , as far as the stopping power for protons is concerned , resembles human tissue , e.g. , in water .the measurements should cover the region around the isocentre , where the tumour is usually placed , and , in order that the approach be validated also in the entrance region , they should extend to small distances from ( the downstream face of ) the block . finally , the method must be validated for a range of depth values ( associated with the energy at nozzle exit ) . at present ,given the lack of dedicated dose measurements , the only possibility for verification rests on re - using the calibration data ( i.e. , the half - block fluence measurements described in section [ sec : measurements ] ) .we are aware of the fact that using the same data for configuring a system and for validating its output does not constitute an acceptable practice .however , since parts of the data ( i.e. , the areas which are obviously contaminated by the block - scattering effects ) had been removed from the database before extracting the values , this approach becomes a valid option .luckily , as far as the validation of the scattering corrections to the fluence is concerned , our interest lies in ( the reproduction of ) those excluded areas ; naturally , for the rest of the measurements ( i.e. , for those which _ were _ used in the determination of the values ) , it has to be verified that the quality of the description of the experimental data is not impaired by the inclusion of the block - scattering corrections .a typical reproduction of the measurements is given in figs .[ fig : rprdctn1 ] and [ fig : rprdctn2 ] ; the data correspond to the first energy - net combination of option of the ncc machine , taken at mm ( i.e. , mm away from the block ) . 
shown in fig .[ fig : rprdctn1 ] are the lateral fluence measurements ( continuous line ) along with the mc data corresponding only to the pristine beam ; the effects of the scattered protons are added to the pristine - beam fluence ( resulting in what will be called henceforth ` total fluence ' ) in fig .[ fig : rprdctn2 ] . on the basis of the visual inspection of these two figures ,there is no doubt that the quality of the reproduction of the measurements in the latter case ( i.e. , when including the block - scattering effects ) is superior .we will next investigate the goodness of the reproduction of all measurements on the basis of a commonly - used statistical measure , e.g. , of the standard function .alternative options have been established ( e.g. , the -index approach of low ( 1998 ) ) , but have not been tried in this work .one has to bear in mind that the block - scattering contributions are larger at small distances from the block and in the area neighbouring its extension ; at large distances , the distributions of the scattered protons broaden as a result of the angular divergence of the scattered beam and , less importantly , of scattering in air ( an effect which has not been included in this paper ) .evidently , the assessment of the goodness of the reproduction of the data , paying no attention to the characteristics of the effect in terms of the depth , makes little sense .the measurements in the area corresponding to the penumbra are very sensitive to the ( input ) value used for the lateral displacement of the block ; small inaccuracy in this value affects the description of the data significantly , introducing spurious effects in the function .this area , albeit very important in the determination of the value of the parameter , was not included in this part of the analysis .the resulting values are given in table [ tab : chisquare ] , separately for the four positions at which the measurements have been obtained ; it is evident that , for all depth values , the quality of the reproduction of the measurements when including the block - scattering effects is superior to the case that only the pristine - beam contribution is considered .the importance of the inclusion of these effects decreases with increasing distance to the block .the improvement for mm when including the block - scattering effects is impressive .judging from the values for the given degrees of freedom , there can be no doubt that the overall reproduction of the data is satisfactory .last but not least , despite the fact that the description of the option- measurements is debatable , our conclusions do not depend on the treatment of the data in that option ( i.e. , inclusion in or exclusion from the database ) . out of the available lateral fluence profiles , corresponding to mm, only five profiles did not show improvement after the scattering effects were included . in two of these profiles ,namely the energy - net combinations of ( mev , mm ) and ( mev , mm ) , the scattering contributions are present in the measurements , yet at different amounts compared to the mc - generated data ; additionally , a hard - to - explain slope ( i.e. , of different sign to what is expected after including the scattering contributions ) is clearly seen in these measurements around mm . 
on the other hand ,no scattering effects ( ` ears ' ) are discernible in the ( mev , mm ) , ( mev , mm ) , and ( mev , mm ) combinations at mm .it has to be stressed that these five data sets are surrounded by a multitude of measurements showing an impressive agreement between the experimentally - obtained and the mc - generated total - fluence distributions ; due to this reason , we rather consider the absence of improvement ( that is , after the scattering contributions are included ) in the description of the data in these five profiles as indicative of experimental problems . an alternative way of displaying the content of figs .[ fig : rprdctn1 ] and [ fig : rprdctn2 ] is given in fig .[ fig : normres ] ; instead of showing separately the measurements and the mc data , shown in fig .[ fig : normres ] are the normalised residuals , defined as where denotes the -th measurement and the corresponding mc - obtained fluence ; and represent their uncertainties .the advantage of such a plot is evident as direct information on the reproduction may be obtained faster than from figs .[ fig : rprdctn1 ] and [ fig : rprdctn2 ] ; for instance , not only can one immediately become aware of the failure of the pristine - beam data close to the borders of the block , but also of the severity of this failure .evidently , the pristine - beam contribution underestimates the fluence by about to standard deviations for mm mm .it is also interesting to note that , after the scattering effects have been included , the normalised residuals show significantly smaller dependence on the lateral distance ( ideally , no dependence should be seen ) .the aim of the present paper was to provide the systematic description of a method to be used in the determination and application of the corrections which are due to the presence of bl / bsds in proton planning . despite the fact that no emphasis was meant to be put on a clinical investigation , one simple example ( of the application of the corrections )may nevertheless be called for . to this end , the current version of the treatment planning system eclipse ( varian medical systems inc . , palo alto , california )was extensively modified to include the derivation ( in beam configuration ) and the application ( in planning ) of both block - relating corrections ; the application of each of the two corrections may be requested separately in the user interface .( a review article , providing the details of the dose evaluation in proton therapy , as well as its implementation in eclipse , will appear soon , see ulmer ( 2009 ) . )a simple rectangular water phantom was created , within which a planning treatment volume ( ptv ) of was arbitrarily outlined .four one - field treatment plans were subsequently created as follows : * \a ) a plan without block - relating corrections , * \b ) a plan with block - thickness corrections , * \c ) a plan with block - scattering corrections , and * \d ) a plan with both block - relating corrections . in each of these plans ,a brass block was inserted and fitted to the cross section of the ptv .subsequently , a dose of gy was delivered to the ptv , using the double - scattering technique of the ncc machine .the block - relating corrections were estimated on the basis of ; in fact , the results are practically insensitive to the value of , for .finally , the resulting dose maps were compared ; the dose differences of plans ( b ) , ( c ) , and ( d ) to plan ( a ) were estimated and compared ( figs . 
[fig : eclipse1 ] and [ fig : eclipse2 ] ) , leading to the following conclusions . * as expected , the application of the block - thickness corrections results in lower dose values . this is due to the fact that part of the incident flux is blocked as a result of the nonzero thickness of the block . * as expected , the application of the block - scattering corrections leads to higher dose values . this is because some protons , which would otherwise fail to contribute to the fluence ( as they impinge upon the block ) , scatter off the material of the block and ` re - emerge ' at positions in the bore or on the downstream face of the block . * as far as the delivered dose is concerned , the effects of the block thickness and block scattering ` compete ' with one another . in the water phantom used in this section , the block - thickness corrections dominate . * the presence of the block in the plan of the water phantom used in this section induces effects which amount to a few percent of the prescribed dose ( see fig . [ fig : eclipse2 ] ) . the largest effects appear in the area neighbouring the border of the block . it is also worth noticing the characteristic contributions of the scattered protons in the entrance region in the frontal and sagittal views in fig . [ fig : eclipse2 ] ; the largest part of the dose in the entrance region corresponds to the low - energy component of the scattered beam . as the _ local _ dose , delivered in the entrance region , is significantly smaller than the corresponding value delivered to the target ( which , in fact , is an argument in favour of the use of protons in radiation therapy ) , the corrections which one has to apply to it , though representing a small fraction of the _ prescribed _ dose , are sizable .

the present work deals with corrections which are due to the presence of beam - limiting and beam - shaping devices in proton planning . the application of these corrections is greatly facilitated by decomposing the effects of two - dimensional objects into one - dimensional , easily - calculable contributions ( miniblocks ) . in the derivation of the thickness corrections , we follow the strategy of slopsema and kooy ( 2006 ) . given the time restrictions during the planning , the derivation of the scattering corrections necessitates the introduction of a two - step approach . the first step occurs at the beam - configuration phase . at first , the value of the only parameter of our model ( ) is extracted from the half - block fluence measurements . a number of monte - carlo runs follow , the output of which consists of the parameters pertaining to convenient parameterisations of the fluence contributions of the scattered protons . these runs take account of the variability in the block material and thickness , incident energy , and nozzle - equivalent thickness in all the options for which a proton - treatment machine is configured . to enable the easy use of the mc results , the output is put in the form of expansion parameters in two geometrical quantities which are involved in the description of the scattering effects . the scattering corrections for all the blocks in a particular plan are determined from the results , obtained at beam - configuration phase , via simple interpolations . the verification of the method should involve the reproduction of dedicated dose measurements .
at present , given the lack of such measurements , the only possibility for verification rested on re - using the half - block fluence measurements , formerly analysed to extract the value ; this is a valid option because parts of the input data had been removed from the database to suppress the ( present in the measurements ) block - scattering contributions . we investigated the goodness of the reproduction of the measurements on the basis of the function and concluded that the inclusion of the scattering effects leads to substantial improvement . the method presented in this paper was applied to one plan involving a simple water phantom ; the different contributions from the two block - relating effects have been separately presented and compared . these effects amount to a few percent of the prescribed dose and are significant in the entrance region and in the area neighbouring the border of the block .

the author acknowledges helpful discussions with barbara schaffner concerning the optimal implementation of the method in eclipse . barbara also modelled and implemented in eclipse the important ( in the entrance region ) dose contributions of the low - energy scattered protons ( i.e. , those with energies below the lowest value used in the particular plan to ` spread out ' the bragg peak ) . the author is grateful to se byeong lee for providing the original ncc half - block fluence measurements , as well as important information on the data taking .

table [ tab : lambda ] : the weighted averages of the extracted values ( and their statistical uncertainties ) for the eight options of the ncc machine . the uncertainties are shown only for the sake of completeness ; they have not been taken into account in the results of section [ sec : output ] .

figure caption : evaluation of the effects of the bl / bsd at the point q of fig . [ fig : miniblock2 ] ( not shown ) , whose projection onto the bl / bsd plane is the point p , on the basis of four directions ( resulting , in this case , in four miniblocks ) . in this figure , the point p lies within the area b .

figure caption : evaluation of the effects of the bl / bsd at the point q of fig . [ fig : miniblock2 ] ( not shown ) , whose projection onto the bl / bsd plane is the point p , on the basis of four directions ( resulting , in this case , in five miniblocks ) . in this figure , the point p lies outside the area b .

figure caption : due to one - dimensional nature of the miniblocks , a point lying within the extension ( of a miniblock ) may also receive contributions which are characteristic to points lying in the exterior of the extension .

figure caption : the description of the kinematics inside the block . the auxiliary coordinate system introduced in this figure should not be confused with the formal coordinate system of fig . [ fig : geometry1 ] .

figure caption : one case of the scattering corrections ( to be applied to the lateral fluence distribution of the pristine tracks ) at mm ( i.e. , at isocentre ) . the lateral displacement of the block was mm .

figure caption : the lateral fluence measurements ( continuous line ) corresponding to one energy - net combination of one option of the ncc machine , taken mm away from the downstream face of the block . the monte - carlo data shown correspond only to the pristine - beam fluence obtained at the same incident - energy , net , and values ; the measurements have been scaled up by a factor which is equal to the ratio of the median values ( of the two distributions ) , estimated over the fluence plateau .

figure caption : the lateral fluence measurements ( continuous line ) corresponding to one energy - net combination of one option of the ncc machine , taken mm away from the downstream face of the block . the monte - carlo data shown correspond to the total ( pristine - beam plus scattered - protons ) fluence obtained at the same incident - energy , net , and values ; the measurements have been scaled up by a factor which is equal to the ratio of the median values ( of the two distributions ) , estimated over the fluence plateau .

figure caption : an alternative way of displaying the contents of figs . [ fig : rprdctn1 ] and [ fig : rprdctn2 ] ; shown in this figure are the normalised residuals , plotted versus the lateral distance . evidently , the pristine - beam contribution underestimates the fluence by about to standard deviations for mm mm . after the inclusion of the block - scattering effects , the residuals nicely cluster around .

the present paper pertains to corrections which are due to the presence of beam - limiting and beam - shaping devices in proton planning . two types of corrections are considered : those originating from the nonzero thickness of such devices ( geometrical effects ) and those relating to the scattering of beam particles off their material . the application of these two types of corrections is greatly facilitated by decomposing their effects on the fluence into easily - calculable contributions . to expedite the derivation of the scattering corrections , a two - step process has been introduced into a commercial product which is widely used in planning ( treatment planning system eclipse ) . the first step occurs at the beam - configuration phase and comprises the analysis of half - block fluence measurements and the extraction of the one parameter of the model which is used in the description of the beamline characteristics ; subsequently , a number of monte - carlo runs yield the parameters of a convenient parameterisation of the relevant fluence contributions . the second step involves ( at planning time ) the reconstruction of the parameters ( which are used in the description of the scattering contributions ) via simple interpolations , performed on the results obtained during the beam - configuration phase . given the lack of dedicated data , the validation of the method is currently based on the reproduction of the parts of the half - block fluence measurements ( i.e. , of the data used as input during the beam configuration ) which had been removed from the database to suppress unwanted ( block - scattering ) contributions ; it is convincingly demonstrated that the inclusion of the scattering effects leads to substantial improvement in the reproduction of the experimental data .
the contributions from the block - thickness and block - scattering effects ( to the fluence ) are presented separately in the case of a simple water phantom ; in this example , the maximal contribution of the two block - relating effects amounts to a few percent of the prescribed dose .

_ keywords _ : particle therapy , proton , block , collimator , slit , corrections , scattering
a finite ergodic markov chain is said to exhibit _ cutoff _ in total - variation if its -distance from the stationary distribution drops abruptly from near its maximum to near .in other words , one should run the markov chain until the cutoff point for it to even slightly mix in whereas running it any further is essentially redundant .let be an aperiodic irreducible discrete - time markov chain on a finite state space with stationary distribution .the worst - case total - variation distance to stationarity at time is defined as where denotes the probability given and where , the _ total - variation distance _ of two distributions on , is given by define , the total - variation _ mixing - time _ of for , to be let be a family of such chains , each with its total - variation distance from stationarity , its mixing - time , etc .this family exhibits _ cutoff _ iff the following sharp transition in its convergence to equilibrium occurs : the rate of convergence in is addressed by the following notion of a _ cutoff window _ : for two sequences with we say that has cutoff at time with window if and only if or equivalently , cutoff at time with window occurs if and only if the cutoff phenomenon was first identified for random transpositions on the symmetric group in and for random walks on the hypercube in .the term `` cutoff '' was coined by aldous and diaconis in , where cutoff was shown for the top - in - at - random card shuffling process . while believed to be widespread ,there are relatively few examples where the cutoff phenomenon has been rigorously confirmed .even for fairly simple chains , determining whether there is cutoff often requires the full understanding of their delicate behavior around the mixing threshold .see and the references therein for more on the cutoff phenomenon .a specific markov chain which found numerous applications in a wide range of areas in mathematics over the last quarter of a century is the simple random walk ( ) on a bounded - degree _ expander _ graph .a finite graph is called an expander if every small subset of the vertices has a relatively large edge boundary .formally , the cheeger constant of a -regular graph on vertices ( also referred to as the edge isoperimetric constant ) is defined as where is the set of edges with exactly one endpoint in .we say that is a -edge - expander for some fixed if it satisfies .the well - known discrete analogue of cheeger sinequality implies that the spectral - gap of the on a family of -edge - expander graphs on vertices is uniformly bounded away from , hence these chains rapidly converge to equilibrium within steps .see the survey for more on the applications of random walks on expanders . in 2004, peres observed that for any family of reversible markov chains , total - variation cutoff can only occur if the inverse spectral - gap has smaller order than the mixing time .note that this condition clearly holds for the simple random walk on an -vertex expander , where the inverse - gap is whereas .it was shown by chen and saloff - coste that when measuring convergence in -distance for this criterion does ensure cutoff , however the case ( cutoff in total - variation ) has proved to be significantly more complicated .there are known examples where the above condition does not imply cutoff ( see *section 6 ) , yet it was conjectured by peres to be sufficient in many natural families of chains ( e.g. 
confirming this for birth - and - death chains ) .in particular , this was conjectured for the lazy random walk on bounded - degree transitive graphs .the first example of a family of bounded - degree graphs where the random walk exhibits cutoff in total - variation was provided only very recently , when the authors showed this for a typical random regular graph .it is well known that for any fixed , a random -regular graph is with high probability ( w.h.p . ) a very good expander , hence the simple random walk on almost every -regular expander exhibits worst - case total - variation cutoff . however , to this date there were no known examples for an _ explicit _ ( deterministic ) family of expanders with this phenomenon . in section [ sec - explicit ]we provide what is , to the best of our knowledge , the first explicit construction of a family of bounded - degree expanders where the simple random walk has worst - case total - variation cutoff .[ thm-1 ] there is an explicit family of -regular expanders on which the _ _ from a worst case initial position exhibits total - variation cutoff .the construction mimics the behavior of the on random regular graphs , whose mixing was shown in ( as conjectured by durrett and berestycki ) to resemble that of a walk started at a root of a -regular tree .two smaller expanders that are embedded into the graph structure allow careful control over the mixing time from all possible initial positions . a straightforward modification of the above construction yields an explicit family of cubic expanders where the from a worst - case initial position does _ not _ exhibit cutoff in total - variation ( despite peres cutoff criterion ) .note that peres and wilson had already sketched an example for a family of expanders without total - variation cutoff .we describe our simple construction achieving this in section [ sec - nocutoff ] for completeness .a final variant of the construction , presented in section [ sec - prescribed ] , provides cubic graphs with cutoff occurring at essentially any prescribed order of location .namely , there is an explicit family of cubic graphs where the has cutoff at any specified order between whereas for there can not be cutoff ( it is well - known that on any family of bounded - degree graphs on vertices the has for some fixed ) .[ thm-2 ] let be a monotone sequence with and .there is an explicit family of -regular graphs with vertices where the _ _ from a worst - case initial position exhibits total - variation cutoff at .furthermore , for any family of bounded - degree -vertex graphs where the _ _ has ( largest possible order of mixing ) there can not be cutoff .to simplify the exposition , we will first construct a family of 5-regular expanders where the from a worst initial position exhibits cutoff .subsequently , we will describe how to modify the construction to yield a family of cubic expanders with this property ( as per the statement of theorem [ thm-1 ] ) .the graph we construct will contain a smaller explicit expander on a fixed proportion of its vertices , connected to what is essentially a product of another expander with a `` stretched '' regular tree ( one where the edges in certain levels are replaced by paths ) .let and let be two explicit expanders as follows ( cf .e.g. 
for an explicit construction of a -regular graph , as well as and the references therein for additional explicit constructions of constant - degree expander graphs ) : * : an explicit -regular expander on vertices .* : an explicit -regular expander on vertices .let denote the largest absolute - value of any nontrivial eigenvalue of for .finally , let be some sufficiently large fixed integer whose value will be specified later . our final construction for -regular expander will be based on a ( modified ) regular tree , hence it will be convenient to describe its structure according to the various levels of its vertices .let the vertex denote the root of the tree , and construct the graph as follows : levels : first levels of a -regular tree rooted at .= 1em denote by the vertices comprising level .[ cons - part-1 ] levels : stretched -ary trees rooted at : = 1em for each place an -level -ary tree rooted at and identify the vertices of and via the trivial isomorphism . [ cons - part-2 ] replace every edge of each by a ( disjoint ) path of length .connect , the new interior vertices in ( with initial degree ) to their isomorphic counterparts in ( add -cliques between identified interior vertices ) and similarly for etc .let denote the final vertices comprising level , and associate the vertices of with those of .[ cons - part-3 ] levels : product of and a stretched -ary tree .= 1em for each place an -level -stretched -ary tree .connect vertices in with their counterparts in for .let denote the final vertices comprising level .levels : a forest of -ary trees rooted at .last level :associate leaves with and interconnect them accordingly . -regular graph on which the random walk exhibits total - variation cutoff.,width=432 ] finally , the aforementioned parameter is chosen as follows : denote by the minimum spectral - gaps in the explicit expanders that were embedded in our construction ( recalling from the introduction that for both by the definition of expanders together with the discrete analogue of cheeger s inequality ) , and define see fig . [ fig - expcons ] for an illustration of the above construction .we begin by establishing the expansion of the above constructed . throughout the proofwe omit ceilings and floors in order to simplify the exposition .[ lem - exp ] let for our explicit -regular expander .for any integer , the cheeger constant of the above described -regular graph with parameter satisfies . moreover , the induced subgraph on the last levels ( i.e. , levels ) has .first consider the entire graph .since we are only interested in a lower bound on , clearly it is valid to omit edges from the graph , in particular we may erase the cross edges between any subtrees described in items [ cons - part-2],[cons - part-3 ] of the construction .this converts every stretched edge of the -regular tree of simply into a -path ( one where all interior vertices have degree 2 ) of length .next , we contract all the above mentioned -paths into single edges and denote the resulting graph by .the next simple claim shows that this decreases the cheeger constant by at most .let be a connected graph with maximal degree and let be a graph on at most vertices obtained via replacing some of the edges of by -paths of length . 
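a toy version of the basic building block is easy to code . the sketch below builds a complete d - ary tree in which every edge is replaced by a path ( the ` stretching ' of items [ cons - part-2]-[cons - part-3 ] ) and brute - forces its edge - isoperimetric constant ; for the bare stretched tree this constant is necessarily small , which is precisely what the cross edges between parallel copies and the expander wired on the leaves are there to repair in the actual construction .

```python
import itertools

def stretched_tree(d, height, ell):
    """Adjacency lists of a complete d-ary tree of the given height in which every
    edge is replaced by a path with ell edges (ell - 1 interior vertices of degree 2).
    Only this single building block is built; the cross edges between parallel
    copies and the expander placed on the leaves are not included here."""
    adj = {0: []}
    nxt, frontier = 1, [0]
    for _ in range(height):
        new_frontier = []
        for v in frontier:
            for _child in range(d):
                prev = v
                for _ in range(ell - 1):          # interior vertices of the path
                    adj[nxt] = [prev]
                    adj[prev].append(nxt)
                    prev, nxt = nxt, nxt + 1
                adj[nxt] = [prev]                 # the child vertex itself
                adj[prev].append(nxt)
                new_frontier.append(nxt)
                nxt += 1
        frontier = new_frontier
    return adj, frontier                          # frontier holds the leaves

def cheeger_constant(adj):
    """Brute-force min |edge boundary(S)| / |S| over all S with |S| <= n/2;
    exponential in the number of vertices, so only for toy instances."""
    n = len(adj)
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for S in itertools.combinations(range(n), k):
            inside = set(S)
            boundary = sum(1 for u in S for w in adj[u] if w not in inside)
            best = min(best, boundary / k)
    return best

adj, leaves = stretched_tree(2, 2, 2)             # tiny instance: 13 vertices
print(len(adj), len(leaves), cheeger_constant(adj))
```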
then .let be a set of cardinality at most achieving the cheeger constant of .we may assume that otherwise is a disjoint union of paths and cycles and the result holds trivially .notice that if contains two endpoints of a -path while only containing interior vertices of then we can assume that , i.e. all the interior vertices are adjacent ( this maintains the same cardinality of while not increasing ) . with this in mind ,modify the set into the set by repeating the following operation : as long as there is a 2-path as above ( with and in for some ) we replace by .this maintains the cardinality of the set while increasing its edge - boundary by at most ( as formerly contributed at least edge to this boundary due to ) .altogether , this yields a set where no -path has both , while satisfies the obtained subset is possibly disconnected , and we will next argue that its connected components satisfy an appropriate isoperimetric inequality .consider , the connected component of that minimizes .if is completely contained in the interior of one of the new -paths then the statement of the claim immediately holds since with the last inequality due to the fact .suppose therefore that this is not the case hence we may now assume that contains at least one endpoint of any -path it intersects .let , i.e. the subset of the vertices of obtained from by excluding any vertex that was created in due to subdivision of edges .observe that our assumption on implies that since either a -path is completely contained in ( not contributing to ) or for some ( contributing the edge to , corresponding to the edge in ) .it remains to consider .clearly , can be obtained from by adding at most new 2-paths with new interior points per vertex , hence on the other hand , since and we have which together with the fact that implies that altogether , in light of the above claim we have where the graph is the result of taking a complete -regular tree of height levels and connecting its leaves , denoted by , via the -regular expander .it therefore remains to show that .let be a set of size vertices that achieves .define its subset of the leaves and set . since we clearly have hence .we thus have the following two options : 1 . : in this case 2 . : letting denote the infinite -regular tree ( whose cheeger constant equals 3 ) we get altogether we deduce that the second part of the lemma ( the statement on the subgraph ) follows from essentially the same argument given above for , as is precisely a forest of -regular trees of height where all the leaves are connected via the expander .again we get , completing the proof .consider the started levels above the bottom ( i.e. at level ) of the graph .the height of the walk is then a one - dimensional biased random walk with positive speed , implying that it would reach the bottom after steps with high probability .on the other hand , if the is started closer to the root , i.e. at level , then the one - dimensional random walk is delayed by two factors , horizontal ( cross - edges ) and vertical ( stretching the edges into paths ) . 
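The edge-stretching operation at the heart of the claim above, replacing edges by 2-paths whose interior vertices have degree 2, is easy to prototype. The sketch below is a minimal illustration assuming Python with networkx (neither appears in the original text); the branching factor, tree height and stretching length are toy values of mine, not the paper's parameters.

```python
# Minimal sketch (assumption: Python + networkx) of "edge stretching":
# every edge of a tree is replaced by a path of the given length, so all
# newly created interior vertices have degree 2.
import networkx as nx

def stretch_edges(G, ell):
    """Replace every edge of G by a path of length ell."""
    H = nx.Graph()
    H.add_nodes_from(G.nodes())
    for u, v in G.edges():
        prev = u
        for i in range(1, ell):                  # ell - 1 new interior vertices per edge
            w = (u, v, i)                        # hypothetical label for an interior vertex
            H.add_edge(prev, w)
            prev = w
        H.add_edge(prev, v)
    return H

if __name__ == "__main__":
    b, height, ell = 2, 4, 5                     # toy values, not the paper's parameters
    T = nx.balanced_tree(r=b, h=height)          # a b-ary tree of the given height
    S = stretch_edges(T, ell)
    interior = [x for x in S if isinstance(x, tuple)]
    print("nodes:", S.number_of_nodes(), " edges:", S.number_of_edges(),
          " interior (degree-2) vertices:", len(interior))
```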
until reaching level ( after which the previous analysis applies ) ,these delays are encountered along stretched edges with the following effect : the former incurs a laziness delay with probability whenever the walk is positioned on an interior vertex of a 2-path .the latter delays the walk by the passage time of a through an -long 2-path , where the walk leaves the origin with probability .it is well - known ( and easy to derive ) that the expected passage time of the one - dimensional from to is precisely and the expected number of visits to the origin by then ( including the starting position ) is exactly .it thus follows that the expected delay of the one - dimensional walk representing our height in the tree along a single stretched edge is combining the above cases we arrive at the following conclusion : [ clm - tau - ell ] consider the __ on the graph started at a vertex on level .set and let be the hitting time of the walk to the leaves ( i.e. to level ). then w.h.p . h & \mbox{if }\,,\\ \frac53 ( 3-\alpha ) h+o(h ) & \mbox{if } \ , .\end{array}\right.\ ] ] the next lemma relates , the hitting time to the leaves ( addressed by the above claim ) , and the mixing of the on the graph .[ lem - tmix - upper - bound - from - top ] let , let be some vertex on level and for fixed , where is the hitting time of the _ _ to the leaves .then for any sufficiently large .let denote the started at some vertex in level and be the uniform distribution on .let be a random walk started from the uniform distribution .write for for the vertices of level in ( accounting for all the vertices except interior ones along the 2-paths of length corresponding to stretched edges ) and let map vertices in the graph to their level ( while mapping interior vertices of -paths to the lower of their endpoint levels ) .further let .clearly , for large enough we have furthermore , due to the bias of the towards the leaves and the fact that ( recall claim [ clm - tau - ell ] and that is fixed ) we deduce that except with probability exponentially small in , and in particular for any sufficiently large therefore , an elementary calculation shows that if and are close in total - variation and so are given and given for all , then the required statement on would follow .namely , if we should show that at time we have then we would get that ( with room to spare ) .examine the period spent by in levels .the graph in these levels is essentially a product of a -ary tree whose edges are stretched into -long -paths and the expander .let map the vertices in these levels to their corresponding vertices in , and let be the hitting times of to levels and respectively .as argued above , the started at level is a one - dimensional biased random walk that w.h.p. passes through stretched edges until reaching level for the first time .in particular , between times the walk w.h.p . passes through stretched edges . along eachstretched edge among levels , the walk traverses a cross - edge in ( that is , is uniformly distributed over the neighbors of in ) with probability whenever it is in an interior vertex in the -path , for a total expected number of such moves . finally , due to its bias towards the leaves , with high probabilitythe from level reaches level ( the vertices ) before hitting level . 
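The two delay quantities invoked above, the passage time of the one-dimensional walk through a stretched edge of length ell and the number of visits to its starting endpoint, can be checked numerically. The following is a hedged Monte Carlo sketch in plain Python; it uses a walk reflected at the origin, which is only an approximation of the paper's setting (where the walk can also leave through the endpoint's other edges and suffers an extra laziness delay).

```python
# Monte Carlo sketch (assumption: plain Python) of the delay along one
# stretched edge: the time to cross an ell-long path and the number of
# visits to the starting endpoint before crossing.  For the walk reflected
# at the origin the textbook values are ell**2 and ell; the paper's exact
# setting differs, so this is purely illustrative.
import random

def passage_time_and_visits(ell, rng):
    pos, t, visits = 0, 0, 1                 # start at the origin and count that visit
    while pos < ell:
        pos = 1 if pos == 0 else pos + rng.choice((-1, 1))   # reflection at the origin
        t += 1
        if pos == 0:
            visits += 1
    return t, visits

if __name__ == "__main__":
    ell, trials = 10, 20000
    rng = random.Random(0)
    samples = [passage_time_and_visits(ell, rng) for _ in range(trials)]
    mean_t = sum(s[0] for s in samples) / trials
    mean_v = sum(s[1] for s in samples) / trials
    print(f"mean passage time ~ {mean_t:6.1f}  (ell**2 = {ell**2} for the reflected walk)")
    print(f"mean visits to 0  ~ {mean_v:6.2f}  (ell    = {ell} for the reflected walk)")
```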
applying cltwe conclude that the w.h.p .traverses cross - edges ( each corresponding to a single step of the on ) between times , where the last inequality holds for and large enough .this amounts to at least consecutive steps of a along .aiming for a bound on the total - variation mixing , we may clearly condition on events that occur with high probability : condition therefore throughout the proof that indeed the above statement holds .in particular , letting be the on the expander and recalling that and it follows that recalling that is a -regular with second largest ( in absolute value ) eigenvalue and writing , < |a|^{-2}\,,\end{aligned}\ ] ] with the last inequality justified for any sufficiently large provided that a fact inferred from the choice of in . in this case , for any by symmetry we now conclude that for any and we have this immediately establishes . to obtain ,note that for w.h.p .satisfies . conditioned on this eventwe can apply a monotone - coupling to successfully couple and such that , yielding .the concentration of established in claim [ clm - tau - ell ] carries the above two bounds to time , thus completing the proof .let denote the total - variation mixing time from a given starting position .that is , if is an ergodic markov chain on a finite state space with stationary distribution then the above lemma gives an upper bound on this quantity for the started at one of the levels , which we now claim is asymptotically tight : [ cor - mixing - top ] consider the _ _ on started at some vertex on level and let be the hitting time of the walk to the leaves .then for any fixed we have .the upper bound on was established in lemma [ lem - tmix - upper - bound - from - top ] . for a matching lower bound on choose some fixed integer such that the bottom levels of the graph comprise at least a -fraction of the vertices of , i.e. the lower bound now follows from observing that , by the same arguments that established claim [ clm - tau - ell ] , the hitting time from level to level is w.h.p . for any sufficiently large .having established the asymptotic mixing time of the started at the top levels , we next wish to show that from all other vertices the mixing time is faster .[ clm - mix - bottom ] let be the _ _ started at some vertex in level . for every andany sufficiently large we have .let denote the induced subgraph on the bottom levels of the graph ( i.e. , levels ) . since it clearly suffices to show that mixes within total - variation distance on . by claim [ clm - tau - ell ]( taking the worst case corresponding to ) with high probability we have < 5l^2 h { \stackrel{\scriptscriptstyle\triangle}{=}}t_1\,,\ ] ] where the above strict inequality holds for any sufficiently large ( as ) .recall that by lemma [ lem - exp ] , where is the explicit -regular expander with second largest ( in absolute value ) eigenvalue .further consider the graph obtained by adding to a perfect matching on level , thus making it -regular .clearly , adding edges can only increase the cheeger constant and so as well .moreover , the discrete form of cheeger s inequality ( ) , which for a -regular expander with second largest ( in absolute value ) eigenvalue states that here gives the following : in particular we obtain that satisfies while a simple random walk on is well - known to satisfy as we infer that after ( the strict inequality holding for large enough ) steps we have recall that our choice of is such that w.h.p . , i.e. 
the reaches level by that time , and thereafter ( due to its bias towards the leaves ) it does not revisit level until time except with a probability that is exponentially small in . since and are identical on levels we deduce that w.h.p . the performs at least consecutive steps in following .altogether , for large enough we have and so . finally , bearing and the choice of in in mind , hence and so and , as required .we now claim that the worst - case mixing time within any is attained by an initial vertex at distance from the root .fix , let be the initial vertex maximizing and recall that denotes its level in the graph .the combination of claim [ clm - tau - ell ] and corollary [ cor - mixing - top ] ensures that if then necessarily , in which case an immediate consequence of the requirement [ eq - l - cond2 ] on is that , hence for any sufficiently large .therefore , we can not have since by claim [ clm - mix - bottom ] that would imply that contradicting the fact that achieves the worst - case mixing time .overall we deduce that for any we have thus confirming that the on the above constructed family of -regular expanders exhibits total - variation cutoff from a worst starting location .it remains to describe how our construction can be ( relatively easily ) modified to be -regular rather than -regular .the immediate step is to use binary trees instead of -ary trees , after which we are left with the problem of embedding the explicit expanders and without increasing the degree .this will be achieved via the line - graphs of these expanders , hence our explicit expanders will now have slightly different parameters : * : an explicit -regular expander on vertices . * : an explicit -regular expander on vertices . recall that given a tree rooted at some vertex , denoted by , its edge - stretched version is obtained by replacing each edge by a -path of length , and the collection of all new interior vertices ( due to subdivision of edges ) is denoted by .the modified construction is as follows : 1 .levels 0,1,2 : first levels of a binary tree .* denote by the vertices in level .levels : stretched binary trees rooted at : * connect vertices from ( interior vertices along -paths ) to the corresponding ( isomorphic ) vertices in , i.e. inter - connect the interior vertices via perfect matchings .* denote by the vertices in level .levels : edge - stretched binary trees rooted at and inter - connected via the line - graph of using auxiliary vertices : * associate each binary tree rooted at to an _ edge _ of .* for each ( interior vertex on a -path ) we connect it to a new auxiliary vertex and associate with a unique edge of .* we say that and are isomorphic if the isomorphism from to maps to .add new auxiliary vertices per equivalence class of such isomorphic vertices , identify them with the vertices of and connect every new vertex to the auxiliary vertices representing the edges incident to it .levels : a forest of binary trees .last level : leaves are inter - connected via the line - graph of : * associate the vertices with the edges of . 
*add new auxiliary vertices , each connected to the leaves corresponding to edges that are incident to it in .it is easy to verify that the walk along the cross - edges of the s now corresponds to a lazy ( unbiased ) random walk on the edges of .similarly , the walk along the cross - edges connecting the leaves corresponds to the on the edges of .hence , all of the original arguments remain valid in this modified setting for an appropriately chosen fixed .this completes the proof of theorem [ thm-1 ] .the explicit cubic expanders with cutoff constructed in the previous section ( illustrated in fig .[ fig - expcons ] ) can be easily modified so that the on them from a worst starting position would _ not _ exhibit total - variation cutoff .to do so , recall that in the above - described family of graphs , each vertex of the subset was the root of a regular tree of height whose edges were stretched into -long -paths ( see item [ cons - part-1 ] of the construction ) .we now tweak this construction by stretching some of the edges into -paths of length .namely , for subtrees rooted at the _ odd _ vertices in level of these trees we stretch the edges into paths of length . under this modified stretching the trees clearly still isomorphic , hence the cross edges are inter - connecting -paths of the same lengths . by the arguments above , starting from any level the mixing is faster compared to the root , and if is sufficiently small then the root remains the asymptotically worst starting position .however , starting from the root ( and in fact , starting from any level ) the hitting time to the set is no longer concentrated due to the odd / even choice of subtree at level .therefore , from the worst starting position we have that the hitting time to the leaves is concentrated on two distinct values ( differing by a fixed multiplicative constant ) , each with probability .this implies that the ratio is bounded away from and in particular this explicit family of expanders does not have total - variation cutoff .suppose is an explicit -regular expander on vertices provided by theorem [ thm-1 ] , and recall that the on this graph exhibits cutoff at where is some absolute constant .our graph will be the result of replacing every edge of by the -regular analogue of a -path , which we refer to as a `` cylinder '' , illustrated in fig . [ fig - expconsgen ] .the length of each cylinder is set to be satisfying . notice that the total number of vertices in is since and the on , started at a worst - case starting position , traverses edges until mixing , we infer from clt , as well as the fact that the expected passage - time through an -long cylinder is , that the analogous random walk on has cutoff at when we have . on the other extreme end , when grows arbitrarily fast as a function of we obtain that it approaches arbitrarily closely but we must still having since . in that case arbitrarily closely while having a strictly smaller order . to complete the construction it remains to observe that we may choose and so that and .this can be achieved by first selecting so that and then selecting a graph constructed through theorem [ thm-1 ] on vertices for some ( note that in the theorem we construct graphs of size essentially so this is always possible ) . to show that there is no cutoff whenever we will argue that in that case we have , in contrast to the necessary condition for cutoff due to peres ( cf . 
) , discussed in the introduction .[ lem - trel - no - cutoff ] let be a graph on vertices with degrees bounded by some fixed on which the _ _ has .then the spectral - gap of the walk satisfies .in particular , the _ _ on does not exhibit cutoff .observe that the above graph must satisfy for some fixed as it is well - known ( cf ., e.g. , ) that the lazy walk on any graph has and in our case .let be two vertices whose distance in is our lower bound on the gap will be derived from its representation via the dirichlet form , according to which ^ 2\pi(x)p(x , y)}{\operatorname{var}_\pi f}\,,\end{aligned}\ ] ] where is the ( uniform ) stationary measure and is the transition kernel of the . as a test - function in the above formchoose clearly we have and a lower bound of order on the variance follows from the fact that two sets of linear size each have a linear discrepancy according to .namely , as a result of which we conclude that , thus completing the proofs of lemma [ lem - trel - no - cutoff ] and theorem [ thm-2 ] .= 2em recent results in showed that almost every regular expander graph has total - variation cutoff ( prior to that there were no known examples for bounded - degree graphs with this phenomenon ) ; here we provided a _ first explicit _construction for bounded - degree expanders with cutoff .the expanders constructed in this work are non - transitive. moreover , our proof exploits their highly asymmetric structure in order to control the mixing time of the random walk from various starting locations .it would be interesting to obtain an explicit construction of transitive expanders with total - variation cutoff . a slight variant of our construction gives an example of a family of expanders where the does _ not _ exhibit cutoff , thereby disagreeing with peres cutoff - criterion .both here and in another such example due to peres and wilson the expanders are non - transitive ( hence the restriction to transitive graphs in peres conjecture stated next ) . while it is conjectured by peres that the random walk on any family of transitive bounded - degree expanders exhibits total - variation cutoff , there is not even a single example of such a transitive family where cutoff was proved ( or disproved ) . for general ( not necessarily expanding ) bounded - degree graphs on verticesit is well - known that .here we showed that cutoff can occur essentially anywhere up to by constructing cubic graphs with cutoff at any such prescribed location .furthermore , this is tight as we prove that if then cutoff can not occur . | the cutoff phenomenon describes a sharp transition in the convergence of an ergodic finite markov chain to equilibrium . of particular interest is understanding this convergence for the simple random walk on a bounded - degree expander graph . the first example of a family of bounded - degree graphs where the random walk exhibits cutoff in total - variation was provided only very recently , when the authors showed this for a typical random regular graph . however , no example was known for an explicit ( deterministic ) family of expanders with this phenomenon . here we construct a family of cubic expanders where the random walk from a worst case initial position exhibits total - variation cutoff . variants of this construction give cubic expanders without cutoff , as well as cubic graphs with cutoff at any prescribed time - point . |
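It may help to make the central quantity of the preceding discussion concrete: the worst-case total-variation distance to stationarity, whose abrupt decay is what cutoff means. The sketch below, assuming Python with numpy and networkx, computes it exactly from powers of the transition matrix for a small cubic graph used purely as a stand-in; it is not a simulation of the constructions above, and on such a small toy graph one only sees a smooth decay rather than a sharp cutoff.

```python
# Sketch (assumption: Python + numpy + networkx) of the worst-case
# total-variation distance d(t) = max_x || P^t(x, .) - pi ||_TV for the
# lazy simple random walk, computed from powers of the transition matrix.
import numpy as np
import networkx as nx

def worst_case_tv(G, t_max):
    A = nx.to_numpy_array(G)
    deg = A.sum(axis=1)
    P = 0.5 * (np.eye(len(deg)) + A / deg[:, None])    # lazy simple random walk
    pi = deg / deg.sum()                               # its stationary distribution
    Pt = np.eye(len(deg))
    dist = []
    for _ in range(t_max):
        Pt = Pt @ P
        dist.append(0.5 * np.abs(Pt - pi).sum(axis=1).max())
    return dist

if __name__ == "__main__":
    G = nx.random_regular_graph(3, 200, seed=1)        # a small cubic stand-in graph
    for t, d in enumerate(worst_case_tv(G, 60), start=1):
        if t % 10 == 0:
            print(f"t = {t:3d}   worst-case TV distance = {d:.3f}")
```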
a very interesting empirical phenomenon in the study of weighted networks is the power - law correlation between strength and degree of nodes . very recently , wang _et al _ have proposed a mutual selection model to explain the origin of this power - law correlation .this model can provide a partial explanation for social weighted networks , that is , although the general people want to make friend with powerful men , these powerful persons may not wish to be friendly to them .however , this model can not explain the origin of power - law strength - degree correlation in weighted technological networks . in many cases ,the concepts of edge - weight and node - strength are associated with network dynamics .for example , the weight in communication networks is often defined by the load along with the edge , and the strength in epidemic contact networks is defined by the individual infectivity . on the one hand , although the weight / strength distribution may evolve into a stable form , the individual value is being changed with time by the dynamical process upon network . on the other hand , the weight / strength distribution will greatly affect the corresponding dynamic behaviors , such as the epidemic spreading and synchronization . inspired by the interplay of weight and network dynamics , barrat , barthlemy , and vespignani proposed an evolution model ( bbv model for short ) for weighted networks .although this model can naturally reproduce the power - law distribution of degree , edge - weight , and node - strength , it fails to obtain the power - law correlation between strength and degree . in bbv model ,the dynamics of weight and network structure are assumed in the same time scale , that is , in each time step , the weight distribution and network topology change simultaneously .here we argue that the above two time scales are far different .actually , in many real - life situations , the individual weight varies momently whereas the network topology only slightly changes during a relatively long period .similar to the traffic dynamics based on the local routing protocol , we investigate the dynamic behaviors of resource / traffic flow on scale - free networks with given structures , which may give some illuminations about the origin of power - law correlation between strength and degree in weighted scale - free networks .as mentioned above , strength usually represents resources or substances allocated to each node , such as wealth of individuals of financial contact networks , the number of passengers in airports of world - wide airport networks , the throughput of power stations of electric power grids , and so on .these resources also flow constantly in networks : money shifts from one person to another by currency , electric power is transmitted to every city from power plants by several power hubs , and passengers travel from one airport to another . further more , resources prefers to flow to larger - degree nodes . in transport networks ,large nodes imply hubs or centers in traffic system .so passengers can get a quick arrival to the destinations by choosing larger airports or stations . 
in financial systems , people also like to buy stocks of larger companies or deposit more capital in the banks with more capital because larger companies and banks generally have more power to make profits and more capacity to avoid losses .inspired by the above facts , we propose a simple mechanism to describe the resource flow with preferential allocation in networks .[ 0.5 ] resources in node are divided into several pieces and then flow to its neighbors .the thicker lines imply there are more resources flowing .it is worth pointing out that , in order to give a clearer illustration we do not plot the resource flow into node or out of node .,title="fig : " ] at each time , as shown in fig .1 , resources in each node are divided into several pieces and then flow to its neighbors .the amount of each piece is determined by its neighbors degrees .we can regulate the extent of preference by a tunable parameter .the equations of resource flow are where is the amount of resources moving from node to at time , is the amount of resources owned by node at time , is the degree of node and is the set of neighbors of node .if and are not neighboring , then . meanwhile each node also gets resources from its neighbors , so at time , [ 1 ]( color online ) the evolution of the strengths of node and , where nodes and are randomly selected for observation .the three cases are in different initial states which simply satisfy . the exponent .,title="fig : " ] [ 0.8 ] scatter plots of vs for all the nodes , title="fig : " ]the eq . [ strength of node i ] can be expressed in terms of a matrix equation , which reads where the elements of matrix are given by since , , the spectral radius of matrix obeys the equality , according to the gershgrin disk theorem . here , the spectral radius , , of a matrix , is the largest absolute value of an eigenvalue .further more , since the considered network is symmetry - free ( that is to say , the network is strongly connected thus for any two nodes and , these exists at least one path from to ) , will converge to a constant matrix for infinite .that is , if given the initial boundary condition to eq .[ matrix ] ( e.g. let , where denotes the total number of nodes in network ) , then will converge in the limit of infinite as for each node .consequently , denote , one can obtain that is , for any , from eq .[ kinetic equilibrium ] , it is clear that is just the kinetic equilibrium state of the resource flow in our model . since , is determined only by matrix , if given the initial boundary condition with satisfying .since matrix is determined by the topology only , for each node in the kinetic equilibrium , is completely determined by the network structure . denotes the amount of resource eventually allocated to node , thus it is reasonable to define as the strength of node .[ 0.8 ] ( color online ) the correlation between degree and strength with different . in the inset, the relation between and is given , where the squares come from the simulations and the solid line represents the theoretical result .,title="fig : " ] [ 0.8 ] ( color online ) the distribution of strength with different .the inset exhibits the relation between and , where the squares come from the simulations and the solid line represents the theoretic analysis .,title="fig : " ]the solution of eq . ( 6 ) reads where is a normalized factor . in principle, this solution gives the analytical relation between and when can be analytically obtained from the degree distribution . 
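A minimal simulation makes the flow dynamics and its kinetic equilibrium concrete. The sketch below assumes Python with numpy and networkx; since the explicit form of eqs. (1)-(2) is not reproduced above, the allocation kernel k_j^alpha / sum over l in Gamma(i) of k_l^alpha used here is my reading of the preferential allocation determined by the neighbours' degrees, and should be treated as an assumption.

```python
# Sketch (assumption: Python + numpy + networkx) of the preferential
# resource-flow dynamics: node i splits its resource among its neighbours j
# in proportion to k_j**alpha (my reading of eqs. (1)-(2)).
import numpy as np
import networkx as nx

def flow_matrix(G, alpha):
    """M[j, i] = fraction of node i's resource sent to neighbour j in one step."""
    A = nx.to_numpy_array(G)
    k = A.sum(axis=1)                                  # degrees
    W = A * k[:, None] ** alpha                        # W[j, i] = A[j, i] * k_j**alpha
    return W / W.sum(axis=0, keepdims=True)            # normalise over node i's neighbours

def iterate_flow(G, alpha, steps=200, seed=0):
    rng = np.random.default_rng(seed)
    s = rng.random(G.number_of_nodes())
    s /= s.sum()                                       # one unit of resource in total
    M = flow_matrix(G, alpha)
    history = [s]
    for _ in range(steps):
        s = M @ s                                      # s_j(t+1) = sum_i M[j, i] * s_i(t)
        history.append(s)
    return np.array(history)

if __name__ == "__main__":
    G = nx.barabasi_albert_graph(2000, 3, seed=1)
    hist = iterate_flow(G, alpha=1.0)
    print("total resource (conserved):", round(hist[-1].sum(), 6))
    print("max per-node change at the last step:", np.abs(hist[-1] - hist[-2]).max())
```

Because each column of M sums to one, the total resource is conserved, and for a connected non-bipartite network the iteration settles into the same equilibrium whatever the initial distribution, consistent with the convergence behaviour described in the text.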
for uncorrelated networks , statistically speaking we have where denotes the probability a randomly selected node is of degree . since is a constant when given a network structure , one has , thus where denotes the average strength over all the nodes with degree .this power - law correlation where , observed in many real weighted networks , can be considered as a result of the conjunct effect of the above power - law correlation and the scale - free property . obviously ,if the degree distribution in a weighted network obeys the form , one can immediately obtain the distribution of the strength where the power - law exponent .recent empirical studies in network science show that many real - life networks display the scale - free property , thus we use scale - free networks as the samples .since the barabsi - albert ( ba ) model is the mostly studied model and lacks structural - biases such as non - zero degree - degree correlation , we use ba network with size and average degree for simulations .the dynamics start from a completely random distribution of resource . as is shown in fig .2 , we randomly pick two nodes and , and record their strengths vs time and for three different initial conditions . clearly , the resource owned by each node will reach a stable state quickly . and no matter how and where the one unit resource flow in , the final state is the same .[ 0.8 ] ( color online ) the distributions of degree ( left panel ) and strength ( right panel ) with different .the networks are generated by the strength - pa mechanism , and those shown here are the sampling of size .,title="fig : " ] [ 0.8 ] ( color online ) the correlation between degree and strength with different . in the inset ,the relation between and is given , where the squares come from the simulations and the solid line represents the theoretical result .the networks are generated by the strength - pa mechanism , and those shown here are the sampling of size .,title="fig : " ] similar to the mechanism used to judge the weight of web by google - searching ( see a recent review paper about the _ pagerank algorithm _ proposed by google ) , the strength of a node is not only determined by its degree , but also by the strengths of its neighbors ( see eq . 7 ) .although statistically for uncorrelated networks , the strengths of the nodes with the same degree may be far different especially for low - degree nodes as exhibited in fig .3 . in succession, we average the strengths of nodes with the same degree and plot fig . 4 to verify our theoretical analysis that there is a power - law correlation between strength and degree , with exponent .5 shows that the strength also obeys power - law distribution , as observed in many real - life scale - free weighted networks . and the simulations agree well with analytical results .in this paper , we proposed a model for resource - allocation dynamics on scale - free networks , in which the system can approach to a kinetic equilibrium state with power - law strength - degree correlation . if the resource flow is unbiased ( i.e. ) , similar to the bbv model , the strength will be linearly correlated with degree as . 
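The strength-degree analysis described above (averaging the equilibrium strength over nodes of equal degree and reading off the power-law exponent) can be reproduced in a few lines. The following standalone sketch repeats the flow update of the previous snippet and fits the exponent by least squares in log-log space; the comparison value 1 + alpha is my reading of the uncorrelated-network estimate, not a value quoted in the text.

```python
# Sketch (assumption: Python + numpy + networkx): fit the exponent theta in
# <s(k)> ~ k**theta from the equilibrium of the preferential-allocation flow.
import numpy as np
import networkx as nx

def equilibrium_strength(G, alpha, steps=200):
    A = nx.to_numpy_array(G)
    k = A.sum(axis=1)
    M = A * k[:, None] ** alpha
    M /= M.sum(axis=0, keepdims=True)        # same allocation kernel as in the previous sketch
    s = np.full(len(k), 1.0 / len(k))
    for _ in range(steps):
        s = M @ s
    return s, k

def fit_exponent(k, s):
    degs = np.unique(k)
    mean_s = np.array([s[k == d].mean() for d in degs])   # average strength per degree
    slope, _ = np.polyfit(np.log(degs), np.log(mean_s), 1)
    return slope

if __name__ == "__main__":
    alpha = 1.0
    G = nx.barabasi_albert_graph(2000, 3, seed=2)
    s, k = equilibrium_strength(G, alpha)
    print("fitted exponent theta ~", round(fit_exponent(k, s), 2),
          "; assumed expectation 1 + alpha =", 1 + alpha)
```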
therefore , the present model suggests that the power - law correlation between degree and strength arises from the mechanism that resources in networks tend to flow to larger nodes rather than smaller ones .this preferential flow has been observed in some real traffic systems .for example , very recently , we investigated the empirical data of chinese city - airport network , where each node denote a city , and the edge - weight is defined as the number of passengers travelling along this edge per week .we found that the passenger number from one city to its larger - degree neighbor is much larger than that from this city to its smaller - degree neighbor .in addition , in chinese city - airport network and us airport network , the power - law exponents are and , respectively , which is within the range of predicted by the present model .the readers should be warned that the analytical solution shown in this paper is only valid for static networks without any degree - degree correlation .however , we have done some further simulations about the cases of growing networks ( see appendix a ) and correlated networks ( see appendix b ) .the results are quantitatively the same with slight difference in quantity .[ 0.9 ] ( color online ) the correlation between degree and strength with different assortative coefficients .the parameter is fixed .the inset shows the numerically fitting value of vs assortative coefficients .the networks are generated by the generalized ba algorithm of size and average degree .,title="fig : " ] finally , in this model , the resource flow will approach to a kinetic equilibrium , which is determined only by the topology of the networks , so we can predict the weight of a network just from its topology by the equilibrium state . therefore , our proposed mechanism can well apply to estimate the behaviors in many networks .when given topology of a traffic network , people can easily predict the traffic load in individual nodes and links by using this model , so that this model may be helpful to a better design of traffic networks . the authors wish to thank miss . ming zhao for writing the c++ programme that can generate the scale - free networks with tunable assortative coefficients . this work has been partially supported by the national natural science foundation of china under grant nos .70471033 , 10472116 , and 10635040 , the special research founds for theoretical physics frontier problems under grant no .a0524701 , and specialized program under president funding of chinese academy of science .since many real networks , such as www and internet , are growing momently .the performance of the present resource - allocation flow on growing networks is thus of interest .we have implemented the present dynamical model on the growing scale - free networks following the usual preferential attachment ( pa ) scheme of barabsi - albert .since the topological change is independent of the dynamics taking place on it , and the relaxation time before converging to a kinetic equilibrium state is very short ( see fig .2 ) , if the network size is large enough ( like in this paper ) , then the continued growth of network has only very slight effect on topology and the results is almost the same as those of the ungrowing case shown above . furthermore , we investigate the possible interplay between the growing mechanism and the resource - allocation dynamics . 
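A rough sketch of that growth/dynamics interplay is given below, assuming Python with numpy: new nodes attach with probability proportional to the current strengths (the strength-PA scheme detailed in the following passage) while the resource keeps flowing between growth steps. The seed graph, the number of edges per new node, how much resource a newcomer carries, and the number of flow sweeps per growth step are all my own choices.

```python
# Sketch (assumption: Python + numpy) of strength-driven growth coupled to the
# preferential resource flow.  Several modelling choices here (seed graph,
# m, newcomer resource, sweeps per step) are mine, not the paper's.
import numpy as np

def grow_with_flow(n_final, m=3, alpha=1.0, flow_sweeps=5, seed=0):
    rng = np.random.default_rng(seed)
    adj = [set(range(m + 1)) - {i} for i in range(m + 1)]     # small complete seed graph
    s = rng.random(m + 1); s /= s.sum()
    for new in range(m + 1, n_final):
        p = s / s.sum()
        targets = rng.choice(new, size=m, replace=False, p=p)  # strength-PA attachment
        adj.append(set(int(t) for t in targets))
        for t in targets:
            adj[int(t)].add(new)
        s = np.append(s, s.mean())            # assumption: the newcomer brings some resource
        for _ in range(flow_sweeps):          # let the resource flow a few sweeps
            k = np.array([len(a) for a in adj], dtype=float)
            new_s = np.zeros_like(s)
            for i, nbrs in enumerate(adj):
                nbrs = list(nbrs)
                w = k[nbrs] ** alpha
                new_s[nbrs] += s[i] * w / w.sum()
            s = new_s
    return adj, s

if __name__ == "__main__":
    adj, s = grow_with_flow(1000)
    k = np.array([len(a) for a in adj])
    print("max degree:", k.max(),
          " strength share of the largest hub:", round(s[k.argmax()] / s.sum(), 3))
```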
in this case , the initial network is a few fully connected connected nodes , and the resource is distributed to each node randomly .then , the present resource - allocation process works following eq .( 2 ) , and simultaneously , the network itself grows following a strength - pa mechanism instead of the degree - pa mechanism proposed by ba model . that is to say , at each time step , one node is added into the network with edges attaching to the existing nodes with probability proportional to their strengths ( in a growing ba network , the corresponding probability is proportional to their degrees ) .clearly , under this scenarios , there exists strong interplay between network topology and dynamic . when the network becomes sufficient large ( ) , as shown in fig .6 , the evolution approaches a stable process with both the degree distribution and strength distribution approximately following the power - law forms .furthermore , we report the relationship between strength and degree in fig . 7 , which indicate that the power - law scaling , with , also holds even for the growing networks with strong interplay with the resource - allocation dynamics .note that , the eq .( 8) is valid under the assumption that the underlying network is uncorrelated . however , many real - life networks exhibit degree - degree correlation in some extent . in this section , we will investigate the case of correlated networks .the model used in this section is a generalized ba model : starting from fully connected nodes , then , at each time step , a new node is added to the network and ( ) previously existing nodes are chosen to be connected to it with probability where and denote the choosing probability and degree of node , respectively . by varying the free parameter , one can obtain the scale - free networks with different assortative coefficients ( see the ref . for the definition of assortative coefficient ) .the simulation results are shown in fig .8 , from which one can find that the power - law correlations between strength and degree in the correlated networks are quantitatively the same as that of the uncorrelated networks , however , the power - law exponents , , are slightly different . actually , in the positive correlated networks , the large - degree nodes prefer to connect with some other large - degree nodes rather than those small - degree nodes , thus there may exist a cluster consisting of large - degree nodes that can hold the majority of resource .that cluster makes the large - degree nodes having even more resource than in the uncorrelated case , thus leading to a larger . in the inset of fig .8 , one can find that is larger in the positive correlated networks , and smaller in the negative correlated networks .however , the analytical solution have not yet achieved when taking into account the degree - degree correlation , which needs a in - depth analysis in the future .li2004 w. li , and x. cai , phys .e * 69 * , 046106 ( 2004 ) .a. barrat , m. barthlemy , r. pastor - satorras , and a. vespignani , proc .101 * , 3747 ( 2004 ) .wang , b. -h .wang , b. hu , g. yan , and q. ou , phys . rev . lett . * 94 * , 188702 ( 2005 ) . h. -k .liu , and t. zhou , acta physica sinica ( to be published ) .wang , b. hu , t. zhou , b. -h .wang , and y. -b .xie , phys .e * 72 * , 046140 ( 2005 ) .m. e. j. newman , and m. girvan , phys .e * 69 * , 026113 ( 2004 ) .t. zhou , z. -q .fu , and b. -h .wang , prog .* 16 * , 452 ( 2006 ) .g. yan , t. zhou , j. wang , z. -q .fu , and b. 
-h .wang , chin .* 22 * , 510 ( 2005 ) .a. e. motter , c. zhou , and j. kurths , phys .e * 71 * , 016116 ( 2005 ) .m. chavez , d. -u .hwang , a. amann , h. g. e. hentschel , and s. boccaletti , phys .lett . * 94 * , 218701 ( 2005 ) .m. zhao , t. zhou , and b. -h .wang , eur .j. b ( to be published ) .a. barrat , m. barthlemy , and a. vespignani , phys .. lett . * 92 * , 228701 ( 2004 ) .a. barrat , m. barthlemy , and a. vespignani , phys .e * 70 * , 066149 ( 2004 ) .p. holme , adv .complex syst .* 6 * , 163 ( 2003 ) b. tadi , s. thurner and g. j. rodgers , phys .e * 69 * , 036102 ( 2004 ) . c. -y .yin , b. -h .wang , w. -x .wang , t. zhou , and h. -j .yang , phys .lett . a * 351 * , 220 ( 2006 ) .wang , b. -h .wang , c. -y .yin , y. -b .xie , and t. zhou , phys .e * 73 * , 026111 ( 2006 ) .xie , b. -h .wang , b. hu , and t. zhou , phys .e * 71 * , 046135 ( 2005 ) .r. guimera , s. mossa , a. turtschi , and l. a. n. amaral , proc .* 102 * , 7794 ( 2005 ) .r. albert , i. albert , and g. l. nakarado , phys .e * 69 * , 025103 ( 2004 ) .r. a. horn , and c. r. johnson , _ matrix analysis _ ( cambridge university press , cambridge , 1985 ) .m. e. j. newman , phys .lett . * 89 * , 208701 ( 2002 ) .r. albert , and a. -l .barabsi , rev .* 74 * , 47 ( 2002 ) .barabsi , and r. albert , science * 286 * , 509 ( 1999 ) .k. bryan , and t. leise , siam rev . * 48 * , 569 ( 2006 ) .s. n. dorogovtsev , j. f. f. mendes , and a. n. samukhin , phys .lett . * 85 * , 4633 ( 2000 ) .p. l. krapivsky , and s. redner , phys .e * 63 * , 066123 ( 2001 ) . | many weighted scale - free networks are known to have a power - law correlation between strength and degree of nodes , which , however , has not been well explained . we investigate the dynamic behavior of resource / traffic flow on scale - free networks . the dynamical system will evolve into a kinetic equilibrium state , where the strength , defined by the amount of resource or traffic load , is correlated with the degree in a power - law form with tunable exponent . the analytical results agree well with simulations . |
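The degree-degree correlation entering the appendix-B discussion above is usually quantified by Newman's assortativity coefficient, i.e. the Pearson correlation of the degrees found at the two ends of an edge. The short sketch below (Python with numpy and networkx, my choice of tools) computes it by hand and checks it against the library routine; a BA network comes out near zero (slightly negative at finite size), close to the uncorrelated case assumed in the analytical estimate.

```python
# Sketch (assumption: Python + numpy + networkx) of the degree assortativity
# coefficient: Pearson correlation of degrees at the two ends of an edge,
# with each edge counted in both orientations.
import numpy as np
import networkx as nx

def assortativity(G):
    x, y = [], []
    for u, v in G.edges():
        x += [G.degree(u), G.degree(v)]
        y += [G.degree(v), G.degree(u)]
    return np.corrcoef(x, y)[0, 1]

if __name__ == "__main__":
    G = nx.barabasi_albert_graph(3000, 3, seed=3)
    print("hand-rolled r:", round(assortativity(G), 3))
    print("networkx    r:", round(nx.degree_assortativity_coefficient(G), 3))
```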
the era of large spectroscopic surveys of massive stars has already begun , providing us with a huge amount of high - quality spectra of galactic and extragalactic o - type stars .the _ iacob spectroscopic survey of northern galactic ob - stars _ is one of them .this long - term observational project is aimed at building a multi - epoch , homogeneous spectroscopic database of high - resolution , high signal - to - noise ratio spectra of galactic bright ob - stars and [ 2 ] . ] . associated with this spectroscopic datasetare several working packages aimed at its scientific exploitation . within the framework of the working package _wp3 : quantitative spectroscopic analysis _, we have developed a powerful tool for the automatic analysis of optical spectra of o - type stars .the tool provides a fast and objective way to determine the stellar parameters and the associated uncertainties of large samples of o - stars within a reasonable computational time .initially developed to be used for the analysis of spectra of o - type stars from the _ iacob spectroscopic database _ , the tool is now also being applied in the context of the _ vlt - flames tarantula survey _( vfts ) project [ 3 ] , and other studies of stars of this type .apart from the already mentioned characteristics ( automatic , objective , and fast ) , we also aimed at a tool that is portable , versatile , adaptable , extensible , and easy to use .as shown throughout the following text , this philosophy has guided the whole development of the tool .+ one of the key - stones of any automatic quantitative spectroscopic analysis is the computation of large samples of synthetic spectra ( using a stellar atmosphere code ) , to be compared with the observed spectrum .in contrast to other possible alternatives ( the _ genetic algorithm _ ga employed by [ 4 ] , or the _ principal component analysis _pca approach followed by [ 5 ] ) , we decided to base our automatic tool on an extensive , precomputed grid of stellar atmosphere models and a line - profile fitting technique ( i.e. , a grid - based approach for the analysis of a sample of 12 low resolution spectra of b supergiants in ngc55 . ] ) this option was possible thanks to the fast performance of the fastwind code is the fastest . ] , and the availability of a cluster of computers at the instituto de astrofsica de canarias ( iac ) connected through the condor workload management system . + the grid is complemented by a variety of programs implemented in idl , to handle the observations , perform the automatic analysis , and visualize the results .the idl - package has been built in a modular way , allowing the user , e.g. , to easily modify how the mean values and uncertainties of the considered parameters are computed , or which evolutionary tracks will be used to estimate the evolutionary masses . in the following we outlinethe main characteristics of the fastwind grid used for the analysis of galactic o - type stars that is presently incorporated within the iacob grid - based tool .+ * * effective temperature ( ) and gravity ( log_g _ ) : * table [ grid_table ] indicates the ranges of and log_g _ considered in the grid .basically , the grid points were selected to cover the region of the log _ diagram where the o - type stars are located . ** helium abundance ( ) and microturbulence ( ) * : the grid includes six values of helium abundance =n/(n+n ) , indicated in table [ grid_table ] . 
for all models , a microturbulence = 15kms was adopted in the computation of the atmospheric structure , and four values of the microturbulence were considered in the formal solution . * * radius ( ) : * computing a fastwind model requires an input value for the radius .this radius has to be close to the actual one ( which will be derived in the final step of the analysis , and hence is not know from the beginning on ) .we assumed a radius for each ( , log_g _ ) pair following the calibration by [ 8 ] .the grid is hence divided in 20 regions in which a different radius is considered . for example , for the case of log_g_=4.0 and 3.5 dex , the radii range from 7 to 12 and from 19 to 22 , respectively . ** wind parameters ( , , ) : * as conventional for grid computations of stellar atmosphere models for the optical analysis of o - stars , the mass loss rate ( ) and terminal velocity ( ) of the wind have been compressed , together with the radius , into the wind strength parameter ( or optical depth invariant ) , ( see [ 7 ] ) .ten log-planes were considered for the grid . for each fastwind model , and to be specified .first , a terminal velocity =2.65 was adopted , following [ 9 ] ; then a mass loss rate was computed for the given log , , and .finally , the exponent of the velocity law , , was assumed as a free parameter ranging from 0.8 to 1.8 . * * metallicity * : a solar metallicity ( following [ 10 ] ) was assumed for the whole grid=0.5 , 0.4 , and 0.2 ) . ] .+ _ synthetic lines : _ the following ( optical ) lines where synthesized in the formal solution : h , hei , 4120 , 4143 , 4387 , 4471 , 4713 , 4922 , 5015 , 5048 , 5875 , 6678 , and heii , 4200 , 4541 , 4686 , 5411 , 6406 , 6527 , 6683 . location of ( , log_g_)-pairs considered in the fastwind grid incorporated into the present version of the iacob grid - based tool .evolutionary tracks from [ 11 ] . ]r l log_g _ : & [ 2.64.3]dex ( with step size 0.1 dex ) + : & ( step size 500k ) , upper limit defined by the 120 m track ( see fig . [ grid_figure ] ) + : & model : 15 kms , formal solution : 5 , 10 , 15 , 20 kms + : & 0.06 , 0.09 , 0.13 , 0.17 , 0.20 , 0.23 + log : & -15.0 , -14.0 , -13.5 , -13.0 , -12.7 , -12.5 , -12.3 , -12.1 , -11.9 , -11.7 + : & 0.8 , 1.0 , 1.2 , 1.5 , 1.8 + in order to optimize the size and the read - out time of the grid , only part of the output from the fastwind models is kept and stored in idl xdr - files .this includes the input parameters , the h / he line profiles and equivalent widths , the information about the stellar atmosphere structure and the emergent flux distribution , and the synthetic photometry resulting from the computed emergent flux distribution . using these xdr - files , the size of the grid can be reduced to a 10% of the original one .idl can restore each xdr - file and compute the corresponding quantity ( see sect .[ idl ] ) in 0.020.1 s per model , i.e. , the tool can pass through 80,000 models in 30 min1 hour .following the main philosophy of the iacob grid - tool , only the reduced grid is used by the tool , whilst the original grid is safely kept on hard - disk at the iac .this allows for an easy transfer of the grid to an external disk , hence satisfying our constraint for portability . +the fastwind grid presently incorporated into the iacob grid - based tool consists of ,000 models ( ,000 models per he - plane ) .the reduced size is gb . 
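Enumerating such a grid, and converting a wind-strength value back into a mass-loss rate, is straightforward to script. The sketch below (Python, my choice; the original tool is written in IDL) takes the parameter values from the table above, uses placeholder Teff limits because the real limits follow the 120 solar-mass evolutionary track, and assumes the conventional definition Q = Mdot / (v_inf R)^1.5 with Mdot in Msun/yr, v_inf in km/s and R in Rsun, which is not spelled out explicitly in the text.

```python
# Sketch (assumption: Python) of the grid parameter space from the table above.
# The Teff limits are placeholders, and the Q -> Mdot conversion assumes the
# conventional wind-strength definition rather than quoting the text.
from itertools import product

logg  = [round(2.6 + 0.1 * i, 1) for i in range(18)]     # 2.6 ... 4.3 dex, step 0.1
teff  = range(22000, 55001, 500)                         # placeholder limits, step 500 K
y_he  = [0.06, 0.09, 0.13, 0.17, 0.20, 0.23]
micro = [5, 10, 15, 20]                                  # km/s, formal solution only
log_q = [-15.0, -14.0, -13.5, -13.0, -12.7, -12.5,
         -12.3, -12.1, -11.9, -11.7]
beta  = [0.8, 1.0, 1.2, 1.5, 1.8]

def mdot_from_q(logq, v_inf, radius):
    """Mass-loss rate in Msun/yr, assuming Q = Mdot / (v_inf * R)**1.5."""
    return 10.0 ** logq * (v_inf * radius) ** 1.5

combos = product(teff, logg, y_he, micro, log_q, beta)   # lazy walk over the grid
print("first grid point:", next(combos))
print("points per (Y_He, microturbulence) plane:",
      len(teff) * len(logg) * len(log_q) * len(beta),
      "(an overcount: not every Teff / log g pair is kept, cf. the figure)")
print("example: log Q = -12.5, v_inf = 2500 km/s, R = 10 Rsun  ->  Mdot =",
      f"{mdot_from_q(-12.5, 2500.0, 10.0):.2e}", "Msun/yr")
```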
this predefined grid can be easily updated and/or extended if necessary using appropriate scripts implemented in idl , condor , and linux .for example , a new he - plane can be computed and prepared for use in days .the guideline of the automatic analysis is based on standard techniques applied within the quantitative spectroscopic analysis of o - stars using optical h / he lines that have been described elsewhere ( e.g. [ 12 ] , [ 13 ] ) .the whole spectroscopic analysis is performed by means of a variety of idl programs following the steps indicated below . in brief , once the observed spectrum is processed , the tool obtains the quantity ( i.e. an estimation of the goodness of fit ) for each model within a subgrid of models selected from the global grid , and determines the stellar parameters and their associated uncertainties by interpreting the obtained distributions .+ + in this first step , the user has to provide the observed spectrum , to indicate the corresponding resolving power ( ) , and to give some pre - determined information about the star the projected rotational velocity ( ) , the size of the extra line - broadening ( ) , and the absolute visual magnitude ( ) .+ concerning the grid , the user must select the appropriate metallicity , and indicate the range of values for the various free parameters ( defining the subgrid of models to be considered within the analysis ) .this latter option allows the tool to be faster ( using optimized ranges for the various free parameters ) , or the user to perform a preliminary quick analysis by fixing some of the parameters .for example , one can obtain a quick estimate on , log_g _ , and log in less than 5 min , by fixing the other three parameters ( , and/or ) .+ finally , the h / he lines which shall be considered in the analysis and the corresponding weights need to be specified .+ _ : processing of the observed spectrum _ + in many cases , the observed spectra need to be processed before launching the automatic analysis , because of , e.g. , nebular contamination affecting the cores of the h and hei lines , the need to improve the normalization of the continuum adjacent to the line , and the presence of cosmic rays .+ the possibility to correct the observed spectrum for these effects has been incorporated into corresponding idl procedures .the implemented options include ( for each of the considered lines ) : local renormalization , selection of the wavelength range of the line , clipping / restoring of certain parts of the line , and computation of the signal - to - noise ratio ( s / n ) in the adjacent continuum .+ once finalized , the processed spectrum is stored .this spectrum and all associated information is used in _step 3 _ , and also each time the analysis of the same star / spectrum is ( re-)launched . this way the user can easily reconsider his / her decisions after a first analysis .+ in this step , the user can select a model from the grid to help him / her with the processing of the observed spectrum .one can , for example , make a quick preliminary analysis ( see above ) fixing some parameters and including only few lines , and then use the resulting best model to check the processed lines and find constraints for the processing of the others . + + this step , together with _step 4 _ , are the most important ones , constituting the core of the program . 
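Two of the step-2 operations listed above, clipping a nebular-contaminated line core and locally re-normalising the adjacent continuum, are easy to illustrate. The sketch below is in Python with numpy (the actual package is written in IDL); the wavelength windows, the synthetic H-gamma profile and the linear continuum model are illustrative choices of mine.

```python
# Sketch (assumption: Python + numpy) of two pre-processing operations:
# local re-normalisation around a line and clipping of a contaminated core.
import numpy as np

def renormalize_local(wave, flux, cont_windows):
    """Divide by a straight line fitted to the continuum windows around a line."""
    mask = np.zeros_like(wave, dtype=bool)
    for lo, hi in cont_windows:
        mask |= (wave >= lo) & (wave <= hi)
    coeff = np.polyfit(wave[mask], flux[mask], 1)
    return flux / np.polyval(coeff, wave)

def clip_core(wave, center, half_width):
    """Boolean mask of pixels kept in the fit (True outside the clipped core)."""
    return np.abs(wave - center) > half_width

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    wave = np.linspace(4330.0, 4370.0, 400)                          # around H-gamma
    flux = 1.0 - 0.5 * np.exp(-0.5 * ((wave - 4340.47) / 2.5) ** 2)  # stellar absorption
    flux += 0.8 * np.exp(-0.5 * ((wave - 4340.47) / 0.3) ** 2)       # fake nebular spike
    flux = flux * (1.02 - 1e-4 * (wave - 4350)) + rng.normal(0, 0.01, wave.size)
    flux = renormalize_local(wave, flux, [(4330, 4333), (4367, 4370)])
    keep = clip_core(wave, 4340.47, 1.0)
    snr = 1.0 / np.std(flux[(wave < 4333) | (wave > 4367)])
    print(f"S/N in the adjacent continuum ~ {snr:.0f}; {np.sum(~keep)} pixels clipped")
```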
in the present version of the iacob grid - based tool we have considered a procedure as described below ; however , we are aware that this procedure can be subject to discussion / improvements . having this in mind , and following the philosophy of the tool ( regarding versatility and adaptability ) ,the corresponding idl modules have been implemented with the possibility to be easily modified .the fast performance of the iacob grid - based tool makes it very powerful to investigate various alternative strategies .+ in our present version , the tool computes , for every model in the subgrid , the quantity for each considered line where and are the normalized fluxes corresponding to the synthetic and observed spectrum , respectively ; =(s / n) accounts for the s / n of the line ; and is the number of frequency points in the line .under ideal conditions ( e.g. , for a perfect model , but see below ) , corresponds to a reduced .+ in a second step , the values for each model are corrected for possible deficiencies in the synthetic lines ( due , for example , to deficiencies in the model , an incorrect characterization of the noise of the line , bad placement of the continuum , or bad characterization of the line - broadening ) . to this aimwe compute , for each line , the standard deviation of the residuals =( ) from that model that results in the minimum for the given line .then , the following correction is applied : using these individual values and the weights assumed for each of the considered lines ( , e.g. , [ 4 ] ) , a global is obtained : where is the number of lines . for a large number of frequency points per line , should be normally distributed .+ thus , _ step 3 _ provides the values of the quantity associated with each of the models included in the considered subgrid . as indicated in section [ grid ] , this step can last from a few seconds to less than about 1 hour , depending on the number of models in the subgrid .an example of -distributions ( actually , =e ) with respect to the various stellar parameters is presented in figure [ fig2 ] .+ = 46,000 ) . and were previously determined using a combined fourier transform + goodness of fit technique ( see e.g. [ 14 ] ) . in this example , only 5 h / he lines were used for the actual analysis . notethat also all other synthetic lines fit the observations perfectly at the derived parameters .the analysis was performed in min .see [ 2 ] for details on the distance , apparent visual magnitude and visual extinction used to determine the absolute visual magnitude ( ) .[ fig2 ] ] _ : computation of mean values and uncertainties _+ the previous step provides the for each of the six parameters derived through the spectroscopic analysis ( , log_g _ , , , log , and ) .if the absolute visual magnitude is provided , the tool automatically determines , log , and ( spectroscopic mass ) for each model , and hence the corresponding for these parameters are available as well .if also the terminal velocity is provided , a similar computation is performed for the mass - loss rate ( using , , and ) .finally , one of the idl modules computes the evolutionary masses ( ) of the models from an interpolation in the ( log , log_g_)-plane using the tracks provided by a stellar evolution code . since the for and log_g _ are computed and stored in _step 3 _ , the computation of the corresponding distributions for , log , , can be easily repeated in a few seconds , in case a different and/or evolutionary tracks need to be considered . 
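The bookkeeping of step 3 can be sketched as follows, again in Python rather than IDL. Since the exact expressions are not reproduced above, the reduced-chi-square per line with the noise set by the S/N, the rescaling by the best-fitting model's residual scatter, and the weighted combination over lines are all my reading of the description, not the tool's actual formulae.

```python
# Sketch (assumption: Python + numpy) of the step-3 goodness-of-fit measure:
# a reduced-chi-square per line, a correction based on the best model's
# residuals, and a weighted combination over lines.
import numpy as np

def chi2_line(f_syn, f_obs, snr):
    """Reduced chi-square of one line, taking the noise as 1 / (S/N)."""
    return np.mean((f_syn - f_obs) ** 2) * snr ** 2

def fitness(models, obs, snr, weights):
    """One combined chi-square-like value per model in the sub-grid.

    models : dict  line -> array of shape (n_models, n_pixels)
    obs    : dict  line -> observed, processed line profile
    """
    per_line = {}
    for line, syn in models.items():
        chi = np.array([chi2_line(s, obs[line], snr[line]) for s in syn])
        best = syn[np.argmin(chi)]
        scale = np.var(best - obs[line]) * snr[line] ** 2   # correction for model deficiencies
        per_line[line] = chi / max(scale, 1e-12)
    w = np.array([weights[l] for l in per_line])
    stacked = np.vstack([per_line[l] for l in per_line])
    return (w[:, None] * stacked).sum(axis=0) / w.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    lines = ["Hgamma", "HeI4471", "HeII4541"]
    obs = {l: 1 - 0.4 * np.exp(-0.5 * np.linspace(-3, 3, 80) ** 2) for l in lines}
    models = {l: obs[l] + rng.normal(0, 0.02, (500, 80)) for l in lines}   # fake sub-grid
    chi = fitness(models, obs, {l: 100.0 for l in lines}, {l: 1.0 for l in lines})
    print("best model index:", int(np.argmin(chi)), " minimum value:", round(chi.min(), 3))
```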
+ these distributions are then used to compute mean values and uncertainties for each parameter ( taking into account that models above a given threshold in can be discarded ) .+ the tool also allows to easily investigate possible degeneracies ( for example , for stars with weak winds the analysis of the optical h / he lines only provides an upper limit for the parameter , and does not allow to constrain ) , and the contribution of the various parameters to the final uncertainty ( by fixing one of the free parameters to a certain value and recomputing the statistics for the other parameters ) .+ + the last step is the creation of a summary plot for better visualization of the results ( see figure [ fig2 ] as an example ) .the present version of the tool includes : * for the various parameters involved in the analysis ( upper - left panels ) . in those panels ,vertical lines indicate the limiting values adopted for the six free parameters ( i.e. , the sub - grid of fastwind models for which a is computed ) , and horizontal lines indicate the value corresponding to (threshold ) .the user can select three of the models that will be indicated as red , blue , and pink dots ( the same colors are also used in other parts of the figure ) . * a summary of the input parameters ( , , and ) and the result from the statistics for each parameter resulting from the analysis ( upper - right part of the figure ) .* a log _ diagram including evolutionary tracks ( e.g. , from [ 11 ] , without rotation , in the figure ) and the position of all models with (threshold ) . *a set of panels ( lower part of the figure ) were the observed and synthetic h / he lines indicated by the user are compared .the spectral regions that are used to compute the for each line are indicated in black , while the clipped ( or not used ) regions are presented in green .note that the plotted lines are not limited to those used in the analysis process ( the latter are marked with an x ) . the fast performance and versatility of the iacob grid - based tool not only allows to analyze large samples of o - type stars in a reasonable time , but also to easily investigate the effects of * _ the assumed line - broadening _ : the common strategy in previous spectroscopic analyses of o - stars was to assume pure rotational line - broadening ( in addition to natural , thermal , stark / collisional and microturbulent broadening included in the synthetic spectra ) .however , recent studies have shown that an important extra - broadening contribution ( commonly quoted as _ macroturbulent broadening _ ) affects the shape of the line - profiles of this type of stars ( see e.g. [ 15 ] , and references therein ) .how much are the derived parameters affected when this extra - broadening is neglected ? can both broadening contributions be represented by one fake rotational profile ( ) , without disturbing the resulting parameters ? what is the effect of the uncertainty in the derived / assumed broadening ? some results from a preliminary investigation of these questions can be found in [ 16 ] . * _ the placement of the continuum _ : the normalization of the spectra of o - type stars is sometimes complicated , especially in the case of echelle spectra .it is commonly argued that the derived gravity can be severely affected by the assumed normalization , but , to which extent ? 
to investigate this effect one could use the iacob grid - based tool , applying small modifications to the continuum placement .an example of this type of analysis for the case of low resolution spectra of b - supergiants can be found in [ 6 ] . * _ neglecting / including a variety of different commonly used diagnostic lines _ : before computing the global ( see equation 3 ) , the resulting for each line are stored . this waythe user can easily recompute , discarding some of the initially considered lines .this option allows , for example , to investigate the change in the wind - strength parameter when both h and heii4686 lines or only one of them are included in the analysis . *_ clipping part of the lines _ : most of the o - stars are associated to hii regions . in some cases , the stellar spectra can be heavily contaminated by nebular emission lines ( mainly the cores of the hydrogen lines , but also the hei lines ) .this contamination must be identified and eliminated from the stellar spectrum to obtain meaningful results from the spectroscopic analysis .when the resolution of the spectra is high , the clipped region is small in comparison with the total line width ; however , even for a moderate resolution an important region of the line needs to be eliminated .what is the effect of such a strong clipping in the h line on the determination of the mass - loss rate ?how large is the effect of clipping the core of the hei lines in the determination ?* _ the way the statistics is derived from the _ : as commented in section [ idl ] , this is a critical point in the determination of final best values and uncertainties .these are only some examples of things can be investigated with the iacob grid - based tool .many other tests can be easily performed .in addition , we expect the tool to be of great benefit for the analysis of the o - star samples included in , e.g. , the eso - gaia , vfts , own , iacob and other similar large surveys. financial support by the spanish ministerio de ciencia e innovacin under projects aya2008 - 06166-c03 - 01 , aya2010 - 21697-c05 - 04 , and by the gobiernode canarias under project pid2010119 .this work has also been partially funded by the spanish micinn under the consolider - ingenio 2010 program grant csd2006 - 00070 : first science with the gtc ( http://www.iac.es/consolider-ingenio-gtc ) .16 simn - daz s , castro n , garcia m , herrero a and markova n 2011a , _ bsrs liege _ ,* 80 * , 514 simn - daz s , garcia m , herrero a , maz - apellniz j , and negueruela i 2011b , _ proc .`` star clusters and associations : a ria workshop on gaia '' _ preprint _ arxiv1109.2665s evans c j et al . 2011 , _a&a _ , * 530 * , a108 mokiem m r , de koter a , puls j , herrero a , najarro f and villamariz m r 2005 , _ a&a _ , * 441 * , 711 urbaneja m a , kudritzki r p , bresolin f , przybilla n , gieren w and pietrzyski g 2008 , _ apj _ , * 684 * , 118 castro , n. , herrero , a. , urbaneja , m. a , et al . (_ a&a _ , submitted ) puls j , urbaneja m a , venero r , repolust t , springmann u , jokuthy a and mokiem m r 2005 , _a&a _ , * 435 * , 669 martins f , schaerer d and hillier , d. j. 2005 , _ a&a _ , * 436 * , 1049 kudritzki r p and puls j 2000 , _ a&a _ , * 38 * , 613 asplund m , grevesse n , sauval a j and scott p 2009 , _ara&a _ , * 47 * , 481 schaller g , schaerer d , meynet g and maeder a 1992 , _ a&as _ , * 96 * , 269 herrero a , puls j. 
and najarro f 2002 , _ a&a _ , * 396 * , 949 repolust t , puls j and herrero a 2004 , _a&a _ , * 415 * , 349 simn - daz s , herrero a , uytterhoeven k , castro n , aerts c and puls j 2010 , _ apjl _ , * 720 * , l174 simn - daz s 2011 , _ brsr liege _ ,* 80 * , 86 sabn - sanjulian c , simn - daz s , garcia m , herrero a , puls j and castro n 2011 , _ proc ._ in honour of a. moffat | we present the iacob grid - based automatic tool for the quantitative spectroscopic analysis of o - stars . the tool consists of an extensive grid of fastwind models , and a variety of programs implemented in idl to handle the observations , perform the automatic analysis , and visualize the results . the tool provides a fast and objective way to determine the stellar parameters and the associated uncertainties of large samples of o - type stars within a reasonable computational time . |
statistical inference is of fundamental importance to science .inference enables the testing of theoretical models against observations , and provides a rational means of quantifying uncertainty in existing models .modern approaches to statistical inference , based on monte carlo sampling techniques , provide insight into many complex phenomena . inference can be described as follows : suppose we have a set of observations , ; a method of determining the likelihood of these observations , , under the assumption of some model characterized by parameter vector , ; and a prior probability density , .the posterior probability density , , can be computed using bayes theorem , explicit expressions for likelihood functions are rarely available ; motivating the development of likelihood - free methods , such as approximate bayesian computation ( abc ) .abc methods approximate the likelihood through evaluating the discrepancy between data generated by a simulation of the model of interest and the observations , yielding an approximate form of bayes theorem , here , is data generated by the model simulation process , , is a discrepancy metric , and is the acceptance threshold . due to this approximation ,monte carlo estimators based on equation ( [ eqn : approxbayes ] ) are biased .the most simple implementation of abc is abc rejection , see algorithm [ alg : abc - rej ] .[ line : sample1]sample prior , .generate data , .set .[ alg : abc - rej ] this method generates samples by accepting proposals , , when the data generated by the model simulation process is within of the observed data , .while abc rejection is simple to implement , it can be computationally prohibitive in practice . to improve the efficiency of abc, one can consider a likelihood - free modification of markov chain monte carlo ( mcmc ) in which a markov chain is constructed with a stationary distribution identical to the desired posterior .given the markov chain is in state , a state transition is proposed via a proposal kernel , . the classical metropolis - hastings state transition probability , , can be modified within an abc framework to yield if , or otherwise .the stationary distribution of such a markov chain is the desired approximate posterior , .algorithm [ alg : mcmc - abc ] provides a method for computing iterations of this markov chain .sample transition kernel , .generate data , .set . sample uniform distribution , .set .set .set .[ alg : mcmc - abc ] while mcmc - abc sampling can be highly efficient , the samples in the sequence , , are not independent .this can be problematic as it is possible for the markov chain to take long excursions into regions of low posterior probability ; thus incurring additional , and potentially significant , bias .a poor choice in the proposal kernel can also have considerable impact upon the efficiency of mcmc - abc .the question of how to choose the proposal kernel is non - trivial , and typically determined heuristically .sequential monte carlo ( smc ) sampling was introduced to address these potential inefficiencies and later extended within an abc context .a set of samples , referred to as particles , is evolved through a sequence of abc posteriors defined through a sequence of acceptance thresholds , . at each step , ] .that is , an estimate of } ] . 
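as a concrete illustration of the abc rejection scheme of algorithm [ alg : abc - rej ] described above , the following minimal sketch shows one possible implementation . the gaussian toy model , the uniform prior and the mean - based discrepancy are illustrative assumptions and are not taken from the text .

```python
import numpy as np

def abc_rejection(y_obs, simulate, sample_prior, discrepancy, epsilon, n_samples):
    """Basic ABC rejection sampler: keep a prior draw only if the simulated
    data fall within epsilon of the observations under the discrepancy metric."""
    accepted = []
    while len(accepted) < n_samples:
        theta = sample_prior()              # theta* ~ prior
        y_sim = simulate(theta)             # D_s ~ f(D | theta*)
        if discrepancy(y_sim, y_obs) < epsilon:
            accepted.append(theta)
    return np.array(accepted)

# toy example (illustrative only): infer the mean of a Gaussian with known variance
rng = np.random.default_rng(0)
y_obs = rng.normal(1.5, 1.0, size=50)
posterior_samples = abc_rejection(
    y_obs,
    simulate=lambda th: rng.normal(th, 1.0, size=50),
    sample_prior=lambda: rng.uniform(-5.0, 5.0),
    discrepancy=lambda a, b: abs(a.mean() - b.mean()),
    epsilon=0.1,
    n_samples=200,
)
```

the sketch also makes plain why abc rejection becomes prohibitive for small acceptance thresholds : every rejected proposal still costs one full model simulation .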
by taking a sequence of time steps ,the indices of which are referred to as _ levels _ , we can arrive at a telescoping sum , } = { \mathbb{e}\left[z_t^{\tau_1}\right ] } + \sum_{\ell = 2}^l { \mathbb{e}\left[z_t^{\tau_\ell } - z_t^{\tau_{\ell-1}}\right]}.\ ] ] while computing this form of the expectation returns the same bias as when computing } ] in the context of stochastic differential equations ( sdes ) .this efficiency comes from exploiting the fact that the bias correction terms , } ] .we also use to denote the monte carlo estimator of the expectation , , obtained using samples .the standard monte carlo integration approach is to generate samples from the abc posterior , , then evaluate the empirical cdf ( ecdf ) , for , where a discretization of the parameter space . for simplicity , we will consider to be a -dimensional regular lattice .the ecdf is not , however , the only monte carlo approximation to the cdf one may consider . in particular , giles et al . demonstrate the application of mlmc to a univariate cdf approximation .we now present a multivariate equivalent of the mlmc cdf of giles et al . in the context of abc posterior cdf estimation . given a strictly decreasing sequence of acceptance thresholds , , we can represent the cdf ( equation ( [ eqn : cdf ] ) ) using the telescoping sum } = \sum_{\ell=1}^{l } y_\ell({\mathbf{s}}),\ ] ] where } & \text{if } \ell = 1 , \\ { \mathbb{e}\left[{\mathds{1}_{a_{\mathbf{s}}}({\boldsymbol{\theta}}_{\epsilon_\ell } ) } - { \mathds{1}_{a_{\mathbf{s}}}({\boldsymbol{\theta}}_{\epsilon_{\ell -1}})}\right ] } & \text{if } \ell > 1 .\end{cases}\ ] ] using our notation , the mlmc estimator for equation ( [ eqn : mlmccdf ] ) and equation ( [ eqn : mlmccdfterms ] ) is given by where & \text{if } \ell > 1 , \end{cases}\ ] ] and is a lipschitz continuous approximation to the indicator function ; this approximation is computed using a tensor product of cubic splines .such a smoothing is necessary to avoid convergence issues with mlmc caused by the discontinuity of the indicator function . to compute the term ( equation ( [ eqn : bmcest ] ) ), we generate samples from ; this represents a biased estimate for . to compensate for this bias ,correction terms ( equation ( [ eqn : bmcest ] ) ) , , are evaluated for , each requiring the generation of samples from and samples from .the goal is to introduce a coupling between levels that controls the variance of the bias correction terms . with an effective coupling ,the result is an estimator with lower variance , hence the number of samples required to obtain an accurate estimate is reduced .denote as the variance of the estimator . for can be expressed as } \\= & \ , { \text{var}\left[g_{\mathbf{s}}({\boldsymbol{\theta}}_{\epsilon_\ell})\right ] } + { \text{var}\left[g_{\mathbf{s}}({\boldsymbol{\theta}}_{\epsilon_{\ell-1}})\right ] } - { \text{cov}\left[g_{\mathbf{s}}({\boldsymbol{\theta}}_{\epsilon_\ell}),g_{\mathbf{s}}({\boldsymbol{\theta}}_{\epsilon_{\ell-1}})\right]}.\end{aligned}\ ] ] introducing a positive correlation between the random variables and will have the desired effect of reducing the variance of . in many applications of mlmc , a positive correlation is introduced through driving samplers at both the and level with the same _ randomness_. properties of brownian motion or poisson processes are typically used for the estimation of expectations involving sdes or markov processes . 
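a minimal sketch of the telescoping cdf estimator described above , under two simplifying assumptions : the parameter is univariate , and the lipschitz - continuous surrogate for the indicator is a linear ramp rather than the cubic - spline smoothing used in the text . function and variable names are illustrative .

```python
import numpy as np

def smoothed_indicator(theta, s, width):
    """Lipschitz-continuous surrogate for the indicator 1_{theta <= s}:
    1 well below s, 0 well above, linear ramp of the given width in between."""
    return np.clip((s - theta) / width + 0.5, 0.0, 1.0)

def mlmc_cdf(level_samples, s_grid, width):
    """Telescoping-sum estimate of the posterior cdf on the points of s_grid.
    level_samples is a list of (fine, coarse) arrays of posterior samples, one
    pair per level; coarse is None for the base level (the Y_1 term)."""
    s_grid = np.asarray(s_grid, dtype=float)
    F = np.zeros_like(s_grid)
    for fine, coarse in level_samples:
        g_fine = smoothed_indicator(fine[:, None], s_grid[None, :], width).mean(axis=0)
        if coarse is None:
            F += g_fine                                  # base term
        else:
            g_coarse = smoothed_indicator(coarse[:, None], s_grid[None, :], width).mean(axis=0)
            F += g_fine - g_coarse                       # bias-correction term
    # clipping plus a running maximum implements the monotonicity adjustment
    return np.maximum.accumulate(np.clip(F, 0.0, 1.0))

# usage with three levels (independent draws here, purely for illustration)
rng = np.random.default_rng(1)
samples = [(rng.normal(0, 1.0, 500), None),
           (rng.normal(0, 0.9, 500), rng.normal(0, 1.0, 500)),
           (rng.normal(0, 0.8, 500), rng.normal(0, 0.9, 500))]
F_hat = mlmc_cdf(samples, np.linspace(-3, 3, 61), width=0.2)
```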
in the context of abc methods , however , simulation of the quantity of interest is necessarily based on rejection sampling .the reliance on rejection sampling makes a strong coupling , in the true sense of mlmc , a difficult , if not impossible task .rather , here we introduce a weaker form of coupling through exploiting the fact that our mlmc estimator is performing the task of computing an estimate of the abc posterior cdf .we combine this with a property of nested abc rejection samplers to arrive at an efficient algorithm for computing .we proceed to establish a correlation between levels as follows .assume we have computed , for some , the terms in equation ( [ eqn : mlmcest ] ) .that is , we have an estimator to the cdf at level by taking the sum with marginal distributions for .we can use this to determine a coupling based on matching marginal probabilities when computing . after generating samples from , we compute the ecdf using equation ( [ eqn : ecdf ] ) and obtain the marginal distributions for .we can thus generate coupled pairs by choosing the with the same marginal probabilities as the empirical probability of .that is , the component of is given by , where is the component of and is the inverse of marginal distribution of .this introduces a positive correlation between the sample pairs , , since an increase in any of the components of will cause an increase in the same component .this correlation reduces the variance in the bias correction estimator computed according to equation ( [ eqn : bmcest ] ) .we can then update the mlmc cdf using and apply an adjustment that ensures monotonicity .we continue this process iteratively to obtain .initialize , and prior .sample restricted to .generate data , .set . set .set . set /n_\ell ] , .this follows from the fact that if , since .therefore , we can truncate the prior to the support of when computing , thus increasing the acceptance rate of level samples . in practice , we approximate the support by the smallest bounding box that contains all generated samples .we now require the sample numbers that are the optimal trade - off between accuracy and efficiency .denote as the number of data generation steps required during the computation of and let be the average number of data generation steps per accepted abc posterior sample using acceptance threshold .given } ] , for , one can construct the optimal using a lagrange multiplier method under the constraint }~=~\mathcal{o}(h^2) ] , where is the cluster size of the genotype in the dataset .we perform likelihood - free inference on the tuberculosis model for the parameters with the goal of evaluating the efficiency of mlmc - abc , mcmc - abc and smc - abc .we use a target posterior distribution of with as defined in equation ( [ eqn : tb_disc ] ) and .the improper prior is , and . for the mcmc - abc and smc - abc algorithmswe apply a typical gaussian proposal kernel , , with covariance matrix such a proposal kernel is reasonable to characterize the initial explorations of an abc posterior as no correlations between parameters are assumed . , ( b ) and ( c ) the tuberculosis transmission stochastic model .estimate computed using using mlmc - abc with ( yellow lines ) , mcmc - abc over iterations ( blue lines ) , smc - abc with particles ( red lines ) and high precision solution ( dashed lines ) . ].comparison of mlmc - abc against mcmc - abc and smc - abc using a naive proposal kernel . 
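the coupling step described earlier in this section — matching marginal probabilities between consecutive levels — can be sketched as follows . here the coarser - level marginals are represented by a finite sample set and inverted empirically ; in the text the inverse marginal distributions come from the running cdf estimate , so this is an approximation made only for illustration .

```python
import numpy as np

def coupled_coarse_samples(fine_samples, coarse_marginal_samples):
    """For each component of a fine-level sample, return the coarse-level value
    with the same empirical marginal probability (component-wise quantile matching)."""
    fine = np.asarray(fine_samples, dtype=float)          # shape (N, d)
    coarse = np.asarray(coarse_marginal_samples, dtype=float)  # shape (M, d)
    N, d = fine.shape
    M = coarse.shape[0]
    coupled = np.empty_like(fine)
    for j in range(d):
        fine_sorted = np.sort(fine[:, j])
        coarse_sorted = np.sort(coarse[:, j])
        # empirical marginal probability of each fine-level sample ...
        u = np.searchsorted(fine_sorted, fine[:, j], side="right") / N
        # ... pushed through the inverse empirical marginal cdf of the coarse level
        idx = np.clip(np.ceil(u * M).astype(int) - 1, 0, M - 1)
        coupled[:, j] = coarse_sorted[idx]
    return coupled
```

because each component of the coarse partner increases whenever the corresponding component of the fine sample does , the pairs are positively correlated , which is what reduces the variance of the bias - correction terms .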
[ cols="^,^,^,^,^,^,^,^,^ " , ]our results indicate that while smc - abc and mcmc - abc can be heuristically optimized to be highly efficient , an accurate estimation of the parameter posterior can be obtained using our mlmc - abc in a reasonably automatic fashion .furthermore , the efficiency of mlmc - abc is comparable or improved over mcmc - abc and smc - abc , even in the case of good proposal densities that have been heuristically determined .mlmc - abc shares some concepts with smc - abc .for example , both methods consider a sequence of abc posteriors .furthermore , both use information gained at each step to improve the acceptance rates of the next step while maintaining independence in the set of samples .mlmc - abc and smc - abc are still , however , distinct methods .for example , in smc - abc the prior distribution is only sampled at the initial step , whereas in mlmc - abc the prior must be sampled at all levels to ensure the telescoping sum does not introduce bias .another distinct difference is that mlmc does not evolve particles through the abc posterior sequence , rather each abc posterior is sampled independently to compute the bias correction terms .the need to estimate the variances of each bias correction term could be considered a limitation of the mlmc - abc approach .however , we find in practice that these need not be computed to high accuracy , since the relative variance between the levels is the main requirement and this can often be estimated with a relatively small number of samples . there could be examples of bayesian inference problems where mlmc - abc is inefficient on account of this fact , in which case the variance estimation would have to be considered as a more heuristic process .we have so far , however , failed to find an example for which samples of each bias correction term is insufficient to obtain a good mlmc - abc estimator .there are many modifications one could consider to further improve mlmc - abc .currently , we depend on the sequence of acceptance thresholds to be specified in advance and we acknowledge that , in practice , many smc - abc implementations determine these adaptively ; modification of mlmc - abc to allow for adaptive acceptance thresholds would make mlmc - abc even more practical .other improvements could focus on the discretization used for the ecdf calculations , for example , removing the requirement of a regular lattice would enable mlmc - abc to scale to much higher dimensional parameter spaces .coupling strategies are also possible improvement areas , our current coupling depends only on the computation of the full posterior cdf and assumes nothing about the underlying model ; can model specific features improve the coupling and obtain further variance reduction ?the rejection sampling method certainly makes this difficult , but there may still be opportunities for improvement .we have shown , in a practical way , how mlmc techniques can be applied to abc inference , and demonstrated that superior performance over modern advanced abc methods can be achieved .furthermore , the performance of our mlmc - abc method is not dependent on user specified functions such as proposal kernels .this work is supported by the australian research council ( ft130100148 ) .computational resources were provided by the high performance computing and research support group , qut .in this appendix , we demonstrate how the sequence of sample numbers is obtained , as stated in the main text .first , we assume the variances } ] , for , are 
known quantities .we also assume that , on average , data generation steps are required to compute the estimator .the sequence is considered optimal if is minimised subject to the constraint for some constant and target monte carlo error . to determine optimal , we consider the lagrangian and note that solutions to the optimisation problem exist at .that is , {\mathbf{e}}_\ell + \left[e(n_1,\ldots , n_l ) - kh^2\right]{\mathbf{e}}_{l+1 } & = { \mathbf{0}},\end{aligned}\ ] ] where are the standard orthonormal basis vectors of -dimensional euclidean space .we obtain the following system of equations , first we consider the forms of and . by definition of the mlmc estimator given in main text, we have } = { \text{var}\left[\sum_{\ell=1}^{l}\hat{y}^{n_\ell}_\ell({\mathbf{s}})\right]}. \end{aligned}\ ] ] furthermore , since are independent , } = & \sum_{\ell=1}^{l}{\text{var}\left[\hat{y}^{n_\ell}_\ell({\mathbf{s}})\right ] } \\ = & { \text{var}\left[\frac{1}{n_1}\sum_{i=1}^{n_1}g_{\mathbf{s}}({\boldsymbol{\theta}}_{\epsilon_1}^{i})\right ] } + \sum_{\ell=2}^{l } { \text{var}\left[\frac{1}{n_\ell}\sum_{i=1}^{n_\ell } g_{\mathbf{s}}({\boldsymbol{\theta}}_{\epsilon_\ell}^{i } ) -g_{\mathbf{s}}({\boldsymbol{\theta}}_{\epsilon_{\ell-1}}^{i})\right ] } \\ = & \sum_{\ell=1}^{l } \frac{1}{n_\ell^2 } \sum_{i=1}^{n_\ell}v_\ell.\end{aligned}\ ] ] that is , the total number of data generation steps is simply substitution of equation ( [ eqn : const3 ] ) and equation ( [ eqn : const4 ] ) into equation ( [ eqn : const1 ] ) yields substitution of equation ( [ eqn : const3 ] ) and equation ( [ eqn : const5 ] ) into equation ( [ eqn : const2 ] ) allows us to obtain , finally , through substitution of equation ( [ eqn : lambda ] ) back into equation ( [ eqn : const5 ] ) we find that the optimal is given by , as required .dodwell tj , ketelsen c , scheichl r , teckentrup al ( 2015 ) a hierarchical multilevel markov chain monte carlo algorithm with applications to uncertainty quantification in subsurface flow .3(1):10751108 .small pm , et al .( 1994 ) the epidemiology of tuberculosis in san francisco a population - based study using conventional and molecular methods .gillespie dt ( 1977 ) exact stochastic simulation of coupled chemical reactions . | likelihood - free methods , such as approximate bayesian computation , are powerful tools for practical inference problems with intractable likelihood functions . markov chain monte carlo and sequential monte carlo variants of approximate bayesian computation can be effective techniques for sampling posterior distributions without likelihoods . however , the efficiency of these methods depends crucially on the proposal kernel used to generate proposal posterior samples , and a poor choice can lead to extremely low efficiency . we propose a new method for likelihood - free bayesian inference based upon ideas from multilevel monte carlo . our method is accurate and does not require proposal kernels , thereby overcoming a key obstacle in the use of likelihood - free approaches in real - world situations . |
online analytical processing(olap ) comprises a set of tools and algorithms that allow efficiently querying multidimensional ( md ) databases containing large amounts of data , usually called data warehouses ( dw ) .conceptually , in the md model , data can be seen as a _ cube _, where each cell contains one or more _ measures _ of interest , that quantify _facts_. measure values can be aggregated along _ dimensions _ , which give context to facts .at the logical level , olap data are typically organized as a set of _ dimension and fact tables . _current database technology allows alphanumerical warehouse data to be integrated for example , with geographical or social network data , for decision making . in the era of so - called `` big data '' , the kinds of data that could be handled by data management tools , are likely to increase in the near future .moreover , olap and business intelligence ( bi ) tools allow to capture , integrate , manage , and query , different kinds of information .for example , alphanumerical data coming from a local dw , spatial data ( e.g. , temperature ) represented as rasterized images , and/or economical data published on the semantic web . ideally , a bi user would just like to deal with what she knows well , namely the data cube , using only the classical olap operators , like _ roll - up _, _ drill - down _ , _ slice _ , and _ dice _ ( among other ones ) , regardless the cube s underlying data type .data types should only be handled at the logical and physical levels , not at the conceptual level .building on this idea , ciferri et al . proposed a _conceptual _ , _ user - oriented _ model , independent of olap technologies . in this model ,the user only manipulates a data cube .associated with the model , there is a query language providing high - level operations over the cube .this language , called cube algebra , was sketched informally in the mentioned work .extensive examples on the use of cube algebra presented in , suggest that this idea can lead to a language much more intuitive and simple than mdx , the _ de facto _ standard for olap .nevertheless , these works do not give any evidence of the correctness of the languages and operations proposed , other than examples at various degrees of comprehensiveness .in fact , surprisingly , and in spite of the large corpus of work in the field , a formally - defined reference language for olap is still needed .there is not even a well - defined , accepted semantics , for many of the usual olap operations .we believe that , far for being just a problem of classical olap , this formalization is also needed in current `` big data '' scenarios , where there is a need to efficiently perform real - time olap operations , that , of course , must be well defined .[ [ contributions ] ] contributions + + + + + + + + + + + + + in this paper we ( a ) introduce a collection of operators that manipulate a data cube , and clearly define their semantics ; and ( b ) prove , formally , that our operators can be composed , yielding a language powerful enough to express complex queries and cube navigation ( `` _ _ la _ _ olap '' ) paths . 
we achieve the above representing the data cube as a fixed -dimensional matrix , and a set of measures , and expressing each olap operation as a sequence of atomic transformations .each transformation produces a new measure , and , additionally , when a sequence forms an olap operation , a flag that indicates which are the cells that must be considered as input for the next operation .this formalism allows us to elegantly define an algebra as a collection of operations , and give a series of properties that show their correctness .we provide the proofs in the full paper .we limit ourselves to the most usual operations , namely slice , dice , roll - up and drill - down , which constitute the core of all practical olap tools .we denote these the _ classical olap operations_. this allows us to focus on our main interest , which is , to prove the feasibility of the approach .other not - so - usual operations are left for future work .the main contribution of our work , with respect to other similar efforts in the field is that , for the first time , a formal proof to practical problems is given , so the present work will serve as a basis to build more solid tools for data analysis . existing work either lacks of formalism , or of applicability , and no work of any of these kinds give sound mathematical prove of its claims . in this extended abstractwe present the main properties , and leave the proofs for the full paper .the remainder of the paper is organized as follows . in section [ sec : data - model ] , we present our md data model , on which we base the rest of our work .section [ sec : olap - tando ] presents the atomic transformations that we use to build the olap operations . in section [ sec : classicalolap ] we discuss the classical olap operations in terms of the transformations , show how they can be composed to address complex queries .we conclude in section [ sec : conclusion ] .in this section we describe the olap data model we use in the sequel .we next give the definitions of multidimensional matrix schema and instance . in the sequel , , with , is a natural number representing the number of dimensions of a data cube .[ def : matrix - schema ] a _-dimensional matrix schema _ is a sequence of dimension names . as illustrated in the following example, the convention will be that dimension names start with a capital letter . [ex : matrix - schema ] the running example we use in this paper , deals with sales information of certain products , at certain locations , at certain moments in time . for this purpose, we will define a -dimensional matrix schema .[ def : matrix - instance ] a _ -dimensional matrix instance _( _ matrix _, for short ) over the -dimensional matrix schema is the product , where is a non - empty , finite , ordered set , called the _ domain _ , that is associated with the dimension name . for all ,we denote by , the order that we assume on the elements of . for , we call the tuple , a _ cell _ of the matrix . the cells of a matrix serve as placeholders for the measures that are contained in the data cube ( see definition [ def : data - cube - instance ] below ) .note that , as it is common practice in olap , we assumed an order on the domain .the role of the order is further discussed in section [ subsec : order ] . 
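the following sketch shows one direct way to materialise a matrix schema and instance as defined above , for the 3 - dimensional running example . the dimension names and domain members are placeholders consistent with the examples that follow , not values taken from the original figures .

```python
from itertools import product

# a 3-dimensional matrix schema <Product, Location, Time> and an instance over it.
# the order of each list plays the role of the assumed order <_i on the domain.
schema = ("Product", "Location", "Time")
domains = {
    "Product": ["p1", "p2"],
    "Location": ["antwerp", "brussels", "marseille", "nice"],
    "Time": ["2023-01", "2023-02"],
}

# the matrix instance is the cartesian product of the domains; each tuple is a cell
cells = list(product(*(domains[d] for d in schema)))
print(len(cells))   # |dom(Product)| * |dom(Location)| * |dom(Time)| cells
```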
as a notational convention , elements of the domains with a lower case letter , as it is shown in the following example .[ ex : matrix - instance ] for the -dimensional matrix schema of example [ ex : matrix - schema ] , the non - empty sets , , and produce the matrix instance the cells of the matrix will contain the sales for each combination of values in the domain . in , we have , for instance , the order over the dimension , we have the usual temporal order .we now define the notions of dimension schema and instance .[ def : dimension - schema ] let be a name for a dimension .a _ dimension schema for _ is a lattice , with a unique top - node , called ( which has only incoming edges ) and a unique bottom - node , called ( which has only outgoing edges ) , such that all maximal - length paths in the graph go from to .any path from to in a dimension schema is called a _ hierarchy _ of .each node in a hierarchy ( i.e. , in a dimension schema ) is called a _ level _ ( of ) . as a convention ,level names start with a capital letter .note that the node is often renamed , depending on the application .[ ex : dimension - schema ] fig .[ fig : dimension - schema ] gives examples of dimension schemas and for the dimensions and in example [ ex : matrix - schema ] .for the dimension , we have , and there is only one hierarchy , denoted the node is an example of a level in this hierarchy . for the dimension , we have , and two hierarchies , namely and . , in ( ) , and , in ( ) . ][ def : instancegraph ] let be a dimension with schema , and let be a level of .a _ level instance of _ is a non - empty , finite set .if , then is the singleton .if , then is the the domain of the dimension , that is , ( as in definition [ def : matrix - instance ] ) . a _ dimension graph ( or instance ) _ the dimension schema is a directed acyclic graph with node set where the union is taken over all levels in .the edge set of this directed acyclic graph is defined as follows .let and be two levels of , and let and .then , only if there is a directed edge from to in , there can be a directed edge in from to . if is a hierarchy in , then the _ hierarchy instance _( relative to the dimension instance ) is the subgraph of with nodes from , for appearing in .this subgraph is denoted . as notational convention, the names of objects in a set start with a lower case character .we remark that a hierarchy instance is always a ( directed ) tree .also , if and are two nodes in a hierarchy instance , such that is in the transitive closure of the edge relation of , we will say that _ rolls - up _ to and we denote this by ( or if is clear from the context ) .[ ex : instancegraph ] consider the dimension , whose schema is given in fig .[ fig : dimension - schema ] ( ) . from example[ ex : matrix - instance ] , we have , which is , or .an example of a dimension instance is depicted in fig .[ fig : dimension - instance ] .this example expresses , for instance , that the city is located in the region which is part of the country , meaning that rolls - up to and to , that is , and . .] in a dimension graph , we must guarantee that rolling - up through different paths gives the same results .this is formalized by the concept of `` sound '' dimension graph .[ def : sound - graph ] let be a dimension graph ( as in definition [ def : instancegraph ] ) .we call this dimension graph _ sound _ , if for any level in and any two hierarchies and that reach from the level and any and , we have that and imply that . 
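a small sketch of a dimension instance with its roll - up relation and a soundness check in the spirit of definition [ def : sound - graph ] . the city and region names , and the second ( direct ) path from the bottom level to the country level , are illustrative assumptions rather than the contents of the original figures .

```python
# level instances of a Location dimension, with parent maps as the edges
# of the dimension graph (all names are placeholders)
city_to_region = {"antwerp": "flanders", "brussels": "flanders",
                  "marseille": "south", "nice": "south"}
region_to_country = {"flanders": "belgium", "south": "france"}
# hypothetical second hierarchy: a direct City -> Country edge
city_to_country = {"antwerp": "belgium", "brussels": "belgium",
                   "marseille": "france", "nice": "france"}

def rolls_up(city, level):
    """transitive roll-up from the bottom (City) level along the Region hierarchy."""
    if level == "Region":
        return city_to_region[city]
    if level == "Country":
        return region_to_country[city_to_region[city]]
    if level == "All":
        return "all"
    return city

# soundness: reaching Country through Region or through the direct edge must agree
assert all(region_to_country[city_to_region[c]] == city_to_country[c]
           for c in city_to_region)
```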
in this paper , we assume that dimension graphs are always sound . essentially , a data cube is a matrix in which the cells are filled with measures that are taken from some _ value domain _ . for many applications, will be the set of real or rational numbers , although some other ones may include , e.g. , spatial regions or geometric objects .[ def : data - cube - schema ] a _ -dimensional data cube schema _ consists of ( a ) a -dimensional matrix schema ; and ( b ) a hierarchy schema for each dimension , with .[ def : data - cube - instance ] let be a non - empty set of `` values '' . a _-dimensional , -ary data cube instance _ ( or _ data cube _ , for short ) over the -dimensional matrix schema and hierarchy schemas for , for , with values from , consists of ( a ) a -dimensional matrix instance over the matrix schema , ; ( b ) for each , a _ sound _dimension graph over ; ( c ) _ measures _ , which are functions from to the value domain ; and ( d ) a _ flag _ , which is a function from to the set . also ,as a notational convention , we use calligraphic characters , like , to represent data cube instances .the flag can be considered as a -st boolean measure .the role of is to indicate which of the matrix cells are currently `` active '' .the active cells have a flag value and the others have a flag value .when we operate over a data cube , flags are used to indicate the input or output parts of the matrix of the cube .typically , in the beginning of the operations , all cells have a flag value of .the role of flags will become more clear in the next sections , when we discuss olap transformations and operations .when performing olap transformations and operations , we may need to store aggregate information about certain measures up to some level above the one .we do not want to use extra space for this in the data cube .instead , we use the available cells of the original data cube to store this information . for this , we make use of the order assumed in definition [ def : matrix - instance ] , for the representation of high - level objects by -level objects .[ def : represents ] let be an arbitrary dimension with domain .let be a level of .an element is _ represented _ by the smallest element ( according to ) for which holds .we denote this as , and say that .[ ex : order ] continuing with the previous examples , we consider the dimension with ( i.e. , . on this set ,we _ assume _ the order .for this dimension , we have the hierarchy and the dimension instance , given in figs . [fig : dimension - schema ] and [ fig : dimension - instance ] , respectively . at the level ,cities represent themselves . at higher levels , regions and countriesare represented by their `` first '' city in ( according to ) .thus , and are represented by , is represented by , and is represented by . 
at the level , represents note that the -level representatives of higher - level objects , will be flagged , and other cells flagged .also , in our example , if we aggregate information at level , with , then all cities in become flagged .thus , it would not be clear if the cube contains information at the level or at the level .the following property shows how the order on the level induces and order on higher levels .[ prop : order - higher - level ] let be a ( sound ) dimension of a data cube and let be a level in the dimension schema .the order on induces an order on as follows .if , then if and only if .a typical olap user manipulates a data cube by means of well - known operations .for instance , using our running example , the query `` total sales by region , for regions in belgium or france '' , is actually expressed as a sequence of operations , whose semantics should be clearly defined , and which can be applied in different order . for example , we can first apply a _ roll - up _( i.e. , an aggregation ) to the _ country _ level , and once at that level apply a _ dice _ operation , which keeps the cube cells corresponding to belgium or france .finally , a _ drill - down _ can be applied to disaggregate the sales down to the level _ region _ , returning the desired result . in what follows ,we characterize olap operations as the result of sequences of olap transformations , which are measure - creating updates to a data cube . an _ atomic olap transformation _acts on a data cube instance , by adding a measure to the existing data cube measures .olap operations like the ones informally introduced above are defined , in our approach , as a sequence of transformations .the process of olap transformations starts from a given _ input data cube _ .we assume that this original data cube has given measures ( as in definition [ def : data - cube - instance ] ) .these measures have a special status in the sense that they are `` protected '' and can never be altered ( see section [ subsec : olap - operation ] ) . typically , the input - flag of the original data cube is set to in every cell and signals that every cell of is part of the input cube .atomic olap transformations can be applied to data cubes .they add ( or create ) new measures to the sequence of existing measures by adding new measure values in each cell of the data cube s matrix . 
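to make this cell - per - cell behaviour concrete , the sketch below models a cube as a mapping from cells to their current measure sequence and applies one atomic transformation that appends a new measure . this dictionary representation and the example values are assumptions made only for illustration .

```python
def apply_transformation(cube, new_measure):
    """Append new_measure(cell, measures) as the next measure in every non-empty cell."""
    for cell, measures in cube.items():
        if measures is not None:          # destroyed cells stay untouched
            measures.append(new_measure(cell, measures))
    return cube

# example: a 'sum' arithmetic transformation m_new := m_0 + m_1
# (the last entry of each measure list plays the role of the flag)
cube = {("p1", "antwerp", "2023-01"): [10.0, 2.0, 1],
        ("p1", "marseille", "2023-01"): [7.0, 3.0, 1]}
apply_transformation(cube, lambda cell, m: m[0] + m[1])
```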
at any moment in this process, we may assume that the data cube has measures , where the first are the original measures of , and the last ( with ) ones have been created subsequently by olap transformations ( where is the empty sequence of s , for ) .the next olap transformation adds a new measure to the matrix cells .we have said that we use olap transformations to compute olap operations .we indicate that the computation of an olap operation is finished by creating an -ary output flag .this output flag is a boolean measure , that is created via atomic olap transformations .it indicates which of the cells of should be considered as belonging to the output of .it is -ary in the sense that it keeps the last created measures and `` trashes '' the rest .it also removes the previous flag , which it replaces .the initial measures of the input data cube are never removed ( unless they are `` destroyed '' in some cells ) .they remain in the cube throughout the process of applying one olap operation after another to , and can be used at any stage .summarizing , after an olap operation of output arity is completed on some cube , the measures in the cells of the output data cube are of the form here , the underlining indicates the protected status of these measures . after each olap operation ,the unprotected measures with the symbols and the output measures become the next olap operation can then act on and use in its computation all the measures above .we remark that the dimensions , the hierarchy schemas and instances of remain unaltered during the entire olap process .we end this description with a remark on _destructors_. a destructor , optionally , precedes the creation of an output flag .a destructor takes the value for some cells of the matrix of a data cube , and on other cells .when is invoked ( and activated by the output flag that follows it ) on a data cube with measures and flag , it empties all cells for which the value of the destructor is by removing all measures from them , even the protected ones , thereby effectively `` destroying '' these cells .this is the only case where the protected measures are altered ( see operations slice or dice , later ) .the output of a destructive operation looks like in which the destructor precedes the output flag .the effect of the presence of a destructor is the following .a cell such that is emptied , after which it contains no more measures and flag . 
for cells with , the sequence of measures transformed to which is renamed as before the next transformation takes place .this transformation will act , cell per cell , on the matrix of a cube , and it does nothing with emptied cells .that is , no new measure can ever be added to a destroyed cell .the following definition specifies how an olap transformation acts on a data cube .we then address in detail each atomic olap transformation appearing in this definition .[ def : olap - transformation ] let be a -dimensional , -ary data cube instance with given ( or protected ) measures , created measures ( with ) and flag over some value domain .olap transformation _ , applied to , results in the creation of a new measure in .transformation adds measure ; is produced from : ; ; and the hierarchy schemas and instances of ; and belongs to one of the following classes : ( a ) arithmetic transformations ( definition [ def : trans - arith ] ) ; ( b ) boolean transformations ( definition [ def : trans - boolean ] ) ; ( c ) selectors ( definition [ def : trans - selector ] ) ; ( d ) counting , sum , min - max ( definitions [ def : trans - counting ] , [ def : trans - min - max - revisited ] ) ; ( e ) grouping ( definition [ def : trans - grouping ] ) .an olap transformation can also result in the creation of a measure that is an output flag of arity .this should be a measure with a boolean value . to indicate that it is a flag of arity , we use the reserved symbol instead of .an output flag may ( optionally ) be preceded by a destructor .this should be a measure with a boolean value ( to indicate which cells are destroyed ) .we use the reserved symbol instead of .before we give the definition of an olap operation , we describe the _ input _ to the olap process ( this process may involve multiple olap operations ) .such input is a -dimensional , -ary data cube instance , with measures and flag .these measures are _ protected _ in the sense that they remain the first measures throughout the entire olap process and are never altered or removed unless they are destroyed in some cells .the cube has also a boolean flag , which typically has value in all cells of .thus , the measures of the input cube are denoted after applying a sequence of olap operations to , we obtain a data cube . let be a -dimensional , -ary _ input _ data cube instance with given measures , computed measures and flag .the data cube acts as the input of an _ olap operation _ ( of arity ) , which consists of a sequence of consecutive olap transformations that create the additional measures , followed by the creation of an -ary flag . as the result of the creation of ,the measures in the cells of the data cube are changed from to which become after renaming .the output cube has the same dimensions , hierarchy schemas and instances as , and measures in the case where is preceded by a destructor , the same procedure is followed , except for the cells of for which takes the value .these cells of are emptied , contain no measures , and become inaccessible for future transformations .we now address the five classes of atomic olap transformations of definition [ def : olap - transformation ] .we use the following notational convention . 
for a measure , we write to indicate the value of in the cell we remark that does not exist for empty cells and it is thus not considered in computations .also , we assume that there are _ protected _ measures , and _ computed _measures in the non - empty cells , and call the next computed measure .[ def : trans - arith ] the following creations of a new measure are _ arithmetic transformations _ : 1 .( * constant * ) , with , a rational number .( * sum * ) , with .( * product * ) , with .( * quotient * ) , with .[ def : trans - boolean ] the following creations of a new measure are _ boolean transformations _ : 1 .( * equality test on measures * ) , with . here , the result of is a boolean 1 or 0 ( ) .( * comparison test on measures * ) , with . here, the result of the comparison is a boolean 1 or 0 ( ) .( * equality test on levels * ) 4 .( * comparison test on levels * ) for a level in the dimension schema of dimension , and a constant , is a `` comparison '' test .the result of is a boolean 1 or 0 ( ) , such that is if and only if rolls - up to an object at level for which .the order can be any order that is defined on level .transformation is defined similarly .[ ex : trans - boolean-1 ] we illustrate the use of boolean transformations by means of a sequence of transformations that implement a `` dice '' ( see section [ subsec : dice ] for more details ) .the query asks for the cells in the matrix of which contain sales that are higher than 50 .this query can be implemented by the following sequence of transformations : * ( rational constant ) ; * ( comparison test on measures ) ; * ( product ) ; * ( destructor ) ; and * ( unary flag ) the measure contains the values larger than or equal to 50 ( and a 0 if the are lower ) .the destructor destroys the cells that contain a o. finally , the flag selects all cells from the input as output cells ( it will contain a 1 for all such cells that satisfy the condition ) , and concludes the operation .the output of this operation is which is then renamed to [ def : trans - selector ] the following creations of a new measure are _ selector transformations _ ( or _ selectors _ ) , and their definition is cell per cell of : 1 .( * constant selector * ) for a level in the dimension schema of a dimension , and , can be a _ constant - selector for _ , denoted , 2 .( * level selector * ) for a level in the dimension schema of a dimension , can be a _ level - selector for _ , denoted by , which means that we have , for all with , the _ constant _ selector in definition [ def : trans - selector ] , corresponds to the equality test on levels ( see 3 . in definition [ def : trans - boolean ] ) . here, this transformation appears with a different functionality and we reserve a special notation for it , and we repeated it .also , note that the _ level _ selector selects all representatives ( at the level ) of objects at level of dimension .[ ex : trans - selector-2 ] the query asks for the sales in the cities of and . 
it can be implemented by the following sequence of transformations , where can take values or , since the cities and do not overlap : * ( constant selector ) ; * ( constant selector ) ; * ( sum ) ; * ( product ) ; * * ( unary flag creation ) .[ def : trans - counting ] the creations of a new measure defined next , are denoted 1 .( * count - distinct * ) , counts the number of distinct values of measure in the complete matrix of the data cube .( * -dimensional sum * ) with , gives the sum of the measure over all matrix cells .we abbreviate this operation by writing and call this transformation the _-dimensional sum_. 3 .( * min - max * ) , with , gives the smallest value of the measure the matrix .similarly , , gives the largest value of the measure in the matrix .it is important to remark that the above transformations create the _same new measure value _ for all cells of the matrix .[ ex : trans - counting-2 ] now , we look at the query the query can be computed as follows , given : * ( constant selector on ) ; * ( product that selects the sales in , puts a 0 in all other ones ) ; * ( this is the total sales in in every cell ) ; * ( this is the total sales in in the cells of ) ; * ( this flag creation selects the cells of ) .the output measures are , which are renamed .thus , the value of the total of sales in is now available in every cell corresponding to . for the cells outside is a .the most common olap operations ( e.g. , roll - up , slice ) , require grouping data before aggregating them .for example , typically we will ask queries like `` total sales by city '' , which requires grouping facts by city , and , for each group , sum all of its sales .therefore , we need a transformation to express `` grouping '' .to deal with grouping , we use the concept of `` prime labels '' for sets and products of sets .we will use these labels to identify elements in dimensions and in dimension levels . before giving the definition of the grouping transformations , we elaborate on . as we show , these prime labels work in the context of measures that take rational values ( as it is often the case , in practice ) .the following definition specifies our infinite supply of prime labels .[ def : prime - labels ] let denote the -th prime number , for .we define the sequence of _ prime labels _ as follows : we denote the set of all prime labels by .[ def : prime - labelling - of - sets ] let , be ( finite ) sets .a _ prime labeling _ of the set is an injective function . for , we call the _ prime label _ of ( for the prime labeling ). let be a subset of , which serves as an index set .a of the cartesian product consists of prime labelings of the sets , for , that satisfy the condition that is empty for and . for , we call the _ prime product -label _ of ( given the prime labelings , for ). when is a strict subset of , we speak about a and when , we speak about a . 
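the next paragraphs define prime sums and show how they support grouping ; as a preview , the sketch below uses symbolic square roots of primes ( via sympy ) to label two groups , forms the corresponding prime sum , and recovers the per - group totals from the coefficients . the group names and sales values are illustrative only .

```python
from sympy import sqrt, prime, expand

# prime labels l_i = sqrt(p_i), one per group (here: per country), used symbolically
labels = {"belgium": sqrt(prime(1)), "france": sqrt(prime(2))}   # sqrt(2), sqrt(3)

# sales measure per (city, country) cell; values are illustrative
cells = [("antwerp", "belgium", 10), ("brussels", "belgium", 5),
         ("marseille", "france", 7), ("nice", "france", 3)]

# the prime sum over the whole matrix: sum of label(group) * measure(cell)
prime_sum = expand(sum(labels[country] * sales for _, country, sales in cells))

# projecting on a prime label recovers the per-group aggregate exactly
totals = {country: prime_sum.coeff(label) for country, label in labels.items()}
print(totals)   # {'belgium': 15, 'france': 10}
```

the recovery step relies exactly on the rational independence of square roots of distinct primes stated below in property [ prop : prime - sum ] .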
if we view a cartesian product as a finite matrix , whose cells contain rational - valued measures , we can use prime ( product ) labelings as follows in the aggregation process .let us assume that the cells of contain rational values of a measure and let us denote the value of this measure in the cell by .if we have a full prime product labeling on , then we can consider the sum over this cartesian product of the product of the prime product labels with the value of : since each cell of has a unique prime product label , and since these labels are rationally independent ( see property [ prop : prime - sum ] ) , this sum enables us to retrieve the values if we have a partial prime product labeling on , determined by an index set , then , again , we can consider the sum over this cartesian product of the product of the partial prime product labels with the value of : now , all cells in above a cell in the projection of , receive the same prime label .this means that these cells are `` grouped '' together and the above sum allows us to retrieve the part of the sum that belongs to each group .the following definition gives a name to the above sums .[ def : prime - sums ] the following property can be derived from the well - known fact that the field extension has degree over and corollaries of this property ( see chapter 8 in ) .no square root of a prime number is a rational combination of square roots of other primes .[ prop : prime - sum ] let and let be a cartesian product of finite sets .we assume that the cells of this set contain rational values of a measure .let be a subset of and let be prime labelings of the sets , for , that form a prime product -labeling .then , the prime sum uniquely determines the values for all cells of .we remark that we use these prime ( product ) labels in a purely _ symbolic _ way without actually calculating the square root values in them .we are now ready to define atomic olap operations that allow us to implement grouping . in what follows, we apply these prime labels to the case where the sets in are domains of dimensions ( e.g. , at the bottom level ) , or domains of dimensions at some level .[ def : trans - grouping ] the following creations of a new measure are _ grouping transformations _ :( * * ) let be a dimension and a level in the dimension schema of a dimension .let with induced order ( see property [ prop : order - higher - level ] ) .if the prime labels have been used by previous transformations , then for , we have if .we denote this transformation by or , for short , and call the result of such a transformation a _ prime labeling_. 
2 .( * projection of a prime sum * ) if the result of some previous transformation is a ( over the complete matrix ) in which prime ( product ) labels ( computed in a previous transformation ) are used , then is a new measure that `` projects '' on the appropriate component from the prime sum , that is , if the prime ( product ) label .we denote this .[ ex : trans - grouping-3 ] consider the query this query can be implemented as follows ( explained below , using the data in example [ ex : instancegraph ] ) : * ( this gives each country a prime label ) ; * ( this gives each city a prime label ) ; * ( this gives each city a product of prime labels ) ; * ; * ( gives each product a different prime label ) ; * ( counts the number of products ) ; * ( gives each time moment a different prime label ) ; * ( counts the number of moments in time ) ; * ( is the number of products times the number of time moments ) ; * ( normalization of the sum ) ; * ; ( projection over the prime labels of city ) ; * ( 3-dimensional sum ) ; * ( normalization of the sum ) ; * ( projection over the prime labels of country ) ; * ( this flag creation selects all cells of the matrix ) .transformation gives each country a next available prime label .since no labels have been used yet , gets label and gets label .transformation gives each city a next available prime label .since and have been used , gets label , gets label , gets label , and gets label .transformation gives the value ( i.e. , , the value ( ) , the value ( ) , and the value ( ) .if there are 10 products and 100 time moments , then puts the value in each cell of the matrix .transformations and count the number of products and the number of time moments ( using fresh prime labels ) , and the product of these quantities is computed in . in , is divided by this product , putting in every cell .transformation is a projection on the prime labels of .since , , , and are the prime labels for the cities , and since , this will put in the cells of and , and in the cells of and .next , puts in every cell of the cube and puts in every cell of the cube . 
finally , projects on the prime labels of countries , which are 1 and .this puts a 2 in every cell of a belgian city and a 2 in every cell in a french city .this is the result of the query , as the flag indicates , that is returned in every cell .now every cell of a city in has the count of cities , as has every city in .we can now extend the transformations of definition [ def : trans - counting ] , in a way that the counting , minimum , and maximum , are taken over cells which share a common prime product label .[ def : trans - min - max - revisited ] the following creations of a new measure are : 1 .( * count - distinct * ) 2 .( * min - max * ) we remark that when there is only one prime label throughout , the above generalization of the counting and min - max transformations correspond to definition [ def : trans - counting ] .in this section , we prove that the classical olap operations can be expressed using the olap transformations from section [ sec : olap - tando ] .these classic operations can be combined to express complex analytical queries .the classical olap operations are dice , slice , slice - and - dice , roll - up and drill - down ( see section [ subsec : rollup ] ) .we assume in the sequel , that the input data cube has given measures , and that at some point in the olap process this cube is transformed to a cube , having measures where , with , are created measures and is an input / output flag .before we start , we need to define the notion of a boolean cell - selection condition , and give a lemma about its expressiveness we will use throughout section [ sec : classicalolap ] . [ def : cell - selection ] let be the matrix of .a _ boolean condition on the cells of _ is a function from to .we say that a boolean condition is _ transformation - expressible _ if there is a sequence of olap transformations such that for all .[ lemma : boolean - closure ] if are transformation - expressible boolean conditions on cells , then , , and are transformation - expressible boolean conditions on cells . intuitively , the _ dice _ operation selects the cells in a cube that satisfy a boolean condition on the cells .the syntax for this operation is where is a boolean condition over level values and measures .the resulting cube has the same dimensionality as the original cube .this operation is analogous to a selection in the relational algebra . in a data cube, it selects the cells that satisfy the condition by flagging them with a in the output cube .our approach covers all typical cases in real - world olap .we next formalize the operator s definition in terms of our transformation language . in the remainder, we use the term _ olap operation _ to express a sequence of olap transformations .[ def : dice ] given a data cube , the operation selects all cells of the matrix that satisfy the boolean condition by giving them a flag in the output .the condition is a boolean combination of conditions of the form : ( a ) a selector on a value at a certain level of some dimension ; ( b ) a comparison condition at some level from a dimension schema of a dimension of the cube of the form or , where is a constant ( at that level ) ; ( c ) an equality or comparison condition on some measure of the form , or , where is a ( rational ) constant . 
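a minimal sketch of the dice operation just defined , with the cube modelled as a mapping from cells to a measure record whose boolean entry plays the role of the flag . whether the non - selected cells are additionally destroyed depends on whether a destructor precedes the output flag , as discussed above ; here they are simply left unflagged . the representation , the cell values and the condition are illustrative .

```python
def dice(cube, condition):
    """Flag (with 1) exactly the currently active cells that satisfy the condition."""
    for cell, content in cube.items():
        if content is None:               # destroyed cells are skipped
            continue
        content["flag"] = 1 if (content["flag"] == 1 and condition(cell, content)) else 0
    return cube

cube = {("p1", "antwerp", "2023-01"): {"sales": 60.0, "flag": 1},
        ("p1", "nice", "2023-01"): {"sales": 40.0, "flag": 1}}

# dice(C, sales > 50): only the first cell keeps flag = 1 in the output
dice(cube, lambda cell, c: c["sales"] > 50)
```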
[ prop : dice ] let be a data cube en let be a boolean condition on the cells of ( as in definition [ def : dice ] ) .the is expressible .intuitively , the _ slice _operation takes as input a -dimensional , -ary data cube and a dimension and returns as output , which is a `` -dimensional '' data cube in which the original measures are replaced by their aggregation ( sum ) over different values of elements in . in other words ,dimension is removed from the data cube , and will not be visible in the next operations .that means , for instance , that we will not be able to dice on the levels of the removed dimension .as we will see , the `` removal '' of dimensions is , in our approach , implemented by means of the destroyer measure .we remark that the aggregation above is due to the fact that , in order to eliminate a dimension , this dimension should have exactly one element , therefore a roll - up ( which we explain later in section [ subsec : rollup ] ) to the level _ all _ in is performed .[ def : slice ] given a data cube , and one of its dimensions , the operation `` replaces '' the measures by their aggregation ( sum ) ( for ) as : for all .we abbreviate the above -dimensional sum as [ prop : slice ] let be a data cube and let be one of its dimensions .the is expressible as an olap operation .[ ex : slice ] consider dimensions and , and measure in our running example .the operation returns a cube with -cells containing the sums of for each product - time combination , over all location .all cells not belonging to the representative of in the dimension ( i.e. , ) , are destroyed .the query is expressed by the following transformations .* ( prime labels on products ) ; * ( fresh prime labels on time moments ) ; * ( product of the two previous prime labels ) ; * ( product ) ; * ( -dimensional sum ) ; * ( projection on prime product labels ) ; * ( selects the representative of in the dimension ) ; * * ( this flag creation selects the relevant cells of the matrix ) .transformation gives each -combination a unique prime product label .this label is multiplied by the in each cell .then , is the global sum over ; is the projection over the prime product labels for -combinations .this gives each cell above some fixed -combination , the sum of the , over all locations , for that combination .all cells of that do not belong to ( selected in ) , which represents , are destroyed by .a particular case of the _ slice _ operation occurs when the dimension to be removed already contains a unique value at the bottom level .then , we can avoid the roll - up to _ all _ , and define a new operation , called _ slice - and - dice_. although this can be seen as a _ dice _ operation followed by a one , in practice , both operations are usually applied together .[ def : slice - dice ] [ prop : slice - dice ] let be a data cube , on of its dimensions en let .the operation is expressible as an olap operation .[ ex : slice - dice ] in our running example , the operation is implemented by the output flag . intuitively , _roll - up _ aggregates measure values along a dimension up to a certain level , whereas _ drill - down _ disagregates measure values down to a dimension level . although at first sight it may appear that _ drill - down _ is the inverse of _ roll - up _ , this is not always the case , e.g. 
, if a _roll - up _ is followed by a or a ; here , we can not just undo the _ roll - up _ , but we need to account for the cells that have been eliminated on the way .more precisely , the _ roll - up _ operation takes as input a data cube , a dimension and a subpath of a hierarchy over , starting in a node and ending in a node , and returns the aggregation of the original cube along up to level for some of the input measures . _roll - up _ uses one of the classic sql aggregation functions , applied to the indicated protected and computed measures ( selected from ) , namely sum ( ) , average ( ) , minimum /maximum ( and ) , count and count - distinct ( and ) .usually , measures have an associated _ default _ aggregation function .the typical aggregation function for the measure , e.g. , is .we denote the above operation as where is one of the above aggregation functions that is associated to , for .since we are mainly interested in the expressiveness of this operation as a sequence of atomic transformations , only the destination node in the path is relevant . indeed, the result of this roll - up remains the same if the subpath is extended to start from the node of dimension .so , we can simplify the notation , replacing with and assume that the roll - up starts at the level .the _ drill - down _ operation takes as input a data cube , a dimension and a subpath of a hierarchy over , starting in a node and ending in a node ( at a lower level in the hierarchy ) , and returns the aggregation of the original cube along from the bottom level up to level .the drill - down uses the same type of aggregation functions as the roll - up . again, since we are only interested in the expressiveness of this operation , the drill - down operation has the same output as [ def : rollup ] given a data cube , one of its dimensions , and a hierarchy over , ending in a node , the operation computes the aggregation of the measures by their aggregation functions , for , as follows : for all , for which , for some .this roll - up flags all representative -level objects as active .[ prop : rollup ] let be a data cube , let be one of its dimensions , and let be a hierarchy over ending in a node .let be a set of selected measures ( taken from the protected measures and the computed measures of ) , with their associated aggregation functions .the operation is expressible as an olap operation .[ ex : rollup-1 ] we next express the _ roll - up _ operation , using prime ( product ) labels , sums , projections , and the -dimensional sum .we look at the query `` total sales per country '' .we use the simplified syntax , only indicating the target level of the roll - up on the _ location _ dimension ( i.e. , _ country _ ) .the query is the result of the following transformations , given the measure : 1 . ( prime labels on products ) ; 2 . ( prime labels on time moments ) ; 3 . ( prime labels on countries ) ; 4 . ; ( prime product label in one step ) ; 5 . ( product of labels with ) ; 6 . ( -dimensional sum ) ; 7 . ( projection on prime product labels ) ( output flag on country - representatives ) .transformation gives every product - date - country combination a unique prime product label .normally this product takes more steps . 
above, we have abbreviated it to one transformation .the transformation gives the aggregation result , and is the flag that says that only the cities and , which represent the level , are active in the output ( and nothing else of the original cube ) .the main result of this paper is the proof of the completeness of an olap algebra , composed of the olap operations dice ( section [ subsec : dice ] , slice ( section [ subsec : slice ] ) , slice - and - dice ( section [ subsec : sandd ] ) , roll - up , and drill - down ( section [ subsec : rollup ] ) .this is summarized by theorem [ theo : main ] .[ theo : main ] the classical olap operations and their composition are expressible by olap operations ( that is , as sequences of atomic olap transformations ) .we next illustrate the power and generality of our approach , combining a sequence of olap operations , and expressing them as a sequence of olap transformations .[ ex : rollup-3 ] an olap user is analyzing sales in different countries and regions .she wants to compare sales in the north of belgium ( the flanders region ) , and in the south of france ( which we , generically , have denoted _ south _ in our running example ) .she first filters the cube , keeping just the cells of those two regions .this is done with the expression : we showed that this can be implemented as a sequence of atomic olap transformations .now she has a cube with the cells that have not been destroyed .next , within the same navigation process , she obtains the total sales in france and belgium , only considering the desired regions , by means of : this will only consider the valid cells for rolling up .after this , our user only wants to keep the sales in france .thus , she writes : finally , she wants to go back to the details , one level below in the hierarchy , so she writes : implemented as a roll - up from the bottom level to _ region _ , only considering the cells that have not been destroyed .we have presented a formal , mathematical approach , to solve a practical problem , which is , to provide a formal semantics to a collection of the olap operations most frequently used in real - world practice .although olap is a very popular field in data analytics , this is the first time a formalization like this is given .the need for this formalization is clear : in a world being flooded by data of different kinds , users must be provided with tools allowing them to have an abstract `` cube view '' and cube manipulation capabilities , regardless of the underlying data types . without a solid basis and unambiguous definition of cube operations ,the former could not be achieved .we claim that our work is the first one of this kind , and will serve as a basis to build more robust practical tools to address the forthcoming challenges in this field .we have addressed the four core olap operations : slice , dice , roll - up , and drill - down .this does not harm the value of the work . 
on the contrary , this approach allows us to focus on our main interest , that is , to study the formal basis of the problem .our line of work can be extended to address other kinds of olap queries , like queries involving more complex aggregate functions like moving averages , rankings , and the like .further , cube combination operations , like drill - across , must be included in the picture .we believe that our contribution provides a solid basis upon which , a complete olap theory can be built .* acknowledgements : * alejandro vaisman was supported by a travel grant from hasselt university ( korte verblijven inkomende mobiliteit , bof15kv13 ) .he was also partially supported by pict-2014 project 0787 .r. agrawal , a. gupta , and s. sarawagi .modeling multidimensional databases . in _ proceedings of the 15th international conference on data engineering , ( icde ) _ , pages 232243 ,birmingham , uk , 1997 .ieee computer society .f. dehne , q. kong , a. rau - chaplin , h. zaboli , and r. zhou .scalable real - time olap on cloud architectures ., 7980:31 41 , 2015 .special issue on scalable systems for big data management and analytics . o. romero and a. abell . on the need of a reference algebra for olap . in _ proceedings of the 9th international conference on data warehousing and knowledge discovery , dawak07 _ , pages 99110 , regensburg , germany , 2007 . | online analytical processing ( olap ) comprises tools and algorithms that allow querying multidimensional databases . it is based on the multidimensional model , where data can be seen as a cube , where each cell contains one or more measures can be aggregated along dimensions . despite the extensive corpus of work in the field , a standard language for olap is still needed , since there is no well - defined , accepted semantics , for many of the usual olap operations . in this paper , we address this problem , and present a set of operations for manipulating a data cube . we clearly define the semantics of these operations , and prove that they can be composed , yielding a language powerful enough to express complex olap queries . we express these operations as a sequence of atomic transformations over a fixed multidimensional matrix , whose cells contain a sequence of measures . each atomic transformation produces a new measure . when a sequence of transformations defines an olap operation , a flag is produced indicating which cells must be considered as input for the next operation . in this way , an elegant algebra is defined . our main contribution , with respect to other similar efforts in the field is that , for the first time , a formal proof of the correctness of the operations is given , thus providing a clear semantics for them . we believe the present work will serve as a basis to build more solid practical tools for data analysis . * keywords * : olap ; data warehousing ; algebra ; data cube ; dimension hierarchy . |
intense tropical cyclones ( tcs ) are among the most devastating of natural phenomena , and considerable effort is spent in estimating the risk of tc landfall .landfall risk assessments are used by the insurance industry for setting rates and by governments for establishing building regulations and planning emergency procedures .recently , there has been much interest in documenting and understanding trends in tc frequency and intensity , and whether such trends are related to anthropogenic climate change and increasing sea - surface temperatures ( sst ) .tc frequency has increased in the north atlantic in recent years , as has the number of tcs reaching the most intense categories ( webster et al . , 2005 ) , though the global tc frequency has been approximately steady .theory points to a large role for sst in the maximum potential tc intensity ( emanuel , 1987 ) , and emanuel ( 2005 ) found a rapid increase since 1970 in global tc power dissipation that is highly correlated with sst . rising sst in the main development regions ( mdrs ) of tcs over recent decades is largely due to anthropogenic greenhouse warming ( santer et al ., 2006 ; elsner , 2006 ) . less attention has been focused on variations in landfall risk and its geographic distribution with sst .changes in tc frequency are likely to affect landfall rates , but so might changes in geographic distribution of tc genesis and paths of propagation. lyons ( 2004 ) found that most of the difference between high and low u.s .landfall years is due to changes in the fraction of tcs making landfall , rather than changes in basin - wide tc number .in contrast to the atlantic - wide increase in tc power dissipation shown by emanuel ( 2005 ) , tc power dissipation at u.s .landfall has not shown any trend ( landsea , 2005 ) , although there are many times fewer data on which to base the analysis . the most straightforward approach to analyzingthe sst dependence of landfall risk is to use historical landfall events .this approach is sound if there are sufficient events , such as is found over large sections of coast and over many years .however , if one aims to study changes in geographic landfall distribution and , in addition , use subsets of data years based on climate state , then sampling error becomes a major issue .basin - wide statistical track models are an attractive alternative .the primary advantage of a basin - wide model is that it utilizes historical track information over the full basin , roughly 100 times more data that just at landfall .a second advantage is that by using a track model landfall changes can be decomposed into changes in various tc properties , such as number , genesis site , and propagation .here we use the track model developed by hall and jewson ( 2007a ) to explore regional variations of landfall rates with sst . in section 2we describe the historical data , in section 3 we briefly review the statistical tc track model , and in section 4 we discuss the conditioning of the model on sst .the impact of sst on each model component is presented in section 5 . in section 6we discuss the sst impact on north american landfall and relate the impact to tc number , genesis site and propagation .we conclude in section 7 .our tc analysis is based on hurdat data over the north atlantic ( jarvinen et al . 
, 1983 ) .we use hurdat tcs of all intensity from 19502005 , which encompasses 595 tcs .observations prior to 1950 are less reliable , as they precede the era of routine aircraft reconnaisance .we also use sst data obtained from the uk met office hadley centre ( rayner et al . , 2003 ) .1 shows the evolution from 1950 to 2005 of july - august - september sst averaged over the region 290 to 345 and 10 to 20 .there is a well - documented upward trend , particularly in the past two decades , which is largely due to anthropogenic greenhouse warming ( santer et al ., 2006 ; elsner , 2006 ) .from the 56 years we extract the hottest 19 years , with sst 27.38 , and the coldest 19 years , with sst 27.07 , roughly 1/3 each of all years .other geographic zones for sst averaging over the tropical and mid - latitude north atlantic result in very similar partitioning of data years . figs .2a and 2b shows the historical tc tracks in the 19 cold years and the 19 hot years , of which there are 165 and 239 , respectively . in order to compute landfall rate we divide the north american continental coastline into 39 segments of varying length from maine to the yucatan peninsula , as shown in fig .mileposts a through j are shown for reference in subsequent figures .`` landfall '' of a tc is recorded when a track segment , heading landward , intersects a coastline segment .a tc can make multiple landfalls .4a shows the total landfall counts for hot and cold years for each coastline segment , running from maine to yucatan . in fig4b these raw counts are converted to rates per year per 100 km of segmented coastline and plotted versus distance along the segmented coast .the landfall counts and rates of fig .4 show large variations along the coast and between hot and cold .summed over the full coast there are 69 landfalls in 165 cold - year tcs and 110 landfalls in 239 hot - year tcs .some components of these hot - cold differences reflect a geophysical relationship with sst , while other components are simply due to the sampling error inherent in the low total landfall counts .many coastline segments have experienced only a few or no landfalls .our goal is to extract the components of the hot - cold differences that are geophysical by minimizing the sampling error . in the statistical track analysis that followswe exploit the much larger tc data set over the full atlantic basin , not just the data at landfall .in essence , historical data from the full basin is projected onto the coastline to reduce the landfall sampling error .hall and jewson ( 2007a ) described and evaluated a statistical model of tc tracks in the north atlantic from genesis to lysis based on hurdat data .the track model consists of three components : ( 1 ) genesis , ( 2 ) propagation , and ( 3 ) lysis ( death ) .the number of tcs in a simulation year is determined by random resampling of the historical annual tc number .genesis sites are simulated by sampling a kernel pdf built around historical sites .the kernel bandwidth is optimized by jackknife out - of - sample log - likelihood maximization . for propagation , we compute mean latitude and longitude 6-hourly displacements and their variances , by averaging of `` nearby '' historical displacements . 
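the local averaging of `` nearby '' historical displacements can be sketched as follows ; the gaussian distance weighting and the fixed averaging scale used here are only stand - ins ( the actual model defines `` nearby '' through an optimized length scale , as explained next ) , and the historical sample is invented .

```python
import numpy as np

def local_mean_displacement(point, hist_positions, hist_steps, scale=3.0):
    """Distance-weighted average of historical 6-hourly displacements around
    `point` (lon, lat); `scale` (degrees) is the averaging length scale.

    hist_positions : (N, 2) array of historical track positions
    hist_steps     : (N, 2) array of the 6-hourly displacements taken there
    """
    d2 = ((hist_positions - np.asarray(point)) ** 2).sum(axis=1)
    w = np.exp(-0.5 * d2 / scale**2)            # Gaussian weights in distance
    w /= w.sum()
    mean = w @ hist_steps
    var = w @ (hist_steps - mean) ** 2          # weighted variance about the mean
    return mean, var

# Invented historical sample: random positions with a westward, recurving
# climatological drift plus noise standing in for real HURDAT displacements.
rng = np.random.default_rng(0)
pos = np.column_stack([rng.uniform(-90, -20, 500), rng.uniform(8, 45, 500)])
steps = np.column_stack([-0.6 + 0.03 * (pos[:, 1] - 10),
                         0.1 + 0.01 * (pos[:, 1] - 10)])
steps += rng.normal(0, 0.2, steps.shape)
print(local_mean_displacement((-50.0, 15.0), pos, steps))
```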
nearby is defined optimally by jackknife out - of - sample log - likelihood maximization .standardized displacement anomalies are modeled as a lag - one autoregressive model , with latitude and longitude treated independently .the autocorrelation coefficients are determined from `` nearby '' historical anomalies , with `` near '' optimized as above .finally , tcs suffer lysis with a probability determined by optimal averaging of nearby historical lysis rates .tracks are stochastic , and two tracks originating from the same point can follow very different trajectories . at present no tc intensity is modeled , and no intensity information is used in the track model . clearly , intensity is a critical component of tc risk assessment . in this study , however , we restrict attention to landfall by tcs of any intensity .the advantage of a track model over direct analysis of landfall is that information is brought to bear on landfall from the full trajectories of all tcs , not just the single landfall points of landfalling tcs .this increases the data by a factor of 100 .not all the data is equal , and the most relevant information comes from portions of tc trajectories near the coast .this exploitation of the most relevant data for estimating landfall is implicit in the track model .the track - model advantage is particularly important for analysis of small coast sections ; analysis of regions with low activity , where there are few or no historical landfalls ; and analysis in which only subsets of data years are used , for example , conditioned on a climate state . hall and jewson ( 2007b ) compared the landfall rates of the track model to those of a bayesian model based solely on local landfall observations , a type of statistical analysis often used in tc landfall studies .they found that on regional and smaller scales the track model performs better in a jackknife out - of - sample evaluation with historical landfalls , despite the fact that over the entire north american coast the track model underestimates landfall by 12% , an amount that is significant . on a local basis , the decreased sampling error of the track model more than compensates for the increased bias .or goal is to examine the effect of sst on tc landfall rates . to doso we construct the track model separately on each historical sst set ; that is , the averaging to obtain means , variances , and autocorrelation coefficients of 6-hourly track displacements , and the track lysis probabilities are restricted to the particular cold or hot subset .the track model components are thus `` conditioned '' on data being in a particular subset based on sst .additionally , the kernel pdf of genesis sites is built on data only in the subset , and the annual tc number is drawn randomly only from the subset .the hot- and cold - conditioned models are then run for a large number of years ( 1000 ) , and the synthetic tracks and their landfalls analyzed .figs . 2c and2d show the synthetic tracks based on the hot and cold years . in addition , in order to isolate individual components of the sst influence on tracks we generate synthetic tracks with single model components ( tc number , genesis site , propagation ) conditioned on sst , while the other components are unconditioned . 
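whichever subset the displacement statistics are conditioned on , they ultimately feed the same stochastic propagation step . a minimal sketch of that lag - one autoregressive step is given below ; the smooth mean - displacement field , the anomaly variances and the autocorrelation coefficients are invented stand - ins for the locally averaged historical values , and latitude and longitude anomalies are treated independently , as in the text .

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_displacement(lon, lat):
    """Invented smooth stand-in for the locally averaged 6-hourly motion
    (deg / 6 h): westward drift in the deep tropics, recurving poleward."""
    dlon = -0.6 + 0.025 * (lat - 10.0)
    dlat = 0.10 + 0.010 * (lat - 10.0)
    return np.array([dlon, dlat])

SIGMA = np.array([0.25, 0.20])   # std of displacement anomalies (invented)
PHI   = np.array([0.8, 0.7])     # lag-one autocorrelation coefficients (invented)

def propagate(genesis, n_steps=60):
    """One synthetic track: displacement = local mean + SIGMA * z, where the
    standardized anomaly follows z_t = PHI * z_{t-1} + sqrt(1 - PHI^2) * eps."""
    pos = np.array(genesis, dtype=float)
    z = rng.standard_normal(2)            # standardized anomaly state
    track = [pos.copy()]
    for _ in range(n_steps):
        z = PHI * z + np.sqrt(1.0 - PHI**2) * rng.standard_normal(2)
        pos = pos + mean_displacement(*pos) + SIGMA * z
        track.append(pos.copy())
    return np.array(track)

print(propagate(genesis=(-40.0, 12.0))[:3])   # first few (lon, lat) positions
```

lysis is not included in this sketch ; in the full model each 6-hourly step is also terminated with a locally averaged historical lysis probability .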
in order to test the significance of hot - cold differences compared to other sources of variabilitywe have also constructed the track model from random subsets of data , as follows : we choose randomly a 19 year subset of the 56-year data , construct the track model , and generate 1000 synthetic tracks .we then choose another random but non - overlapping 19-year subset , again construct the track model , and generate 1000 new synthetic tracks .various properties ( e.g. , landfall rates ) of the two sets of synthetic tracks are differenced .the selection of model components conditioned on the random 19-year periods always matches the selection of model components for sst conditioning ; for example , when we condition tc propagation alone on sst we compare the hot - cold difference to random differences in which propagation alone is conditioned on the random 19-year sets , to be consistent .the comparison of hot - cold differences to random differences calculated in this way addresses the significance of hot - cold differences compared to other sources of variability . if the hot - cold difference of some rate stands out sufficiently from the random differences the null hypothesis , that sst has no effect on that rate , is disproved .as we shall see , in many , but not all , regions hot - cold differences are significantly larger than random differences .once hot - cold significance is established , however , a complementary analysis is needed to estimate the uncertainty on the estimate of a hot - year or cold - year rate . to this endwe successively remove one of the 19 years from the hot ( or cold ) set , for each removal conditioning the model on the 18 remaining years , generating synthetic tcs , and computing rates .the range of resulting 18-year rates provides a measure of uncertainty of the 19-year rate .here we examine the impact of sst conditioning on each model component individually . the resulting impact on landfall rates , both individually and in combination , is presented in section 6 .our model for annual tc number is simple , consisting of random resampling from the tc annual numbers in the historical years of the data - year subset in question ( e.g. , hot , cold , or random ) .the mean annual tc number over 1000 simulated hot years is 12.8 yr , while for cold years it is 8.5 yr .the hot - cold difference , 4.3 yr ( 50% ) , is highly significant . by comparison ,the mean absolute - magnitude of differences across the 20 random 19-year pairs is 0.8 yr , with an rms deviation of 0.6 yr .in fact , none of the 40 random 19-year subsets has as few tcs as the cold years and none has as many tcs as the hot years . as we shall see below this difference in tc number has the single greatest influence on landfall rates . by constructionthe effect of number on tc tracks and landfall has no geographic structure . given an annual number of tcs , where should they originate ? figs . 5a and5b show the kernel pdfs of genesis sites for the cold years and the hot years .not surprisingly , the hot years have greater probability for genesis in the mdr .ocean heat is a key ingredient to cyclogenesis and tc intensification ( emanuel , 1987 ) , and the mdr is precisely the region whose sst we have used to condition the model genesis .this highlights the fact that our separation of tc number and genesis site is to some degree artificial : much of the hot - year number increase is concentrated in the defining region of hot years . 
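the genesis - site pdfs being compared here can be sketched with a two - dimensional kernel density estimate evaluated on a grid ; the genesis sites below are invented , and scipy's default ( scott ) bandwidth stands in for the jackknife - optimized kernel bandwidth of the actual model .

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

# Invented genesis sites (lon, lat): hot years clustered in the deep tropics,
# cold years shifted toward the gulf of mexico / western atlantic.
hot  = np.column_stack([rng.normal(-45, 8, 239), rng.normal(12, 3, 239)])
cold = np.column_stack([rng.normal(-70, 10, 165), rng.normal(22, 5, 165)])

# Spatially normalized genesis-site pdfs evaluated on a 1-degree grid.
lon, lat = np.meshgrid(np.arange(-100.0, -10.0), np.arange(5.0, 50.0))
grid = np.vstack([lon.ravel(), lat.ravel()])
pdf_hot  = gaussian_kde(hot.T)(grid).reshape(lon.shape)
pdf_cold = gaussian_kde(cold.T)(grid).reshape(lon.shape)

# Hot-cold difference map of the two normalized pdfs, as in fig. 5.
diff = pdf_hot - pdf_cold
i, j = np.unravel_index(np.abs(diff).argmax(), diff.shape)
print(f"largest |hot-cold| pdf difference at lon={lon[i, j]:.0f}, lat={lat[i, j]:.0f}")
```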
however , there is additional hot - cold genesis - site structure outside the mdr .hot years also have greater probability for genesis off the mexican gulf coast and just east of the yucatan peninsula . in contrast , individual events in cold years have greater probability for genesis in the northern gulf of mexico and in the western atlantic off the southeastern u.s coast . for comparison ,5c shows the genesis pdf for all 56 data years , which is intermediate .( we emphasize that these pdfs are normalized spatially .they provide the probability of a genesis site , given a genesis occurrence .if a pdf increases in one region , it must decrease elsewhere . )5d shows shows a measure of the significance of the hot - cold difference in genesis - site pdfs .the pdfs of the 20 random 19-year pairs are differenced and the rms deviation of the differences computed at each plotted point in the basin ( ) .the hot - cold difference is plotted where ( 1 ) its magnitude is greater than one standard deviation across the random differences ( i.e. , it is significant ) , and ( 2 ) its magnitude is within 50% of the maximum hot - cold magnitude ( i.e. , it is large ) .the differences noted qualitatively comparing fig .5a to 5b are confirmed to be significant .the hot - cold differences in genesis site impact the distribution of modeled tc tracks throughout the basin and also affect the modeled tc landfall rate .landfall rates are analyzed in section 6 . here , we determine the impact of genesis site changes on tc tracks by analyzing the number of tcs crossing lines of constant longitude and latitude . for this purpose ,only genesis site pdfs are conditioned on sst , while tc number and propagation are unconditional .6 shows the zonal `` flux '' of hot- and cold - year tcs ; that is , the number of tcs crossing lines of constant latitude per year counted in 5 latitude bins .eastward and westward fluxes are shown separately , as are the eastward and westward hot - cold flux differences .plotted with the flux differences are the random - difference standard deviations as a measure of statistical significance .7 shows a comparable plot for the meridional tc flux . the greater probability of genesis in the mdr in hot years results in more tcs propagating westward in the subtropics .the hot - cold difference is significant .many tcs then curve northward , followed by eastward . a significantly greater hot - year northwardflux is seen across 20 . the eastward midlatitude tc flux is greater in hot years , but the hot - cold difference is in most places not significant compared to the random differences . by contrast , the influence of the cold - year genesis - site peak off the southeastern u.s .coast can be seen in the significantly greater cold - year northward flux off the u.s .coast through 30 and 40 .we now analyze the impact of sst on tc propagation by conditioning on sst the mean 6-hourly displacements , the variance about the mean , the autocorrelation , and the lysis .the genesis - site pdfs and the tc numbers are unconditional . first , to illustrate the sst influence we generate 1000 hot and cold synthetic tcs from the same genesis site and compute the density of track points .fig . 
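counting these crossings reduces to detecting sign changes of latitude ( or longitude ) relative to the reference line and binning the interpolated crossing point by direction ; the sketch below uses a single invented recurving track , which correctly contributes one northward and one southward crossing of 20 degrees .

```python
import numpy as np

def meridional_flux(tracks, lat_line, lon_bins, n_years):
    """Number of TCs per year crossing `lat_line`, split by direction and
    accumulated in longitude bins (the zonal flux is the mirror image)."""
    north = np.zeros(len(lon_bins) - 1)
    south = np.zeros(len(lon_bins) - 1)
    for track in tracks:                      # track: (N, 2) array of (lon, lat)
        lon, lat = track[:, 0], track[:, 1]
        for i in range(len(lat) - 1):
            if (lat[i] - lat_line) * (lat[i + 1] - lat_line) < 0:   # crossing
                # linear interpolation of the crossing longitude
                w = (lat_line - lat[i]) / (lat[i + 1] - lat[i])
                lon_x = lon[i] + w * (lon[i + 1] - lon[i])
                j = np.searchsorted(lon_bins, lon_x) - 1
                if 0 <= j < len(north):
                    if lat[i + 1] > lat[i]:
                        north[j] += 1
                    else:
                        south[j] += 1
    return north / n_years, south / n_years

# Invented example: one recurving track crossing 20N northward, then southward.
track = np.array([(-40.0, 12.0), (-55.0, 18.0), (-65.0, 25.0),
                  (-60.0, 32.0), (-50.0, 22.0), (-45.0, 15.0)])
bins = np.arange(-100, -9.0, 5.0)             # 5-degree longitude bins
print(meridional_flux([track], lat_line=20.0, lon_bins=bins, n_years=19))
```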
8a and8b shows the resulting regions of high track - point density from eastern and western genesis sites in the mdr .the hot - cold tc difference is most apparent for the western genesis site .hot - year tcs are more likely to bend northward and eastward earlier , while cold - year tcs tend to propagate further west , including into the caribbean , before heading north .the zonal and meridional fluxes are shown in figs . 9 and 10 .these figures are analogous to figs . 6 and 7 , but now for the case of sst - conditioned propagation and lysis , but unconditional tc number and genesis site .consistent with fig .8 westward cold - year flux is greater than hot - year flux through 290 , although the difference is only marginally significant .by contrast the northward hot - year flux is greater through 20 , 30 , and 40 in the western atlantic .the most significant propagation differences are seen in the eastward flux in midlatitudes , which is significantly greater in hot years than cold .the physical mechanisms relating mdr sst to tc propagation are unclear .it may be that the signal seen here is a projection of the north - atlantic oscillation ( nao ) on our sst data - year partitioning .in the nao s high phase sst in the tropical north atlantic is lower , and the subtropical surface - pressure high extends further west ( wang , 2002 ) .this would tend to cause tcs to propagate further west before curving north . during the nao s low phase the sst is higher , andis therefore associated with more tcs ( elsner et al . , 2006 ) , but the subtropical high does not extend as far west , and tcs curve north sooner .we now analyze the landfall - rate characteristics of the synthetic tc data .figs 1114 illustrate landfall rates ( counts per year per 100 km ) and rate differences as a function of distance along the segmented coastline running southward from maine , as in fig .4 . the general features of the model s landfall rate and its evaluation against historical rates are discussed in hall and jewson ( 2007a ; 2007b ) . accumulated over the full coastthe model suffers from a low landfall bias of about 12% compared to the 19502005 historical rate , a difference which is marginally significant compared to the rms deviation across many 56-year simulations .landfall rates show strong geographic variations , with several local maxima .the rates are sensitive to the orientation of coast segments , with segments perpendicular to the local mean tc path experiencing more landfalls .hence , there is a local landfall - rate minimum off the southeast u.s .atlantic coast ( between mileposts c and d ) , where the coast segments are close to parallel with the mean tc tracks , while there is a maximum near cape hatteras ( mileposts b c ) and long island ( mileposts a b ) , where south - facing coast segments jut out into oncoming tcs .one implication is that different segmented models of the coastline will result in different distributions , making it difficult to compare directly the quantitative landfall rates from one study to another .11 shows the landfall rates and the hot - cold difference for the case with only tc number conditioned on sst .tc - number conditioning on sst has no geographic structure , and the hot - cold landfall - rate ratio is roughly uniform , with the hot - year rate everywhere significantly greater than the cold - year rate .( the ratio is uniform in the limit of large number of simulation years . 
)accumulated over the entire segmented coast there are 3.2 cold landfalls per year and 4.8 hot landfalls per year , a ratio that simply reflects the historical hot - cold tc - number ratio 239/165 .the hot - cold landfall - rate difference , 1.5 yr , is highly significant . across the 20 random 19-year pairsthe rms deviation of landfall - rate differences is only 0.4 yr . fig .12 is comparable to fig .11 , but now for the case of genesis - site pdf conditioned on sst , while the other components are unconditional . accumulated over the entire coastline the hot - year landfall rate is 3.9 yr compared to the cold - year rate 3.8 yr , a difference that is well within the range of random differences , which has an rms deviation of 0.5 yr . a few regions , however , have significant hot - cold differences . on the mexican gulf coast ( milepost g ) and the eastern yucatan ( milepost j ), hot - year genesis sites result in significantly greater landfall , consistent with the hot - year genesis - site peaks just off the coasts of these regions ( fig . 5 ) .by contrast , on the northern u.s .gulf coast ( between mileposts e and f ) the cold - year landfall rate is significantly higher , consistent with the cold - year genesis - site peak in the central and northern gulf of mexico .in addition , the peak in cold - year genesis sites off the southeastern u.s .coast causes cold - year landfall in the mid - atlantic u.s .coast to be greater ( between mileposts b and c ) , a difference that is marginally significant .note that these landfall rate differences do not take tc intensity into account .cold - year genesis sites cause more cold - year landfalls on the u.s .mid - atlantic coast , but these landfalls are not far from their genesis sites , and have not had much time to intensify .by contrast , the more frequent hot - year genesis in the mdr results in lower probability per tc of making landfall there is more opportunity to veer away from coast but also allows more time for tcs to intensify .13 shows the landfall rates for sst conditioning of propagation and lysis , with the genesis - site and tc number unconditional .accumulated over the entire coastline the hot - year landfall rate is 3.7 yr and the cold - year landfall rate is 3.9 yr , a difference that is well within the random variability of random differences , which now has an rms deviation of 0.6 yr . once again , however , on some regions the hot - cold differences are significant , albeit marginally so . 
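the `` significant '' and `` marginally significant '' statements above all refer to the same bookkeeping : the hot - cold difference of a rate is compared with differences of that rate across random , non - overlapping 19-year pairs . a small sketch of that resampling logic follows ; the statistic passed in is a placeholder for whatever rate is being compared ( a landfall rate on a coast section , a flux through a latitude line , and so on ) , and the toy sst series is invented .

```python
import numpy as np

rng = np.random.default_rng(3)

def hot_cold_significance(years, sst, statistic, n_pairs=20):
    """Compare the hot-cold difference of `statistic` against differences
    between random, non-overlapping 19-year subsets of the 56 data years.

    years     : array of calendar years (e.g. 1950..2005)
    sst       : matching array of summer MDR sea-surface temperatures
    statistic : callable mapping a subset of years -> a scalar rate
    """
    order = np.argsort(sst)
    cold, hot = years[order[:19]], years[order[-19:]]
    observed = statistic(hot) - statistic(cold)

    random_diffs = []
    for _ in range(n_pairs):
        shuffled = rng.permutation(years)
        a, b = shuffled[:19], shuffled[19:38]     # non-overlapping subsets
        random_diffs.append(statistic(a) - statistic(b))
    random_diffs = np.asarray(random_diffs)

    score = observed / random_diffs.std(ddof=1)   # "sigma score" of the difference
    return observed, score

# Invented toy data: a weak upward SST trend and a rate that tracks it.
years = np.arange(1950, 2006)
sst = 27.2 + 0.01 * (years - 1950) + rng.normal(0, 0.15, years.size)
rate = lambda subset: 10.0 + 0.5 * (sst[np.isin(years, subset)].mean() - 27.2)
print(hot_cold_significance(years, sst, rate))
```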
on the u.s .northeast and mid - atlantic coast segments facing the oncoming mean tc tracks ( long island and cape hatteras ) cold - year landfall rates are greater , reflecting the fact that in cold years tcs propagate further west before veering north ( figs 810 ) .note in fig .10 that across 40 just off the u.s .coast the northward flux of tcs is greater in cold years than hot ( marginally significant ) , even though only a few degrees further east the opposite is true .by contrast on the eastern yucatan ( milepost j ) the sst effect on propagation causes hot - years to have greater landfall rate .this effect must be well localized , because there is little hot - cold difference in the westward tc flux across 280 , about 10 east of yucatan .we have not separated the sst influence on propagation and lysis , in order to keep the number of figures manageable .separate analysis ( not shown ) indicates no significant hot - cold landfall - rate differences due to lysis alone .14 shows the hot - year and cold - year landfall rates when all model components are conditioned on sst simultaneously .accumulated over the entire coastline the sst effects on genesis site and propagation are negligible , and the fully - conditioned hot - cold landfall rates are nearly identical to the tc - number only case : 3.1 yr in cold years and 4.7 yr in hot years .the difference , 1.5 yr , is significant , about 2 times the random difference rms deviation of 0.7 yr .note that compared to the tc - number conditioning , the random variation is now larger , reflecting variability in more model components .hot - cold differences in genesis site and propagation affect regional landfall rates , however , in some places amplifying the tc - number effect and in other places ameliorating it . on the eastern yucatan coastthe landfall effects of sst on tc number , genesis site and propagation all have the same sign : greater hot - year than cold - year landfall . with all components conditioned on sstthe yucatan landfall rate is about 3 times higher in hot years than cold , a difference which is highly significant .by contrast , the greater landfall rates in hot years on the northeast and mid - atlantic u.s .coast due to tc number alone is countered by the greater cold - year rates due to genesis site and propagation .the net effect is no significant hot - cold landfall - rate difference in this region .most of the sst effect via tc - number on florida and the u.s .gulf coast survives under full conditioning , although its significance is now marginal , as the random variability is larger . note that significance increases when landfall is accumulated over larger regions .table 1 lists the hot - year and cold - year rates on the 6 regions indicated in fig .3 . also listed are the scores of the differences ; that is , the hot - year minus cold - year rate difference divided by the rms deviation of the random differences . on the u.s .northeast and mid - atlantic coastlines the hot - cold differences are not significant . on florida , the u.s gulf coast and the mexican gulf coast the hot - cold differences are marginally significant ( scores of 0.9 , 1.1 , and 0.9 , respectively ) . on the yucatan peninsulathe hot - cold difference is highly significant , with .the fact that genesis - site and propagation conditioning on sst has little effect on landfall rates accumulated over large sections of the coast is reflected in the small hot - cold difference in landfall fraction ( fraction of tcs making landfall ) . 
over the u.s . portion of the coastline ( segments 1 - 23 ) the modeled landfall fractions are 0.28 in cold years and 0.29 in hot years . the actual historical record reveals a larger difference : 0.30 in cold years and 0.35 in hot years ( 50 u.s . landfalls in 165 historical cold - year tcs and 84 u.s . landfalls in 239 historical hot - year tcs ) . here we examine this apparent discrepancy . the historical hot and cold year subsets analyzed here each have 19 years . we divide the 1000-year hot and cold simulations into 52 19-year periods and compute the landfall fractions in each period . these fractions exhibit considerable variability . fig . 15a shows the pdf of these landfall fractions for hot and cold years , along with the total simulated hot and cold fractions and the historical hot and cold fractions . the historical fractions each fall within the variability of the simulated fractions . we then take many random hot - cold differences of landfall fractions and compute the pdf of the differences ( fig . ) . the historical difference ( hot exceeds cold by 0.05 ) is well within the range of random differences in the simulations . this suggests that the apparent discrepancy between the historical and simulated hot - cold landfall fraction differences is not significant . it also suggests that no significant impact of sst on u.s . landfall fraction can be gleaned from observations over the period 1950 - 2005 . we have demonstrated that for many features and on many regions the landfall rates in hot years are significantly greater than those in cold . we now convert the hot - year rates to landfall probabilities with confidence limits and compare to the same probabilities in all years . landfall can be modeled as a poisson process ( bove et al . , 1998 ) whose rate is uncertain , due to the finite historical record . ( we have verified that the distributions of annual landfall counts in the model are poisson . ) incorporating this uncertainty into the landfall probability results in a negative binomial distribution ( elsner and bossak , 2001 ; hall and jewson , 2007b ) . given landfalls in years , the probability of landfalls in a subsequent year is given by expression ( [ e : prob ] ) , which reduces to a poisson distribution for large and . in the case of the synthetic tcs , and on 100 km coastline sections varies from 1 to 144 ( using all years ) , depending on region . fig . 16a shows the probability of having at least one landfall in a year in 100 km sections along the segmented coastline for the hot years and for all years , based on these distributional assumptions . fig . 16b shows the comparable distribution for the probability of at least two landfalls . plotted for the hot years are the 90% confidence limits of the probability determined from a distribution of probabilities . the distribution is determined , in turn , by successively dropping out one of the 19 hot years , conditioning the model on the remaining 18 hot years , generating synthetic tcs , and computing landfall rates and probabilities . for example , on segment 19 ( 295 km in length facing southeast of the louisiana coast between mileposts e and f in fig . 3 ) there is a probability of 0.085 - 0.117 of at least one landfall in hot years in 100 km sections , versus a probability of 0.065 in all years , an increase of 31% to 80% . for two or more landfalls there is a probability of 0.0038 - 0.0072 for hot years versus 0.0022 for all years , an increase of 73% to 327% .
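expression ( [ e : prob ] ) itself is not reproduced above ; a common closed form for this poisson - rate - uncertainty calculation , assuming a flat prior on the rate ( in the spirit of elsner and bossak , 2001 ) , is the negative binomial sketched below . the exact expression used in the paper may differ in its choice of prior , and the example counts are invented .

```python
from math import lgamma, exp, log

def landfall_prob(k, n, m):
    """P(k landfalls next year | n landfalls observed in m years), assuming a
    Poisson process with a flat prior on its rate: a negative binomial with
    size n+1 and probability m/(m+1).  For large n and m this tends to a
    Poisson distribution with rate n/m."""
    return exp(lgamma(n + k + 1) - lgamma(k + 1) - lgamma(n + 1)
               + (n + 1) * log(m / (m + 1.0)) - k * log(m + 1.0))

def prob_at_least(j, n, m):
    """P(at least j landfalls next year)."""
    return 1.0 - sum(landfall_prob(k, n, m) for k in range(j))

# Invented example: 6 landfalls on a coast section in 19 hot years versus
# 12 landfalls in all 56 years.
print(prob_at_least(1, n=6, m=19), prob_at_least(1, n=12, m=56))
print(prob_at_least(2, n=6, m=19), prob_at_least(2, n=12, m=56))
```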
by contrast , on segment 8 ( 91 km in length facing south from cape hatteras between mileposts b and c ) the probability range for at least one landfall in 100 km is 0.1100.152 , which encompasses the probability for all years of 0.134 .the probability here for at least two landfalls in hot years is 0.00640.0122 , which again encompasses the probability , 0.0094 , for at least two landfalls in all years .listed in table 2 are the hot - year and all - year landfall probabilities on 6 larger regions of coast : the u.s .northeast , the u.s .mid - atlantic , the florida peninsula , the u.s .gulf coast ( excepting florida ) , the mexican gulf coast ( excepting yucatan ) , and the yucatan peninsula .mileposts delineating these regions are indicated in fig .3 . compared to fig .16 , here the probabilities are accumulated over the entire region , rather than in 100 km sections . on the u.s .northeast coast there is apparently a slight reduction in landfall probability in hot years compared to all years .however , as indicated in table 1 , the hot - cold difference is not significant , and we conclude that there is no change in landfall probability in hot years . on the u.s .mid - atlantic coast the hot - year and all - year probabilities are not different , and the hot - cold difference is within the range of random differences . on florida , the u.s .gulf coast , and the mexican gulf coast the hot - cold differences are marginally significant ( ) , while on the yucatan coast the hot - cold difference is highly significant ( ) . on florida , the u.s .gulf , and the yucatan peninsula , the hot - year landfall probabilities are higher than the probabilities for all years , with at least 95% confidence ( i.e. , the lower 90% confidence limit of hot years is greater than the probability for all years ) .d the increase in landfall probability in hot years is greatest on the yucatan peninsula ; e.g. , there is a 90% to 250% increase in probability of two or more tcs making landfall in hot years compared to all years . on the u.s .gulf coast the comparable increase is 14% to 36% .the hot - year increase on the u.s .gulf coast for making three or more landfalls is 20% to 70% .we have performed additional limited analysis with alternative spatial and temporal averaging of sst to test the robustness of our conclusions .increasing the latitude range from 10 to as much as 0 , the longitude range from 290 to as much as 270 , and altering the time period from jul sep to aug oct in various combinations never changes the 19 hottest and 19 coldest years among 19502005 by more than 3 years .thus , our tc model conditioning will not be strongly affected . for the case 10 , 270 , and aug oct ( change in 3 of the 19 hottest and coldest years ) we have fully conditioned the model and simulated 1000 hot- and cold - year tc tracks .the genesis - site pdfs are altered quantitatively , though not qualitatively , and the hot - cold landfall - rate difference over the entire coastline is reduced ( 4.5 yr in hot years , 3.5 yr in cold years ). 
however , the basic features described above are unchanged .there is no significant hot - cold difference for the u.s .ne and mid - atlantic coasts , while the remaining regions exhibit significant hot - cold differences .we have used a statistical tropical cyclone ( tc ) track model to elucidate the relationship between tropical atlantic sst and tc landfall rates on north america .the advantage of a track model over direct analysis of historical landfall rates is the reduction in sampling error from using the much larger quantity of data over the entire ocean basin .this reduced sampling error allows meaningful examination of detailed geographic structure in landfall rate and its dependence on sst .we have constructed the model components ( annual tc number , genesis site , and propagation ) individually and together from the 19 lowest and 19 highest sst years in 19502005 , generated 1000s of synthetic tc tracks , and analyzed their landfall characteristics .we have constructed the model on randomly selected pairs of 19-year periods and compared the range of landfall differences among these pairs to the hot - cold differences as a test of hot - cold significance .in addition , we have constructed the model on sub - samples of the 19 hot years in order to estimate confidence limits on the hot - year rates .tc number has a large and well known dependency on sst : there are 45% more tcs in the hot years than cold , a difference which is highly significant compared to random sampling .alone , this sst dependency of tc number translates to an identical fractional landfall - rate increase , uniform along the coast .possibly modifying this tc - number effect is variation with sst of the geographic distribution of genesis and the propagation of tcs .it turns out , however , that accumulated over the entire coastline there is little additional effect on landfall of genesis - site and propagation variations . when all components of the model are conditioned on being in either the cold or hot sst years we find a landfall rate of 4.7 yr in hot years and 3.1 yr in cold years , rates whose ratio is not significantly different than the historical hot - cold tc - number ratio 239/165 . over the period 19502005 tcs in the 19 hot years have a higher u.s .landfall fraction ( 0.35 ) than tcs in the 19 cold years ( 0.30 ) , but we have argued that this difference is not significant .there are , however , significant differences in the geographic distribution of landfall in hot and cold years .tc number alone causes a uniform increase in landfall rate in hot years .changes in genesis site and propagation amplify this increase in some regions , most notably the eastern yucatan peninsula , whose landfall rate is 3 times higher in hot years than cold .in contrast , genesis - site and propagation dependence on sst ameliorate the increase in landfall on the u.s . 
northeast and mid - atlantic coast , with a net result of no significant hot - cold difference in landfall rate .landfall rates on the florida and northern u.s .gulf coasts are higher in hot years than cold , although the difference is less significant than would be concluded from tc number alone .we have also computed the hot - year increase in landfall probabilities compared to all years with confidence limits determined from subsampling the 19 hot data years .landfall probabilities on florida , the northern u.s .gulf coast , and the yucatan peninsula are higher in hot years with at least 95% confidence .we have verified that these qualitative features of regional hot - cold differences are not sensitive to the sst averaging .it is important to note that no intensity model is included here , nor is any intensity used in the analysis .our estimated landfall rates refer to any named tc .overall , landfall rates are , of course , lower if attention is restricted to the most intense tcs .in addition , regional hot - cold differences might be quite different .for example , cold - year genesis sites cause greater cold - year landfall rates on u.s .mid - atlantic coast . butthese landfalls are not far from their genesis sites , and so have not had much time to intensify . by contrast , more frequent genesis in the tropical eastern atlantic in hot years may result in a lower chance per tc of making landfall there is more opportunity to veer away from coast but the tcs that do make landfall have had more time to intensify .increases in sst in regions of tropical cyclogenesis over recent decades is due primarily to anthropogenic greenhouse forcing ( santer et al ., 2006 ; elsner , 2006 ) .these increases are likely to persist and to intensify because ( 1 ) the ocean has large thermal intertia , so that ( 2 ) the ocean is still responding to greenhouse forcing of the last few decades ( hansen et al . , 2005 ) , and ( 3 ) greenhouse gas forcing will almost certainly continue to grow over the next few decades .it must be acknowledged that the mechanisms for cyclogenesis and tc intensification are poorly understood , and factors in addition to sst play important roles .tc frequency in other ocean basins has not increased as it has in the atlantic . moreover ,the association we see between sst and tc propagation is not understood , and may not be directly causal ; i.e. , there may be an unidentified mechanism responsible for variability in both sst and propagation. however , our results show a significant increase of north american tc landfall with tropical sst in rough proportion to the increase in tc number .the implication is clear for higher tc landfall risk with future sst increases .finally , we note that the vast majority of the upward trend in financial loss due to tc landfall is explained by increased exposure ; that is , increased coastal population and development ( pielke and landsea , 1998 ) .it is sometimes argued that this fact trivializes any climate - change impact on tc frequency and intensity .the argument is wrong , in our view . regardless of past damages, coastal regions will continue to be populated and developed . 
to plan development , to plan emergency procedures , and to set insurance rates , developers , governments and insurers need to know the risk of tc damage and its potential for future change . this is the question that is addressed by this study and other studies seeking to understand the connection between tcs and climate variation , including secular anthropogenic change . nasa is acknowledged for support of this research . we also thank noaa s atlantic oceanographic and meteorological laboratory for maintenance of the hurdat data base . bove , m. c. , j. b. elsner , c. w. landsea , and j. j. obrien , 1998 , effect of el nino on u.s . landfalling hurricanes , revisited , _ bull ._ , * 79 * , 2477 - 2482 . jarvinen , b. r. , c. j. neumann , and m. a. s. davis , 1984 : a tropical cyclone data tape for the north atlantic basin , 1886 - 1983 , contents , limitations , and uses , _ noaa tech_ , nws nhc 22 , miami , florida . rayner , n. a. , et al . , 2003 , global analysis of sea - surface temperature , sea ice , and night marine air temperature since the late nineteenth century , _ j. geophys_ , * 108 * , doi:10.1029/2002jd002670 .
[ figure : evolution of summer sst averaged over the tropical north atlantic region ( 290 to 345 , 10 to 20 ) from the hadley centre data ( rayner et al . , 2003 ) , with horizontal dashed lines marking the hot and cold thresholds ( 27.38 and 27.07 ) used for model conditioning and the averaging region shown in the accompanying map . ]
[ figure : significant hot - cold differences of the genesis - site pdfs ; red marks regions where hot exceeds cold by at least 50% of the maximum difference and by more than one rms deviation across the random differences , blue marks regions satisfying the same thresholds for cold exceeding hot . ]
[ figure : zonal tc fluxes across lines of constant longitude , accumulated in 5 - degree latitude bins , for cold years ( blue ) and hot years ( red ) , showing the eastward and westward components , their hot - cold differences , and the mean rms deviation of the random flux differences ( orange fill ) ; only the genesis - site pdfs are conditioned on sst . ]
[ figure : as above , but for meridional tc fluxes across lines of constant latitude , accumulated in 5 - degree longitude bins . ]
[ figure : landfall rates along the segmented coast and their hot - cold difference , with the rms deviation of random differences shown as orange fill ; here only the annual tc number is conditioned on sst , the other track model components ( genesis site , propagation , and lysis ) are unconditional , mileposts ( a - j ) are labeled , and vertical shaded regions correspond to the regions defined in fig . 3 . ]
[ table : landfall rates ( counts per year ) on 6 regions and the full coast for hot years and cold years , and the score of their difference , given by the hot - cold rate difference divided by the rms deviation of random differences . ]
| we employ a statistical model of north atlantic tropical cyclone ( tc ) tracks to investigate the relationship between sea - surface temperature ( sst ) and north american tc landfall rates . the track model is conditioned on summer sst in the tropical north atlantic being in either the 19 hottest or the 19 coldest years in the period 1950 - 2005 . for each conditioning many synthetic tcs are generated and landfall rates computed . compared to direct analysis of historical landfall , the track model reduces the sampling error by projecting information from the entire basin onto the coast . there are 46% more tcs in hot years than cold in the model , which is highly significant compared to random sampling and corroborates well documented trends in north atlantic tc number in recent decades . in the absence of other effects , this difference results in a significant increase in model landfall rates in hot years , uniform along the coast . hot - cold differences in the geographic distribution of genesis and in tc propagation do not significantly alter the overall landfall - rate difference in the model , and the net landfall rate is 4.7 yr in hot years and 3.1 yr in cold years . sst influence on genesis site and propagation does modify the geographic distribution of landfall , however . the yucatan suffers 3 times greater landfall rate in hot years than cold , while the u.s . mid - atlantic coast exhibits no significant change . landfall probabilities increase in hot years compared to all years in florida , the u.s . gulf coast , the mexican gulf coast , and yucatan with at least 95% confidence .
we first introduce a three - strategy version of the spatial inspection game , where in addition to criminals and punishers , also ordinary people compete for space on a square lattice with periodic boundary conditions . we use the latter as the simplest network to account for the fact that the interaction range among individuals in human societies is limited . the payoff matrix ( table not reproduced here ) contains the same three main parameters as the three - strategy payoff matrix , with the key difference being that punishers and are willing to bear only and of the full punishment cost , respectively . naturally , they also receive a proportionally smaller reward . punishers correspond to punishers in the three - strategy model in terms of their commitment to sanctioning criminals , but we introduce a different notation for convenience . both the uniform three - strategy and the heterogeneous five - strategy spatial inspection game are studied by means of monte carlo simulations , as described in the methods section . we begin by presenting the complete phase diagram at a representative value of the punishment cost in fig . [ phase ] . it can be observed that criminals dominate if the reward for their punishment is small . if the reward exceeds a certain value at a fixed temptation / loss , then the punishers become viable . at moderate values , however , their presence is also accompanied by the emergence of ordinary players . the stability of the phase is due to cyclic dominance between the three competing strategies . in particular , within the region ordinary people outperform the punishers , the punishers defeat the criminals , while the criminals beat ordinary people , thus closing the loop of dominance . conversely , for larger values of , in particular if , the pure phase becomes the two - strategy phase via a second - order continuous phase transition as increases . moreover , at sufficiently large values of the reward , the three - strategy phase and the two - strategy phase are separated by a second - order continuous phase transition . for a more quantitative view , we present in fig . [ cross ] characteristic cross - sections of the phase diagram shown in fig . [ phase ] . these cross - sections confirm that criminals can dominate in the high temptation / loss region or in the low reward region . moreover , it can be observed that larger rewards are beneficial for the punishers , but only up to a certain point . if increases beyond a critical point , ordinary people emerge , and as second - order free - riders they flourish at the expense of those that punish criminal behavior . we emphasize that , interestingly , the payoffs of ordinary people are independent of , yet still their fraction increases as increases . this counterintuitive result is due to cyclic dominance , where feeding the prey , in this case the punishers who do get larger payoffs for larger values , directly benefits the predator , which in this case are the ordinary people . we can thus conclude that the real obstacle in the fight against criminal behavior is the possibility of ordinary people to free - ride on the efforts of punishers . a similar conclusion has been reached before for the evolution of cooperation in the public goods game with punishers , where the free - riding problem of defectors is simply deferred to the second - order free - riding problem of cooperators .
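the methods section referred to above is not reproduced here ; the sketch below therefore assumes the setup that is standard for this class of models - random sequential strategy imitation on an l x l lattice with the fermi rule - and an illustrative stand - in for the inspection - game payoffs ( temptation / loss g , reward r , inspection cost c ) . the parameter names and payoff entries are assumptions rather than the paper's actual matrix .

```python
import numpy as np

rng = np.random.default_rng(4)

L = 60                        # linear lattice size (periodic boundaries)
K = 0.5                       # uncertainty of the Fermi imitation rule
g, r, c = 0.4, 0.6, 0.3       # temptation/loss, reward, inspection cost (illustrative)

O, C, P = 0, 1, 2             # ordinary people, criminals, punishers
# Illustrative pairwise payoffs A[focal, neighbor]; the fine on a detected
# criminal is taken equal to the punisher's reward here.  Not the paper's matrix.
A = np.array([[0.0,    -g,     0.0  ],   # O meets O, C, P
              [g,       0.0,   g - r],   # C meets O, C, P
              [-c,      r - c, -c   ]])  # P meets O, C, P

strategies = rng.integers(0, 3, size=(L, L))
neigh = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # von Neumann neighborhood

def payoff(s, x, y):
    """Accumulated payoff of the player at (x, y) against its four neighbors."""
    return sum(A[s[x, y], s[(x + dx) % L, (y + dy) % L]] for dx, dy in neigh)

def mc_step(s):
    """One Monte Carlo step: L*L random sequential imitation attempts."""
    for _ in range(L * L):
        x, y = rng.integers(L, size=2)
        dx, dy = neigh[rng.integers(4)]
        nx, ny = (x + dx) % L, (y + dy) % L
        if s[x, y] == s[nx, ny]:
            continue
        # Fermi rule: adopt the neighbor's strategy with this probability.
        w = 1.0 / (1.0 + np.exp((payoff(s, x, y) - payoff(s, nx, ny)) / K))
        if rng.random() < w:
            s[x, y] = s[nx, ny]

for step in range(50):        # a real run would use far more steps and a larger L
    mc_step(strategies)
fractions = np.bincount(strategies.ravel(), minlength=3) / L**2
print(dict(zip("OCP", fractions)))
```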
as a natural response of punishers to the harmful exploitation of ordinary people, we next consider the five - strategy spatial inspection game with heterogeneous punishment .in particular , strategies and try to eschew the exploitation by reducing the amount they contribute for sanctioning to and of the full cost , respectively .however , their reward is proportionally smaller as well ( see the extended payoff matrix in section 2 for details ) . due to the large number of competing strategies and the resulting multitude of possible subsystem solutions we focus on the most important parameter region where ordinary players survive in the uniform , three - strategy , model .accordingly , we explore a representative cross section when the reward is high enough for punishing strategies to survive , and we explore how the system responds to the diversity of punishment .results presented in the left panel of fig .[ opext ] confirm the effectiveness of resorting to heterogeneous punishment in that second - order free - riders are able to survive only in a significantly narrower interval of the temptation / loss if compared to the uniform punishment model .furthermore , results presented in the right panel of fig . [ opext ] also give credence to the expectation that the reduced viability of ordinary people will promote the evolution of punishers . more precisely , we find that the uniform punishment strategy is significantly less effective than heterogeneous punishment for almost the entire range of the temptation / loos , except for a narrow interval in the region . as we will show in fig .[ cext ] , this fact has important consequences for the mitigation of criminal behavior in the population .another peculiarity that can be observed in the right panel of fig .[ opext ] is the zig - zag outlay of the aggregate fraction of all punishers in the five - strategy model . yetthis can be understood thoroughly simply by looking at the fraction of punishers , and individually .the mentioned panel reveals clearly that low values of are able to sustain only those punishers who are willing to invest the lowest cost towards sanctioning criminals .the rank of the most viable punishers subsequently increases from over to as we increase , and the solution of the five - strategy model thus eventually becomes identical to the the solution of the three - strategy model .remarkably , we can observe six consecutive phase transitions [ as we increase a single parameter , .it is worth pointing out that the reported increment of the punisher rank with increasing the temptation / loss resonates with the outcome of a recent human experiment , where , in the realm of a social dilemma , it was shown that if cooperation is likely one should punish mildly .we continue with the results presented in fig .[ cext ] , where we compare the effectiveness of uniform and heterogeneous punishment to deter criminal behavior . to a degreeunexpected , it can be observed that the possibility to resort to different levels of punishment does not necessarily work better than uniform punishment in reducing crime . 
on the contrary ,the fraction of players is generally higher over a large interval of values when the heterogeneous punishment model is used .more precisely , the fraction of criminals is lower only in the low temptation / loss region where punishers can adjust to this favorable condition .this observation is related to the failure of heterogeneous punishment to eliminate second - order free - riding more effectively than uniform punishment , and it indicates that sophisticatedly adjusted punishers may win a battle against ordinary people , but loose the main war against the actual enemy , the criminals .while punishers can lower the amount they invest towards sanctioning criminals , such a reduced effort also yields smaller rewards .interestingly , the positive side of lower costs can be utilized only if the heterogeneity of punishers is maintained .the said effect becomes visible if we mark the borders of different phases on the curve of criminals , as shown in the right panel of fig .as it is illustrated , the fraction of criminals can be a decaying function even if we increase the temptation / loss , but only as long as different types of punishers exist and compete against the criminals .as soon as evolution favors a single punisher type , an effective response to an increase of the value of becomes absent .lastly , we note that the conclusions attained with the results presented in figs . [ opext ] and [ cext ] remain generally valid also for all high temptation values . to obtain a better understanding of the origin of the zig - zag outlay of criminals depicted in fig .[ cext ] , we monitor the time evolution of the distribution of strategies in the population for three different combinations of payoff parameters , as shown in fig .[ snapshots ] .we emphasize that the main mechanism responsible for the formation of different stationary states is due to the different motion of interfaces that separate the possible solutions of the system .accordingly , we follow the evolution of interfaces starting from a prepared initial state , but for clarity only two types of punishers are present because this minimal model is sufficient to capture the essence of the emerging effect .the extrapolation to the full five - strategy model , however , is straightforward . for comparison, we use an identical prepared initial state , as shown in the leftmost panel , for three representative values of . as in previous figures ,red color depicts players while light and dark blue depict the and punishers , respectively . before discussing each specific case , we note that , individually , always beats due to the lower cost of inspection. 
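the exact geometry of the prepared initial state is not specified above ; assuming , purely for illustration , vertical stripes of criminals and the two punisher types , the quantities monitored in fig . [ snapshots ] - the area occupied by each strategy and the amount of interface separating the domains - can be extracted from every monte carlo snapshot as follows ( the integer labels are placeholders , not the paper's notation ) .

```python
import numpy as np

def domain_statistics(s, n_strategies=5):
    """Strategy fractions and interface density of one lattice snapshot `s`
    (an L x L integer array of strategy labels, periodic boundaries)."""
    fractions = np.bincount(s.ravel(), minlength=n_strategies) / s.size
    # Interface density: fraction of nearest-neighbor bonds joining unlike strategies.
    unlike = (s != np.roll(s, 1, axis=0)).sum() + (s != np.roll(s, 1, axis=1)).sum()
    return fractions, unlike / (2 * s.size)

# Prepared initial state with vertical stripes  C | P1 | P2  (assumed geometry;
# labels 1, 3, 4 stand in for criminals and the two punisher ranks).
L = 120
snapshot = np.full((L, L), 1)
snapshot[:, L // 3: 2 * L // 3] = 3
snapshot[:, 2 * L // 3:] = 4
fractions, interfaces = domain_statistics(snapshot)
print(fractions, interfaces)   # applied to each saved Monte Carlo snapshot in turn
```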
when the temptation / loss is low , as shown in panels ( a)-(d ) , can beat very efficiently , while is unable to do the same but simply coexists with the criminals .the superiority of over , however , will result in a shrinking area of the domain , as shown in panel ( b ) .ultimately , this fact leads to the extinction of strategy , despite the fact that it is more successful in deterring criminals than strategy .as soon as die out , as shown in panel ( c ) , criminals can exploit the milder punishment from strategy and spread towards the stationary state , as shown in panel ( d ) .a seemingly surprising and counterintuitive result is that criminals , who can coexist with players but are defeated by players , are able to survive while their `` predators '' ( ) go extinct .but in fact , the evolution depicted in the panels ( a)-(d ) simply illustrates the actual consequence of second - order free - riding .namely , players exploit the more altruistic players by contributing less to sanctioning criminals . in the absence of players , however , the common enemy ( ) can spread relatively free and reach a significantly high level ( ) .interestingly , when players are less successful in deterring players , the outcome is completely the opposite , as shown in panels ( e)-(h ) of fig .[ snapshots ] . since the temptation / loss , are able to coexist with .the coexistence of and strategies is also still possible , and at the same time continue to invade the pure phase [ the invasion ends in panel ( f ) ] .however , become ineffective against the alliance .indeed , this two - strategy alliance is so powerful that it beats the other alliance completely .the competition between the two alliances starts in panel ( g ) , and it terminates with the total victory of the alliance in panel ( h ) .the conclusion is similar as in the preceding case .namely , when the evolution selects only one type of punishers , then criminals have a reasonable chance to survive .note that the fraction of criminals in the stationary state is again relatively high , , despite of substantial punishment .the most favorable outcome can be obtained at an intermediate temptation / loss value , as shown in panels ( i)-(l ) of fig .[ snapshots ] .the value is still high enough to maintain the coexistence of the alliance , but it lessens its evolutionary advantage in that the alliance is able to survive .the stationary state thus contains three strategies , whereby a relatively small portion of the population , , is occupied by criminals .we thus conclude that , in the long - run , if different punisher strategies survive in the stationary state , heterogeneous punishment may be utilized successfully to mitigate crime better than uniform punishment .note that is a decreasing function of in the three - strategy phase in fig .[ cext ] , while it always increasing when homogeneous punishment is applied ( in , , or in the phases ) .this is because heterogeneous punishment enables the validation of the most effective approach against crime : sometimes moderate efforts , yielding milder fines , serve the interest of whole population better than severe punishment . 
even more importantly , the simultaneous presence of different types of punishers enables a synergy among them in that one strategy ( in our case ) can lower the payoff of criminals significantly while the other strategy ( ) can still enjoy a more competitive payoff due to a smaller cost .this multi - point effect is conceptually similar to when the duty of punishment is shared stochastically among cooperative players . of course , as we have already emphasized , these conclusions remain valid and can be extrapolated to a larger number of different punisher strategies .we have studied the effectiveness of punishment in abating criminal behavior in the spatial inspection game with three and five competing strategies , entailing criminals , ordinary people and punishers . in the five - strategy game , we have introduced three different types of punishers , depending on the amount they are willing to contribute towards sanctioning criminals .we have shown that cyclic dominance plays an important role in that it maintains the survivability of seemingly subordinate strategies through indirect support . for example , increasing the reward for punishing criminals might promote second - order free - riding of ordinary people , despite the fact that it should support the punishers .this is due to cyclic dominance , where directly promoting the prey , in this case the punishers , benefits the predator , which in this case are the ordinary people .moreover , we have shown that the actual obstacle in the fight against criminal behavior is the possibility for ordinary people to free - ride on the efforts of punishers , which is also the main culprit behind the establishment of cyclic dominance .in general , sanctioning criminal behavior is thus a double - edged sword .the obvious benefit is that the evolution of crime is contained and is unable to dominate in the population .the pitfall is that , in conjunction with ordinary people , punishment creates conditions that support cyclic dominance , which prevents the complete abolishment of crime even if the sanctions are severe and effective .in addition to these observations , we have shown that the possibility of heterogeneous punishment yields a highly ambiguous measure against criminal behavior . at specific parameter values it can happen that milder punishers play the role of second - order free riders , which ultimately prevents crime from being eliminated completely [ see panels ( a)-(d ) in fig .[ snapshots ] ] .evidently , the reverse process is also possible in structured populations where the more altruistic punishers can separate from second - order free riders and win the indirect territorial battle .but in the realm of the studied inspection game , we have also observed that the diversity of punishers can yield a more favorable social outcome even as the temptation to commit crime is growing .
in the latter case , the simultaneous presence of different punishers provides an advantageous coexistence : some punishers ensure a higher fine to criminal players while other punishers can benefit from a lower cost due to a less intensive engagement .importantly , neither of these two options is effective in its own right , but together they improve the effectiveness of combating crime .notably , the emergence of cyclic dominance due to strategic complexity has been reported before , for example in public goods games with volunteering , peer punishment , pool punishment and reward , but also in pairwise social dilemmas with coevolution .other counterintuitive phenomena that are due to cyclic dominance include the survival of the weakest , the emergence of labyrinthine clustering , and the segregation along interfaces that have internal structure , to name but a few examples .cyclical interactions are thus in many ways the culmination of evolutionary complexity , and we here show that they likely play a prominent role in deterring crime as well . however , while the beneficial role of cyclic dominance for maintaining biodiversity is undeniable , one has to concur that it is a rather unsatisfactory outcome in terms of fighting criminal behavior . that is the sort of diversity in behavior that human societies could happily do without , yet it seems that this is precisely the trap the current system has fallen into .indeed , data from the federal bureau of investigation ( see fig . 2 in ref . ) indicate that crime , regardless of type and severity , is remarkably recurrent .although positive and negative trends may be inferred , crime events between 1960 and 2010 fluctuate across time and space , and there is no evidence to support that crime rates are permanently decreasing .the search for more effective crime mitigation strategies is thus in order , in particular for those where the permanent elimination of crime is not an a priori impossibility .for both the 3-strategy and the 5-strategy spatial inspection game the monte carlo simulation procedure is the same . initially all competing strategies are distributed uniformly at random on the square lattice .we note , however , that the reported final stationary states are largely independent of the initial fractions of strategies .subsequently , in agreement with the random sequential update protocol , a randomly selected player $x$ acquires its payoff $p_x$ by playing the game pairwise with all its four neighbors .next , player $x$ randomly chooses one neighbor $y$ , who then also acquires its payoff $p_y$ in the same way as previously player $x$ .once both players acquire their payoffs , player $x$ adopts the strategy of player $y$ with a probability determined by the fermi function $w = 1 / \{ 1 + \exp [ ( p_x - p_y ) / k ] \}$ , where $k$ quantifies the uncertainty related to the strategy adoption process .
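to make the update protocol above concrete , the following minimal sketch implements the random sequential monte carlo dynamics with the fermi rule on a periodic square lattice . it is an illustration only : the lattice size , the value of the noise parameter , the number of strategies and the placeholder payoff matrix are assumptions made here for brevity , and the actual inspection - game payoffs are those defined in section 2 .

```python
# illustrative sketch of the random sequential update with the fermi rule ; the
# payoff matrix below is a random placeholder , not the inspection - game payoffs .
import numpy as np

rng = np.random.default_rng(0)

L = 100                    # linear lattice size ( illustrative value )
K = 0.5                    # uncertainty of strategy adoption ( illustrative value )
N_STRATEGIES = 3           # e.g. criminals , ordinary people , punishers
payoff = rng.random((N_STRATEGIES, N_STRATEGIES))   # placeholder payoff matrix

lattice = rng.integers(N_STRATEGIES, size=(L, L))   # random initial state

def neighbors(i, j):
    # von neumann neighborhood with periodic boundaries
    return [((i - 1) % L, j), ((i + 1) % L, j), (i, (j - 1) % L), (i, (j + 1) % L)]

def site_payoff(i, j):
    # payoff accumulated by playing the game pairwise with all four neighbors
    s = lattice[i, j]
    return sum(payoff[s, lattice[ni, nj]] for ni, nj in neighbors(i, j))

def monte_carlo_step():
    # one full mcs : on average every player gets one chance to update its strategy
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        ni, nj = neighbors(i, j)[rng.integers(4)]    # randomly chosen neighbor
        p_x, p_y = site_payoff(i, j), site_payoff(ni, nj)
        # fermi rule : adopt the neighbor's strategy with probability
        # 1 / ( 1 + exp ( ( p_x - p_y ) / K ) )
        if rng.random() < 1.0 / (1.0 + np.exp((p_x - p_y) / K)):
            lattice[i, j] = lattice[ni, nj]

for _ in range(50):        # short relaxation , far shorter than used in the paper
    monte_carlo_step()

print(np.bincount(lattice.ravel(), minlength=N_STRATEGIES) / (L * L))
```

swapping in the different punisher types of the five - strategy model only changes the payoff function , not the update rule sketched here .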
in agreement with previous works , the selected value ensures that strategies of better - performing players are readily adopted by their neighbors , although adopting the strategy of a player that performs worse is also possible .this accounts for imperfect information and errors in the evaluation of the opponent .each full monte carlo step ( mcs ) consists of elementary steps as described above , which are repeated consecutively , thus giving a chance to every player to change its strategy once on average .we typically use lattices with players , although close to the phase transition points up to players had to be used in this case to avoid accidental extinctions , and thus to arrive at results that are valid in the large - size limit .the fractions of competing strategies are determined in the stationary state after a sufficiently long relaxation time lasting up to mcs .in general , the stationary state is reached when the average of the strategy fractions becomes time - independent .moreover , to account for the differences in initial conditions and to further improve accuracy , the final results are averaged over up to independent runs for each set of parameter values .this research was supported by the slovenian research agency ( grant p5 - 0027 ) , the hungarian national research fund ( grant k-101490 ) , and by the deanship of scientific research , king abdulaziz university ( grant 76 - 130 - 35-hici ) . | as a simple model for criminal behavior , the traditional two - strategy inspection game yields counterintuitive results that fail to describe empirical data . the latter shows that crime is often recurrent , and that crime rates do not respond linearly to mitigation attempts . a more apt model entails ordinary people who neither commit nor sanction crime as the third strategy besides the criminals and punishers . since ordinary people free - ride on the sanctioning efforts of punishers , they may introduce cyclic dominance that enables the coexistence of all three competing strategies . in this setup ordinary individuals become the biggest impediment to crime abatement . we therefore also consider heterogeneous punisher strategies , which seek to reduce their investment into fighting crime in order to attain a more competitive payoff . we show that this diversity of punishment leads to an explosion of complexity in the system , where the benefits and pitfalls of criminal behavior are revealed in the most unexpected ways . due to the rise and fall of different alliances no less than six consecutive phase transitions occur in dependence solely on the temptation to succumb to criminal behavior , leading the population from ordinary people - dominated across punisher - dominated to crime - dominated phases , yet always failing to abolish crime completely . wilson and kelling introduced the `` broken windows theory '' , explaining how seemingly unimportant and harmless signals of urban disorder may over time elicit antisocial behavior and serious crime . the central premise of the theory is simple yet powerful , and it is reminiscent of preferential attachment or the matthew effect with a negative connotation . just like the more connected nodes attract more new links during network growth , so does an unattended broken window invite passersby to behave mischievously or even in a disorderly manner . similarly , graffiti might point to an unkempt environment , signaling that more egregious damage will likely be tolerated as well .
one broken window is thus likely to become many broken windows , and the inception of urban decay and criminal behavior is in place . the simplicity of this widely adopted criminological theory invites mathematicians and physicists to adopt a complex systems approach to study criminal behavior , in particular since the collective behavior of the system in this case can hardly be inferred from the relatively simple individual actions . emergent phenomena such as pattern formation including percolation and phase transitions are commonly associated with complex social and biological systems , and in this realm the mitigation of crime is certainly no exception . recent research highlights that crime is far from being uniformly distributed across space and time , and this is confirmed also by the dynamic nucleation and dissipation of crime hotspots and the emergence of complex criminal networks . the emergence of crime can also be treated as a social dilemma , in as far that social order is the common good that is threatened by criminal activity , with competition arising between criminals and those trying to prevent crime . an adversarial evolutionary game with four competing strategies has recently been proposed , where paladins are model citizens that do not commit crimes and collaborate with authorities , while villains , at the other extreme of the spectrum , commit crimes and do not report them . intermediate figures are informants who report on other offenders while still committing crimes , and apathetics who neither commit crimes nor report to authorities . apathetics are similar to second - order free - riders in the context of the public goods game with punishment , in that they cooperate at first order by not committing crimes , but defect at second order by not punishing offenders . simulations have revealed that in the realm of the adversarial game informants are key to the emergence of a crime - free society , and this has subsequently been confirmed also with human experiments . in general , the mitigation of crime can be framed as an evolutionary game with punishment , although recent research has raised doubts on the use of sanctions as a means to promote prosocial behavior . rewards for not doing and reporting crime are a viable alternative , and in this case the `` stick versus carrot '' dilemma becomes an important consideration . in the context of rehabilitating criminals , the question is also how much punishment for the crime and how much reward for eschewing wrongdoing in the future is in order for optimal results , as well as whether these efforts should be the responsibility of individuals or institutions under the assumption of a limited budget . it is at this intersection of statistical physics of complex system and evolutionary games that we aim to contribute in the present paper by considering a three - strategy spatial inspection game with uniform punishment as well as a five - strategy spatial inspection game with heterogeneous punishment . the inspection game is a recognized model in the sociological literature for the dynamics of crime . the game addresses the question of why anybody would be willing to invest into costly punishment of criminals , given that individuals are tempted to benefit from the punishing activities of others without actively contributing to them . as soon as ordinary people are introduced who neither commit crimes nor contribute to their mitigation , one is thus faced with the second - order free - rider problem . 
as we will show in what follows , this may introduce cyclic dominance that enables the coexistence of all three competing strategies in the uniform punishment model . more importantly , the consideration of heterogeneous punisher strategies drastically elevates the complexity of possible solutions , revealing on the one hand a more effective solution to the second - order free - rider problem , yet still failing to abolish crime completely . as a consequence , the diversity of punishment allows the formation of different alliances between competing strategies , which gives rise to a sophisticated range of solutions in dependence on the payoffs . in the next section we first present the details of the considered 3-strategy and 5-strategy spatial inspection game , and then demonstrate how systematic monte carlo simulations reveal the benefits and pitfalls of punishing criminal behavior . simulation details are described in the methods section . we conclude by discussing the presented results and their wider implications . |
the field of visual media has been witnessing explosive growth in recent years , driven by significant advances in technology that have been made by camera and mobile device manufacturers , and by the synergistic development of very large photo - centric social networking websites , which allow consumers to efficiently capture , store , and share high - resolution images with their friends or the community at large .the vast majority of these digital pictures are taken by casual , inexpert users , where the capture process is affected by delicate variables such as lighting , exposure , aperture , noise sensitivity , and lens limitations , each of which could perturb an image 's perceived visual quality .though cameras typically allow users to control the parameters of image acquisition to a certain extent , the unsure eyes and hands of most amateur photographers frequently lead to occurrences of annoying image artifacts during capture .this leads to large numbers of images of unsatisfactory perceptual quality being captured and stored along with more desirable ones .being able to automatically identify and cull low quality images , or to prevent their occurrence by suitable quality correction processes during capture , are thus highly desirable goals that could be enabled by automatic quality prediction tools .thus , the development of objective image quality models , from which accurate predictions of the quality of digital pictures as perceived by human observers can be derived , has greatly accelerated .advances in practical methods that can efficiently predict the perceptual quality of images have the potential to significantly impact protocols for monitoring and controlling multimedia services on wired and wireless networks and devices .these methods also have the potential to improve the quality of visual signals by acquiring or transporting them via `` quality - aware '' processes .such `` quality - aware '' processes could perceptually optimize the capture process and modify transmission rates to ensure good quality across wired or wireless networks .such strategies could help ensure that end users have a satisfactory quality of experience ( qoe ) .the goal of an objective no - reference image quality assessment ( nr iqa ) model is as follows : given an image ( possibly distorted ) and no other additional information , automatically and accurately predict its perceptual quality . given that the ultimate receivers of these images are humans , the only reliable way to understand and predict the effect of distortions on a typical person 's viewing experience is to capture opinions from a large sample of human subjects , which is termed _ subjective image quality assessment _ . while these subjective scores are vital for understanding human perception of image quality , they are also crucial for designing and evaluating reliable iqa models that are consistent with subjective human evaluations , regardless of the type and severity of the distortions .
the most efficient nr iqa algorithms to date are founded on the statistical properties of natural images .natural scene statistics ( nss ) models are based on the well - established observation that good quality real - world photographic images obey certain _ perceptually _ relevant statistical laws that are violated by the presence of common image distortions .some state - of - the - art nr iqa models that are based on nss models attempt to quantify the degree of ` naturalness ' or ` unnaturalness ' of images by exploiting these statistical perturbations .this is also true of competitive reduced - reference iqa models .such statistical ` naturalness ' metrics serve as image features which are typically deployed in a supervised learning paradigm , where a kernel function is learned to map the features to ground truth subjective quality scores .a good summary of such models and their quality prediction performance can be found in .* authentic distortions : * current blind iqa models use legacy benchmark databases such as the live image quality database and the tid2008 database to train low - level statistical image quality cues against recorded subjective quality judgements .these databases , however , have been designed to contain images corrupted by only one of a few synthetically introduced distortions , e.g. , images containing only jpeg compression artifacts , images corrupted by simulated camera sensor noise , or by simulated blur .though the existing legacy image quality databases have played an important role in advancing the field of image quality prediction , we contend that determining image quality databases such that the distorted images are _ derived _ from a set of high - quality source images and by simulating image impairments on them is much too limiting .in particular , traditional databases fail to account for difficult mixtures of distortions that are inherently introduced during image acquisition and subsequent processing and transmission .for instance , consider the images shown in fig .[ sampleimgs](a ) - fig .[ sampleimgs](d ) .figure [ sampleimgs](d ) was captured using a mobile device and can be observed to be distorted by both low - light noise and compression errors .figure 1(b ) and ( c ) are from the legacy live iqa database where jpeg compression and gaussian blur distortions were synthetically introduced on a pristine image ( fig .[ sampleimgs](a ) ) .although such singly distorted images ( and datasets ) facilitate the study of the effects of distortion - specific parameters on human perception , they omit important and frequently occurring mixtures of distortions that occur in images captured using mobile devices .this limitation is especially problematic for _ blind _ iqa models which have great potential to be employed in large - scale user - centric visual media applications .designing , training , and evaluating iqa models based only on the statistical perturbations observed on these restrictive and non - representative datasets might result in quality prediction models that inadvertently assume that every image has a `` single '' distortion that most objective viewers could agree upon . although top - performing algorithms perform exceedingly well on these legacy databases ( e.g. 
, the median spearman correlations reported on the legacy live iqa database by brisque and by tang _ et al . _ ) , their performance is questionable when tested on naturally distorted images that are normally captured using mobile devices under highly variable illumination conditions .indeed , we will show in sec .[ sec : expt ] that the performance of several top - performing algorithms degrades when tested on images corrupted by diverse authentic and mixed , multipartite distortions such as those contained in the new live in the wild image quality challenge database . [ figure [ sampleimgs ] : a pristine image , two synthetically distorted versions of it from the legacy live iqa database , and an authentically distorted image from the new live in the wild image quality challenge database . ] the scores gathered for the tid2008 database are referred to as mean opinion scores .this method of gathering opinion scores , which diverges from accepted practice , is in our view questionable .conversely , the live iqa database was created following an itu recommended single - stimulus methodology .both the reference images as well as their distorted versions were evaluated by each subject during each session .thus , _ quality difference scores _ , which address user biases , were derived for all the distorted images and for all the subjects .although the live test methodology and subject rejection method adhere to the itu recommendations , the test sessions were designed to present a subject with a set of images all afflicted by the same type of distortion ( for instance , all the images in a given session consisted of different degrees of jpeg 2000 distortion ) that was artificially added to different reference images .we suspect that this could have led to over - learning of each distortion type by the subjects as the study session progressed .since cameras on mobile devices make it extremely easy to snap images spontaneously under varied conditions , the complex mixtures of image distortions that occur are not well - represented by the distorted image content in either of these legacy image databases .this greatly motivated us to acquire real images suffering the natural gamut of authentic distortion mixtures as the basis for a large new database and human study .such a resource could prove quite valuable for the design of next - generation robust iqa prediction models that will be used to ensure that future end users enjoy a high quality of viewing experience .* online subjective studies * [ sec : onlinestudies ] most subjective image quality studies have been conducted in laboratory settings with stringent controls on the experimental environment and involving small , non - representative subject samples ( typically graduate and undergraduate university students ) .for instance , the creators of the live iqa database used two 21-inch crt monitors with display resolutions of pixels in a normally lit room , which the subjects viewed from a viewing distance of screen heights .
however , the highly variable ambient conditions and the wide array of display devices on which a user might potentially view images will have a considerable influence on her perception of picture quality .this greatly motivates our interest in conducting iqa studies on the internet , which can enable us to access a much larger and more diverse subject pool while allowing for more flexible study conditions .however , the lack of control on the subjective study environment introduces several challenges ( more in sec .[ crowdsourcechallenges ] ) , some of which can be handled by employing counter measures ( such as gathering details of the subject s display monitor , room illumination , and so on ) .a few studies have recently been reported that used web - based image , video , or audio rating platforms . some of these studies employed pairwise comparisons followed by ranking techniques to derive quality scores , while others adopted the single stimulus technique and an absolute category rating ( acr ) scale .since performing a complete set of paired comparisons ( and ranking ) is time - consuming and monetarily expensive when applied on a large scale , xu _ et al . _ introduced the hodgerank on random graphs ( hrrg ) test , where random sampling methods based on erds - rnyi random graphs were used to sample pairs and the hodgerank was used to recover the underlying quality scores from the incomplete and imbalanced set of paired comparisons .more recently , an active sampling method was proposed that actively constructs a set of queries consisting of single and pair - wise tests based on the expected information gain provided by each test with a goal to reduce the number of tests required to achieve a target accuracy .however , all of these studies were conducted on small sets of images taken from publicly available databases of synthetically distorted images , mostly to study the reliability and quality of the opinion scores obtained via the internet testing methodology . in most cases ,the subjective data from these online studies is publicly unavailable . to the best of our knowledge , we are aware of only one other project reporting efforts made in the same spirit as our work , that is , _ crowdsourcing _ the image subjective study on mechanical turk by following a single - stimulus methodology . however , the authors of tested their crowdsourcing system on only jpeg compressed images from the legacy live image quality database of synthetically distorted images and gathered opinion scores from only forty subjects .by contrast , the new live in the wild image quality challenge database has challenging images and engaged more than unique subjects . also , we wanted our web - based online study to be similar to the subjective studies conducted under laboratory settings with instructions , training , and test phases ( more details in sec .[ sec : inst ] ) .we also wanted unique participants to view and rate the images on a continuous rating scale ( as opposed to using the acr scale ) .thus we chose to design our own crowdsourcing framework incorporating all of the above design choices , as none of the existing successful crowdsourcing frameworks seemed to offer us the flexibility and control that we desired .in practice , every image captured by a typical mobile digital camera device passes several processing stages , each of which can introduce visual artifacts . 
authentically distorted images captured using modern cameras are likely to be impaired by sundry and mixed artifacts such as low - light noise and blur , motion - induced blur , over and underexposure , compression errors , and so on .the lack of content diversity and mixtures of bonafide distortions in existing , widely - used image quality databases is a continuing barrier to the development of better iqa models and prediction algorithms of the perception of real - world image distortions . to overcome these limitations and towards creating a holistic resource for designing the next generation of robust , perceptually - aware image assessment models , we designed and created the live in the wild image quality challenge database , containing images afflicted by diverse authentic distortion mixtures on a variety of commercial devices .figure [ fig : challengeimgs ] presents a few images from this database .the images in the database were captured using a wide variety of mobile device cameras as shown in fig .[ fig : devicedist ] .the images include pictures of faces , people , animals , close - up shots , wide - angle shots , nature scenes , man - made objects , images with distinct foreground / background configurations , and images without any specific object of interest .some images contain high luminance and/or color activity , while some are mostly smooth .since these images are naturally distorted as opposed to being artificially distorted post - acquisition pristine reference images , they often contain mixtures of multiple distortions creating an even broader spectrum of perceivable impairments .since the images in our database contain mixtures of unknown distortions , in addition to gathering perceptual quality opinion scores on them ( as discussed in detail in sec .[ sec : onlinecrowdsource ] ) , we also wanted to understand to what extent the subjects could supply a sense of distortion type against a few categories of common impairments .thus we also conducted a crowdsourcing study wherein the subjects were asked to select the single option from among a list of distortion categories that they think represented the _ most dominant distortion in each presented image_. the categories available to choose from were - `` blurry , '' `` grainy , '' `` overexposed , '' `` underexposed , '' `` no apparent distortion .its a great image , '' and `` i do nt understand the question . ''we adopted a majority voting policy to aggregate the distortion category labels obtained on every image from several subjects .a few images along with the category labels gathered on them are shown in fig .[ fig : categoryclashes ] . images presented in the left column of fig .[ fig : categoryclashes ] were sampled from an image pool where a majority of the subjects were in full agreement with regard to their opinion of the specific distortion present in those images .the images presented in the second column are from a pool of images that received an approximately equal number of votes for two different classes of distortions .that is , about 50% of the subjects who viewed these images perceived one kind of dominant distortion while the remaining subjects perceived a completely different distortion to be the most dominating one .the confusion of choosing a dominant distortion was more difficult for some images , a few of which are presented in the last column . 
here ,nearly a third of the total subjects who were presented with these images labeled them as belonging to a distortion category different from the two other dominant labels obtained from the other subjects .figure [ fig : categoryclashes ] highlights the risk of forcing a consensus on image distortion categories through majority voting on our dataset .multiple objective viewers appeared to have different sensitivities to different types of distortions which , in combination with several other factors such as display device , viewer distance from the screen , and image content , invariably affect his / her interpretation of the underlying image distortion .this non - negligible disagreement among human annotators sheds light on the extent of distortion variability and the difficulty of the data contained in the current database .we hope to build on these insights to develop a holistic identifier of mixtures of authentic distortions in the near future . fornow , we take this as more direct evidence of the overall complexity of the problem . + + * no well - defined distortion categories in real - world pictures : * the above study highlights an important characteristic of real - world , authentically distorted images captured by nave users of consumer camera devices - that these pictures can not be accurately described as generally suffering from _ single _ distortions. normally , inexpert camera users will acquire pictures under highly varied illumination conditions , with unsteady hands , and with unpredictable behavior on the part of the photographic subjects .further , the overall distortion of an image also depends on other factors such as device and lens configurations .furthermore , authentic mixtures of distortions are even more difficult to model when they interact , creating new agglomerated distortions not resembling any of the constituent distortions .indeed real - world images sufer from a many - dimensional continuum of distortion perturbations . for this reason ,it is not meaningful to attempt to segregate the images in the live in the wild image quality challenge database into discrete distortion categories .crowdsourcing systems like amazon mechanical turk ( amt ) , crowd flower , and so on , have emerged as effective , human - powered platforms that make it feasible to gather a large number of opinions from a diverse , distributed populace over the web . on these platforms , `` requesters '' broadcast their task to a selected pool of registered `` workers '' in the form of an open call for data collection .workers who select the task are motivated primarily by the monetary compensation offered by the requesters and also by the enjoyment they experience through participation . despite the advantages offered by crowdsourcing frameworks ,there are a number of well - studied limitations of the same .for example , requesters have limited control over the study setup and on factors such as the illumination of the room and the display devices being used by the workers .since these factors could be relevant to the subjective evaluation of perceived image quality , we gathered information on these factors in a compulsory survey session presented towards the end of the study ( more details in sec .[ sec : inst ] ) . 
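to make the aggregation of the distortion - category votes discussed above concrete , the short sketch below applies the majority - voting rule to per - image category votes and flags images on which the voters are split , echoing the three columns of fig . [ fig : categoryclashes ] . the vote counts and the tie margin used here are invented for illustration only and do not come from the actual study .

```python
# majority voting over distortion - category labels with a simple ambiguity flag ;
# the example vote profiles below are hypothetical stand - ins .
from collections import Counter

def aggregate_votes(votes, tie_margin=0.1):
    # returns ( majority label , its share of the votes , near - tie flag )
    counts = Counter(votes)
    ranked = counts.most_common()
    top_label, top_n = ranked[0]
    runner_up_n = ranked[1][1] if len(ranked) > 1 else 0
    total = sum(counts.values())
    ambiguous = (top_n - runner_up_n) / total < tie_margin
    return top_label, top_n / total, ambiguous

# hypothetical vote profiles : clear consensus , two - way split , three - way confusion
profiles = {
    "image 1": ["blurry"] * 18 + ["grainy"] * 2,
    "image 2": ["blurry"] * 10 + ["underexposed"] * 9 + ["grainy"],
    "image 3": ["blurry"] * 7 + ["grainy"] * 7 + ["underexposed"] * 6,
}

for name, votes in profiles.items():
    label, share, ambiguous = aggregate_votes(votes)
    print(f"{name}: majority = {label} ({share:.0%}), ambiguous = {ambiguous}")
```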
the basic study structure andprocedures of subjective testing in a crowdsourcing framework differ from those of traditional subjective studies conducted in a laboratory .subjective tests conducted in a lab environment typically last for many minutes with a goal of gathering ratings on every image in the dataset and are usually conducted in multiple sessions to avoid subject fatigue .for instance , the study reported in was conducted in two sessions where each session lasted for 30 minutes .however , crowdsourced tasks should be small enough that they can be completed by workers quickly and with ease .it has been observed that it is difficult to find workers to participate in large and more time consuming tasks , since many workers prefer high rewards per hour .thus , an online test needs to be partitioned into smaller chunks .further , although requesters can control the maximum number of tasks each worker can participate in , they can not control the exact number of times a worker selects a task .thus , it is very likely that all the images in the dataset will not be viewed and rated by every participating worker . despite these limitations imposed on any crowdsourcing framework , our online subjective study , which we describe in great detail belowhas enabled us to gather a large number of highly reliable opinion scores on all the images in our dataset .image aesthetics are closely tied to perceived quality and crowdsourcing platforms have been used in the past to study the aesthetic appeal of images .here we have focused on gathering subjective quality scores using highly diverse aesthetic content .we also informed users how to focus on quality and not aesthetics . in future studies , it will be of value to gather associated side information from each subject regarding the content and aesthetics of each presented image ( or video ) .the data collection tasks on amt are packaged as hits ( human intelligence tasks ) by requesters and are presented to workers , who first visit an instructions page which explains the details of the task . if the worker understands and likes the task , she needs to click the `` accept hit '' button which then directs her to the actual task page at the end of which , she clicks a `` submit results '' button for the requester to capture the data .crowdsourcing has been extensively and successfully used on several object identification tasks to gather segmented objects and their labels .however , the task of labeling objects is often more clearly defined and fairly straightforward to perform , by contrast with the more subtle , challenging , and highly subjective task of gathering opinion scores on the perceived quality of images .the generally naive level of experience of the workers with respect to understanding the concept of image quality and their geographical diversity made it important that detailed instructions be provided to assist them in understanding how to undertake the task without biasing their perceptual scores .thus , every unique participating subject on amt that selects our hit was first provided with detailed instructions to help them assimilate the task .a screenshot of this web page is shown in fig .[ fig : instructions - b ] . 
specifically , after defining the objective of the study , a few sample images were presented which are broadly representative of the kinds of distortions contained in the database , to help draw the attention of the workers to the study and help them understand the task at hand .a screenshot of the rating interface was also given on the instructions page , to better inform the workers of the task and to help them decide if they would like to proceed with it .[ fig : instructions - b ] * ensuring unique participants : * after reading the instructions , if a worker accepted the task , and did so for the first time , a rating interface was displayed that contains a slider by which opinion scores could be interactively provided .a screenshot of this interface is also shown in fig .[ fig : instructions - a ] . in the event that this worker had already picked our task earlier , we informed the worker that we are in need of unique participants and this worker was not allowed to proceed beyond the instructions page .only workers with a confidence value greater than 0.75 were allowed to participate .even with such stringent subject criteria , we gathered more than 350,000 ratings overall .* study framework : * we adopted a single stimulus continuous procedure to obtain quality ratings on images where subjects reported their quality judgments by dragging the slider located below the image on the rating interface . this continuous rating bar is divided into five equal portions , which are labeled `` bad , '' `` poor , '' `` fair , '' `` good , '' and `` excellent . '' after the subject moved the slider to rate an image and pressed the _ next image _button , the position of the slider was converted to an integer quality score in the range , then the next image was presented .before the actual study began , each participant is first presented with images that were selected by us as being reasonably representative of the approximate range of image qualities and distortion types that might be encountered .we call this the * training phase*. next , in the * testing phase * , the subject is presented with images in a random order where the randomization is different for each subject .this is followed by a quick survey session which involves the subject answering a few questions .thus , each hit involves rating a total of 50 images and the subject receives a remuneration of cents for the task .figure [ fig : hit ] illustrates the detailed design of our hit on iqa and fig .[ fig : turkflow ] illustrates how we package the task of rating images as a hit and effectively disperse it online via amt to gather thousands of human opinion scores .crowdsourcing has empowered us to efficiently collect large amounts of ratings .however , it raises interesting issues such as dealing with noisy ratings and addressing the reliability of the amt workers . to gather high quality ratings ,only those workers on amt with a confidence value greater than 75% were allowed to select our task .also , in order to not bias the ratings due to a single worker picking our hit multiple times , we imposed a restriction that each worker could select our task no more than once .5 of each group of 43 test images were randomly presented twice to each subject in the testing phase . 
if the difference between the two ratings that a subject provided to the same image each time it was presented exceeded a threshold on at least 3 of the 5 images , then that subject was rejected .this served to eliminate workers that were providing unreliable , `` random '' scores .prior to the full - fledged study , we conducted an initial subjective study and obtained ratings from 300 unique workers .we then computed the average standard deviation of these ratings on all the images . rounding this value tothe closest integer yielded 20 which we then used as our threshold for subject rejection .5 of the remaining 38 test images were drawn from the live multiply distorted image quality database to supply a control .these images along with their corresponding mos from that database were treated as a _ gold standard_. the mean of the spearman s rank ordered correlation values computed between the mos obtained from the workers on the gold standard images and the corresponding ground truth mos values from the database was found to be * 0.9851*. the mean of the absolute difference between the mos values obtained from our crowdsourced study and the ground truth mos values of the gold standard images was found to be * 4.65*. furthermore , we conducted a paired - sampled t - test and observed that this difference between gold standard and crowdsourced mos values is not statistically significant .this high degree of agreement between the scores gathered in a traditional laboratory setting and those gathered via an uncontrolled online platform with several noise parameters is critical to us .although the uncontrolled test settings of an online subjective study could be perceived as a challenge to the authenticity of the obtained opinion scores , this high correlation value indicates a high degree of reliability of the scores that are being collected by us using amt , reaffirming the efficacy of our approach of gathering opinion scores and the high quality of the obtained subject data .in addition to measuring correlations against the gold standard image data as discussed above , we further analyzed the subjective scores in the following two ways : to evaluate subject consistency , we split the ratings obtained on an image into two disjoint equal sets , and computed two mos values on every image , one from each set . when repeated over 25 random splits , an average spearman s rank ordered correlation between the mean opinion scores between the two sets was found to be * 0.9896*. evaluating intra - subject reliability is a way to understand the degree of consistency of the ratings provided by individual subjects .we thus measured the spearman s rank ordered correlation ( srocc ) between the individual opinion scores and the mos values of the gold standard images .a median srocc of * 0.8721 * was obtained over all of the subjects .all of these additional experiments further highlight the high degree of reliability and consistency of the gathered subjective scores and of our test framework .the database currently comprises of more than ratings obtained from more than unique subjects ( after rejecting unreliable subjects ) .enforcing the aforementioned rejection strategies led us to reject 134 participants who had accepted our hit .each image was viewed and rated by an average of unique subjects , while the minimum and maximum number of ratings obtained per image were and , respectively . 
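the consistency checks described above are straightforward to implement ; the snippet below sketches ( i ) the repeated - image rejection rule with the threshold of 20 on at least 3 of the 5 repeated images , and ( ii ) the split - half inter - subject consistency of the mos values . the rating matrix used here is a randomly generated stand - in for the real crowdsourced data , so the printed numbers are only illustrative .

```python
# sketch of the subject - rejection rule and the split - half consistency check ;
# the ratings below are synthetic stand - ins for the crowdsourced data .
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

THRESHOLD = 20        # rating - difference threshold derived from the pilot study
MAX_VIOLATIONS = 3    # reject if at least 3 of the 5 repeated images disagree

def reject_worker(first_pass, second_pass):
    # first_pass / second_pass : the two ratings given to the 5 repeated images
    diffs = np.abs(np.asarray(first_pass) - np.asarray(second_pass))
    return int(np.sum(diffs > THRESHOLD)) >= MAX_VIOLATIONS

def split_half_srocc(ratings):
    # ratings : ( n_subjects , n_images ) ; srocc between mos of two disjoint halves
    perm = rng.permutation(ratings.shape[0])
    half_a, half_b = np.array_split(perm, 2)
    rho, _ = spearmanr(ratings[half_a].mean(axis=0), ratings[half_b].mean(axis=0))
    return rho

# synthetic stand - in : 200 subjects rating 100 images on a [ 0 , 100 ] scale
true_quality = rng.uniform(0, 100, size=100)            # per - image quality
subject_bias = rng.normal(0, 5, size=(200, 1))          # per - subject offset
ratings = np.clip(true_quality + subject_bias + rng.normal(0, 10, size=(200, 100)), 0, 100)

print(reject_worker([80, 55, 70, 20, 90], [30, 50, 72, 60, 45]))     # rejected : True
print(np.mean([split_half_srocc(ratings) for _ in range(25)]))       # close to 1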
while computing these statistics , we excluded the 7 images used in the training phase and the 5 gold standard images , as they were viewed and rated by all of the participating subjects . workers took a median duration of minutes to view and rate all 50 images presented to them .the mean opinion scores ( mos ) after subject rejection were computed for each image by averaging the individual opinion scores from multiple workers .mos is representative of the _ perceived viewing experience _ of each image .the mos values range between and . [ figure : distributions of the participating workers ' gender , age , viewing distance from the screen , and display device . ] to understand to what extent gender had an effect on our quality scores , we separately analyzed the ratings obtained from male and female workers on five randomly chosen images ( figures [ fig : infimages](a)-(e ) ) while maintaining all the other factors constant .specifically , we separately captured the opinion scores of male and female subjects who are between years old , and reported in our survey to be using a desktop and sitting about inches from the screen . under this setting and on the chosen set of images , both male and female workers appeared to have rated the images in a similar manner .this is illustrated in figure [ fig : influences](a ) .next , we considered both male and female workers who reported using a laptop during the study and were sitting about inches away from their display screen .we grouped their individual ratings on these 5 images ( fig .[ fig : infimages ] ) according to their age and computed the mos of each group and plotted them in fig .[ fig : influences](b ) . for the images under consideration , again , subjects belonging to different _ age categories _ appeared to have rated them in a similar manner . [ figure [ fig : influences ] : ratings of the five chosen images grouped by gender , age , viewing distance , and display device . ] of the total of images , pictures were captured at night and suffer from severe low - light distortions ( fig .[ fig : nightimgs ] ) .it should be noted that none of the legacy benchmark databases have images captured under such low illumination conditions , and it follows that the nss - based features used in other models were created by training on natural images captured under normal lighting conditions . here , we probe the predictive capabilities of top - performing blind iqa models when such low - light images are included in the training and testing data .we therefore included the night - time pictures into the data pool and trained friquee and the other blind iqa models .the results are given in table [ dbn - night ] .
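as a rough illustration of the train / test protocol used for the learning - based models in this and the following experiments , the sketch below runs repeated random 80 / 20 splits over ( feature , mos ) pairs and reports the median correlations across splits . the feature matrix and mos values are synthetic stand - ins , and a support vector regressor is used here merely as a placeholder learner , not the actual friquee / dbn model .

```python
# generic 80 / 20 evaluation sketch with synthetic features and mos ; the svr is a
# stand - in learner for illustration only .
import numpy as np
from scipy.stats import spearmanr, pearsonr
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(2)

n_images, n_features = 1162, 50                       # 1,162 images , as in the database
features = rng.normal(size=(n_images, n_features))    # stand - in quality - aware features
mos = np.clip(60 + 8 * features[:, :5].sum(axis=1) + rng.normal(0, 5, n_images), 0, 100)

sroccs, plccs = [], []
for split in range(10):                               # several random splits
    x_tr, x_te, y_tr, y_te = train_test_split(features, mos, test_size=0.2,
                                              random_state=split)
    model = make_pipeline(StandardScaler(), SVR(C=10.0, gamma="scale"))
    model.fit(x_tr, y_tr)
    pred = model.predict(x_te)
    sroccs.append(spearmanr(pred, y_te)[0])
    plccs.append(pearsonr(pred, y_te)[0])

print(f"median srocc = {np.median(sroccs):.3f} , median plcc = {np.median(plccs):.3f}")
```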
despite such challenging image content , friquee still performed well in comparison with the other state - of - the - art models .this further supports the idea that a generalizable blind iqa model should be trained over mixtures of complex distortions , and under different lighting conditions .
live challenge & 0.82 & 0.78
live challenge + live multiply & 0.80 & 0.79
live challenge + live multiply + live iqa & 0.85 & 0.83
[ tbl : combine - perf ]
since nr iqa algorithms are generally trained and tested on various splits of a single dataset ( as described above ) , it is natural to wonder if the trained set of parameters is database specific . in order to demonstrate that the training process is simply a calibration , and that once such training is performed , an ideal blind iqa model should be able to assess the quality of any distorted image ( from the set of distortions it is trained for ) , we evaluated the performance of the multi - model friquee algorithm on combinations of different image databases - the live iqa database and the live multiply distorted iqa database , as well as the new live in the wild image quality challenge database .the same 80 - 20 training setup was followed , i.e. , after combining images from the different databases , 80% of the randomly chosen images were used to train our dbn model and the trained model was then tested on the remaining 20% of the image data .we present the results in table [ tbl : combine - perf ] .it is clear from table [ tbl : combine - perf ] that the performance of friquee is _ not _ database dependent and that once trained , it is capable of accurately assessing the quality of images across the distortions ( both single and multiple , of different severities ) that it is trained for .the results clearly show friquee 's potential to tackle the imminent deluge of visual data and the unavoidable distortions they are bound to contain .with more than subjective judgments overall , we believe that the study described here is the largest , most comprehensive study of perceptual image quality ever conducted . of course , digital videos ( moving pictures ) are also being captured with increasing frequency by both professional and casual users . in the increasingly mobile environment , these spatial - temporal signals will be subject to an even larger variety of distortions arising from a multiplicity of natural and artificial processes .predicting , monitoring , and controlling the perceptual effects of these distortions will require the development of powerful blind video quality assessment models , such as , and new vqa databases representative of human opinions of modern , realistic videos captured by current mobile video camera devices and exhibiting contemporary distortions .current legacy vqa databases , such as , are useful tools but are limited in regard to content diversity , numbers of subjects , and distortion realism and variability . therefore , we plan to conduct large - scale crowdsourced _ video _ quality studies in the future , mirroring the effort described here , and building on our expertise in conducting the current study .we acknowledge prof . sanghoon lee , dr . anish mittal , dr . rajiv soundararajan , numerous unnamed photographers from ut austin and yonsei university , among others , for helping to collect the original images in the live in the wild image quality challenge database .this work was supported in part by the national science foundation under grant iis-1116656 .m. saad , a. c. bovik , and c.
charrier , `` blind image quality assessment : a natural scene statistics approach in the dct domain , '' _ ieee trans . image process .21 , no . 8 , pp . 3339 - 3352 , aug .2012 .y. zhang , a. k. moorthy , d. m. chandler , and a. c. bovik , `` c - diivine : no - reference image quality assessment based on local magnitude and phase statistics of natural scenes , '' _ sig .image commun _ , vol .29 , issue 7 , pp .725 - 747 , august . 2014 .sheikh , m.f .sabir , and a.c .bovik , `` a statistical evaluation of recent full reference image quality assessment algorithms , '' _ ieee trans . image process .3440 - 3451 , nov 2006 .n. ponomarenko , v. lukin , a. zelensky , k. egiazarian , m. carli , and f. battisti , `` tid2008-a database for evaluation of full - reference visual quality assessment metrics , '' _ adv of modern radio electron._,vol .4 , pp . 30 - 45 , 2009 .n. ponomarenko , o. ieremeiev , v. lukin , k. egiazarian , l. jin , j. astola , b. vozel , k. chehdi , m. carli , f. battisti , and c .- c .jay kuo , `` color image database tid2013 : peculiarities and preliminary results , '' _ proc . of 4th european workshop on visual info ._ , pp . 106 - 111 , june 2013 .t. hofield , p. korshunov , f. mazza , i. povoa , and c. keimel , `` crowdsourcing - based multimedia subjective evaluations : a case study on image recognizability and aesthetic appeal , '' _ int .workshop on crowdsourcing for multimedia _ , pp .29 - 34 , oct .2013 t. hofield , c. keimel , m. hirth , b. gardlo , j. habigt , k. diepold , and p. tran - gia , `` best practices for qoe crowdtesting : qoe assessment with crowdsourcing , '' _ ieee trans .multimedia _ , 16(2 ) , 541 - 558 , 2014 .j. m. foley and g. m. boynton,``a new model of human luminance pattern vision mechanisms : analysis of the effects of pattern orientation , spatial phase , and temporal frequency , '' _ in spie proceedings _ , vol . 2054 , 1993 .z. wang , a. c. bovik , h. r. sheikh , and e. p. simoncelli , `` image quality assessment : from error measurement to structural similarity , '' _ ieee trans . on image proc ._ , vol . 13 , no .600 - 612 , april 2004 .k. seshadrinathan , r. soundararajan , a.c .bovik , and l.k .cormack , `` study of subjective and objective quality assessment of video , '' _ ieee trans .image process ., _ vol . 19 , no . 6 , pp . 1427 - 1441 , june 2010moorthy , k. seshadrinathan , r. soundararajan , and a.c .bovik , `` wireless video quality assessment : a study of subjective scores and objective algorithms , '' _ ieee trans .video technol .587 - 599 , april 2010 .deepti ghadiyaram received the b.tech .degree in computer science from international institute of information technology ( iiit ) , hyderabad in 2009 , and the m.s .degree from the university of texas at austin in 2013 .she joined the laboratory for image and video engineering ( live ) in january 2013 , and is currently pursuing her ph.d .she is the recipient of the microelectronics and computer development ( mcd ) fellowship , 2013 - 2014 .her research interests include image and video processing , computer vision , and machine learning , and their applications to the aspects of information retrieval such as search and storage .alan c. 
bovik holds the cockrell family endowed regents chair in engineering at the university of texas at austin , where he is director of the laboratory for image and video engineering ( live ) in the department of electrical and computer engineering and the institute for neuroscience .his research interests include image and video processing , digital television and digital cinema , computational vision , and visual perception .he has published over 750 technical articles in these areas and holds several u.s .his publications have been cited more than 45,000 times in the literature , his current h - index is about 75 , and he is listed as a highly - cited researcher by thompson reuters .his several books include the companion volumes _ the essential guides to image and video processing _( academic press , 2009 ) .bovik received a primetime emmy award for outstanding achievement in engineering development from the the television academy in october 2015 , for his work on the development of video quality prediction models which have become standard tools in broadcast and post - production houses throughout the television industry .he has also received a number of major awards from the ieee signal processing society , including : the society award ( 2013 ) ; the technical achievement award ( 2005 ) ; the best paper award ( 2009 ) ; the signal processing magazine best paper award ( 2013 ) ; the education award ( 2007 ) ; the meritorious service award ( 1998 ) and ( co - author ) the young author best paper award ( 2013 ) .he also was named recipient of the honorary member award of the society for imaging science and technology for 2013 , received the spie technology achievement award for 2012 , and was the is&t / spie imaging scientist of the year for 2011 .he is also a recipient of the joe j. king professional engineering achievement award ( 2015 ) and the hocott award for distinguished engineering research ( 2008 ) , both from the cockrell school of engineering at the university of texas at austin , the distinguished alumni award from the university of illinois at champaign - urbana ( 2008 ) .he is a fellow of the ieee , the optical society of america ( osa ) , and the society of photo - optical and instrumentation engineers ( spie ) , and a member of the television academy , the national academy of television arts and sciences ( natas ) and the royal society of photography .professor bovik also co - founded and was the longest - serving editor - in - chief of the _ ieee transactions on image processing _ ( 1996 - 2002 ) , and created and served as the first general chair of the _ ieee international conference on image processing _ , held in austin , texas , in november , 1994 .his many other professional society activities include : board of governors , ieee signal processing society , 1996 - 1998 ; editorial board , _ the proceedings of the ieee _ , 1998 - 2004 ; and series editor for image , video , and multimedia processing , morgan and claypool publishing company , 2003-present .his was also the general chair of the 2014 texas wireless symposium , held in austin in november of 2014 . | most publicly available image quality databases have been created under highly controlled conditions by introducing graded simulated distortions onto high - quality photographs . 
however , images captured using typical real - world mobile camera devices are usually afflicted by complex mixtures of multiple distortions , which are not necessarily well - modeled by the synthetic distortions found in existing databases . the originators of existing legacy databases usually conducted human psychometric studies to obtain statistically meaningful sets of human opinion scores on images in a stringently controlled visual environment , resulting in small data collections relative to other kinds of image analysis databases . towards overcoming these limitations , we designed and created a new database that we call the _ live in the wild image quality challenge database _ , which contains widely diverse authentic image distortions on a large number of images captured using a representative variety of modern mobile devices . we also designed and implemented a new online crowdsourcing system , which we have used to conduct a very large - scale , multi - month image quality assessment subjective study . our database consists of over 350,000 opinion scores on 1,162 images evaluated by over 8100 unique human observers . despite the lack of control over the experimental environments of the numerous study participants , we demonstrate excellent internal consistency of the subjective dataset . we also evaluate several top - performing blind iqa algorithms on it and present insights on how mixtures of distortions challenge both end users as well as automatic perceptual quality prediction models . the new database is available for public use at https://www.cs.utexas.edu/~deepti/challengedb.zip . ghadiyaram : characterizing the perception quality of real distorted images using natural scene statistics and deep belief nets . perceptual image quality , subjective image quality assessment , crowdsourcing , authentic distortions . |
in this letter , we consider the numerical integration of the partial differential equation ( pde ) in the form where subscripts or denote the partial differentiation with respect to or , and is the variational derivative of .we assume the periodic boundary condition , where is a constant .when the derivatives of do not appear in , the equation is called the ( nonlinear ) klein gordon equation in light - cone coordinates .moreover , the class of pdes in the form is closely related to the ostrovsky equation , the short pulse equation , etc . for their numerical treatments , due to the possible indefiniteness caused by the spatial derivative in the left - hand side , it seems a systematic numerical framework for is yet to be investigated , though a few exceptions for specific cases can be found ( see , e.g. , ) . in this letter , we focus on a certain class of conservative methods . under the periodic boundary condition , the target equation has the conserved quantity : ^l_0 - \int^l_0 u_t u_{tx } { \mathrm{d}}x = - \int^l_0 u_t u_{tx } { \mathrm{d}}x = 0 .\hspace{6pt } \label{eq_conservation}\end{aligned}\ ] ] note that the skew - symmetry of the differential operator is crucial here .a numerical scheme is called conservative when it replicates such a conservation property ( see , e.g. , ) .the numerical solutions obtained by such schemes are often more stable than those of general - purpose methods . there , the crucial point for the discrete conservation law is the skew - symmetry of difference operator , which corresponds to that of the differential operator ; when one tries to construct a conservative finite - difference scheme for the equation , the differential operator in left - hand side must be replaced by one of the skew - symmetric difference operators , for example , the central difference operators , the compact finite difference operators ( see , e.g. , kanazawa matsuo yaghchi ) , and the fourier - spectral difference operator ( see , e.g. , fornberg ) .this is intrinsically indispensable , at least to the best of the present authors knowledge .this , however , at the same time , leads to an undesirable side effect that the numerical solutions tend to suffer from spatial oscillations . in this letter , to work around this technical difficulty , we propose a novel `` average - difference method , '' which is tough against such undesirable spatial oscillations .a similar method has been , in fact , already investigated by nagisa . however , he used this method for advection - type equations , and concluded the method was unfortunately not more advantageous than existing methods .in this letter , we instead construct an average - difference method for the pde , and combine it with the idea of conservation mentioned above .then we compare the proposed and existing methods in the case of the linear klein gordon equation , which is the simplest case with . as a result ,the average - difference type method is successfully superior to the existing methods in view of the phase speed of each frequency component .the conservative scheme for the pde can be constructed in the spirit of discrete variational derivative method ( dvdm ) ( see , the monograph for details ) . there , one utilizes the concept of the `` discrete variational derivative '' and skew - symmetric difference operators . the symbol denotes the approximation , where and are the temporal and spatial mesh sizes , respectively . 
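to make the role of skew - symmetry concrete , the following short sketch ( python / numpy ; the grid size and mesh width are arbitrary illustrative choices , not values from the text ) builds the periodic central and forward difference operators as matrices and checks the identity d^t = -d , the discrete analogue of the integration - by - parts step used in the conservation law ; the forward difference operator , used later by the average - difference method , fails this test on its own .

```python
import numpy as np

def periodic_central_difference(K, dx):
    """central difference on a periodic grid: (u_{k+1} - u_{k-1}) / (2 dx)."""
    D = np.zeros((K, K))
    for k in range(K):
        D[k, (k + 1) % K] += 1.0 / (2.0 * dx)
        D[k, (k - 1) % K] -= 1.0 / (2.0 * dx)
    return D

def periodic_forward_difference(K, dx):
    """forward difference on a periodic grid: (u_{k+1} - u_k) / dx."""
    D = np.zeros((K, K))
    for k in range(K):
        D[k, (k + 1) % K] += 1.0 / dx
        D[k, k] -= 1.0 / dx
    return D

if __name__ == "__main__":
    K, dx = 16, 0.5                       # illustrative values only
    Dc = periodic_central_difference(K, dx)
    Df = periodic_forward_difference(K, dx)
    # skew-symmetry D^T = -D implies u . (D u) = 0 for every u, which is the
    # discrete counterpart of the vanishing boundary term in the conservation law
    print("central difference skew-symmetric:", np.allclose(Dc.T, -Dc))   # True
    print("forward difference skew-symmetric:", np.allclose(Df.T, -Df))   # False
    u = np.random.default_rng(0).standard_normal(K)
    print("u.(Dc u) =", u @ (Dc @ u))     # ~0 up to round-off
    print("u.(Df u) =", u @ (Df @ u))     # generally nonzero
```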
here, we assume the discrete periodic boundary condition , and thus , we use the notation .let us introduce the spatial central difference operator and the temporal forward difference operator : the discrete counterpart of the functional can be defined as where is an appropriate approximation of .then , the discrete variational derivative is defined as a function satisfying for the construction of such one , see . by using the discrete variational derivative, we can construct a conservative scheme as stated in the introduction , the key ingredient here is the skew - symmetry of the central difference operator .suppose the numerical scheme has a solution under the periodic boundary condition .then , it satisfies ._ thanks to the definition of the discrete variational derivative , we can follow the line of the discussion as follows : whose right - hand side vanishes due to the skew - symmetry of the central difference operator : holds for any ._ the discrete conservation law can also be proved similarly for the other skew - symmetric difference operators .in this section , we propose the novel method . there , instead of the single skew - symmetric difference operator , we employ the pair of the forward difference and average operators : the average - difference method for the equation can be written in the form the name `` average - difference '' comes from the idea of approximating with the pair of ; this makes sense for more general pdes , and thus is independent of any conservation properties .still , in this letter we focus on and .although it is constructed in the spirit of dvdm , now the forward difference operator loses the apparent skew - symmetry , and accordingly , the proof of the discrete conservation law becomes unobvious .a similar proof can be found in nagisa .suppose the average - difference method has a solution under the periodic boundary condition .then , it satisfies .[ thm_dvdm_ad ] _ by using the definition of the discrete variational derivative , we see that here , for brevity , we introduce the notation note that the equation implies the relation . by using the identity which holds for any , we see which proves the theorem . _in order to conduct a detailed analysis , we consider the simplest case , the linear klein gordon equation under the periodic domain with the period .the exact solution of the linear klein gordon equation can be formally written in the form where is the imaginary unit , and is determined by the initial condition : in view of the superposition principle , we consider the single component ( ) . in order to clarify the difference between the standard conservative method and proposed method , we consider the following three semi - discretizations where for ( ) . here , denotes the fourier - spectral difference operator , i.e. , where is obtained by the discrete fourier transform : note that , the implicit midpoint method for the semi - discretizations above coincide with the numerical schemes constructed in the previous sections .we consider the solution of the semi - discretizations above in the form ( ) for each , which gives an exact solutions of , , and with appropriate choices of .for the central difference scheme , we see if we employ the fourier - spectral difference operator instead of the central difference , we see and holds for any . for the average - difference scheme ,we obtain the phase speeds corresponding to each numerical scheme are summarized in fig .[ fig_speed ] ( ) . 
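the qualitative behaviour of the three phase speeds can be reproduced with a few lines of code . the sketch below is our own illustration in python / numpy : it assumes the linear klein gordon equation in light - cone coordinates takes the form u_{tx} = u ( only the relative comparison of the schemes matters for this illustration ) and evaluates the symbol of each spatial operator on a single fourier mode ; the mesh width and the mode numbers are arbitrary choices , not values from the text .

```python
import numpy as np

# inserting the fourier mode exp(i(m x_k - w t)) into the semi-discretization
# D (du/dt) = M u turns each operator into multiplication by a symbol, giving
# w(m) = i * a(m) / d(m) and phase speed w(m) / m.

dx = 2 * np.pi / 64                        # illustrative mesh width
m = np.arange(1, 32)                       # resolved mode numbers
one = np.ones_like(m, dtype=complex)

d_exact   = 1j * m                         # symbol of d/dx (also the fourier-spectral case)
d_central = 1j * np.sin(m * dx) / dx       # central difference (u_{k+1} - u_{k-1}) / (2 dx)
d_forward = (np.exp(1j * m * dx) - 1) / dx # forward difference (u_{k+1} - u_k) / dx
a_average = (np.exp(1j * m * dx) + 1) / 2  # forward average (u_{k+1} + u_k) / 2

def phase_speed(d_symbol, a_symbol):
    return np.real(1j * a_symbol / d_symbol / m)

c_exact   = phase_speed(d_exact, one)      # the fourier-spectral scheme matches this
c_central = phase_speed(d_central, one)
c_avgdiff = phase_speed(d_forward, a_average)

# the central-difference phase speed blows up as m*dx approaches pi, whereas the
# average-difference one stays close to the exact value much longer
for i in range(0, len(m), 6):
    print(f"m={m[i]:2d}  exact={c_exact[i]:.5f}  central={c_central[i]:.5f}  avg-diff={c_avgdiff[i]:.5f}")
```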
as shown in fig . [ fig_speed ] , the phase speed of the central difference scheme is falsely too fast for the high frequency components . on the other hand , the error in the phase speeds of the average - difference method is much smaller . [ fig_speed : the red circles , blue crosses , and green triangles correspond to the average - difference , fourier - spectral difference , and central difference schemes , respectively . ] in this section , we conduct a numerical experiment under the periodic boundary condition with a step - like initial condition ; the corresponding exact solution can be formally written as a superposition of fourier modes . figures [ fig_step_cd ] , [ fig_step_ps ] , and [ fig_step_ad ] show the numerical solutions of the central difference scheme , the fourier - spectral difference scheme , and the average - difference method , respectively ( the temporal discretization : implicit midpoint method ) . as shown in fig . [ fig_step_cd ] , the central difference scheme suffers from spatial oscillations , whereas the other schemes reproduce smooth profiles up to the time shown . the cause of this difference lies in the discrepancy of the phase speeds of the high frequency components ( fig . [ fig_speed ] ) . however , as shown in fig . [ fig_comp ] , which shows the numerical solutions of each scheme at a later time , the fourier - spectral scheme also suffers from an undesirable spatial oscillation , whereas the proposed average - difference method reproduces a better profile . moreover , comparing the errors of the three schemes at that time confirms that the proposed method attains the smallest error . this could be attributed to the fact that the fourier - spectral difference can be regarded as a higher - order central difference , and thus should share the same property to a certain extent . [ fig_comp : the black dotted line represents the exact solution ; the green , blue , and red solid lines denote the numerical solutions of the central difference scheme , the fourier - spectral difference scheme , and the average - difference method , respectively . ] the results above can be extended in several ways . first , instead of the cumbersome proof in theorem [ thm_dvdm_ad ] , we can introduce the concept of generalized skew - symmetry , by which a more sophisticated `` average - difference '' version of the dvdm could be given . second , we should try more general pdes to see to what extent the new dvdm is advantageous . finally and ultimately , we hope to construct a systematic numerical framework for ( 1 ) , based on the above observations . the authors have already obtained some results on these issues , which will be reported elsewhere soon . this work was partly supported by jsps kakenhi grant numbers 25287030 , 26390126 , and 15h03635 , and by crest , jst . the second author is supported by the jsps research fellowship for young scientists . l. a. ostrovsky , nonlinear internal waves in the rotating ocean , okeanologia * 18 * ( 1978 ) , 181 - 191 . t. schäfer and c. e. wayne , propagation of ultra - short optical pulses in cubic nonlinear media , phys . d , * 196 * ( 2004 ) , 90 - 105 . t. yaguchi , t. matsuo and m. sugihara , conservative numerical schemes for the ostrovsky equation , j. comput ., * 234 * ( 2010 ) , 1036 - 1048 . y. miyatake , t. yaguchi and t. matsuo , numerical integration of the ostrovsky equation based on its geometric structures , j. comput . phys . , * 231 * ( 2012 ) , 4542 - 4559 . d. furihata and t.
matsuo , discrete variational derivative method : a structure - preserving numerical method for partial differential equations , crc press , boca raton , 2011 .e. celledoni , v. grimm , r. i. mclachlan , d. i. mclaren , d. oneale , b. owren and g. r. w. quispel , preserving energy resp .dissipation in numerical pdes using the `` average vector field '' method , j. comput .* 231 * ( 2012 ) , 67706789 .h. kanazawa , t. matsuo and t. yaguchi , a conservative compact finite difference scheme for the kdv equation , jsiam letters * 4 * ( 2012 ) , 58 .b. fornberg , a practical guide to pseudospectral methods , cambridge university press , cambridge , 1996 .y. nagisa , finite difference schemes for pdes using shift operators ( in japanese ) , the university of tokyo , master s thesis , 2014 . | we consider structure - preserving methods for conservative systems , which rigorously replicate the conservation property yielding better numerical solutions . there , corresponding to the skew - symmetry of the differential operator , that of difference operators is essential to the discrete conservation law . unfortunately , however , when we employ the standard central difference operator , the simplest one , the numerical solutions often suffer from undesirable spatial oscillations . in this letter , we propose a novel `` average - difference method , '' which is tougher against such oscillations , and combine it with an existing conservative method . theoretical and numerical analysis in the linear case show the superiority of the proposed method . |
[ fig : figure_colapso_e_integrada : collapsed energy distribution for the spanish language at several thresholds , after logarithmic binning ; ( inset panel ) non - collapsed distributions for different thresholds . ] * gutenberg - richter law . * the energy released during voice events is a direct measure of the vocal fold response function under air pressure perturbations , and its distribution could in principle depend both on the threshold and on the language under study . in the inset of figure [ fig : figure_colapso_e_integrada ] we observe that the released energy is power law distributed over about six decades , saturated by an exponential cut - off . this distribution has been interpreted before as the analogue of a gutenberg - richter law in voice , as the precise shape of the energy release fluctuations during voice production parallels that of earthquakes . as increasing the threshold induces a flow in rg space , systems which lie close to a critical point ( an unstable fixed point in rg space ) show scale invariance under this flow , and hence the distributions can be collapsed into a threshold - independent shape , thereby eliminating the trivial dependence on the threshold . this has been shown to be the case for human voice and accordingly ( technical details can be found in the si ) we can express the collapsed energy distribution in terms of a scaling function , defined beyond a lower limit above which the law is fulfilled , the relevant parameter being the scaling exponent . in order to collapse every curve , the theory predicts the appropriate rescaling . in the outset of figure [ fig : figure_colapso_e_integrada ] we show the result of this analysis for the case of the spanish language , and the fitted exponent is approximately language - independent ( see table [ table : tablas_slopes ] for other languages and the si for additional details ) . interestingly , these exponents are compatible with those found in rainfall , another natural system that has been shown to be compatible with soc dynamics , and can not be explained by simple null models . in what follows we explore the emergence of classical linguistic laws in these acoustic signals . + [ table : tablas_slopes : summary of scaling exponents associated to the energy release distribution , zipf s law , heaps law and brevity law for the six different languages . power law fits are performed using maximum likelihood estimation ( mle ) following clauset , and goodness - of - fit tests and confidence intervals are based on kolmogorov - smirnov ( ks ) tests ; in all cases , ks values are greater than 0.99 . the exponents associated to the energy release are compatible with those found in rainfall , and the results are compatible with the hypothesis of language - independence . ] * zipf s law . * [ figure : the inset panel shows the raw , threshold - dependent distributions and the outer panel the collapsed zipf s law ( see the text for details ) . ] the illustrious george kingsley zipf formulated a statistical observation which is popularly known as zipf s law . in its original formulation , it establishes that in a sizable sample of language the number of different words ( vocabulary ) which occur exactly a given number of times decays as a power law of that number , where the exponent varies from text to text but is usually close to 2 . an alternative and perhaps more common formulation of this law is defined in terms of the rank : if words are ranked in decreasing order by their frequency of appearance , then the number of occurrences of words with a given rank also decays as a power law of the rank , and it is easy to see that the two exponents are related , the rank exponent being approximately 1 .
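the power law fits quoted in the table can be reproduced with the standard continuous maximum - likelihood estimator of clauset et al . the sketch below ( python / numpy , our own illustration ) fits the tail exponent and computes the kolmogorov - smirnov distance on synthetic data standing in for the released energies , with the lower cut - off fixed by hand instead of scanned as in the full procedure .

```python
import numpy as np

def fit_powerlaw_mle(x, xmin):
    """continuous mle for the exponent of p(x) ~ x^(-alpha), x >= xmin
    (clauset-shalizi-newman estimator), plus the ks distance of the fit."""
    tail = np.sort(np.asarray(x, dtype=float)[np.asarray(x) >= xmin])
    n = tail.size
    alpha = 1.0 + n / np.sum(np.log(tail / xmin))
    emp_cdf = np.arange(1, n + 1) / n                      # empirical cdf on the tail
    model_cdf = 1.0 - (tail / xmin) ** (1.0 - alpha)       # fitted power-law cdf
    ks = np.max(np.abs(emp_cdf - model_cdf))
    return alpha, ks

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # synthetic energies drawn from a pure power law (alpha = 1.5), standing in
    # for the integrated energies E of the voice events
    alpha_true, xmin = 1.5, 1.0
    energies = xmin * (1.0 - rng.random(50_000)) ** (-1.0 / (alpha_true - 1.0))
    alpha_hat, ks = fit_powerlaw_mle(energies, xmin)
    print(f"estimated alpha = {alpha_hat:.3f}, ks distance = {ks:.4f}")
```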
here for convenience we make use of the former formulation and explore it applied to the statistics of types . again , the distribution could in principle depend on the threshold , but assuming that the signal complies with the scale - invariance mentioned above , one can collapse all the threshold - dependent curves into a universal shape and thus remove any dependence on this parameter by a rescaling involving the total number of different types present in the signal and the total number of tokens ( see si for technical details ) . results are shown for the case of the basque language in figure [ fig : figure_zipf ] , where a clear threshold - independent decaying power law emerges with a well - defined scaling exponent . analogous results with compatible exponents for other languages can be found in the si and in table [ table : tablas_slopes ] . null models systematically deviate from these results , and display neither the characteristic power law decay nor any invariance under variation of the energy threshold ( si ) . + * heaps law . * [ figure : the effective exponent is related to the original one ( see the text ) ; results for other languages are found in table [ table : tablas_slopes ] . ] together with zipf s law , and connected to it mathematically ( see and references therein ) , the second classical linguistic law is heaps law , the sublinear growth of the number of different words in a text with the text size ( measured in total number of words ) ; a constant rate of appearance of new words would instead lead to linear growth . here the vocabulary is defined as the total number of different types that appear in the signal , whereas the text size is defined as the total number of tokens found for a given threshold . results are shown for a specific language in figure [ fig : figure_heap ] ( see si for the rest ) . in the outset panel we present the collapsed , threshold - independent curves , where again we find a scaling law with an effective exponent related to the original exponent . in this case the equivalent computation on the null model yields a heaps law with the trivial exponent ( si ) . + these results are quantitatively consistent with previous results on written texts . in particular , several authors point out that , at least asymptotically , the relation between the zipf and heaps exponents holds with good approximation , and this is in reasonably good agreement with our findings in human voice as well . interestingly , a recent work has found that , as opposed to indoeuropean ( alphabetically based ) languages , zipf s law breaks down and heaps law reduces to the trivial case for written texts in chinese , japanese , korean and other logosyllabic languages . applying our methodology to a database of logosyllabic languages could thus evaluate to what extent those differences arise also in human voice . + [ figure : relative frequency of a type as a function of its mean duration ; in every case we find a monotonically decreasing curve which yields a brevity law ; the outset panel presents the collapsed , threshold - independent curve , which evidences an initial power law decay . ] * brevity law . * the tendency of more frequent words to be shorter can be generalized as the tendency of more frequent elements to be shorter or smaller , and its origin has been suggested to be related to optimization and information compression arguments in connection with other linguistic laws .
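the zipf frequency spectrum just discussed , the heaps vocabulary growth , and the brevity relation tested below can all be computed from the sequence of types with a few lines of code . the following sketch is our own illustration on a synthetic heavy - tailed type sequence , with toy durations built so that rarer types are longer , merely to make the expected brevity trend visible ; it is not the authors' code .

```python
from collections import Counter, defaultdict
import numpy as np

def zipf_frequency_spectrum(types):
    """number of distinct types occurring exactly n times (zipf, frequency form)."""
    counts = Counter(types)                 # occurrences of each type
    spectrum = Counter(counts.values())     # how many types share each count
    n = np.array(sorted(spectrum))
    return n, np.array([spectrum[k] for k in n])

def heaps_curve(types):
    """vocabulary size V(L) after the first L tokens (heaps' law)."""
    seen, V = set(), []
    for t in types:
        seen.add(t)
        V.append(len(seen))
    return np.arange(1, len(types) + 1), np.array(V)

def brevity_relation(types, durations):
    """mean duration of each type versus its relative frequency; the brevity law
    expects the frequency to decrease with the mean duration."""
    per_type = defaultdict(list)
    for t, T in zip(types, durations):
        per_type[t].append(T)
    labels = sorted(per_type)
    mean_T = np.array([np.mean(per_type[t]) for t in labels])
    freq = np.array([len(per_type[t]) for t in labels], dtype=float) / len(types)
    return mean_T, freq

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    types = rng.zipf(a=2.0, size=20_000)                   # heavy-tailed toy type sequence
    durations = 0.01 * types * rng.lognormal(sigma=0.3, size=types.size)

    n, N_of_n = zipf_frequency_spectrum(types)
    L, V = heaps_curve(types)
    mean_T, freq = brevity_relation(types, durations)

    # crude log-log slope estimates, for illustration only
    eta = -np.polyfit(np.log(n), np.log(N_of_n), 1)[0]
    lam = np.polyfit(np.log(L[100:]), np.log(V[100:]), 1)[0]
    beta = -np.polyfit(np.log(mean_T), np.log(freq), 1)[0]
    print(f"zipf exponent ~ {eta:.2f}, heaps exponent ~ {lam:.2f}, brevity exponent ~ {beta:.2f}")
```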
in acoustics , spontaneous speech indeed tends to obey this law after text segmentation , and it has been found also in other non - human primates . here we can test the brevity law in essentially two different ways . first , note that voice events ( tokens ) map into types according to the logarithmic binning of their associated energy , hence voice events with different durations might yield the same type , as previously noted . thus for each type we can compute its mean duration by averaging over all voice events that fall within that type , and then plot the histogram that describes the frequency of each type versus its mean duration . the brevity law would require this to be a monotonically decreasing function . these results are shown for a particular language in log - log scales in figure [ fig : figure_brevity ] , finding initially a power law decaying relation which is indicative of a brevity law ( results are again found to be language independent , see si for additional results ) . the inset provides the threshold - dependent distributions and the outset panel provides the collapsed , threshold - independent shape ( see table [ table : tablas_slopes ] for scaling exponents ) . again in this case the results for the null models deviate from such behavior ( si ) and are clearly different from random typing . alternatively , one can also directly observe the duration frequency at the level of voice events , finding similar results ( see si ) . in this work we have explored the equivalent of linguistic laws directly in acoustic signals . we have found that human voice -which actually complies with soc dynamics with critical exponents compatible with those found in rainfall- manifests the analog of classical linguistic laws found in written texts ( zipf s law , heaps law and the brevity law or law of abbreviation ) . these laws are found to be invariant under variation of the energy threshold , and can accordingly be collapsed under universal functions . as the energy threshold is the only free parameter of the method , this invariance guarantees that the results are not afflicted by ambiguities associated with arbitrarily defined unit boundaries . results appear to be robust across languages and timescales ( spanning six different indoeuropean languages and different scales extending all the way into the intraphoneme range , and invariant under energy threshold variation ) . interestingly , an equivalent analysis performed on null models defined by randomizing the signal ( yielding white noise with the same instantaneous energy distribution as the original signal ) fails to reproduce this phenomenology ( si ) . the concrete ranges of exponents found for both zipf and heaps laws are compatible with each other and somewhat similar -but not identical- to the typical ones observed in the literature for written texts , whereas to the best of our knowledge this is the first observation of scaling behavior with a clear exponent in the case of the brevity law in speech . actually , our finding of a power law in the brevity law differs from the case of random typing , where no such power law arises . + the specific and complex alternation of air stops ( silences ) intertwined with voice production is at the core of the microscopic voice fluctuations .
during voice production , acoustic communication is governed by the so - called biphasic cycle ( breath and glottal cycle , see for a review ) which , together with some other acoustic considerations ( pitch period , voice onset time , the relation between duration , stress and syllabic structure ) , determines the microscopic structure of human voice , including silence stops . however , these timescales are in general very large : as previously stated , the present study scans voice properties even at intraphonemic timescales , where the statistical laws of language emerge directly from the physical magnitudes that govern acoustic communication . our results therefore open the possibility of speculating whether the fact that these laws have been found in upper levels of human communication might be a result of a scaling process and a byproduct of the physics , rather than derived from the choice of the typical units of study in the analysis of written corpora ( phonemes , syllables , words , ... ) , as the differences between analyses of indoeuropean and logosyllabic languages demonstrate . as a matter of fact , in a previous work human voice has been framed within self - organized criticality ( soc ) , speculating that the fractal structure of the lungs drives human voice close to a critical state , this mechanism being ultimately responsible for the microscopic self - similar fluctuations of the signal . this constitutes a new example of the emergence of soc in a physiological system , different in principle from the classical one found in neuronal activity . one could thus speculate that the emergence of these complex patterns is just a consequence of the presence of soc , which in turn would support the physical origin of linguistic laws . from an evolutionary viewpoint , under this latter perspective human voice , understood as a communication system which has been optimized under evolutionary pressures , would constitute an example where complexity ( described in terms of robust linguistic laws ) emerges when a system is driven close to criticality , something reminiscent of the celebrated edge - of - chaos hypothesis . + more generally , the method used and proposed here also addresses the longstanding problem of signal segmentation . it has been acknowledged that there is no such thing as a correct segmentation strategy . in written corpora , white space is usually taken as an easy marker of the separation between words ; however , this is far from evident in continuous speech , where the separation between words or concepts is technologically harder to detect , conceptually vague and probably ill - defined . the few exceptions that used oral corpora for animal communication still require defining ad hoc segmentation algorithms , or manual segmentation strategies which usually yield arbitrary or overestimated segmentation times , which might even raise epistemological questions . as such , this segmentation problem has unfortunately prevented wider comparative studies in areas such as animal communication or the search for signs of possible extraterrestrial intelligence in radio signals ( in this line only a few proposals have been made ) .
by varying the energy threshold the method presented here automatically partitions and symbolizes the signal at various energy scales , providing a recipe to establish an automatic , general and systematic way of segmenting and thus enabling comparison of across acoustic signals of arbitrary origion for which we may lack the syntax , code or exact size of its constituents .+ to round off , we hope that this work paves the way for new research avenues in comparative studies .open questions that deserve further work abound ; just to name a few : in the light of this new method , what can we say about the acoustic structure in other animal communication systems ?can we find evidence of universal traits in communication that do not depend on a particular species but are only physically and physiologically constrained , or on the other hand are linguistic universals a myth ?how these laws evolve with aging ? are they affected by cognitive or pulmonary diseases ? what is the precise relation between soc and linguistic laws in this context ? andin particular , can we find mathematical evidence of a minimal , analytically tractable soc model that produce these patterns ? these and other questions are interesting avenues for future work .ha , l. q. , sicilia - garcia , e. i. , ming , j. , and smith , f. j. ( 2002 ) .extension of zipf s law to words and phrases . in proceedings of the 19th international conference on computational linguistics - volume 1 , pages 1 - 6 .association for computational linguistics ferrer i cancho , r. , riordan , o. , and bollobs , b. ( 2005 ) .the consequences of zipf s law for syntax and symbolic reference .proceedings of the royal society of london b : biological sciences , 272(1562):561 - 565 aylett , m. and turk , a. ( 2006 ) .language redundancy predicts syllabic duration and the spectral characteristics of vocalic syllable nuclei .the journal of the acoustical society of america , 119(5):3048 - 3058 tomaschek , f. , wieling , m. , arnold , d. , and baayen , r. h. ( 2013 ) .word frequency , vowel length and vowel quality in speech production : an ema study of the importance of experience . in interspeech ,pages 1302 - 1306 ferrer - i cancho , r. , hernndez - fernndez , a. , lusseau , d. , agoramoorthy , g. , hsu , m. j. , and semple , s. ( 2013 ) .compression as a universal principle of animal behavior . cognitive science , 37(8):1565 - 1578 ferreri cancho , r. and hernndez - fernndez , a. ( 2013 ) . the failure of the law of brevity in two new world primates .statistical caveats .glottotheory : international journal of theoretical linguistics , 4(1):45 - 55 kello , c. t. , brown , g. d. , ferrer - i cancho , r. , holden , j. g. , linkenkaer - hansen , k. , rhodes , t. , and van orden , g. c. ( 2010 ) .scaling laws in cognitive sciences .trends in cognitive sciences , 14(5):223 - 232 kuhl , p. k. , conboy , b. t. , coffey - corina , s. , padden , d. , rivera - gaxiola , m. , and nelson , t. ( 2008 ) .phonetic learning as a pathway to language : new data and native language magnet theory expanded ( nlm - e ) .philosophical transactions of the royal society of london b : biological sciences , 363(1493):979 - 1000 mccowan , b. , hanser , s. f. , and doyle , l. r. ( 1999 ) . quantitative tools for comparing animal communication systems : information theory applied to bottlenose dolphin whistle repertoires .animal behaviour , 57(2 ) gustison , m. l. , semple , s. , ferreri cancho , r. , and bergman , t. j. 
( 2016 ) .gelada vocal sequences follow menzerath s linguistic law .proceedings of the national academy of sciences of the usa , 113(19):e2750-e2758 | linguistic laws constitute one of the quantitative cornerstones of modern cognitive sciences and have been routinely investigated in written corpora , or in the _ equivalent _ transcription of oral corpora . this means that inferences of statistical patterns of language in acoustics are biased by the arbitrary , language - dependent segmentation of the signal , and virtually precludes the possibility of making comparative studies between human voice and other animal communication systems . here we bridge this gap by proposing a method that allows to measure such patterns in acoustic signals of arbitrary origin , without needs to have access to the language corpus underneath . the method has been applied to six different human languages , recovering successfully some well - known laws of human communication at timescales even below the phoneme and finding yet another link between complexity and criticality in a biological system . these methods further pave the way for new comparative studies in animal communication or the analysis of signals of unknown code . the main objective of quantitative linguistics is to explore the emergence of statistical patterns ( often called linguistic laws ) in language and general communication systems ( see and for a review ) . the most celebrated of such regularities is zipf s law describing the uneven abundance of word frequencies . this law presents many variations in human language but also shows ubiquity in many linguistic scales , has been claimed to be universal and has consequences for syntax and symbolic reference . on the other hand heaps law , also called herdan s law states that the vocabulary of a text grows allometrically with the text length , and is mathematically connected with zipf s law , the scaling exponent being dependent on both zipf law and the vocabulary size . finally the zipf s law of abbreviation ( or brevity law for short ) is the statistical tendency of more frequent elements in communication systems to be shorter or smaller and has been recently claimed as an universal trend derived from fundamental principles of information processing . as such , this statistical regularity holds also phonetically and implies that the higher the frequency of a word , the the shorter its length or duration , probably caused by a principle of compression , although this is a general pattern that can change depending on other acoustical factors like noise , pressure to communicate at long distances calls or communicative efficiency and energetic constraints . + linguistic laws extend beyond written language and have been shown to hold for different biological data . according to some authors , the presence of scaling laws in communication is indicative of the existence of processes taking place across different cognitive scales . interpreting linguistic laws as scaling laws which emerged in communication systems actually opens the door for speculating on the existence of underlying scale - invariant physical laws operating underneath . of course , in order to explore the presence or absence of such patterns one needs to directly study the acoustic corpus , i.e. human voice , as every linguistic phenomenon or candidate for language law can be camouflaged or diluted after the transcription process . 
notwithstanding the deep relevance of linguistic laws reported in written texts , we still wonder up to which of these laws found in written corpora are related or derive from more fundamental properties of the acoustics of language -and are thus candidates for full - fledged linguistic laws- , or emerge as an artifact of scripture codification . + * linguistic laws in acoustics ? * acoustic communication is fully determined by three physical magnitudes extracted from the signals : frequency , energy and time . the configuration space induced by these magnitudes results from an intrinsic evolutionary relationship between the production and perception of sound systems that further shapes the range of hearing and producing sounds for a variety of life forms . since animals use their acoustic abilities both to monitor their environment and to communicate we should expect that natural selection has in some sense optimized these sensorial capabilities , but , despite the great differences that evolution has involved for different animals that communicate acoustically , there are many similarities between their mechanisms of sound production and hearing . focusing on primates , traditionally language has been distinguished from vocalizations of nonhuman primates only at a qualitative , semantic level . interestingly , it is well known that children use statistical cues to segment the input and probably share with non - human primates some of these mechanisms , albeit with some differences . the discovery of these statistical learning abilities has boosted a new approach to the study of language and suggests that statistical learning of language could be based on patterns or more generally linguistic laws : research on language acquisition shows that higher frequency facilitates learning , and zipf s law tell us that vocabulary learning is easier than expected a priori given the skewness of the distribution , for instance . we advance that , as will be shown below , these patterns are already present in the physical sound waves produced by human voice even at levels below those generally considered linguistically significant , i.e. , below the phoneme timescale . + empirical evidence of robust linguistic laws holding in written texts across different human languages has been reported many times ( see and references therein ) , and it has been shown that these laws are not fully observed in random texts . studies with oral corpus are however much less abundant , and they systematically imply a transcription of the acoustical waves into words in the case of human speech or some ill - defined analog of words in the case of animal communication , as the main segments to analyze . a few current efforts take a different road and consider other possible written units such as lemmas or compare written and oral production for some linguistic patterns , in general showing that frequencies of elements in written corpora can be taken as a rough estimate for their frequency in spoken language . all in all , the exploration of linguistic laws in oral corpora is scarce . in fact , all linguistic studies in oral and written corpora are influenced by our segmentation decisions and our definition of word , intimately biased by an inherently anthropocentric perspective and , of course , by our linguistic tradition . the idea that speech is produced like writing , as a linear sequence of vowels and consonants may indeed be a relic of our scripture technology . 
as a matter of fact , it is well known in linguistics that both vowels and consonants are produced linearly but also depend on their surrounding elements : this is the traditional and well - studied concept of coarticulation . the boundaries between acoustical elements are therefore difficult to identify if we are not native speakers of a language , and this remains a crucial problem of phonotactics and of speech segmentation and recognition , with differences across languages . precisely because of this , classical signal segmentation based on the concept of `` word '' inherited from writing has led some researchers to search for ways to artificially transform written corpora into phoneme or syllabic chains with different objectives , while at the same time it involves two major problems in communication studies , namely ( i ) the impossibility of performing fully objective comparative studies between human and non - human signals , where signals could be physical events , behaviours or structures to which receivers respond . this problem sometimes leads researchers to manually segment acoustic signals guided only by their expertise , and prevents the exploration of signals of unknown origin , including for instance the search for possible extraterrestrial intelligence . and ( ii ) a rather arbitrary definition of the units of study guided by orthographic conventions already produces non - negligible epistemological problems at the core of linguistics . + in this work we explore the acoustic analog of classical linguistic laws in the statistics of speech waveforms from extensive real databases that for the first time extend all the way into the intraphoneme range . in order to do so in a systematic way , in what follows we present a methodology that enables the direct linguistic analysis of any acoustical waveform without the need to invoke _ ad hoc _ codes or to assume any concrete formal communication system underneath ( materials and methods , see figure [ fig : figure1 ] for an illustration ) . this method only makes use of physical magnitudes of the signal , such as energy or time duration , which therefore allows for a nonambiguous definition of our units . speech is indeed a physical phenomenon , and as such in this work we interpret it as a concatenation of events of energy release , and propose a mathematically well - defined way of segmenting this orchestrated suite of energy release avalanches . we find clear evidence of robust zipf , heaps and brevity laws emerging in this context and speculate that this might be due to the fact that human voice seems to be operating close to criticality , hence finding an example of a biological system that , driven by evolution , has linked complexity and criticality . we expect that this methodology can open a fresh line of research in communication systems where a direct exploration of underlying statistical patterns in acoustic signals is possible without the need to predetermine any of the aforementioned non - physical concepts , and hope that this will allow researchers to develop comparative studies between human language and other acoustical communication systems , or even to unravel whether a generic signal shares these patterns . + * data . * for this work we have used a tv broadcast speech database named kalaka-2 .
originally designed for language recognition evaluation purposes , it consists of wide - band tv broadcast speech recordings ( 4h per language sampled using 2 bytes at a rate of 16khz ) ranging six different languages : basque , catalan , galician , spanish , portuguese and english and encompassing both planned and spontaneous speech throughout diverse environmental conditions , such as studio or outside journalist reports but excluding telephonic channels . + * the method . * the objects under study , speech waveforms or otherwise any generic acoustic signal , are fully described by an amplitude time series ( see figure [ fig : figure1 ] for an illustration of the method ) . in order to unambiguously extract a sequence of symbols -the equivalent to words and phrases- from such signal without the need to perform _ ad hoc _ segmentation , we start by considering the semi - definite positive magnitude which , dropping irrelevant constants , has physical units of energy per time ( si for additional details on speech waveform statistics ) . by defining an energy threshold in this signal we will unambiguously separate voice events ( above threshold ) from silence events ( below threshold ) . more concretely , is defined as a relative percentage and its actual value in energy units depends on the signal variability range : for example means that of the data falls under this energy level . it has been shown that decimates the signal similarly to a real - space renormalization group ( rg ) transformation , in such a way that increasing induces a flow in rg space . systems operating close to a critical state lie in an unstable fixed point of this rg flow and its associated signal statistics are therefore shown to be threshold - invariant . now , not only works as an energy threshold that filters out background or environmental noise ( noise filtering being a key aspect that species have learned to perform ) but , as previously stated , enable us to unambiguously define what we call a _ token _ or voice event , that is , a sequence of consecutive measurements , from a silence event of duration . each token is in turn characterized by a duple where is the duration of the event and corresponds to the total energy released during that event obtained summing up the instantaneous energy over the duration of the event . accordingly , the signal is effectively transformed to an ordered sequence of tokens , each of these being separated by silence events of highly heterogeneous durations which , incidentally , are known to be power law distributed . finally , by logarithmically binning the scale of integrated energies we can assign an energy label ( the bin ) to each token , hence mapping the initial acoustic signal into a symbolic sequence of fundamental units which we call _ types_. the logarithmic binning is justified here invoking fechner - weber s law . note that two tokens whose integrated energy fall in the same energy range are mapped to the same type even if their duration can be different , so in principle several tokens could map into the same type ( see si for a table of type / token ratios ) . by default we define as many bins as voice events such that the set of bins can be understood as an abstraction of a universal language vocabulary , accordingly some bins might be empty and in general each bin will occur with uneven frequencies . 
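a minimal sketch of the segmentation just described is given below ( python / numpy ; all variable names , the toy signal and the default choices are our own illustration , not the authors' code ) : the instantaneous energy is thresholded at a given percentile , runs of super - threshold samples become the tokens with duration and integrated energy , and the integrated energies are logarithmically binned into types ; the last lines also build the null model by randomly permuting the signal samples , which preserves the marginal energy distribution while destroying all temporal correlations .

```python
import numpy as np

def segment_into_types(amplitude, theta_percent, n_bins=None):
    """threshold-based segmentation: tokens are runs of super-threshold samples,
    summarized by duration T and integrated energy E; types are the logarithmic
    energy bins the tokens fall into."""
    energy = np.asarray(amplitude, dtype=float) ** 2       # instantaneous energy, constants dropped
    threshold = np.percentile(energy, theta_percent)       # theta % of samples fall below it
    active = energy > threshold

    # boundaries of consecutive runs of super-threshold samples
    edges = np.flatnonzero(np.diff(np.r_[0, active.astype(np.int8), 0]))
    starts, ends = edges[0::2], edges[1::2]

    T = ends - starts                                       # token durations (in samples)
    E = np.array([energy[s:e].sum() for s, e in zip(starts, ends)])

    if n_bins is None:
        n_bins = len(E)                                     # "as many bins as voice events"
    bins = np.logspace(np.log10(E.min()), np.log10(E.max()), n_bins + 1)
    types = np.clip(np.digitize(E, bins), 1, n_bins)        # type label = energy bin index
    return types, E, T

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0, 400, 200_000)
    signal = rng.standard_normal(t.size) * (1 + np.sin(t) ** 2)   # toy amplitude series
    types, E, T = segment_into_types(signal, theta_percent=50)
    print(f"{len(E)} tokens, {len(set(types))} occupied types at theta = 50%")
    # null model: a random permutation keeps the marginal energy distribution
    # but yields non-gaussian white noise with no temporal structure
    null_types, _, _ = segment_into_types(rng.permutation(signal), theta_percent=50)
```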
as such , types can be understood as acoustically - based universal abstractions of a fundamental unit , an abstract version of words or phonemes that appear intertwined in a signal with characteristic patterns . + to summarize , with this methodology we are able to map an arbitrary acoustic signal into a sequence of types separated by silence events ( figure 1 ) . standard linguistic laws can then be directly explored in acoustic signals without needs to have an _ a priori _ knowledge neither of the signal code nor of the adequate segmentation process or the particular syntax of the language underlying the signal . this protocol is thus independent of the communication system and can be used to make unbiased comparisons across different systems and signals . needless to say , results could in principle depend on the particular value of , as this scans the signals at different energy thresholds . however human voice has been recently shown to be invariant under changes in -an evidence of self - organized criticality ( soc ) in this system- and , accordingly , parameter - free laws can be extracted using a proper collapse theory as it will be shown in the results section . finally , in order to guarantee that the emergence of linguistic laws is only due to the structure and correlations of the signal and not due to the process of symbolization we will compare the results obtained from speech signals to properly defined null models which randomize the signal . these null models thus maintain the marginal instantaneous energy distribution and remove any other correlation structure , yielding non - gaussian white noise with a fat - tailed marginal distribution . |
to meet the fast growing mobile data volume driven by applications such as smartphones and tablets , the traditional wireless network architecture based on a single layer of macro - cells has shifted to one composed of smaller cells such as pico / femto cells with more densely deployed access points ( aps ) . therefore , cloud radio access network ( c - ran ) has recently been proposed and drawn a great deal of attention . in a c - ran , the distributed aps , also termed remote radio heads ( rrhs ) , are connected to the baseband unit ( bbu ) pool through high bandwidth backhaul links , e.g. optical transport network , to enable centralized processing , collaborative transmission , and real - time cloud computing . as a result , significant rate improvement can be achieved due to reduced pathloss along with joint scheduling and signal processing .however , with densely deployed aps , several new challenges arise in c - ran .first , close proximity of many active aps results in increased interference , and hence the transmit power of aps and/or mobile users ( mus ) needs to be increased to meet any given quality of service ( qos ) .second , the amount of energy consumed by a large number of active aps as well as by the transport network to support high - capacity connections with the bbu pool will also become considerable .such facts motivate us to optimize the energy consumption in c - ran , which is the primary concern of this paper . in particular , both downlink ( dl ) and uplink ( ul ) transmissions are considered jointly .the studied c - ran model consists of densely deployed aps jointly serving a set of distributed mus , where comp based joint transmit / receive processing ( beamforming ) over all active aps is employed for dl / ul transmissions . under this setup ,we study a joint dl and ul mu - ap association and beamforming design problem to minimize the total energy consumption in the network subject to mus given dl and ul qos requirements .the energy saving is achieved by optimally assigning mus to be served by the minimal subset of active aps , finding the power levels to transmit at all mus and aps , and finding the beamforming vectors to use at the multi - antenna aps .this problem has not been investigated to date , and the closest prior studies are . however , the prior studies have all considered mu association and/or active ap selection problems for various objectives from the dl perspective .in particular , the problems studied in can be treated as the dl - only version of our considered joint dl and ul problem , in which the transmit beamforming vectors and the active set of aps are jointly optimized to minimize the power consumption at all the aps .note that the mu association and/or active ap selection based on dl only may result in inefficient transmit power of mus or even their infeasible transmit power in the ul considering various possible asymmetries between the dl and ul in terms of channel , traffic and hardware limitation .furthermore , with users increasingly using applications with high - bandwidth ul requirements , ul transmission is becoming more important .for example , the upload speed required for full high definition ( hd ) skype video calling is about mbps .therefore , we need to account for both dl and ul transmissions while designing the mus association and active ap selection scheme . 
the ul - only mu association problem has also been considered extensively in the literature ; however , their solutions are not applicable in the context of this work due to their assumption of one - to - one mu - ap association .it is worth noting that the joint mus association and active ap selection is mathematically analogous to the problem of antenna selection in large multiple - input multiple - output ( mimo ) systems , which aims to reduce the number of radio transmission chains and hence the energy consumption and signal processing overhead .the connection between these two problems can be recognized by treating the c - ran as a distributed large mimo system . in terms of other related work , there have been many attempts to optimize the energy consumption in cellular networks , but only over a single dimension each time , e.g. power control , ap `` on / off '' control , and coordinated multi - point ( comp ) transmission . to avoid an infeasible power allocation ,it was suggested in to gradually remove the mus that can not be supported due to their limited transmit power budgets .in addition to achieving energy saving from mus perspective , proposed to switch off the aps that are under light load to save energy by exploiting the fact that the traffic load in cellular networks fluctuates substantially over both space and time due to user mobility and traffic burstiness .cooperation among different cells or aps could be another possible way to mitigate the interference and achieve energy - efficient communication .for example , if a certain cluster of aps can jointly support all the mus , the intercell interference can be further reduced especially for the cell - edge mus .a judicious combination of these techniques should provide the best solution , and this is the direction of our work .unfortunately , the considered joint dl and ul mu - ap association and beamforming design problem in this paper involves integer programming and is np hard as shown for a similar problem in ( * ? ? ?* theorem 1 ) . to tackle this difficulty , two different approaches , i.e., group - sparse optimization ( gso ) and relaxed - integer programming ( rip ) , have been adopted in and , respectively , to solve a similar dl - only problem , where two polynomial - time algorithms were proposed and shown to achieve good performance through simulations . in particular , the gso approach is motivated by the fact that in the c - ran with densely deployed aps , only a small fraction of the total number of aps needs to be active for meeting all mus qos . however , due to the new consideration of ul transmission in this paper , we will show that the algorithms proposed in can not be applied directly to solve our problem , and therefore the methods derived in this paper are important advances in this field .the contributions of this paper are summarized as follows : 1 . to optimize the energy consumption tradeoffs between the active aps and mus , we jointly study the dl and ul mu - ap association and beamforming design by solving a weighted sum - power minimization problem . to our best knowledge, this paper is the first attempt to unify the dl and ul mu - ap association and beamforming design into one general framework .2 . due to a critical scaling issue in the ulreceive beamforming design , the gso based algorithm and the rip based algorithm can not be applied to solve our joint dl and ul design directly . 
to address this issue, we establish a virtual dl transmission for the original ul transmission in c - ran by first ignoring the individual ( per - ap and per - mu ) power constraints based on the celebrated ul - dl duality result .consequently , the considered joint dl and ul problem without individual power constraints can be transformed into an equivalent dl problem with two inter - related subproblems corresponding to the original and virtual dl transmissions , respectively . with the equivalent dl - only formulation ,we extend the gso based and rip based algorithms to solve the relaxed joint dl and ul optimization problem .3 . considering the fact that the optimal solution to the ul sum - power minimization is component - wise minimum , there is no tradeoff among different mus in terms of power minimization in the ul .consequently , we are not able to establish the duality result for the per - mu power constraints in the ul , which are thus difficult to incorporate into our developed algorithm . to resolve this issue , we propose a price based iterative method to further optimize the set of active aps while satisfying the per - mu power constraints .finally , we verify the effectiveness of our proposed algorithms by extensive simulations from three perspectives : ensuring feasibility for both dl and ul transmissions ; achieving optimal network energy saving with mu - ap association ; and flexibly adjusting various power consumption tradeoffs between active aps and mus .it is worth pointing out that as the baseband processing is migrated to a central unit , i.e. , bbu pool , the data exchanged between the aps and the bbu pool includes oversampled real - time digital signals with very high bit rates ( in the order of gbps ) . as a result ,the capacity requirement for the backhaul / fronthaul links becomes far more stringent in the c - ran .given finite backhaul capacity , the optimal strategy for backhaul compression and quantization has been studied recently in e.g. . in this paper , however , we focus on addressing the energy consumption ( including both transmission and non - transmission related portions ) issue in the c - ran , which is also one of the major concerns for future cellular networks , by assuming that the backhaul transport network is provisioned with sufficiently large capacity .note that the optical network has been widely accepted as a good option to implement the high - bandwidth backhaul transport network .the rest of this paper is organized as follows .section [ sec : system model ] introduces the c - ran model , and the power consumption models for the aps and mus .section [ sec : problem formulation ] presents our problem formulation , introduces the two existing approaches , namely gso and rip , and explain the new challenges in solving the joint dl and ul optimization .section [ sec : joint ul dl ] presents our proposed algorithms based on the virtual dl representation of the ul transmission .section [ sec : numerical ] shows numerical results .finally , section [ sec : conclusion ] concludes the paper ._ notations _ : boldface letters refer to vectors ( lower case ) or matrices ( upper case ) . for an arbitrary - size matrix , , , and denote the complex conjugate , conjugate transpose and transpose of , respectively .the distribution of a circularly symmetric complex gaussian ( cscg ) random vector with mean vector and covariance matrix is denoted by ; and stands for `` distributed as '' . denotes the space of complex matrices . 
denotes the euclidean norm of a complex vector , and denotes the magnitude of a complex number .we consider a densely deployed c - ran consisting of access points ( aps ) , denoted by the set .the set of distributed aps jointly support randomly located mobile users ( mus ) , denoted by the set , for both downlink ( dl ) and uplink ( ul ) communications . in this paper , for the purpose of exposition , we consider linear precoding and decoding in the dl and ul , respectively , which is jointly designed at the bbu pool assuming the perfect channel knowledge for all mus .the results in this paper can be readily extended to the case of more complex successive precoding / decoding , e.g. dirty - paper coding ( dpc ) and multiuser detection with successive interference cancelation ( sic ) , with fixed coding orders among the users .we also assume that each ap , , is equipped with antennas , and all mus are each equipped with one antenna .it is further assumed that there exist ideal low - latency backhaul transport links with sufficiently large capacity ( e.g. optical fiber ) connecting the set of aps to the bbu pool , which performs all the baseband signal processing and transmission scheduling for all aps .the centralized architecture results in efficient coordination of the transmission / reception among all the aps , which can also be opportunistically utilized depending on the traffic demand .we consider a quasi - static fading environment , and denote the channel vector in the dl from ap to mu and that in the ul from mu to ap as and , respectively .let the vector consisting of the channels from all the aps to mu and that consisting of the channels from mu to all the aps be ] , respectively .there are two main techniques for separating dl and ul transmissions on the same physical transmission medium , i.e. , time - division duplex ( tdd ) and frequency - division duplex ( fdd ) .if tdd is assumed , channel reciprocity is generally assumed to hold between dl and ul transmissions , which means that the channel vector in the ul is merely the transpose of that in the dl , i.e. , . however , if fdd is assumed , s and s are different in general . in dl transmission ,the transmitted signal from all aps can be generally expressed as where is the beamforming vector for all aps to cooperatively send one single stream of data signal to mu , which is assumed to be a complex random variable with zero mean and unit variance .note that . then , the transmitted signal from ap can be expressed as where is the block component of , corresponding to the transmit beamforming vector at ap for mu .note that ^{t} ] , .from ( [ eq : dl transmit signal from ap n ] ) , the transmit power of ap in dl is obtained as we assume that there exists a maximum transmit power constraint for each ap , i.e. , the received signal at the mu is then expressed as where is the receiver noise at mu , which is assumed to be a circularly symmetric complex gaussian ( cscg ) random variable with zero mean and variance , denoted by . treating the interference as noise ,the signal - to - interference - plus - noise ratio ( sinr ) in dl for mu is given by in ul transmission , the transmitted signal from mu is given by where denotes the transmit power of mu , and is the information bearing signal which is assumed to be a complex random variable with zero mean and unit variance . 
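as a quick numerical illustration of the dl model above , the following sketch ( python / numpy , our own choices of sizes and values ) evaluates the per - ap transmit powers and the dl sinrs under joint linear precoding ; the sinr expression used is the standard one obtained when interference is treated as noise , with the aggregate channel and beamforming vectors stacked ap by ap .

```python
import numpy as np

def dl_sinr(H, W, sigma2):
    """downlink sinr of every mu under joint linear precoding.
    H: (I, N*L), row i is the aggregate channel h_i from all aps to mu i
    W: (N*L, I), column i is the aggregate transmit beamformer w_i for mu i
    sigma2: length-I vector of receiver noise variances"""
    G = np.abs(H.conj() @ W) ** 2          # G[i, j] = |h_i^H w_j|^2
    signal = np.diag(G)
    interference = G.sum(axis=1) - signal
    return signal / (interference + sigma2)

def per_ap_transmit_power(W, N, L):
    """dl transmit power of each ap: sum over mus of the squared norm of that
    ap's own L-antenna block of every beamforming vector (ap-major stacking)."""
    return np.sum(np.abs(W.reshape(N, L, -1)) ** 2, axis=(1, 2))

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    N, L, I = 4, 2, 3                      # illustrative sizes only
    H = (rng.standard_normal((I, N * L)) + 1j * rng.standard_normal((I, N * L))) / np.sqrt(2)
    W = (rng.standard_normal((N * L, I)) + 1j * rng.standard_normal((N * L, I))) / np.sqrt(2)
    print("dl sinrs:", dl_sinr(H, W, sigma2=0.1 * np.ones(I)))
    print("per-ap powers:", per_ap_transmit_power(W, N, L))
```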
with denoting the transmit power limit for mt , it follows that the received signal at all aps is then expressed as where denotes the receiver noise vector at all aps consisting of independent cscg random variables each distributed as .let denote the receiver beamforming vector used to decode from mu .then the sinr in ul for mu after applying is given by let denote the block component in , corresponding to the receive beamforming vector at ap for mu .we thus have ^{t} ] and ] in ( [ eq : group sparsity ] ) needs to be zero . consequently , the fact that a small subset of deployed aps is selected to be active implies that the concatenated beamforming vector in ( [ eq : group sparsity ] ) should contain only a very few non - zero block components .one well - known approach to enforce desired group sparsity in the obtained solutions for optimization problems is by adding to the objective function an appropriate penalty term .the widely used group sparsity enforcing penalty function , which was first introduced in the context of the group least - absolute selection and shrinkage operator ( lasso ) problem , is the mixed norm . in our case ,such a penalty is expressed as \right\|.\end{aligned}\ ] ] the norm in ( [ eq : uldl penalty ] ) , similar to norm , offers the closest convex approximation to the norm over the vector consisting of norms \right\|\right\}^{n}_{n=1} ] is desired to be set to zero to obtain group sparsity . more generally , the mixed norm has also been shown to be able to recover group sparsity with , among which the norm , defined as has been widely used . compared with norm , norm has the potential to obtain more sparse solution but may lead to undesired solution with components of equal magnitude . in this paper, we focus on the norm in ( [ eq : uldl penalty ] ) for our study. we will compare the performance of and norms by simulations in section [ sec : numerical ] . according to , at first glanceit seems that using the norm , problem ( p1 ) can be approximately solved by replacing the objective function with where can be treated as a convex relaxation of the indicator functions in ( [ eq : weighted sum power ] ) , and indicates the relative importance of the penalty term associated with ap .however , problem ( p1 ) with ( [ eq : scaling problem objective ] ) as the objective function is still non - convex due to the constraints in ( [ eq : uldl dl sinr constraint ] ) and ( [ eq : uldl ul sinr constraint ] ) .furthermore , since the ul receive beamforming vector s can be scaled down to be arbitrarily small without affecting the ul sinr defined in ( [ eq : ul sinr ] ) , minimizing ( [ eq : scaling problem objective ] ) directly will result in all s going to zero . to be more specific ,let and denote the optimal solution of problem ( p1 ) with ( [ eq : scaling problem objective ] ) as the objective function .then , it follows that and , preserves the `` group - sparse '' property where the non - zero block components correspond to the active aps .two issues thus arise : first , the ul does not contribute to the selection of active aps ; second , the set of selected active aps based on the dl only can not guarantee the qos requirements for the ul . 
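since several displayed formulas in this passage were likewise lost , a hedged reconstruction may help ; the symbols and the exact composition of the per - ap groups are assumptions consistent with the surrounding discussion . with $\mathbf{g}_k$ the concatenated ul channel of mu $k$ , $p_k$ its transmit power and $\mathbf{v}_k$ its network - wide receive beamformer , the standard ul sinr reads

\mathrm{SINR}^{\mathrm{ul}}_{k} = \frac{p_{k}\,\bigl|\mathbf{v}_{k}^{H}\mathbf{g}_{k}\bigr|^{2}}{\sum_{j\neq k} p_{j}\,\bigl|\mathbf{v}_{k}^{H}\mathbf{g}_{j}\bigr|^{2}+\sigma^{2}\,\|\mathbf{v}_{k}\|^{2}} ,

which is invariant under the scaling $\mathbf{v}_{k}\mapsto c\,\mathbf{v}_{k}$ . collecting all beamforming coefficients hosted by ap $n$ into one block , $\tilde{\mathbf{w}}_{n}=[\mathbf{w}_{n,1}^{T},\dots,\mathbf{w}_{n,K}^{T},\mathbf{v}_{n,1}^{T},\dots,\mathbf{v}_{n,K}^{T}]^{T}$ , the mixed - norm penalties discussed above take the form

\Omega_{2}=\sum_{n=1}^{N}\lambda_{n}\bigl\|\tilde{\mathbf{w}}_{n}\bigr\|_{2} , \qquad \Omega_{\infty}=\sum_{n=1}^{N}\lambda_{n}\bigl\|\tilde{\mathbf{w}}_{n}\bigr\|_{\infty} ,

i.e. , an $\ell_{1}$ norm over the per - ap group norms . the scaling invariance of the ul sinr then makes the difficulty explicit : minimizing the penalized objective drives every $\mathbf{v}_{k}$ towards zero , so only the dl part of each group influences which aps are selected .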
as a result ,the norm penalty term in ( [ eq : scaling problem objective ] ) or more generally the norm penalty does not work for the joint dl and ul ap selection in our problem , and hence the algorithm proposed in , which involves only the dl transmit beamforming vector s , can not be modified in a straightforward way to solve our problem .next , we reformulate problem ( p1 ) by introducing a set of binary variable s indicating the `` active / sleep '' state of each ap as follows . where is a constant with arbitrary value .note that the active - sleep constraints in ( [ eq : bigm onoff ] ) are inspired by the well - known big- method : if , the constraint ( [ eq : bigm onoff ] ) ensures that ; if , the constraint has no effect on and , as represents an upper bound on the term .notice that can be chosen arbitrarily due to the scaling invariant property of ul receive beamforming vector s . with the active - sleep constraints in ( [ eq : bigm onoff ] ), the equivalence between problems ( p1 ) and ( p2 ) can be easily verified . in , a similar problem to ( p2 )was studied corresponding to the case with only dl transmission . for problem ( p2 ) without and and their corresponding constraints , the problem can be transformed to a convex second - order cone programming ( socp ) by relaxing the binary variable as , \forall n \in \mathcal{n} ] , ] denote the sparse solution after the convergence of the above iterative algorithm .then the nonzero entries in correspond to the aps that need to be active , i.e. , ._ obtain the optimal transmit / receive beamforming vectors and , , given the selected active aps_. this can be done by solving ( p4 ) with , and .3 . _ obtain the optimal transmit power values of mus , . this can be done by solving problem ( [ eq : ul power minimization ] ) with , , which is a simple linear programming ( lp ) problem .the iterative update given in ( [ eq : iterative update ] ) is designed to make small entries in converge to zero .furthermore , as the updating evolves , the penalty associated with ap in the objective function , i.e. , , will converge to two possible values : in other words , only the active aps will incur penalties being the exact same values as their static power consumption , which has the same effect as the indicator function in problem ( p1 - 1 ) or ( p1 ) .convergence of this algorithm can be shown by identifying the iterative update as a majorization - minimization ( mm ) algorithm for a concave minimization problem , i.e. , using function , which is concave , to approximate the indicator function given in ( [ eq : on off condition ] ) .the details are thus omitted due to space limitations .it is first observed that the per - ap power constraints in ( [ eq : uldl dl power constraint ] ) , i.e. 
, are convex .therefore , adding per - ap power constraints to problem ( p1 - 1 ) does not need to alter the above algorithm .thus , we focus on the per - mu power constraints in the ul transmission in this subsection .again , we consider the following transmit sum - power minimization problem in the ul with per - mu power constraints : although it has been shown in that the sum - power minimization problem in the dl with per - ap power constraints can be transformed into an equivalent min - max optimization problem in the ul , we are not able to find an equivalent dl problem for problem ( [ eq : ul power minimization with per mu ] ) as in section [ sec : without ] which is able to handle the per - mu power constraints .the fundamental reason is that the power allocation obtained by solving problem ( [ eq : ul power minimization ] ) is already component - wise minimum , which can be shown by the uniqueness of the fixed - point solution for a set of minimum sinr requirements in the ul given randomly generated channels .the component - wise minimum power allocation indicates that it is not possible to further reduce one particular mu s power consumption by increasing others , i.e. , there is no tradeoff among different mus in terms of power minimization .consequently , solving problem ( [ eq : ul power minimization with per mu ] ) requires only one additional step compared with solving problem ( [ eq : ul power minimization ] ) , i.e. , checking whether the optimal power solution to problem ( [ eq : ul power minimization ] ) satisfies the per - mu power constraints . if this is the case , the solution is also optimal for problem ( [ eq : ul power minimization with per mu ] ) ; otherwise , problem ( [ eq : ul power minimization with per mu ] ) is infeasible . next , we present our complete algorithm for problem ( p1 ) with the per - ap and per - mu power constraints .compared to the algorithm proposed for problem ( p1 - 1 ) in section [ sec : without ] without the per - ap and per - mu power constraints , the new algorithm differs in the first step , i.e. , to identify the subset of active aps .the main idea is that a set of candidate active aps is first obtained by ignoring the per - mu power constraints but with a new sum - power constraint in the ul ( or equivalently its virtual dl ) , i.e. , we iteratively solve the following problem similarly as in the first step of solving problem ( p1 - 1 ) in section [ sec : without ] . the sum - power constraint in ( [ eq : sum user p4 ] ) is added to impose a mild control on the transmit powers of all mus in the ul . after obtaining the candidate set , the feasibility of the ul transmission is then verified .if the candidate set can support the ul transmission with the given per - mu power constraints , then the optimal solution of ( p1 ) is obtained ; otherwise , one or more aps need to be active for the ul transmission . to be more specific ,denote the set of candidate active aps obtained by iteratively solving problem ( p5 ) as .problem ( [ eq : ul power minimization ] ) is then solved with , for which the feasibility is guaranteed due to the virtual dl sinr constraints in ( [ eq : vdl - socp sinr constraint ] ) .we denote the obtained power allocation as , . *if can support the ul transmission without violating any mu s power constraint , i.e. , the candidate set can be finalized as the set of active aps and the algorithm proceeds to find the optimal transmit / receive beamforming vectors similarly as that in section [ sec : without ] . 
* if can not support the ul transmission with the given mu s power constraints , we propose the following price based iterative method to determine the additional active aps .specifically , in each iteration , for those aps that are not in the candidate set each will be assigned a price , which is defined as where .the price is set to be the normalized ( by its corresponding static power consumption ) weighted - sum power gains of the channels from ap to all the mus that have their power constraints being violated .the weights are chosen as the ratios of mus required additional powers to their individual power limits . according to the definition of in ( [ eq : price ] ) ,the ap having smaller static power consumption and better channels to mus whose power constraints are more severely violated will be associated with a larger price .the candidate set is then updated by including the ap that corresponds the largest as with updated , the feasibility of the ul transmission needs to be re - checked by obtaining a new set of power allocation , which will be used to compute the new s in next iteration if further updating is required .the above process is repeated until all the mus power constraints are satisfied .its convergence is guaranteed since problem ( p1 ) has been assumed to be feasible if all aps are active . combining with the algorithm in section [ sec : without ] , our complete algorithm for problem ( p1 ) based on gso is summarized in table [ table1 ] . for the algorithm given in table [ table1 ] , there are two problems that need to be iteratively solved , i.e. , problems ( [ eq : ul power minimization ] ) and ( p5 ) . since problem ( [ eq : ul power minimization ] )can be efficiently solved by the fixed - point algorithm , the computation time is dominated by solving the socp problem ( p5 ) . if the primal - dual interior point algorithm is used by the numerical solver for solving ( p5 ) , the computational complexity is of order .furthermore , since the convergence of the iterative update in steps 4)-5 ) , governed by the mm algorithm , is very fast ( approximately - iterations ) as observed in the simulations , the overall complexity of the algorithm in table [ table1 ] is approximately . ' '' '' 1 .set , initialize the set of candidate active aps as .2 . obtain s and s by solving problem ( p5 ) with .3 . set .* repeat : * * . * set , . *obtain ] and $ ] , respectively .we assume a simplified channel model consisting of the distance - dependent attenuation with pathloss exponent and a multiplicative random factor ( exponentially distributed with unit mean ) accounting for short - term rayleigh fading .we also set if not specified otherwise , i.e. , we consider the sum - power consumption of all active aps and mus .finally , we set the receiver noise power for all the aps and mus as .first , we demonstrate the importance of active ap selection by jointly considering both dl and ul transmission in terms of the sinr feasibility in c - ran .since feasibility is our focus here instead of power consumption , it is assumed that the selected active aps will support all mus for both the dl and ul transmissions .the simulation results compare our proposed algorithms ( i.e. 
, algorithms i and ii ) with the following three ap selection schemes : * * ap initiated reference signal strength ( apirss ) based selection * : in this scheme , aps first broadcast orthogonal reference signals .then , for each mu , the ap corresponding to the largest received reference signal strength will be included in the set of active aps .note that this scheme has been implemented in practical cellular systems . ** mu initiated reference signal strength ( muirss ) based selection * : in this scheme , mus first broadcast orthogonal reference signals .then , for each mu , the ap corresponding to the largest received reference signal strength will be included in the set of active aps . note that since all mus are assumed to transmit reference signals with equal power and pathloss in general dominates short - term fading , the ap that is closest to each mu will receive strongest reference signal in general .also note that in the previous apirss based scheme , if all aps are assumed to transmit with equal reference signal power ( e.g. , for the homogenous setup ) , the selected active aps will be very likely to be the same as those by the muirss based scheme . ** proposed algorithm without considering ul ( paw / oul ) * : in this algorithm , the set of active aps are chosen from the conventional dl perspective by modifying our proposed algorithms . specifically , algorithm i is used here and similar results can be obtained with algorithm ii .note that algorithm i without considering ul transmission is similar to that proposed in . with the obtained set of active aps , the feasibility check of problem ( p1 ) can be decoupled into two independent feasibility problems : one for the dl and the other for the ul , while the network feasibility is achieved only when both the ul and dl sinr feasibility of all mus are guaranteed . in fig .[ fig : feasibilitydemo ] , we illustrate the set of active aps generated by different schemes under the heterogeneous setup , and also compare them with that by the optimal exhaustive search .it is assumed that there are haps and laps jointly supporting mus .the sinr targets for both dl and ul transmissions of all mus are set as .first , it is observed that algorithm i and algorithm ii obtain the same set of active aps as shown in fig .[ fig : feasibilitydemohetnetl12 ] , which is also identical to that found by exhaustive search .second , it is observed that the haps are both chosen to be active in fig .[ fig : feasibilitydemohetnethighsignal ] for the apirss based scheme .this is due to the significant difference between hap and lap in terms of transmit power , which makes most mus receive the strongest dl reference signal from the hap .the above phenomenon is commonly found in heterogenous network ( hetnet ) with different types of bss ( e.g. macro / micro / pico bss ) .third , from fig .[ fig : feasibilitydemohetnetclose ] , the active aps by the muirss based scheme are simply those closer to the mus , which is as expected . finally , in fig .[ fig : feasibilitydemohetnetdl ] , only two laps are chosen to support all mus with the paw / oul algorithm .this is because the algorithm does not consider ul transmission , and as a result fig .[ fig : feasibilitydemohetnetdl ] only shows the most energy - efficient ap selection for dl transmission . 
to compare the feasibility performance , we run the above algorithms with different dl and ul sinr targetsit is assumed that and .the results are summarized in table [ table4 ] and table [ table3 ] , where the number of infeasible cases for each scheme is shown over randomly generated network and channel realizations , for homogeneous setup and heterogeneous setup , respectively .note that in these examples , algorithm i and algorithm ii have identical feasibility performance , since the system is infeasible only when the dl / ul sinr requirements can not be supported for given channels and power budgets even with all aps being active . from both table [ table4 ] and table [ table3 ], it is first observed that the three comparison schemes , i.e. , apirss based scheme , muirss based scheme and paw / oul , all incur much larger number of infeasible cases as compared to our proposed algorithms .it is also observed that among the three comparison schemes , paw / oul has the best performance ( or the minimum number of infeasible cases ) when the dl transmission is dominant ( i.e. , ) ; however , it performs the worst in the opposite situation ( i.e. , ) .this observation indicates that dl oriented scheme could result in infeasible transmit power of mus in the ul for the cases with stringent ul requirements . from the last two rows of table [ table3 ] , it is observed that the apirss based scheme performs worse than the muirss based scheme when the ul sinr target is high .this is because that under heterogeneous setup , as shown in fig .[ fig : feasibilitydemohetnethighsignal ] , mus are attached to the haps under apirss based scheme although the haps may be actually more distant away from mus compared with the distributed laps .this imbalanced association causes much higher transmit powers of mus or even their infeasible transmit power in the ul .there has been effort in the literature to address this traffic imbalance problem in hetnet .for example in , the reference signal from picocell bs is multiplied by a factor with magnitude being larger than one , which makes it appear more appealing for mu association than the heavily - loaded macrocell bs ..feasibility performance comparison under homogeneous setup [ cols="^,^,^,^,^,^,^ " , ] [ table3 ] next , we compare the performance of the proposed algorithms in terms of sum - power minimization in c - ran with the following benchmark schemes : * * exhaustive search ( es ) * : in this scheme , the optimal set of active aps are found by exhaustive search , which serves as the performance upper bound ( or lower bound on the sum - power consumption ) for other considered schemes . with any set of active aps , the minimum - power dl and ul beamforming problems can be separately solved .since the complexity of es grows exponentially with , it can only be implemented for c - ran with small number of aps . ** joint processing ( jp ) among all aps* : in this scheme , all the aps are assumed to be active and only the total transmit power consumption is minimized by solving two separate ( dl and ul ) minimum - power beamforming design problems .* * algorithm i with norm penalty * : this algorithm is the same as that given in table [ table1 ] except that the sparsity enforcing penalty is replaced with norm as given in ( [ eq : infinity ] ) . 
in our simulations, we consider the homogeneous c - ran setup with and plot the performance by averaging over randomly generated network and channel realizations .the sinr requirements are set as for all mus .[ fig : performancemu ] and fig .[ fig : performancecomparepcdis ] show the sum - power consumption achieved by different algorithms versus the number of mus and ap static power consumption ( assumed to be identical for all aps ) , respectively . from both figures ,it is observed that the proposed algorithms have similar performance as the optimal es and achieve significant power saving compared with jp .it is also observed that the penalty term based on either or norm has small impact on the performance of algorithm i. finally , algorithm i always outperforms algorithm ii although the performance gap is not significant .= 0.7w , .,title="fig:",width=453 ] = 0.7 .,title="fig:",width=453 ] finally , we compare the sum - power consumption tradeoffs between active aps and all mus for the proposed algorithms as well as the optimal es , by varying the weight parameter in our formulated problems .we consider a homogenous c - ran setup with and , where for all mus .since it has been shown in the pervious subsection that algorithm i with norm and norm achieves similar performance , we choose norm in this simulation .furthermore , since js assumes that all the aps are active , which decouples dl and ul transmissions and thus has no sum - power consumption tradeoffs between aps and mus , it is also not included . from fig . [fig : tradeoff ] , it is first observed that for all considered algorithms , as increases , the sum - power consumption of active aps increases and that of all mus decreases , which is as expected .it is also observed that algorithm i achieves trade - off performance closer to es and outperforms algorithm ii , which is in accordance with the results in figs .[ fig : performancemu ] and [ fig : performancecomparepcdis ] .in this paper , we consider c - ran with densely deployed aps cooperatively serving distributed mus for both the dl and ul transmissions .we study the problem of joint dl and ul mu - ap association and beamforming design to optimize the energy consumption tradeoffs between the active aps and mus .leveraging on the celebrated ul - dl duality result , we show that by establishing a virtual dl transmission for the original ul transmission , the joint dl and ul problem can be converted to an equivalent dl problem in c - ran with two inter - related subproblems for the original and virtual dl transmissions , respectively . based on this transformation , two efficient algorithms for joint dl and ul mu - ap association and beamforming designare proposed based on gso and rip techniques , respectively . by extensive simulations, it is shown that our proposed algorithms improve the network reliability / feasibility , energy efficiency , as well as power consumption tradeoffs between aps and mus , as compared to other existing methods in the literature .s. tombaz , p. monti , k. wang , a. vastberg , m. forzati , and j. zander , `` impact of backhauling power consumption on the deployment of heterogeneous mobile networks , '' in _ proc .ieee globecom _ ,pp . 1 - 5 , dec . 2011 .m. hong , r. sun , h. baligh , and z. luo , `` joint base station clustering and beamformer design for partial coordinated transmission in heterogeneous networks , '' _ ieee j. sel .areas commun .226 - 240 , feb . 2013 .y. cheng , m. pesavento , and a. 
philipp , `` joint network optimization and downlink beamforming for comp transmissions using mixed integer conic programming , '' _ ieee trans . signal process .16 , pp . 3972 - 3987 , aug. 2013 .m. rasti , a. r. sharafat , s. member , and j. zander , `` pareto and energy - efficient distributed power control with feasibility check in wireless networks , '' _ ieee trans .inform . theory _245 - 255 , jan . 2011 .k. son , h. kim , y. yi , and b. krishnamachari , `` base station operation and user association mechanisms for energy - delay tradeoffs in green cellular networks , '' _ ieee j. sel .areas commun .29 , no . 8 , pp . 1525 - 1536 , sep .d. gesbert , s. hanly , h. huang , s. s. shitz , o. simeone , and w. yu , `` multi - cell mimo cooperative networks : a new look at interference '' , _ ieee j. sel .areas commun .9 , pp . 1380 - 1408 , dec .2010 .s. h. park , o. simeone , o. sahin , and s. shamai , `` joint precoding and multivariate backhaul compression for the downlink of cloud radio access networks , '' _ ieee trans .signal process .22 , pp . 5646 - 5658 , nov . 2013 .l. liu , r. zhang , and k. c. chua , `` achieving global optimality for weighted sum - rate maximization in the k - user gaussian interference channel with multiple antennas , '' _ ieee trans .wireless commun .5 , pp . 1933 - 1945 .may 2012 .g. auer , v. giannini , c. desset , i. godor , p. skillermark , m. olsson , m. imran , d. sabella , m. gonzalez , o. blume , and a. fehske , `` how much energy is needed to run a wireless network ? '' _ ieee trans .wireless commun ._ , vol . 18 , no .40 - 49 , oct . 2011 .m. yuan and y. lin , `` model selection and estimation in regression with grouped variables , '' _ journal of the royal statistical society : series b ( statistical methodology ) _ , vol .49 - 67 , feb . 2006 .q. ye , y. chen , m. al - shalash , c. caramanis , and j. g. andrews , `` user association for load balancing in heterogeneous cellular network , '' _ ieee trans .wireless commun .12 , no . 6 ,2706 - 2716 , jun . 2013 .j. sangiamwong , y. saito , n. miki , t. abe , s. nagata , and y. okumura , `` investigation on cell selection methods associated with inter - cell interference coordination in heterogeneous networks for lte - advanced downlink , '' in _11th european wireless conference sustainable wireless technologies _ , pp . 1 - 6 | the cloud radio access network ( c - ran ) concept , in which densely deployed access points ( aps ) are empowered by cloud computing to cooperatively support mobile users ( mus ) , to improve mobile data rates , has been recently proposed . however , the high density of active aps results in severe interference and also inefficient energy consumption . moreover , the growing popularity of highly interactive applications with stringent uplink ( ul ) requirements , e.g. network gaming and real - time broadcasting by wireless users , means that the ul transmission is becoming more crucial and requires special attention . therefore in this paper , we propose a joint downlink ( dl ) and ul mu - ap association and beamforming design to coordinate interference in the c - ran for energy minimization , a problem which is shown to be np hard . due to the new consideration of ul transmission , it is shown that the two state - of - the - art approaches for finding computationally efficient solutions of joint mu - ap association and beamforming considering only the dl , i.e. 
, group - sparse optimization and relaxed - integer programming , can not be modified in a straightforward way to solve our problem . leveraging on the celebrated ul - dl duality result , we show that by establishing a virtual dl transmission for the original ul transmission , the joint dl and ul optimization problem can be converted to an equivalent dl problem in c - ran with two inter - related subproblems for the original and virtual dl transmissions , respectively . based on this transformation , two efficient algorithms for joint dl and ul mu - ap association and beamforming design are proposed , whose performances are evaluated and compared with other benchmarking schemes through extensive simulations . cloud radio access network , green communication , uplink - downlink duality , group - sparse optimization , relaxed - integer programming , beamforming . [ section ] [ section ] [ section ] [ section ] [ section ] [ section ] [ section ] |
constraint logic programming ( clp ) is a natural and well understood extension of logic programming ( lp ) in which term unification is replaced by constraint solving over a specific domain .this brings a number of theoretical and practical advantages which include increased expressive power and declarativeness , as well as higher performance for certain application domains .the resulting clp languages allow applying efficient , incremental constraint solving techniques to a variety of problems in a very natural way : constraint solving blends in elegantly with the search facilities and the ability to represent partially determined data that are inherent to logic programming . as a result ,many modern prolog systems offer different constraint solving capabilities .one of the most successful instances of clp is the class of constraint logic languages using _ finite domains _( ) .finite domains refer to those constraint systems in which constraint variables can take values out of a finite set , typically of integers ( i.e. , a _ range _ ) .they are very useful in a wide variety of problems , and thus many prolog systems offering constraint solving capabilities include a finite domain solver . in such systems ,domain ( range ) definition constraints as well as integer arithmetic and comparison constraints are provided in order to specify problems . since the seminal paper of van hentenryck et al . , many fd solvers adopt the so - called `` glass - box '' approach .our fd kernel also follows this approach , based on a unique primitive called an _indexical_. high - level constraints are then built / defined in terms of primitive constraints .an indexical has the form ` x in r ` , where is a range expression ( defined in [ fig : range : syntax ] ) . intuitively , `x in r ` constrains the ( or integer ) ` x ` to belong to the range denoted by the term ` r ` . in the definition of the range special expressionsare allowed .in particular , the expressions ` max(y ) ` and ` max(y ) ` evaluate to the minimum and the maximum of the range of the ` y ` , and the expression ` dom(y ) ` evaluates to the current domain of ` y ` .constrains are solved partially in an incremental using consistency techniques which maintain the constraint network in some coherent state ( depending on the arc - consistency algorithm used ) .this is done by monotone domain shrinking and propagation . when all constraints are placed and all values have been propagated a call is typically made to a _ labeling _ predicate which performs an enumeration - based search for sets of compatible instantiations for each of the variables that remain not bound to a single value .we refer to for more details regarding indexicals and finite domain constraint solving . in this paper , we present a new free library for constraint logic programming over finite domains , included with the ciao prolog system . 
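before describing the implementation , a minimal usage sketch may help fix ideas . the forms ` domain/3 ` , ` #\=/2 ` and ` labeling/2 ` are those appearing in the queens example later in the paper ; the arithmetic constraint ` #=/2 ` and the empty option list passed to ` labeling/2 ` are assumptions about the user - level interface .

....
% two finite domain variables in 1..5 that must differ and sum to 7.
small_demo(X, Y) :-
    domain([X, Y], 1, 5),
    X + Y #= 7,
    X #\= Y,
    labeling([], [X, Y]).
....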
the library is entirely written in prolog , leveraging on ciao s module system and code transformation capabilities in order to achieve a highly modular design without compromising performance .we describe the interface , implementation , and design rationale of each modular component .the library meets several design goals : a high level of modularity , allowing the individual components to be replaced by different versions ; high - efficiency , being competitive with other implementations ; a glass - box approach , so the user can specify new constraints at different levels ; and a prolog implementation , in order to ease the integration with ciao s code analysis components .the core is built upon two small libraries which implement integer ranges and closures .on top of that , a _ finite domain variable _ datatype is defined , taking care of constraint reexecution depending on range changes .these three libraries form what we call the _ _ of the library .this is used in turn to implement several higher - level finite domain constraints , specified using indexicals .together with a labeling module this layer forms what we name the _ solver_. a final level integrates the clp()paradigm with our . thisis achieved using attributed variables and a compiler from the clp()language to the set of constraints provided by the solver .it should be noted that the user of the library is encouraged to work in any of those levels as seen convenient : from writing a new range module to enriching the set of by writing new indexicals .one of the first clp()implementations is the chip system .this commercial system follows a typical black - box approach : it consists of a complete solver written in c and interfaces in an opaque manner to a prolog engine .this makes it difficult for the programmer to understand what is happening in the core of the system .also , no facilities are provided for tweaking the solver algorithms for a specific application . more recent clp()systems such as those in sicstus , gnu prolog , and b - prolog are built instead following more the glass - box approach .the basic constraints are decomposed into smaller but highly optimized primitives ( typically indexicals ) . consequently, the programmer has more latitude to extend the constraints as needed .however , even if such systems can be easily modified / extended at the interface level ( e.g. , both sicstus and b - prolog provide way to define new global constraints ) they are much harder to modify at the implementation level ( e.g. , it is not possible to replace the implementation of range ) .the ciao clp()library that we present has more similarities with the one recently developed for swi prolog .both are fully written in prolog and support unbound ranges .the swi library is clearly more complete than ciao s ( e.g. , it provides some global constraints and always terminating propagation ) , but it is designed in a monolithic way : it is implemented in a single file , mixing different language extensions ( using classical prolog ` term_expansion ` mechanisms ) while the ciao library is split in more around 20 modules with a clear separation of the different language extensions .summarizing , our library differs in a number of ways from other existing approaches : * first , along with more recent libraries it differs from early systems in that it is written entirely in prolog . 
this dispenses with the need for a foreign interface and opens up more opportunities for automatic program transformation and analysis .the use of the meta - predicates ` setarg/3 ` and ` call/1 ` means that the use of prolog has a minimal impact on performance . *second , the library is designed as a set of separate modules .this allows replacing a performance - critical part like the range code with a new implementation better suited for it . *third , the library supports the `` glass - box '' approach fully , encouraging the user to access directly the low - level layers for performance - critical code without losing the convenience of the high - level clp paradigm .again , the fact that the implementation is fully in prolog is the main enabler of this feature . *lastly , we have prioritized extensibility , ease of modification , and flexibility , rather than micro - optimizations and pure raw speed . however , we argue that our design will accommodate several key optimizations like the ones of without needing to extend the underlying wam . the rest of the paper proceeds as follows . in sec . [sec : structure ] we present the architecture of the library and the interface of the modules . in sec .[ sec : glass - box - effect ] we discuss with an example how to use the glass box approach at different levels for better efficiency in a particular problem , with preliminary benchmarks illustrating the gains . finally , in sec .[ sec : conclusions ] we conclude and discuss related and future work .the ciao clp()library consists of seven modules grouped into three logical layers plus two specialized prolog to prolog translators . in the definition of these modules and interfaces we profit from ciao s module system and ciao s support for assertions , so that every predicate is correctly annotated with its types and other relevant interface - related characteristics , as well as documentation .the translators are built using the ciao _ packages _ mechanism , which provides integrated and modular support for syntax modification and code transformations .a description of the user interface for the library along with up - to - date documentation may be found in the relevant part of the ciao manual . 
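as an illustration of the interface annotations mentioned above , a hypothetical excerpt of a range module interface is sketched below . the predicate ` bound_add/3 ` and the fail - on - empty - range convention are taken from later in the paper , while the module name , the second predicate and the concrete wording of the assertions are illustrative only .

....
:- module(fd_range_intervals, [bound_add/3, range_intersect/3], [assertions]).

% ciao-style assertions: documentation attached to the interface predicates.
:- pred bound_add(B1, B2, B3)
   # "@var{B3} is the bound obtained by adding bounds @var{B1} and @var{B2}.".

:- pred range_intersect(R1, R2, R)
   # "@var{R} is the intersection of @var{R1} and @var{R2}; fails if empty.".
....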
the global architectureis illustrated in fig .[ fig : global - architecture ] .the kernel layer provides facilities for range handling and propagation chains , which are used for defining finite domain variables which , as mentioned before , are different from the standard logical variables .the defines a finite set of constraints such as ` a+b = c/3 ` , using indexicals .these constraints are translated form their indexical form to a set of instructions of the kernel layer .labeling and branch - and - bound optimization search modules complete the finite domain solver .the clp()constraints are translated to by a clp()compiler .we use attributed variables to attach a finite domain variable to every logical variable involved in clp()constraints .thus , the clp()layer is thin and of very low overhead .( fd_var ) at ( 0.5 , 1 ) ; ( fd_propags ) at ( -1.5 , -0.8 ) propagators ; ( fd_range ) at ( 1.5 , -0.8 ) range ; ( fd_var ) ( fd_range ) ; ( fd_var ) ( fd_propags ) ; ( kernel ) at ( -0.25,0.3 ) ; ( n ) at ( -1.75,1.5 ) * * ; ( fd_const ) at ( 4 , -1.25 ) [ right ] ; ( labeling ) at ( 4,0.25 ) [ right ] labeling ; ( optim ) at ( 4,1.75 ) [ right ] b&b optimization ; ( optim ) to [ out=180,in=25 ] ( kernel ) ; ( optim ) to [ out=-45,in=45 ] ( fd_const ) ; ( labeling ) to [ out=180,in=0 ] ( kernel ) ; ( labeling ) to [ out=-90,in=120 ] ( fd_const ) ; ( fd_const ) to [ out=180,in=-20 ] node[below=13,pos=0.6 ] idx compiler ( kernel ) ; ( solver ) at ( 2.25,0 ) ; ( n ) at ( 6.5,-2.25 ) * * ; ( clp_rt ) at ( 2.25,3.65 ) clp()run time ; ( clp_rt ) ( solver ) node[midway , auto = left ] clp()compiler ; ( solver ) at ( 2.25,0.5 ) ; ( n ) at ( -2.5,3.5 ) * clp( ) * ; the finite domain kernel is the most important part of the library .its implementation freely follows the design of the gnu prolog ( provides a general overview of this solver ) .a finite domain variable is composed of a range and several propagation chains . when the submission of a constraint modifies the range of a finite domain variable , other finite domain variables depending on that range are updated by firing up constraints stored in propagation chains .the propagation events are executed in a synchronous way , meaning that a range change will fail if any of its dependent constraints can not be satisfied. the kernel implements arithmetic over ranges ( pointwise operations , union , intersection complementation , ... ) and management of propagation chains , amounting to the delay of prolog goals on arbitrary events .these two elements are used to implement the two basic operations of a finite domain variable : ` tell ` and ` prune ` .the first one attempts to constrain a variable into a particular range , while the second one ( prune/2 ) removes a value form the range of a variable .the variable code inspects the new and old ranges and wakes up the suspended goals on a given variable .all the data structures are coded in an object - oriented style .efficient access and in - place update are implemented by using the ` setarg/3 ` primitive .we took special care to use ` setarg/3 ` in a _safe _ way to avoid undesired side effects , such as those described by tarau .range handling is one of the most important parts of the library , given the high frequency of range operations .indeed , the library supports three implementations for ranges : the standard one using lists of closed integer intervals ; an implementation using lists of open ( i.e. 
, unbounded ) intervals ; and a bit - based implementation which despite allowing unbound ranges is more suitable for problems dealing with small ranges .indeed , the user is encouraged to implement new range modules which are better suited to some particular problems .the interface that a range module must implement is split into two parts .the first one , shown in fig .[ fig : range : syntax ] , deals with range creation and manipulation .each of the operations defined in the figure has a corresponding predicate .for instance , bounds addition ` t+t ` is implemented by the predicate ` bound_add/3 ` , and similarly for the rest of the predicates .note that it is a convention of the interface that any operation that tries to create an empty range will fail .this is better for efficiency and we found no practical example yet where this would be inconvenient .[ fig : range : preds ] lists the rest of the predicates that a range implementation must provide .they are mainly used for obtaining information about a range and are instrumental for the labeling algorithms .lllllll ` r ` & & & & ` t .. t ` & & ( interval ) + & & & & ` { t } ` & & ( singleton ) + & & & & ` r \/r ` & & ( union ) + & & & & ` r /\ r ` & & ( intersection ) + & & & & ` - r ` & & ( complementation ) + & & & & ` r + n ` & & ( pointwise addition ) + & & & & ` r - n ` & & ( pointwise subtraction ) + & & & & ` r * n ` & & ( pointwise multiplication ) + + ` t ` & & : : = & & ` min(y ) ` & & ( minimum ) + & & & & ` max(y ) ` & & ( maximum ) + & & & & ` dom(y ) ` & & ( domain ) + & & & & ` val(y ) ` & & ( value ) + & & & & & & ( arithmetic expression ) + & & & & ` n ` & & ( bound ) + [ cols= " < , < " , ] the differences go from negligible to more than 50% . in a different benchmark ( bridge ) , the closed interval version was 25% faster than the open one .we now focus on the different possibilities that the library allows for programming . in the queens program ,the main constraint of the problem is expressed by the ` diff/3 ` constraint : .... diff(x , y , i ) : - x # \= y , x # \= y+i , x+i # \= y. .... where ` i ` will be always an integer .however , the compiler can not ( yet ) detect that ` i ` is an integer , and may perform some unnecessary linearization .we may skip the compiler and define ` diff ` using directly the : .... diff(x , y , i):-fd_constraints:'a<>b'(~w(x),~w(y ) ) , fd_constraints:'a<>b+t'(x , y , i ) , fd_constraints:'a<>b+t'(y , x , i ) . ....the speedup is considerable , getting close to 50% speedup in some cases .indeed , the compiler should be improved to produce this kind of code by default .the user may notice that the above three constraints may be encoded by using just two indexicals .for instance one can use the following definition for ` diff/3 ` : .... diff(x , y , i):-idx_diff(~w(x ) , ~w(y ) , i ) .idx_diff(x , y , i ) + : x in -{val(y ) , val(y)+c(i ) , val(y)-c(i ) } , y in -{val(x ) , val(x)+c(i ) , val(x)-c(i)}. .... again , the improvement is up to 40% from the previous version .however , the constraint ` diff ` can be improved significantly by using directly the kernel delay mechanism ( val chain ) and operations . in particular , we use the optimized kernel ` prune/2 ` operation that removes a single element form the range of a variable : .... 
diff(x , y , i):- wrapper(x , x0 ) , wrapper(y , y0 ) fd_term : add_propag(y , val , ' queens : cstr'(x0 , y0 , i ) ) , fd_term : add_propag(x , val , ' queens : cstr'(y0 , x0 , i ) ) .% y is always a singleton .cstr(x , y , i):- fd_term : integerize(y , y0 ) , fd_term : prune(x , y0 ) , y1 is y0 + i , fd_term : prune(x , y1 ) , y2 is y0 - i , fd_term : prune(x , y2 ) ..... we reach around 80% speedup from the first version , and this result is optimal regarding what the user can do .additional speedups can be achieved , but not without going beyond our glass - box approach .indeed , our clp()compiler is simpler given that we are working on a new translator that directly generates custom kernel constraints from clp()constraints .the ciao clp()library described is distributed with the latest ciao version , available at http://ciaohome.org . although included in the main distribution , it lives in the ` contrib ` directory , as it should be considered at a beta stage of development . even if we did not include yet important optimizations that should improve significantly the performance of the library, the current results are encouraging .the library has been used successfully internally within the ciao development team in a number of projects .the modular design and low coupling of components allow their easy replacement and improvement .indeed , every individual piece may be used in a glass - box fashion .we expect that the use of prolog will allow the integration with ciao s powerful static analyzers . at the same time, the clear separation of run - time and compile - time phases allows the modification and the improvement of the translation schemes in an independent manner .indeed , the advantages of this design have already been showcased in , where a prolog to javascript cross - compiler was used to provide a js version of the library and which only required replacing a few lines of code . using thiscross - compiler clp()programs can be run on the server side or on the browser side unchanged . regarding future work , we distinguish two main lines : the kernel and the clp()compiler . for the kernel ,the first priority is to finish settling down its interface . while we consider it mature , some optimizations like avoiding reexecution may require that we include more information in our structure , range modification times , etc .indeed , we would like to support more strategies for propagators than the current linear one .support for some global constraints is on the roadmap , and will likely mean the addition of more propagation chains .the library features primitive but very useful statistics .however we think it is not enough and we are working on an package that will provide detailed statistics and profiling .this is key in order to extract the maximum performance from the library .once we get detailed profiling information from a wide variety of benchmarks , a better range implementation will be due . regarding the clp()compiler , the current version should be considered a proof of concept .indeed , we are studying alternative strategies including the generation of custom kernels or specialized for each particular program in contrast to the current approach of mapping a clp()program to a fixed set of primitive constraints .ciaopp ciao s powerful abstract interpretation engine could be used in the translation , providing information about the clp()program to the clp()compiler so it can generate an optimal kernel of for that program . 
in this sense, we think that we will follow the ciaopp approach of combining inference with user - provided annotations in the new clp()compiler .the authors would like to thank the anonymous reviewers for their insightful comments .the research leading to these results has received funding from the madrid regional government under cm project p2009/tic/1465 ( prometidos ) , and the spanish ministry of economy and competitiveness under project tin-2008 - 05624 _ doves_. the research by rmy haemmerl has also been supported by picd , the programme for attracting talent / young phds of the montegancedo campus of international excellence . 10 jaffar , j. , maher , m. : onstraint lp : a survey .jlp * 19/20 * ( 1994 ) 503581 , saraswat , v. , deville , y. : design , implementation and evaluation of the constraint language cc(fd ) .journal of logic programming * 37*(13 ) ( 1998 ) 139164 dib , m. , abdallah , r. , caminada , a. : arc - consistency in constraint satisfaction problems : a survey . in : second international conference on computational intelligence ,modelling and simulation .( 2010 ) 291296 hermenegildo , m.v . ,bueno , f. , carro , m. , lpez , p. , mera , e. , morales , j. , puebla , g. : n overview of ciao and its design philosophy .theory and practice of logic programming * 12*(12 ) ( january 2012 ) 219252 http://arxiv.org/abs/1102.5497 .dincbas , m. , hentenryck , p.v . , simonis , h. , aggoun , a. : he constraint logic programming language chip . in : proceedings of the 2nd international conference on fifth generation computer systems .( 1988 ) 249264 carlsson , m. , ottosson , g. , carlson , b. : an open - ended finite domain constraint solver . in : proceedings of the9th international symposium on programming languages : implementations , logics , and programs : including a special trach on declarative programming languages in education .plilp 97 , london , uk , uk , springer - verlag ( 1997 ) 191206 d. diaz , s.a . ,codognet , p. : on the implementation of gnu prolog . theory and practice of logic programming * 12*(12 ) ( january 2012 ) 253282 codognet , p. , diaz ,d. : compiling constraints in clp(fd ) . j. log . program .* 27*(3 ) ( 1996 ) 185226 zhou , n.f . :programming finite - domain constraint propagators in action rules .. log . program .* 6*(5 ) ( september 2006 ) 483507 triska , m. : the finite domain constraint solver of swi - prolog . in schrijvers ,t. , thiemann , p. , eds . : functional and logic programming .volume 7294 of lecture notes in computer science .springer berlin / heidelberg ( 2012 ) 307316 morales , j.f . , hermenegildo , m.v . ,haemmerl , r. : odular extensions for modular ( logic ) languages . in : 21th international symposium on logic - based program synthesis and transformation ( lopstr11 ) , odense , denmark ( july 2011 ) to appear .daz , d. , codognet , p. : inimal extension of the wam for clp(fd ) . in : proceedings of the tenth international conference on logic programming , budapest , mit press ( june 1993 ) 774790 cabeza , d. , hermenegildo , m. : ew module system for prolog . in : international conference on computational logic ,number 1861 in lnai , springer - verlag ( july 2000 ) 131148 hermenegildo , m. , puebla , g. , bueno , f. , lpez - garca , p. : ntegrated program debugging , verification , and optimization using abstract interpretation ( and the ciao system preprocessor ) .science of computer programming * 58*(12 ) ( 2005 ) 115140 tarau , p. 
: inprolog 2006 version 11.x professional edition user guide .binnet corporation .( 2006 ) available from ` http://www.binnetcorp.com/ ` .schrijvers , t. , triska , m. , demoen , b. : or : extensible search with hookable disjunction . draft .available from http://users.ugent.be/~tschrijv/tor/ ( 2012 ) morales , j.f ., haemmerl , r. , carro , m. , hermenegildo , m.v .: ightweight compilation of ( c)lp to javascript . theory and practice of logic programming , 28th intl .conference on logic programming ( iclp12 ) special issue ( 2012 ) to appear ..... queens(n , l , lab , const ) : - length(l , n ) , domain(l , 1 , n ) , safe(l , const ) , labeling(lab , l ) . | we present a new free library for constraint logic programming over finite domains , included with the ciao prolog system . the library is entirely written in prolog , leveraging on ciao s module system and code transformation capabilities in order to achieve a highly modular design without compromising performance . we describe the interface , implementation , and design rationale of each modular component . the library meets several design goals : a high level of modularity , allowing the individual components to be replaced by different versions ; high - efficiency , being competitive with other implementations ; a glass - box approach , so the user can specify new constraints at different levels ; and a prolog implementation , in order to ease the integration with ciao s code analysis components . the core is built upon two small libraries which implement integer ranges and closures . on top of that , a _ finite domain variable _ datatype is defined , taking care of constraint reexecution depending on range changes . these three libraries form what we call the _ _ of the library . this is used in turn to implement several higher - level finite domain constraints , specified using indexicals . together with a labeling module this layer forms what we name _ the solver_. a final level integrates the clp()paradigm with our . this is achieved using attributed variables and a compiler from the clp()language to the set of constraints provided by the solver . it should be noted that the user of the library is encouraged to work in any of those levels as seen convenient : from writing a new range module to enriching the set of by writing new indexicals . |
the response to external electric and magnetic fields provides a fundamental tool for studying and altering the properties of materials with numerous attendant applications .in particular higher - order responses allow for ` manipulating light with light . 'thus , there is considerable interest in identifying molecular systems with large non - linear responses .one approach in this direction is based on push - pull systems , i.e. , chain - like molecules with an electron - donor group at one end and an electron - acceptor group at the other ( see fig . [ fig01 ] ) . when the backbone is a -conjugated oligomer the electrons of the backbone may respond easily to perturbations like those of the substituents and/or external fields . due to the donor and acceptor groups a large electron transfer , and , accordingly ,a large dipole moment can occur and one may hope for large responses of the dipole moment to external fields . for these -conjugated systems , each circle in fig .[ fig01 ] could be , for example , a vinylene group , a phenylene group , a methinimine group , or combinations of those .if the push - pull system is sufficiently large , we may split it into three parts , i.e. , a left ( l ) , a central ( c ) , and a right ( r ) part as shown in fig .[ fig01 ] .electrons of the central part are assumed to be so far from the terminations that they do not feel the latter ( or , more precisely , the effects of the terminations are exponentially decaying in the central part ) .the dipole moment , , is useful in quantifying the response of the system to an external electric field , here , is the component ( i.e. , , , or ) of the external field with the frequency and is the frequency of the response of the molecule to the field .the summations go over all the frequencies of the applied field . is the dipole moment in the absence of the field which vanishes for .moreover , is the linear polarizability , and , , are the first , second , hyperpolarizability .sum rules require that these quantities can be non - zero only if the frequency of the response , , equals the sum of the frequencies ( eventually multiplied by ) , i.e. , for we require . in the present paper we focus on static external fields , in which case .furthermore , we shall study a neutral system , although our arguments also are valid for charged systems as long as the extra charge is localized to the terminations .we let be the ( field - dependent ) total charge density ( i.e. , the sum of the nuclear and electronic charge densities ) , and choose the long axis to be .then the component of the total dipole moment that is of interest here , namely , is given by ( omitting its argument , ) where we have split the integral into contributions from the left , central , and right regions of the chain .the central region consists of identical neutral units .we can , therefore , write where is the number of units in c and is the component of the dipole moment of one of these units . in order to evaluate the other two contributions to the total dipole moment in eq .( [ eqn02 ] ) we define a ` typical ' center for each term , i.e. , and ( these could , e.g. 
, be the center of mass of the right and left parts , respectively ) , and let and be the components of these vectors .since the chain is neutral we , then , obtain the first term on the right hand side describes the contribution to the dipole moment associated with electron transfer from one end to the other .this term grows linearly with chain length ( due to ) as does the term in eq .( [ eqn03 ] ) . on the other hand , the last two terms in eq .( [ eqn04 ] ) describe local dipole moments that arise from the electron distributions within the two terminal regions and they are independent of the chain length .this discussion suggests that donor / acceptor ( = d / a ) substitution at the ends of long chains may change the charge distribution in r and l so as to strongly enhance the dipole moment and , consequently , produce a particularly large change in the dipole moment when the system is exposed to an external electric field .therefore , very many studies have been devoted to push - pull systems as a function of increasing length ( see , e.g. , [ ] ) .not only the electrons but also the structure ( phonons ) will respond to a static electric field .we will demonstrate that , for sufficiently long chains , the electronic response per unit of a push - pull system ( with structural relaxation taken into account ) becomes independent of the donor and acceptor groups , implying that the materials properties can not be improved upon substitution .our mathematical arguments for this finding are presented in the next section , and in sec .[ sec03 ] we illustrate and analyse the results through calculations on a model system .the particular case of inversion symmetry is discussed in sec .[ sec04 ] where we also make a comparison with previous results .finally , a summary is provided in sec .[ sec05 ] .the arguments we present are related to those originally given by vanderbilt and king - smith for an extended system in the absence of an external field .they argued that the permanent polarization ( i.e. dipole moment per unit length ) is a bulk property. very recently , kudin _et al._ proved that the permanent polarization is quantized for d / a substituted systems .neither of these works considered the induced polarization or the structural relaxation due to an external field .finally , in a recent paper we presented some of the arguments behind the present work but did not analyze the predictions as we do here using a model system. replacing some ( groups of ) atoms with others at the chain ends , the electronic orbitals with components near the ends will change .since the set of electronic orbitals is orthonormal , all other orbitals will change as well .accordingly , the charge distribution may change everywhere due to the substitutions .when an electrostatic field is applied as well , each orbital will respond to the field . since the orbitals will have changed due to the substitution ,so will their responses to the field .furthermore , the structural responses due to the field will also depend on the substitution at the ends .therefore , the dipole moment can depend upon both the substitution and the field . 
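the displayed equations of this section were lost in this copy of the text . one reconstruction of eq . ( [ eqn04 ] ) that is consistent with the verbal description above ( an end - to - end charge - transfer term plus two local , length - independent terminal dipoles ; symbol names are assumptions ) is

\int_{\mathrm{L}} z\rho\,d\vec{r}+\int_{\mathrm{R}} z\rho\,d\vec{r}
 \;=\; q\,\bigl(Z_{\mathrm{R}}-Z_{\mathrm{L}}\bigr)
 \;+\;\int_{\mathrm{L}}\bigl(z-Z_{\mathrm{L}}\bigr)\rho\,d\vec{r}
 \;+\;\int_{\mathrm{R}}\bigl(z-Z_{\mathrm{R}}\bigr)\rho\,d\vec{r} ,
 \qquad q\equiv\int_{\mathrm{R}}\rho\,d\vec{r} ,

where neutrality of the chain ( with c itself neutral ) gives $\int_{\mathrm{L}}\rho\,d\vec{r}=-q$ . it is this transferred charge $q$ whose possible variation under different terminations is analysed next .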
from these argumentsthere is no reason to believe that , , , , ( with being the number of repeated units ) will be independent of the substitution .however , we shall argue here that the charge in eq .( [ eqn04 ] ) can change , at most , by an integral number of elementary units for different d / a substitutions at fixed external static field .our proof is a generalization of arguments due to vanderbilt and king - smith ( see also [ ] ) , and was previously proposed by the present authors. it will be verified here by calculations on a model system and given a thorough analysis on that basis . for a given system ( with specified geometry ) , andvalue of the external field , we transform the set of occupied orbitals into a set of orthonormal , localized functions .those functions ascribed to c will be similar to the wannier functions of the infinite periodic system .the localized orbitals will be centered in one region , but may have tails sticking into another region .we assume that the terminal regions are large enough so that any functions centered therein , which differ from those of c , are exponentially vanishing in c. on the other hand , those functions ascribed to c , but centered on units next to l or r , will likely have tails extending into those regions .the density matrix can then be written in block - diagonal form with three blocks , one for each of the three regions .since the density matrix is idempotent , each block will be so , too , and there will be an integral number of electrons associated with each of the three sets of functions .that is to say , the number of electrons associated with the functions centered in the two end regions is integral .accordingly , any non - integral part of is associated with the tails of the functions in c that extend into r , which , per construction , is independent of the terminations , i.e. , also of d / a substitution .we conclude that , for different terminations , can change only by an integer .this is valid for long chains and all fields .therefore , the electronic response per unit of the chains to the field , with or without nuclear response , is independent of termination .the only possible change for different terminations is that may jump by an integer for different field strengths .in fact , our numerical studies on a hckel - type model will confirm this prediction .of course , in ab initio calculations , there may also be a jump due to changing the basis set or the method ( e.g. hartree - fock vs. kohn - sham dft ) .in order to explore in detail the predictions from above we studied a hckel like model for long , finite ( ab) chains . in our model , we use a basis set of orthonormal atomic orbitals ( aos ) with one ao per atom .the system has one electron per atom , and the nuclei are given charges of whereas the electronic charge is set equal to .( all quantities are expressed in atomic units in this paper . )given that is the ao of the atom ( ) and is the kohn - sham or fock single - electron hamiltonian we assume that only , , and are non - vanishing with values \nonumber\\ \langle \chi_j\vert\hat h\vert \chi_{j+2}\rangle&= & -[t_2-\alpha_2(z_{j+2}-z_j ) ] .\label{eqn06}\end{aligned}\ ] ] here is the position of the atom .different donor and acceptor groups are modeled by modifying the on - site energies of the terminating atoms and/or the terminating hopping integrals , +t_l\nonumber\\ \langle \chi_{4k+1}\vert\hat h\vert \chi_{4k+2}\rangle&= & -[t_1-\alpha_1(z_{4k+2}-z_{4k+1})]+t_r . 
\label{eqn07}\end{aligned}\ ] ] finally , we assume that in order to analyse the results we , first , define a reference structure for which the position of the atom is here is the length of the unit cell for an infinite , periodic system with the same electronic interactions and no external field .subsequently , we define for each atom the total energy is written as the sum over occupied orbital energies ( multiplied by 2 due to spin degeneracy ) augmented by a harmonic term in the nearest- and the next - nearest - neighbour bond lengths , is the strength of the electrostatic field . for the infinite , periodic chain without an external field, the lowest total energy corresponds to a certain lattice constant and the force constants and are determined so that and take certain chosen values . with beingthe orbital ( ordered according to increasing orbital energy ) we calculate the mulliken charge on the atom for field as which leads to the dipole moment the charge transfer is given through we also define where is the charge for the infinite , periodic chain in the absence of the field . the effects on the charge distribution of the push - pull chain due to including the field , whereas includes effects both from the field and from the terminations .note that gives the field - independent effect of the terminations .finally , it turns out to be useful to define the center and width of the orbital according to ^{1/2 } , \label{eqn15}\end{aligned}\ ] ] which is consistent with eq .( [ eqn07a ] ) .we performed calculations for six different terminations specified by .the results are summarized in figs .[ fig02 ] , [ fig02a ] , [ fig04 ] , [ fig05 ] , and [ fig06 ] . since our model is that of a finite chain with two different types of atoms , a and b , the mulliken charges in the central region take two values .this is clearly recognized in the presentation of in fig .[ fig02 ] for . in fig .[ fig02 ] it is also seen that near the ends , the mulliken charges differ from the values of the inner part and , moreover , these charges depend sensitively on the terminations . for the field strength these findingsare only marginally modified compared to those of a vanishing field ( not shown ) . from see that the combination of electrostatic field and termination leads to an internal polarization of each unit in c. actually , shows a reduced internal polarization compared to .thus , terminating the chain reduces the effect of the field in that regard . whereas contains information about the field - induced charge redistributions, contains additional information about the ( field - dependent ) effects of the terminations . for field - induced charge redistributions are smaller near the terminations than in the central parts . for the larger field , , in fig .[ fig02a ] the identification of the central region becomes much more difficult and , as we shall see below , electrons are transferred from one end to the other .moreover , in this case the field perturbs the system so strongly that the effects of the field are stronger than those of the terminations .this can be seen from the fact that and are very similar .the structure also depends upon the termination . 
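a bare-bones version of such a calculation is sketched below . it builds the single-particle hamiltonian of a finite chain with nearest- and next-nearest-neighbour hoppings , a static field on the diagonal and modified on-site energies at the two terminal atoms , fills the lowest half of the spectrum ( two electrons per orbital ) , and evaluates mulliken charges , the dipole moment and the net charge accumulated on the right half of the chain . every numerical value is a placeholder , the geometry is frozen ( no structural relaxation , so the couplings of eqs . ( [ eqn06 ] )-( [ eqn07 ] ) are dropped ) and bond alternation is put in by hand ; the sketch is only meant to make the preceding definitions concrete , not to reproduce the reported figures .

```python
import numpy as np

def push_pull_chain(n_atoms=80, t1=1.0, dt=0.1, t2=0.15,
                    eps_l=-0.4, eps_r=+0.4, field=0.02, a=1.0):
    """hueckel-type sketch of a finite push-pull chain (all values illustrative)."""
    z = a * np.arange(n_atoms)                      # frozen, equally spaced geometry
    h = np.zeros((n_atoms, n_atoms))
    # alternating nearest-neighbour hoppings mimic the bond-length alternation
    for j in range(n_atoms - 1):
        h[j, j + 1] = h[j + 1, j] = -(t1 + dt if j % 2 == 0 else t1 - dt)
    for j in range(n_atoms - 2):                    # next-nearest-neighbour hopping
        h[j, j + 2] = h[j + 2, j] = -t2
    # donor / acceptor groups modelled as on-site shifts of the terminal atoms
    h[0, 0] += eps_l
    h[-1, -1] += eps_r
    # static field along the chain axis (sign convention chosen arbitrarily here)
    h[np.diag_indices(n_atoms)] += -field * z

    e, c = np.linalg.eigh(h)
    n_occ = n_atoms // 2                            # one electron per atom, spin degeneracy 2
    # mulliken charges in an orthonormal basis: nuclear +1 minus electronic population
    q = 1.0 - 2.0 * np.sum(c[:, :n_occ] ** 2, axis=1)
    mu = np.sum(q * z)                              # dipole moment along the chain
    q_right = np.sum(q[n_atoms // 2:])              # net charge on the right half
    return mu, q_right, q

for f in (0.0, 0.02, 0.05):
    mu, q_r, _ = push_pull_chain(field=f)
    print(f"field {f:5.2f}:  dipole {mu:10.3f}   charge on right half {q_r:+.3f}")
```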
for the intermediate field of fig. [ fig02 ] ( and for zero field as well ) the atomic coordinate is nearly constant in c but varies considerably near the ends, where its value depends on the termination, as was the case for the atomic charges. for the higher field in fig. [ fig02a ] it appears as if no central region can be identified from this parameter. however, the fact that is essentially linear for the innermost atoms implies that there is a well-defined, repeating structure in c with a lattice constant differing from that of the field-free case. fig. [ fig04 ] shows that the charge transfer, , is independent of termination ( though not independent of the field ), with the exception of jumps by ( even ) integers. ( the integers are even because we have not allowed for spin polarization. ) however, the charge distribution inside r or l does depend on the terminations and, as a consequence, the dipole moment does as well. on the other hand, the variation of as a function of for different terminations follows parallel curves, implying that the ( hyper)polarizabilities are independent of the terminations. in fact, a least squares fit yields the values ( including maximum deviations ): , , , and for all six terminations ( a generic version of such a fit is sketched at the end of this passage ). as a function of field is discontinuous and the power series expansion is valid only up to the field where the discontinuity occurs. once such a discontinuity has been passed, the dipole moment depends more strongly on the field. this means that the only way of increasing the responses of long push-pull systems to dc fields is to design chains for which the integral electron transfers occur at low fields. at a given field, the size of the chain for which jumps in the charge ( i.e. zener tunneling ) take place depends on the terminations ( cf. [ fig05 ] ). in the shortest chains, for which zener tunneling does not occur, follows parallel curves as a function of chain length, , for different terminations. this means that the dipole moment and ( hyper)polarizabilities per unit become independent of termination. however, as seen in fig. [ fig05 ], the slope of these curves increases after zener tunneling has taken place, implying that the dipole moment increases. assuming that the field-dependence of the dipole moment likewise increases, this suggests that the polarizability and/or hyperpolarizabilities per unit may increase for d / a substituted systems after an integral number of electrons has been transferred from one end to the other. in fig. [ fig06 ] we show an example of what happens to the molecular orbitals when the jumps take place. calculations were performed for field strengths between and in steps of , but in the figure we only show the results for fields where zener tunneling occurs. in all cases, the curves vary smoothly as a function of field strength. at the lowest two fields, the occupied orbitals closest to the fermi level have a center in the left part ( ), whereas the unoccupied orbitals closest to the fermi level are centered in the right part. at the field , two electrons ( one per spin direction ) are transferred from one side to the other, which again happens at a larger field ( ). in the first case, we observe the occurrence of two new, very localized, orbitals close to ( but not at ) the fermi level. the energetically lower ( i.e. occupied ) one is localized towards the chain end on the right side while the other ( unoccupied ) is localized towards the chain end on the left side.
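the least-squares extraction of the ( hyper)polarizabilities mentioned above is just a low-order polynomial fit of the dipole moment against the field , restricted to fields below the first discontinuity . with the conventional expansion mu(e) = mu0 + alpha*e + beta*e^2/2 + gamma*e^3/6 it can be done as follows ; the sample data are synthetic placeholders , not values from the model .

```python
import numpy as np

# synthetic dipole-versus-field data (placeholder numbers; in practice these
# come from the model calculations, using only fields below the first jump)
e_vals = np.linspace(-0.04, 0.04, 9)
mu_vals = 1.5 + 20.0 * e_vals + 0.5 * 150.0 * e_vals**2 + (1.0 / 6.0) * 4.0e3 * e_vals**3

coeffs = np.polyfit(e_vals, mu_vals, 3)          # highest power first
gamma, beta, alpha, mu0 = 6 * coeffs[0], 2 * coeffs[1], coeffs[2], coeffs[3]
print(f"mu0 = {mu0:.3f}, alpha = {alpha:.3f}, beta = {beta:.3f}, gamma = {gamma:.1f}")
```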
accompanyingthis interchange is a similar interchange of two rather delocalized orbitals , both of which are further away from the fermi level and centered closer to the middle of the chain .again , at the second electron transfer a pair of new , rather localized , orbitals near ( even closer to ) the fermi level show up towards the chain ends , and also this transfer is accompanied by some reorganization of the other orbitals .finally , fig .[ fig06 ] also shows an example of a reorganization of the orbitals , i.e. , for a field around . here, one localized , occupied orbital interchanges order with an adjacent ( in energy ) more delocalized orbital , but otherwise no further significant changes are observed .before proceeding to compare with previous results we develop an interesting consequence of our findings with regard to inversion symmetry .the same arguments can be applied for a system containing a mirror plane perpendicular to the chain axis , but here we shall for the sake of simplicity restrict ourselves to the case of inversion symmetry .suppose the long oligomer of interest contains a central region made up of units with inversion symmetry .even if the central part does not have inversion symmetry , it may be possible to create such with the addition of appropriate terminating groups .this is , for example , the case for oligomers of thienyleneethynylenes and thienylenevinylenes that were studied by geisler _et al._ many of the systems of interest fall into one of these two categories .since , according to our findings , d / a substitution can not affect the ( hyper)polarizabilities per unit , the latter must vanish even if the symmetry is not preserved .for instance , modifying the terminations of the systems of geisler _ et al ._ so that inversion symmetry no longer exists can not result in a non - vanishing if the chains are sufficiently long .a large fraction of previous observations are for systems of the type described in the preceding paragraph .some of these cases are discussed below along with others pertinent to our findings herein .we now briefly consider , in particular , the works mentioned in the introduction . in their combined experimental and theoretical study on some push - pull oligoenes , meyers _et al._ observed a ` negligible charge transfer all the way from the donor to the acceptor ' , which implies that is independent of the termination .on the other hand , in their theoretical study tsunekawa and yamaguchi examined shorter , nitrogen - containing push - pull oligomers .they noted that these systems are interesting from the perspective of maximizing , but our results establish that , for such to be true , the systems must be short enough so that our approach is inapplicable .this serves to highlight the point that apparent , but not real , discrepancies can occur due to shortness of the chain length .marder _ et al._ presented an approach for unifying the description of linear and nonlinear polarization in organic polymethine dyes .it has since been shown that their analysis is invalid if phonons are taken into account. here , however , we emphasize that the conclusions they draw regarding can , again , hold only for systems that are too short for our treatment to apply . clearly , the chain length required for validity of the treatment given here is an important issue . in fig .[ fig05 ] the dipole moment is converged for chains with some 20 units . however, this may be an artifact of our simple hckel model . 
in an experimental study and in several computational studies, the second hyperpolarizability per unitwas found to converge considerably slower which , in fact , agrees with our own earlier findings. thus , when focusing on higher - order non - linear responses quite large chains may be required for the results of the present work to be relevant . in shorter push - pull systems ( for instance those considered by geisler _et al._ or by morley _et al._ ) d / a substitution can have an influence on the response . as shown numerically by champagne _et al._, also converges relatively slowly as a function of size .they considered d / a substituted oligomers of polymetheimine [ also called polycarbonitrile , ( chn) .this system has a zigzag backbone of alternating c and n atoms with alternating bond lengths . without the bond length alternation it would , at least hypothetically , be possible to choose donor and acceptor groups so that the overall system is centrosymmetric .even if chemical arguments imply that this structure is unrealistic , a non - zero value of for long chains should be ascribed , strictly speaking , to the bond length alternation .polyphenylenes and polypyridines have been studied by zhang and lu. they focused on and as a function of the length of a closed ring for each system and applied a finite - field approach in their calculations .unfortunately , as we have shown earlier ( see , e.g. , [ ] ) , this approach will never converge to the results for the infinite , periodic chain . nevertheless ,although will vanish for the polyphenylenes , we predict that a non - zero value will occur for both short and long oligomers of the polypyridines .for the d / a substituted polyenes studied by champagne _et al._ our analysis confirms their findings , i.e. , that will vanish for sufficiently large chains .their numerical results indicate that goes through a maximum and that convergence to the infinite chain result for larger is slow .even the polarizability , , and the permanent dipole moment , , may converge more slowly , as a function of chain length , than predicted by our simple model .this is , for example , the case for the systems investigated by smith _et al._ and by kudin _et al._ in a recent study , botek _et al._ compared finite oligomers of [ ]helicenes and [ ]phenylenes that possess a helical structure for larger than roughly 6 . by making explicit use of the helical symmetry of the central regionwe predict that , when those systems are sufficiently long , d / a substitution will not be able to modify the electronic responses to static fields .the fact that botek _find changes upon d / a substitution implies that the chains of their study are not converged to the long chain limit .as long as the applied field is not so strong that an integral number of electrons is transferred from one end to the other , the answer to the question of the title is clearly : there can be no change .this comes from our mathematical analysis in sec .[ sec02 ] , which generalizes treatments presented previously by vanderbilt and king - smith and by kudin _ et al._, who considered only electronic polarization in the absence of an external electrostatic field .it is also in agreement with our own earlier prediction. 
calculations on a model system confirm the basic result and shed light on the nature of the end - to - end charge transfer .although the end charges , permanent dipole moment , and structure depend sensitively on the terminations neither the amount of charge transferred nor the ( hyper)polarizabilities per unit do so .the field and/or chain length at which the charge jumps take place also depend on the terminations .each jump is associated with an interchange of occupied and unoccupied molecular orbitals that are well - localized in the chain end region .these orbitals are close to but not at the fermi level .there is also an accompanying orbital reorganization .one consequence of our finding is that long unsubstituted chains which have inversion or mirror symmetry , or can be made symmetric by substitution , must have a vanishing first hyperpolarizability per unit .experimental and theoretical determinations are consistent with this fact , although apparent contradictions can occur for short chains .this work was supported by the german research council ( dfg ) through project sp439/20 within the spp 1145 .moreover , one of the authors ( ms ) is very grateful to the international center for materials research , university of california , santa barbara , for generous hospitality . | mathematical arguments are presented that give a unique answer to the question in the title . subsequently , the mathematical analysis is extended using results of detailed model calculations that , in addition , throw further light on the consequences of the analysis . finally , through a comparison with various recent studies , many of the latter are given a new interpretation . |
many works in signal / image processing are concerned with data restoration problems . for such problems ,the original data is degraded by a stable convolutive operator and by a non - necessarily additive noise .denotes the space of discrete - time real - valued signals defined on having a finite energy . ]the resulting observation model can be written as where denotes the noise effect and is some related parameter ( for example , may represent the variance for gaussian noise or the scaling parameter for poisson noise ) . in this context , our objective is to recover a signal , the closest possible to , from the observation vector assumed to belong to and available prior information ( sparsity , positivity , ) . in early works ,this problem was solved , mainly for gaussian noise , by using wiener filtering , or equivalently quadratic regularization techniques .later , multiresolution analyses were used for denoising by applying a thresholding to the generated coefficients .then , in order to improve the denoising performance , redundant frame representations were substituted for wavelet bases . in , authors considered convex optimization techniques to jointly address the effects of a noise and of a linear degradation within a convex variational framework . when the noise is gaussian , the forward - backward ( fb ) algorithm ( also known as thresholded landweber algorithm when the regularization term is an -norm ) and its extensions can be employed in the context of wavelet basis decompositions and its usecan be extended to arbitrary frame representations . however , in the context of a non - additive noise such as a poisson noise or a laplace noise , fb algorithm is no longer applicable due to the non - lipschitz differentiability of the data fidelity term .other convex optimization algorithms must be employed such as the douglas - rachford ( dr ) algorithm , the parallel proximal algorithm ( ppxa ) or the alternating direction method of multipliers ( admm ) .these algorithms belong to the class of proximal algorithms and , for tractability issues , they often require to use tight frame representation for which closed forms of the involved proximity operators can be derived .the goal of this paper is to propose a way to relax the tight frame requirement by considering an appropriate class of frame representations . in the following ,we consider two general convex minimization problems , which are useful to solve frame - based restoration problems formulated under a synthesis form ( sf ) or an analysis form ( af ) .the sf can be expressed as : and the af is : ( resp . ) denotes the frame analysis ( resp .synthesis ) operator . for every , -\infty,+\infty\right]}} ] is a convex , lower semicontinuous , and proper function . in several works , sf has been preferred since af appears to be more difficult to solve numerically . in the proposed framework ,both approaches have a similar complexity .this paper is organized as follows : in section [ sec : frames ] , the class of frames considered in this work is defined and their connections with filter bank structures is emphasized . 
in section [ sec : prox_algo ] , we show how these ( non necessarily tight ) frames can be combined with parallel proximal algorithms in order to solve problems and .the proposed approach is also applicable to related augmented lagrangian approaches .finally , restoration results are provided in section [ sec : results ] for scenarios involving poisson noise or laplace noise by using dual - tree transforms ( dtt ) and filter bank representations .* notation : * throughout this paper , designates the class of lower semicontinuous convex functions defined on a real hilbert space and taking their values in -\infty,+\infty\right]}} ] such that the associated frame operator is the injective linear operator defined as the adjoint of which is the surjective linear operator given by when , an orthonormal basis is obtained .further constructions as well as a detailed account of frame theory in hilbert spaces can be found in .a tight frame is such that , for some ,+\infty\right[}} ] such that .recall that an analysis filter bank can be put under its polyphase form by performing a polyphase decomposition followed by a real mimo ( multi - input multi - output ) filtering : + the polyphase decomposition is an operator from to with such that , for every , where is the -th polyphase component of order of the signal .the adjoint operator of is given by where , for every and , .so , allows us to concatenate square summable sequences into a single one .it can be noticed that and , which means that is an isometry and .+ the mimo filter is defined as where , for every and , is a siso ( single - input single - output ) stable filter .hence , the impulse response of this filter belongs to and its frequency response is a continuous function .in addition , it is assumed that is left invertible , that is : for every ] is equal to .the adjoint operator of is the mimo filter given by where , for every and , is the siso filter with complex conjugate frequency response .+ we have then the following result ( the proof is provided in appendix [ ap : f ] ) : [ prop : f ] the operator is a frame operator with frame constants } \sigma_{\rm min}(\nu) ] , where , for every , ,+\infty\right[}} ] are the minimum and maximum eigenvalues of .corresponds to the transconjugate of . ] in addition , we have : the resulting frame is not necessarily tight. however , when ) ] .then , and .when dealing with linear operators such that for any , proximal algorithms requiring the inversion of a linear operator at each iteration can however be designed .for example , algorithm [ algo : sa ] ( resp .algorithm [ algo : aa ] ) can be applied to problem ( resp .problem ) .( in these algorithms , the sequences and model possible numerical errors in the computation of the proximity operators at iteration .) initialization + ,+\infty\right[}}^r,(\kappa_s)_{1\le s \le s}\in { \ensuremath{\left]0,+\infty\right[}}^s ; \ ; ( v_{r,0})_{1\leq r\leq r } \in \big(\boldsymbol{\ell}^2({\ensuremath{\mathbb z}})\big)^r,(w_{s,0})_{1\leq s\leq s } \in\big(\boldsymbol{\ell}^2({\ensuremath{\mathbb z}})\big)^s \\x_0 = \arg \min_{u\in \boldsymbol{\ell}^2({\ensuremath{\mathbb z } } ) } \sum_{r=1}^r \eta_r { \vert l_r f^ { * } u - v_{r,0}\vert}^2 + \sum_{s=1}^s \kappa_s { \vert u - w_{s,0}\vert}^2\\ \end{array } \right. 
] initialization + ,+\infty\right[}}^r,(\kappa_s)_{1\le s \le s}\in { \ensuremath{\left]0,+\infty\right[}}^s;\ ; ( v_{r,0})_{1\leq r\leq r } \in \big(\boldsymbol{\ell}^2({\ensuremath{\mathbb z}})\big)^r,(w_{s,0})_{1\leq s\leq s } \in\big(\boldsymbol{\ell}^2({\ensuremath{\mathbb z}})\big)^s \\y_0 = \arg \min_{u\in \boldsymbol{\ell}^2({\ensuremath{\mathbb z } } ) } \sum_{r=1}^r \eta_r { \vert l_r u - v_{r,0}\vert}^2 + \sum_{s=1}^s \kappa_s { \vert f u - w_{s,0}\vert}^2\\ \end{array } \right. ] the convergence of the sequence ( resp . ) generated by algorithm [ algo : sa ] ( resp .algorithm [ algo : aa ] ) to an optimal solution of the related optimization problem is guaranteed under the following technical assumptions ( see for more details ) : 1 . 2 . 3 . ) .4 . there exists ,2[ ] is a miso ( multi - input single - output ) filter ( for every , is a siso filter ) .now , invoking proposition [ prop : f ] and making use of sherman - morrison - woodbury identity yield : + for sf ( algorithm [ algo : sa ] ) for af ( algorithm [ algo : aa ] ) where .note that the idea of using the woodbury matrix identity to handle the inversion for sf was proposed in in the context of tight frames .the inversions in ( resp . ) can be performed by noticing that , as and are multivariate filters with frequency responses and , ( resp . ) is a mimo filter with frequency response : for every ] , so defining a closed convex constraint set .the considered sf ( resp .af ) problem is a particular case of problem ( resp . )where , , , , and with .the last function corresponds to the regularization term operating in the frame domain . in the considered problems , and proximity operators associated to , , and are derived from example [ ex : gamd ] , the projection onto , and example [ ex : gg ] . in our simulations , the parameter is empirically chosen to maximize the signal - to - noise - ratio ( snr ) .in general , the value of this parameter is not the same for fa and fs .note that an alternative to the proposed approach consists of resorting to primal - dual algorithms .these algorithms are appealing as they do not require any operator inversion and they can thus be employed with arbitrary frames .however , after appropriate choices for the weights , , and the relaxation parameter , the proposed approach appeared to be faster .for example , similar frequency domain implementations of the monotone + skew forward backward forward ( m+sfbf ) algorithm were observed to be about twenty times slower than the proposed method in term of iterations and computation time .figure [ fig : tf_ntf ] shows the restoration results for a cropped version of the `` barbara '' image in the presence of poisson noise and a uniform blur of size .we adopt a sf criterion and we consider a tight version ( i.e. , for every , ) as well as a non - tight version of complex dtt . the complex dtt is computed using symlets of length 6 over 3 resolution levels . 
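evaluating the proximity operators that enter algorithms [ algo : sa ] and [ algo : aa ] is inexpensive for the three ingredients just mentioned : a sparsity-promoting penalty on the frame coefficients , the projection onto the dynamic-range constraint , and the poisson data-fidelity term . one common set of closed forms is sketched below ; the exact functions and weights used in the reported experiments are not spelled out in this excerpt , so the expressions ( in particular the generalized kullback-leibler form assumed for the poisson term and the 8-bit range [0 , 255] ) should be read as illustrative assumptions .

```python
import numpy as np

def prox_l1(x, gamma):
    """prox of gamma * ||.||_1 : componentwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def project_box(x, lo=0.0, hi=255.0):
    """projection onto the dynamic-range constraint [lo, hi]."""
    return np.clip(x, lo, hi)

def prox_poisson(x, y, gamma):
    """prox of x -> gamma * (x - y*log x) (poisson anti-log-likelihood, y >= 0),
    applied componentwise; positive root of the resulting quadratic."""
    return 0.5 * (x - gamma + np.sqrt((x - gamma) ** 2 + 4.0 * gamma * y))

# small sanity check on random data
rng = np.random.default_rng(0)
u = rng.normal(size=5)
y = rng.poisson(5.0, size=5).astype(float)
print(prox_l1(u, 0.5))
print(project_box(300 * u))
print(prox_poisson(np.abs(u) + 1.0, y, 0.1))
```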
in order to efficiently perform the inversions in and , fast discrete fourier diagonalization techniques have been employed .the use of the non - tight complex dtt including prefilters allows us to improve the quality of the results both visually and in terms of snr and ssim .figure [ fig : sa_aa ] displays a second restoration example for a cropped version of the `` straw '' image in the presence of laplace noise and a uniform blur of size .af results are presented by using a dtt and an eigenfilter bank ( and ) computed from the degraded image .this formulation leads to better results than those obtained with sf .significant gains in favour of the eigenfilter bank can be observed .[ cols="^,^,^,^ " , ] | a fruitful approach for solving signal deconvolution problems consists of resorting to a frame - based convex variational formulation . in this context , parallel proximal algorithms and related alternating direction methods of multipliers have become popular optimization techniques to approximate iteratively the desired solution . until now , in most of these methods , either lipschitz differentiability properties or tight frame representations were assumed . in this paper , it is shown that it is possible to relax these assumptions by considering a class of non necessarily tight frame representations , thus offering the possibility of addressing a broader class of signal restoration problems . in particular , it is possible to use non necessarily maximally decimated filter banks with perfect reconstruction , which are common tools in digital signal processing . the proposed approach allows us to solve both frame analysis and frame synthesis problems for various noise distributions . in our simulations , it is applied to the deconvolution of data corrupted with poisson noise or laplacian noise by using ( non - tight ) discrete dual - tree wavelet representations and filter bank structures . |
the evolution of life shows an overall trend towards an increase in size and complexity .one of the determining major innovations that have allowed biological systems to achieve a high degree of complexity has been the evolution of multicellularity and the emergence of supra - cellular hierarchies beyond single - cell organization . together with multicellularity , mechanisms to maintain stable phenotypes that underly consistent division of labor had to be developed .the study of the origins of form have a long tradition in biology .initiated by turing and rashevsky , numerous attempts to formalize a mathematical description of pattern formation have been made . as a result , spatial instabilities were proposed as a powerful rationale for the creation of spatial order , out of random fluctuations , around a homogeneous state in reaction - diffusion systems .the main feature of reaction - diffusion systems is the presence of diffusion - driven instabilities under certain parametric conditions , by which small perturbations in the system are amplified , leading to ordered spatial patterns .this family of models has been systematically studied and provides the basis for several natural mechanisms of pattern formation .the structures generated by these processes have a characteristic scale whose wavelength depends on the model parameters . along with this class of pattern - forming mechanisms , another possible class of models capable of organizing structures in space is based on cell - cell differential adhesion .such a mechanism explains the spatial re - arrangement of different cells belonging to disrupted tissues when mixed together _ in vitro_ . after a transient , clusters involving cells of the same classare often observed as spatially segregated from other cell types by means of the formation of well defined boundaries or layers . in this case , the underlying mechanism explaining the origin of patterns is that of energy - minimization dynamics , similar to the one used in physics for strongly interacting particle systems .both reported mechanisms are crucial in the formation of natural self - organized structures in developing embryos , and have been connected to the early forms of multicellularity . in this paperwe focus our attention on the early stages of the transition towards multicellularity , where the explicit connection between fitness , function and structure has been particularly difficult to elucidate and , thus , is commonly overlooked . to assess whether the structural organization of multicellular assemblies is related to differential fitness , we have developed an embodied computational model where turing - like structures appear , stemming from differential adhesion and stochastic phenotypic switching .fitness is intrinsically obtained by the introduction of a limiting nutrient and the production of a toxic waste byproduct , which respectively increase cell reproduction or death .one of the two cellular states is able to process waste at the cost of reduced proliferation .we observe that different parameter sets produce different spatial patterns , and that spatial organization can have a role in increasing fitness . 
finally , we discuss the implications of such results for the transition from unicellular to multicellular organisms and for the evolution of complexity .[ corerules ] is minimized .( b ) the final global configuration is a direct consequence of the micro rules imposed by the adhesion matrix .( c ) markovian process modelling the cell state transitions used in this paper .this very simple approach can aptly describe persister cell dynamics and phase variation phenomena . ]our model considers a population of cells living on a two - dimensional square lattice ( fig .1 ) , along with empty medium , and following the rules of a cellular potts model as described by steinberg ( see also ) . within this framework ,cells are discrete entities that occupy single lattice positions , have an associated state ( ) and move across the lattice trying to minimize their energetic potential .two states correspond to cellular phenotypes , namely white cells ( ) and black cells ( ) , while state represents empty space .in this paper we build and analyse two different expansions of the basic potts model : a _ hybrid differential adhesion - stochastic phenotipic switching _( da - sps ) model and an _ ecology and competition _ ( ec ) model . in the da - sps model ,cells are sorted by differential adhesion and can reversibly switch their phenotypes . in the ec modelwe include a simple metabolism by adding nutrient and toxic waste , whose concentrations drive cell proliferation and death . in the following sections we explain how cellular adhesion , phenotypic switching and metabolism are implemented in our models .the cell sorting process fundamentally occurs due to the differences in adhesion energy between states . following steinberg s differential adhesion hypothesis ( dah )we assume that the adhesion kinetics are driven by the minimization of adhesion energy between lattice sites , being cells more or less prone to remain together , and avoid or maximize contact with the external medium .the strength of interactions among different states can be defined by means of an adhesion matrix : each term in this matrix describes how favourable the pairwise interaction between two states is .the matrix is symmetric , i.e. , and has always . to avoid confusion, we will use the notation when we refer to a given state , and the notation to indicate a state occupying a given lattice site .[ differences ] the underlying idea here is that cells will tend to move whenever this allows the system to reach a lower energy . it can be shown that the energy function in a given position can be defined as follows : where is the set defined by the eight nearest neighbours of a cell in position ( moore s neighbourhood ) , each of which occupies a position , and has a defined state . to calculate the probability that the cell in ( ) will swap with a randomly chosen neighbour, we calculate the energy function when no swap occurs .this energy function , named , consists of two terms , one involving the cell in its original position and its neighbourhood set , and another involving the cell s neighbour , located in , with its neighbourhood .we then virtually swap the positions of the cell with its neighbour in , and calculate the energy function when swap occurs ( ) .the energy difference is then defined as : when the difference is negative , a decrease in the global energy occurs and the states will swap position . 
instead , when , the larger the difference the less the swap is likely to happen , with a probability following the boltzmann distribution . if we indicate as the probability that our cell moves from to , it can be shown that : where the parameter is a noise factor acting as a ` temperature ' , essentially tuning the degree of determinism of our system .the boltzmann factor acts in such a way that if , the probability of swapping is .note that cell - cell ( and cell - medium ) interactions are local ( fig .1 ) , meaning that a cell in ( ) interacts only with the set of its eight nearest neighbours . depending on the form of adhesion matrix , different patterns can be observed in a cell sorting system with two cell types .unless otherwise indicated , in our simulations we will apply a symmetric adhesion matrix ( see fig .sm-1 ) , where cells tend to attach preferentially to other cells in the same state , and secondarily to cells of the opposite state , while attachment with empty space is not favoured .the values of the adhesion matrix determine the structure of patterns formed by the cell sorting algorithm ( fig .2 ) , which can be perturbed by the effects of phenotypic switching . cells can perform reversible transitions between their states and , similarly to phase variation and persistence in natural bacterial populations .switching is regulated by transition probabilities and , where is a fixed scaling factor , introduced to regulate the relative speed between adhesion kinetics and phenotypic switching . by simply adding sps to a classical da model , cell sorting properties can change drastically for some adhesion matrices ( see fig .it is worth mentioning that in sps the transitions between states are not dependent on any molecular cue nor any cellular memory beyond their current state .cellular metabolism is defined by two simple pathways : the ability of both (white ) and (black ) phenotypes to transform nutrient ( constantly added to the lattice ) into cellular energy and waste byproduct : and the unique ability of cells to degrade waste : the cells can allocate resources for waste degradation , at the cost of reduced energy production and therefore proliferation , following a linear trade - off ( ) consistent with a maximum metabolic load and shared resources for protein synthesis .the temporal dynamics of metabolites , , for a given position in the lattice are described by : here , is the rate of input of the nutrient resource , and and correspond to the diffusion rate of nutrient and waste respectively .it is worth noting that , being an intracellular metabolite , it does not diffuse through the lattice .variables , and correspond to the exponential decay parameter of each of the metabolites , while defines the maximal absorption rate .the trade - off parameter adjusts the proportion of nutrient allotted to energy production or waste degradation in cells .taking into account spatial dynamics of metabolites , both kinds of cells can die either due to local excess of toxic waste or due to lack of internal energy : where indicates that the process equally affects both types of cells . 
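for concreteness , one monte-carlo sweep of the adhesion-plus-switching dynamics described above can be coded as follows : a randomly picked cell attempts to swap with a random moore neighbour , the swap is accepted with the boltzmann probability built on the energy difference , and afterwards every cell may switch phenotype with a small , memoryless probability . the adhesion matrix , temperature , scaling factor and switching probabilities below are placeholders ( the values used in the paper are not reproduced here ) , and the metabolic layer of the ec model ( nutrient , waste , energy , division and death ) is deliberately left out .

```python
import numpy as np

rng = np.random.default_rng(0)

EMPTY, W, B = 0, 1, 2
# illustrative symmetric adhesion matrix: like-like contacts most favourable,
# contacts with empty medium not favoured (all values are placeholders)
J = np.array([[ 0.0,  0.0,  0.0],
              [ 0.0, -3.0, -1.0],
              [ 0.0, -1.0, -3.0]])

T = 0.6                 # noise ('temperature')
kappa = 0.01            # relative speed of switching vs adhesion updates
p_switch = {W: 0.3,     # probability factor for w -> b
            B: 0.3}     # probability factor for b -> w

MOORE = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def local_energy(lat, i, j):
    n = lat.shape[0]
    s = lat[i, j]
    return sum(J[s, lat[(i + di) % n, (j + dj) % n]] for di, dj in MOORE)

def sweep(lat):
    n = lat.shape[0]
    for _ in range(n * n):
        i, j = rng.integers(n, size=2)
        di, dj = MOORE[rng.integers(len(MOORE))]
        k, l = (i + di) % n, (j + dj) % n
        e_before = local_energy(lat, i, j) + local_energy(lat, k, l)
        lat[i, j], lat[k, l] = lat[k, l], lat[i, j]          # tentative swap
        d_e = local_energy(lat, i, j) + local_energy(lat, k, l) - e_before
        if d_e > 0 and rng.random() >= np.exp(-d_e / T):
            lat[i, j], lat[k, l] = lat[k, l], lat[i, j]      # reject: swap back
    # memoryless phenotypic switching, independent of neighbours and history
    r = rng.random(lat.shape)
    to_b = (lat == W) & (r < kappa * p_switch[W])
    to_w = (lat == B) & (r < kappa * p_switch[B])
    lat[to_b], lat[to_w] = B, W

lat = rng.choice([W, B], size=(64, 64))
for _ in range(200):
    sweep(lat)
```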
is the maximum value of toxic waste a cell can sustain , is the minimum value of inner energy needed for survival , and defines the inner energy threshold needed for a cell to divide , provided an empty position exists in its neighbourhood : mother and daughter cells have the same properties .energy is equally split between the two cells after division .[ meanfield ] , ) phase space in the computational mean - field model , for .( sm-4 ) to see how the separation slope varies at different values of . ]to better understand the general properties of the model , we developed a mean - field ( ) approach to the set of odes that constitute the metabolism of cells in our system .as a starting point we use the waste differential equation ( 9 ) . in a well mixed scenario there is no spatial structure and all variables are homogeneous .hence , the diffusion term and the position subindices cease to be of relevance .therefore , at the steady state we get : = 0 , \nonumber\end{aligned}\ ] ] where and are the equilibrium concentrations of nutrient and waste respectively , and is the probability that a cell in the system has state . herewe separate the analysis into two solutions : \end{array } \right.\end{aligned}\ ] ] is equal to zero when ( trivial unstable solution ) or when the second nullcline is met . to develop the mathematical treatment we assume that the two populations of cell states and are at equilibrium .the ratio between populations can be then deduced from the persister cell population dynamics : the three terms on the right - hand side of these equations represent reproduction , stochastic switching and death , respectively . at the steady state , given that : the relation between the two populations is : since we have , the expected probabilities for each population at equilibrium are : being cell death a threshold function , all cells will die if , and in the opposite case no cell will die . therefore , whether the population will reach full occupation or not can be determined by incorporating into eq .( 1 ) and transforming the equation into an inequality : } \end{aligned}\ ] ] this expression defines the region of the parameter space in which , even at maximum population , and cells do not die .reordering the terms , we obtain : .\end{aligned}\ ] ] this inequality defines the boundary dividing the inhabited from the uninhabited region in the ( ) phase space . in fig .( sm-4 ) the boundary for different values of is shown .if waste degradation performed by cells is far greater than the passive decay term ( ) , then the denominator of eq .( 13 ) becomes : hence eq .( 12 ) gets simplified to : and the inequality in eq .( 14 ) becomes simpler : .\end{aligned}\ ] ] the concentration of nutrient at equilibrium is given by : ( 14 ) and ( 16 ) show that the boundary that in the mf model separates the inhabited from the uninhabited domain depends on , , and on the other parameters of the model appearing in the equations . as , , and have non - zero , positive values , and , and are constrained to the range ] can make the inequality true , resulting in a system dominated by death processes . 
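the algebra above can be checked numerically . the toy script below reproduces the two steps of the argument - the stationary composition of the switching process and the steady-state waste balance at full occupation - using invented symbol names and values , since the actual parameter symbols do not survive in this extraction ; it only shows the structure of the habitability condition , not its exact form .

```python
import numpy as np

# invented placeholder parameters (the paper's symbols are not reproduced here)
p_wb, q_bw = 0.2, 0.4      # switching probabilities w -> b and b -> w
rho_max = 1.0              # cell density at full lattice occupation
s_in, s_dec = 1.0, 0.05    # nutrient input and passive decay rates
w_dec = 0.02               # passive waste decay rate
k_abs = 0.5                # maximal nutrient absorption rate
delta = 0.7                # trade-off: fraction of absorbed nutrient spent on energy by b cells
eta_deg = 2.0              # waste degradation efficiency of b cells
w_max = 5.0                # lethal waste threshold

# step 1: stationary fractions of the two-state (w <-> b) switching process
f_w = q_bw / (p_wb + q_bw)
f_b = p_wb / (p_wb + q_bw)

# step 2: well-mixed steady state at full occupation
s_eq = s_in / (s_dec + k_abs * rho_max)                    # nutrient balance
production = k_abs * s_eq * rho_max                        # both phenotypes excrete waste
degradation = w_dec + (1.0 - delta) * eta_deg * rho_max * f_b
w_eq = production / degradation                            # waste balance

habitable = w_eq < w_max
print(f"f_w = {f_w:.2f}, f_b = {f_b:.2f}, s* = {s_eq:.3f}, w* = {w_eq:.3f}, habitable: {habitable}")
```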
in the simplified scenario described by inequality( 16 ) , is always in the range ] range , when .[ patterns ] to , , , with a symmetric adhesion matrix and after iterations .different arrangements -from spots to stripes to mazes- of both cell types are attained , reminiscent of fixed wavelength structures .( b ) average reachable fraction ( ) of cells of a particular type ( white and black circles represent and cells respectively ) , fixed .( c ) average domain count ( ) -defined as groups of contiguous cells with the same state- for each pair ( , ) . ][ kappa ] in the da - sps expansion of potts model , cells can perform phenotypic switching through a transition rule regulated by transition probabilities and , and their movement is driven by differential adhesion and the tendency to minimize interaction energy between cells .parameter is introduced as a scaling factor which regulates the relative speed at which cell sorting by adhesion and phenotypic switching occur . using this model, we assess the role in pattern formation of the individual transition probabilities ( , ) for a fixed .this analysis reveals that , in spite of model simplicity , and cells are able to self - organize in space in periodic structures .these patterns can range from spots to stripes to mazes , depending on the relative values of and ( fig .4a ) . to characterize the phases of the morphospace we applied a standard percolation algorithm to the final macro - state of each simulation . figure ( 4b ) shows that the average reachable fraction of each of the two states and displays a sharp transition .this defines three clear regimes : non - percolating ( spots ) , percolating ( fully connected maze ) and transition regime , where spots become stripes of increasing length and are marginally able to extend to other domains .interestingly , this transition occurs for a different value of the control parameter with respect to non - correlated percolation studies ( instead of ) . for this valuethe structures of both cell states percolate , giving rise to the labyrinth phase .although percolation analysis shows a sharp transition , the number of domains -i.e .clusters of lattice sites with same state- varies smoothly over the phase space ( fig .another interesting feature that can be observed in fig .( 4a , 4c ) is that even if the ratio remains constant , lower values of the transition parameters generate bigger structures with fewer domains . since in our model is a scaling factor for both and , we set out to quantify the effect of this parameter in the pattern formation process .( 5a ) shows the qualitative impact of the tuning of in a full - occupation da - sps model . at low values, sps occurs at a slower pace than the cell sorting process , which is therefore able to properly separate cells in two phases . 
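the reachable-fraction and domain-count statistics of fig . 4 follow from a connected-component labelling of the final lattice ; a compact version using scipy is given below . here 'reachable fraction' is taken as the size of the largest same-state cluster relative to all cells of that state , 8-connectivity is used to match the moore neighbourhood , and periodic boundaries are ignored - all three choices are assumptions of this sketch rather than details quoted from the paper .

```python
import numpy as np
from scipy.ndimage import label

def domain_stats(lat, state):
    """largest-cluster fraction and number of domains for one cell state."""
    mask = (lat == state)
    labels, n_domains = label(mask, structure=np.ones((3, 3), dtype=int))  # 8-connectivity
    if n_domains == 0:
        return 0.0, 0
    sizes = np.bincount(labels.ravel())[1:]          # drop the background label 0
    return sizes.max() / mask.sum(), n_domains

rng = np.random.default_rng(2)
lat = rng.choice([1, 2], size=(128, 128))            # stand-in for a final da-sps configuration
for s in (1, 2):
    frac, nd = domain_stats(lat, s)
    print(f"state {s}: reachable fraction {frac:.3f}, domain count {nd}")
```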
instead , for high values, sps occurs at a faster pace than the cell sorting process , which brings about an almost random distribution of states .[ pqvary ] following the same strategy as before , we ran a set of simulations with periodic boundary conditions , transition probabilities , and different values of .we then applied a standard fourier transform analysis in order to confirm the existence of dominant frequencies in the spatial distribution of lattice states after iterations .( 5b ) displays the spatial frequency contribution in a radially averaged power spectrum ( raps ) .the presence of a single peak in the raps indicates that the periodic structures are built by a single dominant frequency without any specific orientation in the spatial domain , i.e. we obtain fixed wavelength structures reminiscent of turing patterns .furthermore , the peak position and width are subject to the particular value of , specifically the wavelength of the pattern decreases as increases . the particular mathematical relation between these two variables is displayed in fig .( 5c ) . [ adhstruct ] in the _ ecology and competition _( ec ) model cells are not only subject to adhesion processes and phenotypic switching , but can absorb the substrate which is constantly produced all over the lattice , and transform it into energy only ( cells ) or energy and waste ( cells ) .we compare the ec model with the _ mean - field _( mf ) model , to understand which properties can be predicted from the latter and which ones instead emerge from the complexity introduced by spatial organization and inhomogeneities in the levels of waste and substrate . we run a set of simulations , each with a different pair of and values , using periodic boundary conditions and fixing and .the parameter is fixed at , in such a way that adhesion processes occur times faster than phenotypic switching . in the initial time step , of latticeis occupied by randomly distributed and cells , being the ratio between the two phenotypes already set at the equilibrium value following ( eq . , ) , depending on and values .the results in fig .( 6 ) and in fig .( sm-2 ) show a clear correspondence between the analytical mf result and the simulated mf , indicating that the model we implemented properly reproduces the expected theoretical results . in the ec model , cellscan also occupy the region of the phase space which was left empty in the mf model .in fact , while in the mf model simulation all cells die instantaneously when the level of waste reaches ( the maximum level of a cell can sustain ) , in the ec model the death of a few cells can reduce the pressure on the system and allow population survival also in those parametrically unfavoured habitats where waste concentration can locally exceed .the slope in the ( ) phase space separating inhabited from uninhabited region in the mf model varies depending on the value of and of other relevant parameters of the model ( fig .sm-4 ) . for ,black and white cells are indistinguishable , and their behaviour is independent from the value of and , as can be calculated in the mf analytical model . as increases , cells can degrade waste better , but are also less able to elaborate nutrient and can die from lack of energy , unless and are set in such a way that there is a high probability for the cell to switch to a cell before inner value gets below ( the minimum value of a cell needs for survival ) . 
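the wavelength analysis of fig . ( 5b )-( 5c ) described above amounts to binning the 2-d fft power by integer radial wavenumber . a self-contained version of such a radially averaged power spectrum , exercised here on a synthetic striped pattern of known wavelength rather than on an actual simulation output , is sketched below .

```python
import numpy as np

def radially_averaged_power_spectrum(field):
    """2-d fft power, binned by integer radial wavenumber."""
    f = field - field.mean()
    power = np.abs(np.fft.fftshift(np.fft.fft2(f))) ** 2
    ny, nx = field.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - nx // 2, y - ny // 2).astype(int)
    counts = np.bincount(r.ravel())
    raps = np.bincount(r.ravel(), weights=power.ravel()) / np.maximum(counts, 1)
    return raps                                      # raps[k] = mean power at wavenumber k

n = 128
xx = np.indices((n, n))[1]
stripes = np.sign(np.sin(2 * np.pi * xx / 16.0))     # synthetic pattern, wavelength 16
raps = radially_averaged_power_spectrum(stripes)
k_peak = np.argmax(raps[1:]) + 1                     # skip the k = 0 (mean) bin
print("dominant wavenumber:", k_peak, " -> wavelength ~", n / k_peak)
```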
at steady state, different structures emerge, depending on , and values. the results for described in this paragraph are shown in fig. ( 6 ) and in fig. ( sm-2 ). for high values of and , and cells form turing patterns in a percolating structure. for high values of and low values of , cells dominate ( note that for they are still able to degrade w so as not to reach ). for lower values of and higher values of , cells substitute cells less rapidly, can be reached more easily and cells start to die. lastly, we want to assess whether our ec model, integrating da, sps and metabolism in a habitat with nutrient and toxic waste, can present situations in which structural organization can affect the fitness of cells in the habitat. to do so, we launch a set of simulations in which cells with different adhesion matrices compete for reproduction. at the beginning of the simulation of the lattice is occupied by cells, which are divided in equal populations differing only by type of adhesion matrix, which can be _ aggregate _, _ trabecular _, _ symmetric _, _ null _, _ onions _, _ sponge _ or _ unicellular _ ( see fig. ). the adhesion matrix type is transmitted by each cell to its offspring. cells are randomly distributed over the lattice regardless of the population they belong to. it is important to stress that in terms of preferential attachment, cells only sense the states and of neighbouring cells, independently from population type. for each set of parameters , and in the phase space, we assess which type of adhesion matrix brings the related population to maximum fitness, by comparing the level of occupation of the lattice for each population. in fig. ( 7a ) we represent cells of state or independently from their population, while in fig. ( 7b ) we show the same cells differentiated by population. at the bottom ( fig. 7c, 7d ) we represent the change of occupation level in time for each population. in the region of the phase space which is fully inhabited in the mf model, cells can survive independently from their adhesion matrix values and never die. for this reason the populations related to the various adhesion matrices are equally numbered in this area - with slight differences due only to growth speed before the lattice becomes saturated - and randomly distributed over space, with no emerging structure. however, in the domain of the phase space which was uninhabited in the mf, and have such values that do not guarantee survival of cells. since cell death may occur in the ec model for this region of the phase space, here the values of the adhesion matrix do make a difference and some species get selected over others. in particular, the population with ` trabecular ' adhesion matrix prevails. moreover, we can observe that in this area cells organize in a maze structure, exhibiting division of labour within the same population. lastly, in the frontier between the two zones various populations can coexist, with a prevalence for ` onion ' adhesion matrix at high values of .
in fig .( sm-3 ) we show the relative occupation levels of each population at varying and .in this paper we have shown a novel way of constructing periodic arrangements of cell types in the form of a hybrid differential adhesion and stochastic switching process .this mechanism does not rely on differential diffusion ( normally found between the activator and inhibitor species in canonical turing - type systems ) yet it can create the same kind of structures in a predictable , scalable way .the key ingredients proposed at this level include the differential adhesion hypotheses stemming from steinberg s work ( that considers the minimization dynamics associated with a set of interacting spins or adhesion strengths ) and genetic switches following markovian stochastic dynamics , which are the source of cell diversity and the basis of some adaptive responses displayed by microbial populations .the switching dynamics can modify the types of patterns expected from the purely energy - driven scenario , thus indicating that potential forms of phenotypic change can lead to additional richness of pattern forming rules .a range of spatially ordered structures is obtained displaying characteristic length scales .being both key ingredients present in extant organisms , we consider that this simple mechanism might have been used originally ( and might be reproduced in the future by synthetic means ) to create regular structures in aggregates and colonies . in relation to pattern formation dynamics, our hybrid adhesion model offers an alternative way to generate turing patterns , which were up to now directly related with turing s rd mechanisms mediated by a diffusible molecule , or with apparently unrelated but mathematically equivalent systems such as direct contact - mediated regulation by means of which cells are affecting each other s internal rates of reactions .in fact , differently from what was proposed by babloyantz , in our hybrid da - sps model the molecules on the surface of one cell do not affect the rates of reactions in its neighbours : the phenotypic switching process occurs in any cell independently from its past and from its neighbour s state , and it is not influenced by the values of the adhesion matrix .the second relevant aspect considered is how these forming structures might be of benefit to a developing cooperative population in presence of nutrient resources and toxic agents .to do so we developed an ecology and competition model where a minimal metabolism enables positive or negative interactions between cells .cells can cooperate by metabolizing waste byproducts , yet they will suffer from decreased growth rates at higher population densities due to substrate attrition . in the ec modelfurther pattern - forming processes can be predicted . to further asses how structural organization can affect the fitness of cells in the habitat, we studied how cells with different adhesion matrices compete for reproduction .interestingly when many populations differing in terms of adhesion properties compete , in the region of the phase space with strong selective pressures only one of the populations survives ( fig . 
7 ) .the selected specie consistently develops a periodic multicellular structure which is superior to both the unicellular and the unstructured multicellular one , suggesting that higher order properties might be of relevance to the establishment of functionality and cooperation .this simple competition model shows how minimal interaction properties pervading the metabolism of multiple species might come to play a central role in forcing the transition to collective fitness and behaviour , and sets the groundwork for explicitly evolutionary automata , where cells can optimize several genotype dimensions in order to attain more resources .we thank the members of the complex systems lab for useful discussions , and amads pags for useful hints on code debugging .this work has been supported by the botn foundation by banco santander through its santander universities global division , a mineco fellowship and by the santa fe institute .carroll , s.b .2001 . _ chance and necessity : the evolution of morphological complexity and diversity_. nature 409,1102 - 1109 nedelcu , a.m. and ruiz - trillo , i. 2015 ( eds . )_ evolutionary transitions to multicellular life : principles and mechanisms_. springer - verlag , london .economou , a. d. , ohazama , a. , porntaveetus , t. , sharpe , p. t. , kondo , s. , basson , m. a. and green , j. b. 2012 ._ periodic stripe formation by a turing mechanism operating at growth zones in the mammalian palate_. nature genetics , 44(3 ) , 348 - 351 .newman , stuart a. , and ramray bhat ._ dynamical patterning modules : a `` pattern language '' for development and evolution of multicellular form_. international journal of developmental biology 53.5 ( 2009 ) : 693 .steinberg , m s. 1975 ._ adhesion - guided multicellular assembly : a commentary upon the postulates , real and imagined , of the differential adhesion hypothesis , with special attention to computer simulations of cell sorting_. _ j. theor ._ 55 ( 2 ) : 431 - 43 .duran - nebreda , s. , bonforti , a. , montaez , r. , valverde , s. , and sol , r. 2015 ._ emergence of proto - organisms from bistable stochastic differentiation and adhesion_. _ arxiv preprint _ arxiv:1511.02079 . | spatial self - organization emerges in distributed systems exhibiting local interactions when nonlinearities and the appropriate propagation of signals are at work . these kinds of phenomena can be modeled with different frameworks , typically cellular automata or reaction - diffusion systems . a different class of dynamical processes involves the correlated movement of agents over space , which can be mediated through chemotactic movement or minimization of cell - cell interaction energy . a classic example of the latter is given by the formation of spatially segregated assemblies when cells display differential adhesion . here we consider a new class of dynamical models , involving cell adhesion among two stochastically exchangeable cell states as a minimal model capable of exhibiting well - defined , ordered spatial patterns . our results suggest that a whole space of pattern - forming rules is hosted by the combination of physical differential adhesion and the value of probabilities modulating cell phenotypic switching , showing that turing - like patterns can be obtained without resorting to reaction - diffusion processes . 
if the model is expanded by allowing cells to proliferate and die in an environment where a diffusible nutrient and a toxic waste are at play, different phases are observed, characterized by regularly spaced patterns. the analysis of the parameter space reveals that certain phases reach higher population levels than other modes of organization. a detailed exploration of the mean-field theory is also presented. finally, we let populations of cells with different adhesion matrices compete for reproduction, showing that, in our model, structural organization can improve the fitness of a given cell population. the implications of these results for ecological and evolutionary models of pattern formation and the emergence of multicellularity are outlined. |
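the two ingredients of the hybrid mechanism discussed above are simple enough to prototype directly. the sketch below is a minimal, illustrative rendering of one update sweep: kawasaki-type metropolis swaps of neighbouring lattice cells driven by a symmetric 2x2 adhesion matrix (the energy-minimizing, cell-sorting part), followed by an independent markovian switch of each cell's phenotype (the stochastic part). the adhesion values, switching probabilities, lattice size and effective temperature are assumptions chosen for illustration; this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative parameters (not taken from the model above): a symmetric 2x2
# adhesion matrix, phenotype-switching probabilities, lattice size, temperature
J = np.array([[1.0, 0.2],
              [0.2, 0.6]])          # J[a, b]: adhesion strength between phenotypes a and b
p_switch = np.array([0.01, 0.01])   # per-sweep switching probabilities 0 -> 1 and 1 -> 0
L, T, n_sweeps = 32, 0.5, 200

state = rng.integers(0, 2, size=(L, L))     # two stochastically exchangeable cell states

def local_energy(s, i, j):
    """negative sum of adhesion strengths between cell (i, j) and its four
    neighbours (periodic boundaries); lower energy means stronger local adhesion"""
    return -sum(J[s[i, j], s[(i + di) % L, (j + dj) % L]]
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

for _ in range(n_sweeps):
    # energy-driven part: metropolis swaps of neighbouring cells (cell sorting)
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        di, dj = ((1, 0), (-1, 0), (0, 1), (0, -1))[rng.integers(4)]
        k, l = (i + di) % L, (j + dj) % L
        before = local_energy(state, i, j) + local_energy(state, k, l)
        state[i, j], state[k, l] = state[k, l], state[i, j]
        after = local_energy(state, i, j) + local_energy(state, k, l)
        if after > before and rng.random() >= np.exp(-(after - before) / T):
            state[i, j], state[k, l] = state[k, l], state[i, j]   # reject the swap
    # stochastic part: markovian phenotype switching, blind to neighbours and to J
    flips = rng.random((L, L)) < p_switch[state]
    state[flips] = 1 - state[flips]
```

note that, as stressed above, the switching step acts on every cell independently of its neighbours and of the adhesion matrix; only the swap step sees the interaction energy.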
in recent years , approximate characterizations of capacity regions of multi - user systems have gained more and more attention , with one of the most prominent examples being the characterization of the capacity of the gaussian interference channel to within one bit / s / hz in .one of the tools that arised in the context of capacity approximations and has been shown to be useful in many cases is the _ linear deterministic model _ introduced in . here, the channel is modeled as a deterministic mapping that operates on bit vectors and mimics the effect the physical channel and interfering signals have on the binary expansion of the transmitted symbols . basically , the effect of the channel is to erase a certain number of ingoing bits , while superposition of signals is given by the modulo addition . even though this model deemphasizes the effect of thermal noise , it is able to capture some important basic features of wireless systems , namely the superposition and broadcast properties of electromagnetic wave propagation . hence , in multi - user systems where interference is one of the most important limiting factors on system performance , the model can also be useful to devise effective coding and interference mitigation techniques .there are many examples where a linear deterministic analysis can be successfully carried over to coding schemes for the physical ( gaussian ) models or be used for approximative capacity or ( generalized ) degrees of freedom determination , see for example - .* contributions . * from a practical viewpoint , _ cellular systems _ are of major interest . generally , a cellular system consists of a set of base stations each communicating with a distinct set of ( mobile ) users .effective coding and interference mitigation schemes are still an active area of research .approximative models such as the linear deterministic approach might help to gain more insight into these problems . in , the capacity of a basic cellular setup , a multiple access interfering with a point to point link , has been determined for the linear deterministic model in the case of _ symmetric weak interference_. in this paper , we extend these results to the case of arbitrarily strong interference with respect to the achievable sum rate .furthermore , we use these results to lower bound the generalized degrees of freedom for the corresponding gaussian channel. * organization . *the paper is organized as follows : section [ sec : systemmodel ] introduces the system model . in section [ sec :outerbounds ] , we derive outer bounds on the achievable sum rate . in section [ sec : achievability ] , we construct coding schemes that achieve these outer bounds , thereby characterizing the sum capacity of the system . from these results , a lower bound on the the generalized degrees of freedom for the gaussian caseis derived in section [ sec : gendof ] .finally , section [ sec : conclusions ] concludes the paper .* notation .* throughout the paper , denotes the binary finite field , for which addition is written as , which is addition modulo 2 . for two matrices and , we denote by \in \mathbb{f}_2^{n_a + n_b \times m} ]. 
similarly , ]is given by taking only the rows to of the matrix .for , the positive part is denoted by .finally , and denote integer division and the modulo operation , respectively , where we use the convention .the system we consider here represents a basic version of the uplink of cellular system and consists of three transmitters ( mobile users ) and and two receivers ( base stations ) .the system is modeled using the _ linear deterministic model _ . here, the input symbol at transmitter is given by a bit vector and the output bit vectors at are deterministic functions of the inputs : defining the shift matrix by the input / output equations of the system are given by rcl y_1 & = & s^q - n_11 x_1 s^q - n_12x_2 s^q - n_13x_3 , + y_2 & = & s^q - n_21 x_1 s^q - n_22x_2 s^q - n_23x_3 . here, is chosen arbitrarily such that .note that gives the number of bits that can be passed over the channel between and , i.e. represent channel gains .there are three messages to be transmitted in the system : denotes the message from transmitter to the intended receiver .the definitions of ( block ) codes , error probability , achievable rates and the capacity region are according to the standard information - theoretic definitions .for the remainder of the paper , the transmission rate corresponding to message is represented by , the rate corresponding to by and the rate for by .the sum rate is written as . in the following ,we assume without loss of generality that and write . also , we let . in order to keep the presentation clear and reduce the number of cases to be distinguished , we make some further assumptions on the channel gains : we let and .we remark that these restrictions may seem unrealistic , but can easily be removed and the techniques applied in the following extend to more general cases as well .the corresponding ( real ) gaussian channel is defined by output symbols rcl [ eq : gaussianchannel ] y_1 & = & h_1 x_1 + h_2x_2 + h_ix_3 + z_1 , + y_2 & = & h_i x_1 + h_ix_2 + h_1 x_3 + z_2 with ( non - varying ) channel coefficients , input symbols subject to power constraints \leq p_i ] , ] , ,x_2^d = x_2^n[\sigma+1:n_1-n_i] ] and ] and ] and ] .then , it holds that we let and introduce the following labels for blocks of rows of the matrices and : , b ' = [ ( b_k)_{k=0}^{l-1};q'_{b } ] , b '' = [ ( b_k)_{k=1}^{l};q''_{b}] ], an optimal construction follows from the results in , and the extension to the case ] . for ,the achievable generalized degrees of freedom are shown in figure [ fig : wcurve ] for different values , together with the generalized degrees of freedom for the interference channel consisting of only and .note that the latter one represents the well - known _ w curve _ of the generalized degrees of freedom for the interference channel . for , the channel gain difference in the two - user cell can be exploited for interference alignment , pushing the achievable generalized degrees of freedom higher than the w curve , whereas for , the lower bound can be achieved by coding only for the interference channel consisting of and .in this paper , we studied the linear deterministic model for a cellular - type channel where a two user multiple access channel mutually interferes with a point to point link . 
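in the linear deterministic notation used above, the received signals read y_1 = s^(q-n_11) x_1 (+) s^(q-n_12) x_2 (+) s^(q-n_13) x_3, and similarly for y_2, with s the q x q down-shift matrix and (+) the modulo-2 addition. the sketch below restates and evaluates this mapping; the relation follows the standard linear deterministic model and the verbal description above, while the gain values and the choice q = 5 are assumptions for illustration, not the parameters used in the analysis.

```python
import numpy as np

q = 5                                   # number of bit levels (assumed)
S = np.eye(q, k=-1, dtype=int)          # down-shift matrix: ones on the subdiagonal

# assumed channel gains (bits passed from each transmitter to each receiver);
# receiver 1 serves the two-user cell, receiver 2 the point-to-point link
gains_rx1 = {"x1": 5, "x2": 3, "x3": 2}
gains_rx2 = {"x1": 2, "x2": 2, "x3": 5}

def received(inputs, gains):
    """modulo-2 superposition of shifted inputs: y = sum_j S^(q - n_j) x_j over F_2"""
    y = np.zeros(q, dtype=int)
    for name, x in inputs.items():
        y ^= (np.linalg.matrix_power(S, q - gains[name]) @ x) % 2
    return y

x = {"x1": np.array([1, 0, 1, 1, 0]),
     "x2": np.array([0, 1, 1, 0, 1]),
     "x3": np.array([1, 1, 0, 0, 0])}
y1, y2 = received(x, gains_rx1), received(x, gains_rx2)
print("y1 =", y1, " y2 =", y2)
```

each receiver thus sees only the top n bits of every transmitted vector, superposed modulo 2, which is exactly the erase-and-add behaviour of the model described in the introduction.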
under certain symmetry assumptions on the channel gains, we derived the sum capacity and the corresponding transmission schemes , which use interference alignment and linear coding across bit levels .while for a large parameter range , the sum capacity is identical to the sum capacity of the interference channel obtained by silencing the weaker user in the two - user cell ( multiple access channel ) , for a certain parameter range , the channel gain difference of in the two - user cell allows to get a higher sum rate using interference alignment .finally , from these results , a lower bound on the generalized degrees of freedom for the gaussian channel was given , increasing the w curve for the interference channel in a certain interference range .although we have considered a restricted setup in this paper , we believe that the achievability and converse arguments used in this paper give valuable insights for the consideration of more general systems . future work will study extensions to the more general cases with additional users .another interesting direction for future investigations is to further explore the connections to the gaussian equivalent of the channel , specifically concerning outer bounds on the generalized degrees of freedom of the system and approximate capacity characterizations .v. cadambe , s. jafar , and s. shamai , `` interference alignment on the deterministic channel and application to fully connected gaussian interference networks , '' _ ieee trans .inform . theory _55 , no . 1 ,pp . 269274 , 2009 .c. huang , v. cadambe , and s. jafar , `` interference alignment and the generalized degrees of freedom of the x channel , '' in _ proc .information theory ( isit ) _ ,seoul , korea , june / july 2009 , pp . 19291933 .j. bhler and g. wunder , `` on interference alignment and the deterministic capacity for cellular channels with weak symmetric cross links , '' in _ proc .symp . on information theory ( isit )_ , saint petersburg , russia , 2011 , accepted for publication , available at arxiv:1104.0136 . | in this paper , we use the linear deterministic approximation model to study a two user multiple access channel mutually interfering with a point to point link , which represents a basic setup of a cellular system . we derive outer bounds on the achievable sum rate and construct coding schemes achieving the outer bounds . for a large parameter range , the sum capacity is identical to the sum capacity of the interference channel obtained by silencing the weaker user in the multiple access channel . for other interference configurations , the sum rate can be increased using interference alignment , which exploits the channel gain difference of the users in the multiple access channel . from these results , lower bounds on the generalized degrees of freedom for the gaussian counterpart are derived . |
reliable and accurate prediction of precipitation is of great importance in agriculture , tourism , aviation and in some other fields of economy as well . in order to represent the uncertainties of forecasts based on observational data and numerical weather prediction ( nwp ) models one can run these models with different initial conditions or change model physics , resulting in a forecast ensemble . in the last two decadesthis approach has became a routinely used technique all over the world and recently all major weather prediction centres have their own operational ensemble prediction systems ( eps ) , e.g. the consortium for small - scale modelling ( cosmo - de ) eps of the german meteorological service ( dwd ; * ? ? ?* ; * ? ? ?* ) , the prvision densemble arpege ( pearp ) eps of mteo france or the eps of the independent intergovernmental european centre for medium - range weather forecasts . with the help of a forecast ensemble one can estimate the distribution of the predictable weather quantity which opens up the door for probabilistic forecasting . by post - processing the raw ensemble the most sophisticated probabilistic methods result in full predictive cumulative distribution functions ( cdf ) and correct the possible bias and underdispersion of the original forecasts .the underdispersive character of the ensemble has been observed with several ensemble prediction systems and this property also leads to the lack of calibration . using predictive cdfs one can easily get consistent estimators of probabilities of various meteorological events or calculate different prediction intervals .recently , probably the most widely used ensemble post - processing methods leading to full predictive distributions ( for an overview see e.g. * ? ? ?* ; * ? ? ?* ) are the bayesian model averaging ( bma ; * ? ? ?* ) and the non - homogeneous regression or ensemble model output statistics ( emos ; * ? ? ?* ) , as they are partially implemented in the ensemblebma and ensemblemos packages of r .the bma predictive probability density function ( pdf ) of the future weather quantity is the mixture of individual pdfs corresponding to the ensemble members with mixture weights determined by the relative performance of the ensemble members during a given training period . to model temperature or sea level pressurea normal mixture seems to be appropriate , wind speed requires non - negative and skewed component pdfs such as gamma or truncated normal distributions , whereas for surface wind direction a von mises distribution is suggested . however , in some situations bma post - processing might result , for instance , in model overfitting or over - weighting climatology .in contrast to bma , the emos technique uses a single parametric pdf with parameters depending on the ensemble members .again , for temperature and sea level pressure the emos predictive pdf is normal , whereas for wind speed truncated normal , generalized extreme value ( gev ; * ? ? ?* ) , censored logistic , truncated logistic , gamma and log - normal distributions are suggested .however , statistical calibration of ensemble forecasts of precipitation is far more difficult than the post - processing of the above quantities . as pointed out by , precipitation has a discrete - continuous nature with a positive probability of being zero and larger expected precipitation amount results in larger forecast uncertainty . 
introduced a bma model where each individual predictive pdf consists of a discrete component at zero and a gamma distribution modelling the case of positive precipitation amounts . uses extends logistic regression to provide full probability distribution forecasts , whereas suggests an emos model based on a censored gev distribution .finally , propose a more complex three step approach where they first fit a censored and shifted gamma ( csg ) distribution model to the climatological distribution of observations , then after adjusting the forecasts to match this climatology derive three ensemble statistics , and with the help of a nonhomogeneus regression model connect these statistics to the csg model .based on the idea of we introduce a new emos approach which directly models the distribution of precipitation accumulation with a censored and shifted gamma predictive pdf .the novel emos approach is applied to 24 hour precipitation accumulation forecasts of the eight - member university of washington mesoscale ensemble ( uwme ; * ? ? ?* ) and the 11 member operational eps of the hungarian meteorological service ( hms ) called aire limite adaptation dynamique dvelopment international - hungary eps ( aladin - huneps ; * ? ? ?* ; * ? ? ?* ) . in these casestudies the performance of the proposed emos model is compared to the forecast skills of the gev emos method of and to the gamma bma approach of serving as benchmark models .as mentioned in the introduction , the emos predictive pdf of a future weather quantity is a single parametric distribution with parameters depending on the ensemble members . due to the special discrete - continuous nature of precipitationone should think only of non - negative predictive distributions assigning positive mass to the event of zero precipitation . mixing a point mass at zero and a separate non - negative distributiondoes the job ( see e.g. the bma model of * ? ? ?* ) , but left censoring of an appropriate continuous distribution at zero can also be a reasonable choice .the advantage of the latter approach is that the probability of zero precipitation can directly be derived from the corresponding original ( uncensored ) cumulative distribution function ( cdf ) , so the cases of zero and positive precipitation can be treated together .the emos model of utilizes a censored gev distribution with shape parameter ensuring a positive skew and finite mean , whereas our emos approach is based on a csg distribution appearing in the more complex model of .consider a gamma distribution with shape and scale having pdf where denotes value of the gamma function at .a gamma distribution can also be parametrized by its mean and standard deviation using expressions now , let and denote by the cdf of the distribution .then the shifted gamma distribution left censored at zero ( csg ) with shape , scale and shift can be defined with cdf this distribution assigns mass to the origin and has generalized pdf where denotes the indicator function of the set .short calculation shows that the mean of equals whereas the -quantile ( ) of equals if , and the solution of , otherwise .now , denote by the ensemble of distinguishable forecasts of precipitation accumulation for a given location and time .this means that each ensemble member can be identified and tracked , which holds for example for the uwme ( see section [ subs : subs3.1 ] ) or for the cosmo - de eps of the dwd . 
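before turning to the emos link between the ensemble and the csg parameters, a minimal numerical sketch of the csg law defined above may be helpful: a gamma variable with shape k and scale theta is shifted to the left by delta and left-censored at zero, so the point mass at the origin equals the gamma cdf evaluated at delta, and the p-quantile is the shifted gamma quantile clipped at zero. the parameter values are illustrative and standard scipy routines are assumed.

```python
import numpy as np
from scipy.stats import gamma

k, theta, delta = 1.2, 3.0, 0.8          # illustrative shape, scale and shift
# equivalently, from a mean m and standard deviation s of the underlying gamma law:
# k = m**2 / s**2 and theta = s**2 / m

def csg_cdf(y, k, theta, delta):
    """cdf of the shifted gamma left-censored at zero; it carries a point mass
    gamma.cdf(delta) at the origin"""
    y = np.asarray(y, dtype=float)
    return np.where(y < 0.0, 0.0, gamma.cdf(y + delta, a=k, scale=theta))

def csg_quantile(p, k, theta, delta):
    """p-quantile: zero within the point mass, the shifted gamma quantile otherwise"""
    return np.maximum(gamma.ppf(p, a=k, scale=theta) - delta, 0.0)

p_zero = csg_cdf(0.0, k, theta, delta)     # probability of zero precipitation
print(f"P(Y = 0) = {p_zero:.3f}, median = {csg_quantile(0.5, k, theta, delta):.3f} mm")
```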
in the proposed csg emos model the ensemble members are linked to the mean and variance of the underlying gamma distribution via equations where denotes the ensemble mean .mean parameters and variance parameters of model can be estimated from the training data , consisting of ensemble members and verifying observations from the preceding days , by optimizing an appropriate verification score ( see section [ subs : subs2.2 ] ). however , most of the currently used epss produce ensembles containing groups of statistically indistinguishable ensemble members which are obtained with the help of random perturbations of the initial conditions .this is the case for the aladin - huneps ensemble described in section [ subs : subs3.2 ] or for the 51 member ecmwf ensemble .the existence of several exchangeable groups is also a natural property of some multi - model epss such as the the thorpex interactive grand global ensemble or the glameps ensemble .suppose we have ensemble members divided into exchangeable groups , where the group contains ensemble members , such that .further , we denote by the member of the group . in this situation ensemble members within a given groupshould share the same parameters resulting in the exchangeable version of model .note , that the expression of the mean ( or location ) as an affine function of the ensemble is general in emos post - processing ( see e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) , whereas the dependence of the variance parameter on the ensemble mean is similar to the expression of the variance in the gamma bma model of , and it is in line with the relation of forecast uncertainty to the expected precipitation amount mentioned in the introduction .moreover , practical tests show that , at least for the uwme and aladin - huneps ensemble considered in the case studies of section [ sec : sec5 ] , models and , respectively , significantly outperform the corresponding csg emos models with variance parameters where are the ensemble variance andthe more robust ensemble mean difference , respectively .further , compared to the proposed models , natural modifications in the csg emos variance structure do not result in improved forecasts skills .the main aim of probabilistic forecasting is to access the maximal sharpness of the predictive distribution subject to calibration .the latter means a statistical consistency between the predictive distributions and the validating observations , whereas the former refers to the concentration of the predictive distribution .this goal can be addressed with the help of scoring rules which measure the predictive performance by numerical values assigned to pairs of probabilistic forecasts and observations . in atmospheric sciences the most popular scoring rules for evaluating predictive distributions are the logarithmic score , i.e. the negative logarithm of the predictive pdf evaluated at the verifying observation , and the continuous ranked probability score ( crps ; * ? ? ?* ; * ? ? 
?for a predictive cdf and an observation the crps is defined as where denotes the indicator of a set , while and are independent random variables with cdf and finite first moment .the crps can be expressed in the same units as the observation and one should also note that both scoring rules are proper and negatively oriented , that is the smaller the better .for a csg distribution defined by the crps can be expressed in a closed form , showed that following the ideas of and , the parameters of models ( and as well ) are estimated by minimizing the mean crps of predictive distributions and validating observations corresponding to forecast cases of the training period .we remark that optimization with respect to the mean logarithmic score , that is , maximum likelihood ( ml ) estimation of parameters , has also been investigated .obviously , in terms of crps this model can not outperform the one fit via crps minimization , however , in our test cases the ml method results in a reduction of the predictive skill of the csg emos model in terms of almost all verification scores considered , so the corresponding values are not reported .the eight - member uwme covers the pacific northwest region of north america and operates on a 12 km grid .the ensemble members are obtained from different runs of the fifth generation pennsylvania state university national center for atmospheric research mesoscale model ( psu - ncar mm5 ; * ? ? ?* ) with initial and boundary conditions from various weather centres .we consider 48 h forecasts and corresponding validating observations of 24 h precipitation accumulation for 152 stations in the automated surface observing network in five us states .the forecasts are initialized at 0 utc ( 5 pm local time when daylight saving time ( dst ) is in use and 4 pm otherwise ) and we investigate data for calendar year 2008 with additional forecasts and observations from the last three months of 2007 used for parameter estimation . after removing days and locations with missing data83 stations remain resulting in 20522 forecast cases for 2008 . to 9 cma ) b ) figure [ fig : fig1]a shows the verification rank histogram of the raw ensemble , that is the histogram of ranks of validating observations with respect to the corresponding ensemble forecasts computed for all forecast cases ( see e.g. * ? ? ?* section 7.7.2 ) , where zero observations are randomized among all zero forecasts .this histogram is far from the desired uniform distribution as in many cases the ensemble members overestimate the validating observation .the ensemble range contains the observed precipitation accumulation in of the cases , whereas the nominal coverage of the ensemble equals , i.e .hence , the uwme is uncalibrated , and would require statistical post - processing to yield an improved forecast probability density function .the ensemble forecasts produced by the operational aladin - huneps system of the hms are obtained with dynamical downscaling of the global pearp system of mto france by the aladin limited area model with an 8 km horizontal resolution .the eps covers a large part of continental europe and has 11 ensemble members , 10 exchangeable forecasts from perturbed initial conditions and one control member from the unperturbed analysis .the data base at hand contains ensembles of 42 h forecasts ( initialized at 18 utc , i.e. 
8 pm local time when dst operates and 7 pm otherwise ) for 24 h precipitation accumulation for 10 major cities in hungary ( miskolc , sopron , szombathely , gyr , budapest , debrecen , nyregyhza , nagykanizsa , pcs , szeged ) together with the corresponding validating observations for the period between 1 october 2010 and 25 march 2011 .the data set is fairly complete since there are only two dates when three ensemble members are missing for all sites .these dates are excluded from the analysis .the verification rank histogram of the raw ensemble , displayed in figure [ fig : fig1]b , shows far better calibration , than that of the uwme .the coverage of the aladin - huneps ensemble equals , which is very close to the nominal value of ( ) .as mentioned earlier , the predictive performance of the csg emos model is tested on ensemble forecasts produced by the uwme and aladin - huneps epss , and the results are compared with the fits of the gev emos and gamma bma models investigated by and , respectively , and the verification scores of the raw ensemble .we remark that according to the suggestions of for estimating the parameters of the gev emos model for a given day , the estimates for the preceding day serve as initial conditions for the box constrained broyden - fletcher - goldfarb - shanno optimization algorithm . compared with the case of fixed initial conditions this approach results in a slight increase of the forecast skills of the gev emos model , whereas for the csg emos method , at least in our case studies ,fixed initial conditions are preferred .further , we consider regional ( or global ) emos approach ( see e.g. * ? ? ?* ) which is based on ensemble forecasts and validating observations from all available stations during the rolling training period and consequently results in a single universal set of parameters across the entire ensemble domain . to get the first insight about the calibration of emos and bma post - processed forecasts we consider probability integral transform ( pit ) histogramsgenerally , the pit is the value of the predictive cdf evaluated at the verifying observation , however , for our discrete - continuous models in the case of zero observed precipitation a random value is chosen uniformly from the interval between zero and the probability of no precipitation . obviously , the closer the histogram to the uniform distribution , the better the calibration . in this waythe pit histogram is the continuous counterpart of the verification rank histogram of the raw ensemble and provides a good measure about the possible improvements in calibration .the predictive performance of probabilistic forecasts is quantified with the help of the mean crps over all forecast cases , where for the raw ensemble the predictive cdf is replaced by the empirical one .further , as suggested by , diebold - mariano ( dm ; * ? ? ?* ) tests are applied for investigating the significance of the differences in scores corresponding to the various post - processing methods .the dm test takes into account the dependence in the forecasts errors and for this reason it is widely used in econometrics .besides the crps we also consider brier scores ( bs ; * ? ? ? * section 8.4.2 ) for the dichotomous event that the observed precipitation amount exceeds a given threshold .for a predictive cdf the probability of this event is , and the corresponding brier score is given by see e.g. . obviously , the bs is negatively oriented and the crps is the integral of the bss over all possible thresholds . 
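both diagnostics translate into a few lines of code: for a discrete-continuous predictive cdf, the randomized pit draws a uniform value between zero and the probability of no precipitation whenever the observed accumulation is zero, and the brier score for the exceedance event uses the forecast probability 1 - F(threshold). the toy predictive cdf below is only a stand-in; any censored predictive distribution, for instance the csg law above, could be plugged in.

```python
import numpy as np

rng = np.random.default_rng(1)

def randomized_pit(F, y_obs):
    """probability integral transform for a discrete-continuous predictive cdf F:
    a uniform draw on [0, F(0)] when the observed accumulation is zero, F(y) otherwise"""
    return rng.uniform(0.0, F(0.0)) if y_obs <= 0.0 else F(y_obs)

def brier_score(F, y_obs, threshold):
    """brier score for the event {precipitation > threshold}; the forecast
    probability of the event is 1 - F(threshold)"""
    return ((1.0 - F(threshold)) - float(y_obs > threshold)) ** 2

# toy stand-in for a censored predictive cdf (point mass of 0.3 at zero)
F = lambda y: 1.0 - 0.7 * np.exp(-0.3 * max(y, 0.0))

observations = [0.0, 1.4, 6.2]
print([round(randomized_pit(F, y), 3) for y in observations])
print(np.mean([brier_score(F, y, 5.0) for y in observations]))
```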
in our case studies we consider 0 mm precipitation , 5 , 15 , 25 , 30 mm and 1 , 5 , 7 , 9 mm threshold values for the uwme and aladin - huneps ensemble , respectively , corresponding approximately to the 45th , 75th , 85th and 90th percentiles of the observed non - zero precipitation accumulations , and compare the mean bss of the pairs of predictive cdfs and verifying observations over all forecast cases ..5 truecm .-values of kolmogorov - smirnov tests for uniformity of pit values for the uwme .means of random samples of sizes each . [ cols="<,^,^,^",options="header " , ] concerning the two emos approaches , the verification scores of table [ tab : tab6 ] together with the results of the corresponding dm tests for equal predictive performance ( see table [ tab : tab7 ] ) display similar behavior as in the case of the uwme .there is no significant difference between the mae values of the csg and gev emos methods and the former results in the lowest crps and the sharpest central prediction interval .further , the emos models significantly outperform both the raw ensemble and the gamma bma approach , despite the raw ensemble is rather well calibrated and has far better predictive skill than the bma calibrated forecast .note that the large mean crps and coverage of the bma predictive distribution is totally in line with the shape of the corresponding pit histogram of figure [ fig : fig4 ] .2 mm 2 mm 2 mm the good predictive performance of the aladin - huneps ensemble can also be observed on the large amount of negative skill scores reported in table [ tab : tab8 ] and on the reliability diagrams of figure [ fig : fig5 ] .similar to the case of the uwme , for 0 mm threshold the gamma bma model has good predictive performance , whereas for higher threshold values it underperforms the csg and gev emos models and the raw ensemble .however , in connection with the reliability diagrams one should also note that the hectic behavior of the graphs ( compared to the rather smooth diagrams of figure [ fig : fig3 ] ) is a consequence of the shortage of data , as the verification period contains only 394 observations of positive precipitation , which is around one third of the forecast cases . taking into account both the uniformity of the pit values and the verification scores in tables [ tab : tab6 ] and [ tab : tab8 ] it can be said that the proposed csg emos model has the best overall performance in calibration of the raw aladin - huneps ensemble forecasts of precipitation accumulation .a new emos model for calibrating ensemble forecasts of precipitation accumulation is proposed where the predictive distribution follows a censored and shifted gamma distribution , with mean and variance of the underlying gamma law being affine functions of the raw ensemble and the ensemble mean , respectively .the csg emos method is tested on ensemble forecasts of 24 h precipitation accumulation of the eight - member university of washington mesoscale ensemble and on the 11 member aladin - huneps ensemble of the hungarian meteorological service .these ensemble prediction systems differ both in the climate of the covered area and in the generation of the ensemble members . 
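as a sketch of the interval diagnostics reported in the tables (coverage and average width of the central prediction interval), the snippet below takes lower and upper quantiles of each csg predictive distribution at a nominal level of 10/12, i.e. about 83.3%, the usual (k - 1)/(k + 1) choice for an 11-member ensemble, assumed here, and compares them with the verifying observations. the forecast parameters are invented for illustration, not fitted values.

```python
import numpy as np
from scipy.stats import gamma

def csg_quantile(p, k, theta, delta):
    return np.maximum(gamma.ppf(p, a=k, scale=theta) - delta, 0.0)

nominal = 10.0 / 12.0            # (k - 1)/(k + 1) for an 11-member ensemble (assumed)
alpha = (1.0 - nominal) / 2.0

# one (shape, scale, shift) triple per forecast case (assumed) and the observations
params = [(1.1, 2.5, 0.6), (0.9, 4.0, 0.2), (1.4, 1.8, 1.0)]
obs = np.array([0.0, 3.7, 1.2])

lo = np.array([csg_quantile(alpha, *p) for p in params])
hi = np.array([csg_quantile(1.0 - alpha, *p) for p in params])
print(f"coverage = {np.mean((obs >= lo) & (obs <= hi)):.2%}, "
      f"average width = {np.mean(hi - lo):.2f} mm")
```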
by investigating the uniformity of the pit values of predictive distributions , the mean crps of probabilistic forecasts , the brier scores and reliability diagrams for various thresholds , the mae of median forecasts and the average width and coverage of central prediction intervals corresponding to the nominal coverage , the predictive skill of the new approach is compared with that of the gev emos method , the gamma bma model and the raw ensemble . from the results of the presented case studies one can conclude that in terms of calibration of probabilistic and accuracy of point forecasts the proposed csg emos model significantly outperforms both the raw ensemble and the bma model and shows slightly better forecast skill than the gev emos approach .* acknowledgments . *sndor baran is supported by the jnos bolyai research scholarship of the hungarian academy of sciences .dra nemoda partially carried out her research in the framework of the center of excellence of mechatronics and logistics at the university of miskolc .the authors are indebted to michael scheuerer for his useful suggestions and remarks and for providing the r code for the gev emos model .the authors further thank the university of washington muri group for providing the uwme data and mihly szcs from the hms for the aladin - huneps data .99 bao l , gneiting t , raftery ae , grimit ep , guttorp p. 2010 .bias correction and bayesian model averaging for ensemble forecasts of surface wind direction ._ monthly weather review _ * 138*:18111821 , doi : 10.1175/2009mwr3138.1 .baran s. 2014 .probabilistic wind speed forecasting using bayesian model averaging with truncated normal components ._ computational statistics and data analysis _ * 75*:227238 , doi : 10.1016/j.csda.2014.02.013 .baran s , lerch s. 2015 .log - normal distribution based emos models for probabilistic wind speed forecasting . _ quarterly journal of the royal meteorological society _ * 141*:22892299 , doi : 10.1002/qj.2521 .baran s , sikolya k , veress l. 2013 . estimating the risk of a down s syndrome term pregnancy using age and serum markers : comparison of various methods ._ communications in statistics simulation and computation _ * 42*:16541672 , doi : 10.1080/03610918.2012.674596 .buizza r , houtekamer pl , toth z , pellerin g , wei m , zhu y. 2005 .a comparison of the ecmwf , msc , and ncep global ensemble prediction systems ._ monthly weather review _ * 133*:10761097 , doi : 10.1175/mwr2905.1 .descamps l , labadie c , joly a , bazile e , arbogast p , cbron p. 2014 .pearp , the mto - france short - range ensemble prediction system ._ quarterly journal of the royal meteorological society _ * 141*:16711685 , doi : 10.1002/qj.2469 .fraley c , raftery ae , gneiting t. 2010 .calibrating multimodel forecast ensembles with exchangeable and missing members using bayesian model averaging ._ monthly weather review _ * 138*:190202 , doi : 10.1175/2009mwr3046.1 .friederichs p , thorarinsdottir tl .forecast verification for extreme value distributions with an application to probabilistic peak wind prediction ._ environmetrics _ * 23*:579594 , doi : 10.1002/env.2176. gebhardt c , theis se , paulat m , bouallgue zb .uncertainties in cosmo - de precipitation forecasts introduced by model perturbations and variation of lateral boundaries ._ atmospheric research _* 100*:168177 , doi : 10.1016/j.atmosres.2010.12.008 . gneiting t. 2014 .calibration of medium - range weather forecasts ._ ecmwf technical memorandum _ no . 
719 .( available from : http://old.ecmwf.int/publications/library/ecpublications/_pdf/tm/701-800/tm719.pdf . )[ accessed on 24 november 2015 ] gneiting t , balabdaoui f , raftery ae .probabilistic forecasts , calibration and sharpness ._ journal of the royal statistical society : series b _ * 69*:243268 , doi : 10.1111/j.1467 - 9868.2007.00587.x . gneiting t , raftery ae , westveld ah , goldman t. 2005 . calibrated probabilistic forecasting using ensemble model output statistics and minimum crps estimation ._ monthly weather review _ * 133*:10981118 , doi : 10.1175/mwr2904.1 .grell ga , dudhia j , stauffer dr .1995 . a description of the fifth - generation penn state / ncar mesoscale model ( mm5 ) .technical note ncar / tn-398+str . national center for atmospheric research , boulder .( available from : http://www2.mmm.ucar.edu/mm5/documents/mm5-desc-doc.html ) [ accessed on 24 november 2015 ] hamill tm .2007 . comments on `` calibrated surface temperature forecasts from the canadian ensemble prediction system using bayesian model averaging . ''_ monthly weather review _ * 135*:42264230 , doi : 10.1175/2007mwr1963.1. hodyss d , satterfield e , mclay j , hamill tm , scheuerer m. 2015 .inaccuracies with multi - model post - processing methods involving weighted , regression - corrected forecasts ._ monthly weather review _ doi : 10.1175/mwr - d-15 - 0204.1 .iversen t , deckmin a , santos c , sattler k , bremnes jb , feddersen h , frogner i - l . 2011 .evaluation of glameps a proposed multimodel eps for short range forecasting ._ tellus a _* 63*:513530 , doi : 10.1111/j.1600 - 0870.2010.00507.x .messner jw , mayr gj , zeileis a , wilks ds .heteroscedastic extended logistic regression for postprocessing of ensemble guidance ._ monthly weather review _ * 142*:448456 , doi : 10.1175/mwr - d-13 - 00271.1 .scheuerer m. 2014 .probabilistic quantitative precipitation forecasting using ensemble model output statistics ._ quarterly journal of the royal meteorological society _ * 149*:10861096 , doi : 10.1002/qj.2183 .scheuerer m , hamill tm .statistical post - processing of ensemble precipitation forecasts by fitting censored , shifted gamma distributions _ monthly weather review _ * 143*:45784596 , doi : 10.1175/mwr - d-15 - 0061.1 .sloughter jm , gneiting t , raftery ae .probabilistic wind speed forecasting using ensembles and bayesian model averaging ._ journal of the american statistical association _ * 105*:2537 , doi : 10.1198/jasa.2009.ap08615 .sloughter jm , raftery ae , gneiting t , fraley c. 2007 .probabilistic quantitative precipitation forecasting using bayesian model averaging ._ monthly weather review _ * 135*:32093220 , doi : 10.1175/mwr3441.1. swinbank r , kyouda m , buchanan p , froude l , hamill tm , hewson td , keller jh , matsueda m , methven j , pappenberger f , scheuerer m , titley ha , wilson l , yamaguchi m. 2015 . the tigge project and its achievements ._ bulletin of the american meteorological society _ , doi : 10.1175/bams - d-13 - 00191.1 .thorarinsdottir tl , gneiting t. 2010 .probabilistic forecasts of wind speed : ensemble model output statistics by using heteroscedastic censored regression ._ journal of the royal statistical society : series a _ * 173*:371388 , doi : 10.1111/j.1467 - 985x.2009.00616.x .williams rm , ferro cat , kwasniok f. 2014 .a comparison of ensemble post - processing methods for extreme events . _ quarterly journal of the royal meteorological society _ * 140*:11121120 , doi : 10.1002/qj.2198 . 
| recently, all major weather prediction centres provide forecast ensembles of different weather quantities, obtained from multiple runs of numerical weather prediction models with various initial conditions and model parametrizations. however, ensemble forecasts often show an underdispersive character and may also be biased, so that some post-processing is needed to account for these deficiencies. probably the most popular modern post-processing techniques are ensemble model output statistics (emos) and bayesian model averaging (bma), which provide estimates of the density of the predictable weather quantity. in the present work, an emos method for calibrating ensemble forecasts of precipitation accumulation is proposed, where the predictive distribution follows a censored and shifted gamma (csg) law with parameters depending on the ensemble members. the csg emos model is tested on ensemble forecasts of 24 h precipitation accumulation of the eight-member university of washington mesoscale ensemble and on the 11-member ensemble produced by the operational limited area model ensemble prediction system of the hungarian meteorological service. the predictive performance of the new emos approach is compared with the fit of the raw ensemble, the generalized extreme value (gev) distribution based emos model and the gamma bma method. according to the results, the proposed csg emos model slightly outperforms the gev emos approach in terms of calibration of probabilistic forecasts and accuracy of point forecasts, and shows significantly better predictive skill than the raw ensemble and the bma model. _ key words : _ continuous ranked probability score, ensemble calibration, ensemble model output statistics, gamma distribution, left censoring. |
consider a memoryless channel with input , output , and state .we assume that the channel state , distributed according to , is provided to the decoder , and a noisy state observation , generated by through side channel , is available causally at the encoder . here , , , and are defined over finite alphabets , , , and , respectively . in thissetting ( see fig . [fig : model ] ) , shannon s remarkable result ( see also ( * ? ? ?* eq . ( 3 ) ) and ( * ? ? ?7.2 ) ) implies that the channel capacity is given by the auxiliary random variable is defined over alphabet with , whose joint distribution with factors as where is the indicator function , and , , are different mappings from to . without loss of generality ,we set , , , and order the mappings , , in such a way that the first mappings are moreover , we assume that .the capacity formula ( [ eq : noisystate ] ) can be simplified in the following two special cases . specifically , when there is no encoder side information , the channel capacity reduces to (* eq . ( 7.2 ) ) where ; on the other hand , when perfect state information is available at the encoder ( as well as the decoder ) , the channel capacity becomes ( * ? ? ?* eq . ( 7.3 ) ) where . for comparison ,consider the following similarly defined quantity where the joint distribution of is also given by ( [ eq : jointdistribution ] ) .we shall refer to as the generalized probing capacity . by the functional representation lemma ( see also ( * ? ? ?* lemma 1 ) ) , can be defined equivalently as where clearly , moreover , we have if and are independent ( i.e. , ) , and if is a deterministic function of ( i.e. , ) . to elucidate the operational meaning of and its connection with , it is instructive to consider the special case where is a binary erasure channel with erasure probability ( denoted by ) , which corresponds to the probing channel setup studied in .the probing channel model is essentially the same as the one in fig .[ fig : model ] except that , in fig .[ fig : model ] , the encoder ( which , with high probability , observes approximately state symbols out of the whole state sequence of length when is large enough ) has no control of the exact positions of these symbols whereas , in the probing channel model , the encoder has the freedom to specify the positions of these symbols according to the message to be sent .it is shown in that this additional freedom increases the achievable rate from to .now consider an example ( see also fig .[ fig : szchannel ] ) where for this example , it can be verified that note that is strictly greater than unless or .it follows by ( [ eq : endpoint1 ] ) and ( [ eq : endpoint2 ] ) that to gain a better understanding , we plot and against for ] ) defined as [ lem : core ] given any binary - input channel and state distribution , for ] , clearly , is a capacity - achieving input distribution of channel when .therefore , we have ) is in fact an equality . 
] note that , u\in\mathcal{u},\label{eq : combine3}\end{aligned}\ ] ] where ( [ eq : subp ] ) is due to ( [ eq : subh2 ] ) , and ( [ eq : combine3 ] ) is due to ( [ eq : orderpsi ] ) and ( [ eq : uconstruction ] ) .moreover , , u\in\mathcal{u}.\label{eq : newsub}\end{aligned}\ ] ] define in light of ( [ eq : combine3 ] ) , , u\in\mathcal{g}_{\delta}.\label{eq : invariant}\end{aligned}\ ] ] for any , there must exist some and such that ; furthermore , since , we have where continuing from ( [ eq : newsub ] ) , , u\in\mathcal{u},\label{eq : comb4}\end{aligned}\ ] ] where ( [ eq : invokeposinput ] ) is due to ( [ eq : posinput ] ) , and ( [ eq : twoimply ] ) is due to ( [ eq : pos ] ) and ( [ eq : neg ] ) . combining ( [ eq : twoactiveinputs ] ) , ( [ eq : ineqtoeq ] ) , ( [ eq : invariant ] ) , ( [ eq : comb4 ] ) , and the fact yields the desired result .recall that ( with input alphabet and output alphabet ) is said to be a stochastically degraded version of ( with input alphabet and output alphabet ) if there exists satisfying we can write ( [ eq : equivalent ] ) equivalently as by viewing , , and as probability transition matrices. the following result is obvious and its proof is omitted .[ lem : degraded ] if is a stochastically degraded version of , then next we extend lemma [ lem : core ] to the general case by characterizing the condition under which is a stochastically degraded version of . [ lem : erasure ] is a stochastically degraded version of if and only if the problem boils down to finding a necessary and sufficient condition for the existence of such that it suffices to consider the case since lemma [ lem : erasure ] is trivially true when .note that combining ( [ eq : cond1 ] ) and ( [ eq : cond2 ] ) gives in light of ( [ eq : inview ] ) , it can be readily seen that the existence of conditional distribution satisfying ( [ eq : cond1 ] ) is equivalent to the existence of probability vector satisfying ( [ eq : ns2 ] ) .clearly , ( [ eq : ns ] ) is a necessary and sufficient condition for the existence of such .[ thm : theorem1variant ] for any binary - input channel , state distribution , and side channel , if in view of lemmas [ lem : core ] , [ lem : degraded ] , and [ lem : erasure ] , we have if ( [ eq : translate ] ) is satisfied . combining ( [ eq : comb1 ] ) and( [ eq : comb2 ] ) completes the proof of theorem [ thm : theorem1variant ] .now we proceed to prove theorem [ thm : theorem1 ] by translating ( [ eq : translate ] ) ( which is a condition on that is universal for all binary input channels and state distributions ) to an upper bound on .this upper bound , however , depends inevitably on the state distribution . for any violating ( [ eq : translate ] ) ( i.e , ) , we have where ( [ eq : pinsker ] ) is due to pinsker s inequality , and is a minimizer of , . as a consequence , ( [ eq : translate ] )must hold if .this completes the proof of theorem [ thm : theorem1 ] .first consider the special case where is a generalized symmetric channel ( with crossover probability ] , where .2 . and ( this case can arise only when ) : we have for , where .3 . otherwise : we have for and for ] , where .2 . and ( this case can arise only when ) : we have for , where .3 . otherwise : we have for and for ] . in view of lemmas [ lem : erasure ] and [ lem : nssym ], we have note that does not depend on ( under the assumption ) ; as a consequence , and do not depend on either . 
for and in fig .[ fig : szchannel ] ( see also ( [ eq : para1 ] ) and ( [ eq : para2 ] ) ) , we show in appendix [ app : overline ] that setting gives ( cf . fig . [fig : plot1 ] ) and ( cf .[ fig : plot2 ] ) . in this subsection, we shall examine the following two implicit conditions in theorem [ thm : theorem1 ] : 1 .perfect state information at the decoder , 2 .causal noisy state observation at the encoder .if no state information is available at the decoder , then the channel capacity is given by where the joint distribution of is given by ( [ eq : jointdistribution ] ) .furthermore , if there is also no state information available at the encoder , then the channel capacity becomes where .define :\tilde{c}(p_{y|x , s},p_s , p_{\tilde{s}_{ge}^{(\epsilon)}|s})\\ & \hspace{2.1in}=\underline{\tilde{c}}(p_{y|x , s},p_s)\}.\end{aligned}\ ] ] the proof of the following result is similar to that of proposition [ prop : ns1 ] and is omitted .[ prop : ns3 ] 1 .there exists such that for all satisfying if and only if . if and only if where is defined in ( [ eq : defdelta ] ) , is an arbitrary maximizer of the optimization problem in ( [ eq : noedstate ] ) , and as shown by the following example , the necessary and sufficient condition ( [ eq : prop3ns ] ) is not always satisfied even when .let where is the modulo-2 addition .it can be verified that ( [ eq : prop3ns ] ) is not satisfied for this example ; indeed , fig .[ fig : nsdecoder ] indicates that here we give an alternative way to prove ( [ eq : example ] ) .write , where and are two mutually independent bernoulli random variables with ,\\ & p_{\delta}(1)=\frac{\mu-\nu}{1 - 2\nu}.\end{aligned}\ ] ] it is clear that .\label{eq : ex1}\end{aligned}\ ] ] in light of lemma [ lem : erasure ] , is a stochastically degraded version of and consequently if . combining ( [ eq : ex1 ] ) and ( [ eq : ex2 ] ) proves ( [ eq : example ] ) .now we proceed to examine the second implicit condition .if the noisy state observation is available non - causally at the encoder , the gelfand - pinsker theorem ( see also ( * ? ? ?7.3 ) ) indicates that the channel capacity is given by where the joint distribution of factors as it turns out that is bounded between and , i.e. , indeed , the first inequality is obvious , and the second one holds because in fig .[ fig : gp ] we plot against for $ ] , where and are given by ( [ eq : para1 ] ) with and ( [ eq : para2 ] ) , respectively ; it can be seen that is strictly greater than except when .so the causality condition on the noisy state observation at the encoder is not superfluous for theorem [ thm : theorem1 ] .we have shown that the capacity of binary - input channels is very sensitive " to the quality of the encoder side information whereas the generalized probing capacity is very robust " . 
herethe words sensitive " and robust " should not be understood in a quantitative sense .indeed , it is known that , when , the ratio of to is at least 0.942 and the difference between these two quantities is at most .011 bit ; in other words , the gain that can be obtained by exploiting the encoder side information ( or the loss that can be incurred by ignoring the encoder side information ) is very limited anyway .binary signalling is widely used , especially in wideband communications .so our work might have some practical relevance .however , great caution should be exercised in interpreting theorems [ thm : theorem1 ] and [ thm : theorem2 ] .specifically , both results rely on the assumption that the channel state takes values from a finite set and is not essential ] , which is not necessarily satisfied in reality ; moreover , the freedom of power control in real communication systems is not captured by our results .nevertheless , our work can be viewed as an initial step towards a better understanding of the fundamental performance limits of communication systems where the transmitter side information and the receiver side information are not deterministically related .finally , it is worth mentioning that our results might have their counterparts in source coding .we shall show that , for any binary - input channel , state distribution , and side channel , if [ lem : newthreshold ] is a stochastically degraded version of if where let denote the maximum likelihood estimate of based on .it suffices to show that is invertible and is a valid probability transition matrix if ( [ eq : anotherbound ] ) is satisfied . [ cols="^,^,^,^",options="header " , ] [ lem : aninequality ] for , we have which , together with the fact , implies the desired result .when or , we have , which implies . when , the maximizer of the optimization problem in ( [ eq : nostate ] ) , denoted by , is unique and is given by now consider specified by table [ tab1 ] .it can be verified that moreover , where ( [ eq : invokeineq1 ] ) and ( [ eq : invokeineq2 ] ) follow from lemma [ lem : aninequality ] .therefore , we have which , together with ( [ eq : maxepsilon ] ) , proves ( [ eq : proofinappb1 ] ) for .next consider specified by table [ tab2 ] .it can be verified that moreover , where ( [ eq : invokeineq1again ] ) and ( [ eq : invokeineq2again ] ) follow from lemma [ lem : aninequality ] .therefore , we have which , together with ( [ eq : maxq ] ) , proves ( [ eq : proofinappb2 ] ) for .when or , we have , which implies and .when , the maximizer of the optimization problem in ( [ eq : perfectstate ] ) , denoted by , is unique and is given by in view of ( [ eq : invewepsilon ] ) and ( [ eq : inviewq ] ) , it suffices to show that indeed , for , and the last inequality is true according to lemma [ lem : aninequality ] .the authors wish to thank the associate editor and the anonymous reviewer for their valuable comments and suggestions .j. wang , j. chen , l. zhao , p. cuff , and h. permuter , on the role of the refinement layer in multiple description coding and scalable coding , " _ ieee trans .inf . theory _ ,3 , pp . 14431456 , mar . | for any binary - input channel with perfect state information at the decoder , if the mutual information between the noisy state observation at the encoder and the true channel state is below a positive threshold determined solely by the state distribution , then the capacity is the same as that with no encoder side information . 
a complementary phenomenon is revealed for the generalized probing capacity. extensions beyond binary-input channels are developed. _ keywords : _ binary-input, channel capacity, erasure channel, probing capacity, state information, stochastically degraded. |
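as a closing illustration of the capacity expressions compared above, the sketch below evaluates the causal (shannon-strategy) formula, the maximum over p_U of I(U ; Y | S), with U ranging over the four mappings from a binary noisy observation to the binary input and with the joint law factoring as p(s) p(s_tilde | s) p(U) 1{x = phi_U(s_tilde)} p(y | x, s). the channel, state distribution and side channel used here are assumptions for illustration only; they are not the example parameters ( [ eq : para1 ] ) and ( [ eq : para2 ] ) of fig. [ fig : szchannel ].

```python
import itertools
import numpy as np

# assumed channel, state law and side channel (illustration only)
p_S = np.array([0.5, 0.5])                      # state distribution p(s)
p_St_S = np.array([[0.9, 0.1],                  # side channel p(s_tilde | s)
                   [0.2, 0.8]])
p_Y_XS = np.array([[[0.95, 0.05],               # p(y | x, s), indexed [x][s][y]
                    [0.60, 0.40]],
                   [[0.30, 0.70],
                    [0.10, 0.90]]])
maps = list(itertools.product([0, 1], repeat=2))   # the 4 strategies phi_u: s_tilde -> x

def cond_mutual_info(p_U):
    """I(U; Y | S) under the factorization described above"""
    total = 0.0
    for s in range(2):
        p_uy = np.zeros((4, 2))                    # p(u, y | s)
        for u, phi in enumerate(maps):
            for st in range(2):
                p_uy[u] += p_U[u] * p_St_S[s, st] * p_Y_XS[phi[st], s]
        p_y = p_uy.sum(axis=0)
        ratio = np.where(p_uy > 0, p_uy / (np.outer(p_U, p_y) + 1e-300), 1.0)
        total += p_S[s] * np.sum(p_uy * np.log2(ratio))
    return total

best, grid = 0.0, np.linspace(0.0, 1.0, 21)        # coarse search on the simplex
for a, b, c in itertools.product(grid, repeat=3):
    d = 1.0 - a - b - c
    if d >= -1e-9:
        best = max(best, cond_mutual_info(np.array([a, b, c, max(d, 0.0)])))
print(f"causal capacity estimate: {best:.3f} bits per channel use")
```

running the same routine with the side channel replaced by an identity map, or by one independent of the state, recovers the perfect-state and no-encoder-side-information baselines compared above.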
in , the authors introduce a model for stochastic gene expression to study the heterogeneity of cell populations .they assume that the cells , or for example the product of some gene , can be in two distinct states or colonies .let and be the sizes of these colonies , which are here considered as birth with migration processes .we assume that the birth rates are either or with , and that the associated migration rates and are such that , that is cells located in the colony having the smaller birth rate migrate at a higher rate to the colony with the higher birth rate than the other way round .if the birth and migration rates are assigned once and for all to a corresponding colony ( e.g. and to , and and to ) , then the mean sizes and satisfy the pair of differential equations ( see or ) according to , we say that cells of the first colony represented by are _ unfit _ ( they have the lower birth rates ) , and conversely that cells of the second colony represented by are _ fit_. the proportion of fit cells in the global population , , , satisfies the non - linear differential equation then , as , , where follows directly from ( [ fit ] ) , see section [ s.stationary ] .this describes the equilibrium value of the proportion of fit cells in a non - changing environment .fixing the values of the parameters , and , we can ask for the value of which maximizes the proportion of fit cells , i.e. the equilibrium value of : the optimal strategy is to keep all the fit cells in the fit state , that is to set their migration rate to zero , .this leads to , and thus the optimal solution would be a homogeneous population .observations reveal however that most cell populations are not homogeneous ; to explain this , the authors of propose to introduce a small modification in the model by allowing environmental changes ( for related questions in this context , see e.g. ) , and show through monte - carlo simulations that the homogeneous solution is then not always optimal .the idea in their model is to allow the birth and migration rates to switch at random times from one colony to the other , so that cells in the fit colony become unfit and vice versa .if for example an environmental change occurs at some random time ( ) , then the function representing the proportion of fit cells solves ( [ fit ] ) up to time , and just after , say at time , the fit cells corresponding to become unfit and vice versa .the proportion of fit cells is then switched to .after , the random process solves ( [ fit ] ) with initial data at time , until a new environmental change occurs , say at time .there is a new switch , and the process is again solution of ( [ fit ] ) , until a new event occurs and so on . in ,the fluctuations of the environment are modeled using a renewal process ; the instants , , are such that the sequence of random variables given by , , is i.i.d .distributed according to some law on .the authors then use monte - carlo simulations to estimate the limiting value of the time averages along trajectories of the process , of the form this limiting average value is denoted by to express its dependency on the migration rate , when all the remaining parameters are fixed .their simulations indicate that there is a range of parameters ( not too large ) such that which means that heterogeneous populations are more adapted than homogeneous ones in a switching environment . 
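the monte-carlo estimate of the limiting time average just described is easy to mimic numerically. in the sketch below, between environmental switches the proportion x of fit cells evolves according to dx/dt = (b_fit - b_unfit) x (1 - x) + k_unfit_to_fit (1 - x) - k_fit_to_unfit x, at each switch x jumps to 1 - x, and the inter-switch times are drawn from an exponential law. this ode is a reconstruction from the verbal description of the birth-with-migration model and may differ from the authors' exact formulation, and the exponential switching law and all numerical values are assumptions for illustration; whether the time average is largest at a zero or at a positive fit-to-unfit migration rate depends on the parameter regime, which is precisely the question studied below.

```python
import numpy as np

rng = np.random.default_rng(2)

# assumed birth rates, unfit-to-fit migration rate, switching rate and horizon
b_fit, b_unfit = 2.0, 1.0
k_unfit_to_fit = 1.0
lam = 0.5                       # rate of the exponential inter-switch times
t_end, dt = 2000.0, 0.01

def drift(x, k_fit_to_unfit):
    """reconstructed ode for the proportion x of fit cells between switches"""
    return ((b_fit - b_unfit) * x * (1.0 - x)
            + k_unfit_to_fit * (1.0 - x) - k_fit_to_unfit * x)

def time_average(k_fit_to_unfit, x0=0.5):
    """estimate of the long-run time average of x along one trajectory"""
    x, t, acc = x0, 0.0, 0.0
    next_switch = rng.exponential(1.0 / lam)
    while t < t_end:
        if t >= next_switch:                 # environmental change: fit and unfit swap roles
            x = 1.0 - x
            next_switch += rng.exponential(1.0 / lam)
        x += dt * drift(x, k_fit_to_unfit)   # explicit euler step of the ode
        acc += dt * x
        t += dt
    return acc / t_end

for k in (0.0, 0.2, 0.5, 1.0):
    print(f"fit-to-unfit migration rate {k:.1f}: time-averaged fit fraction {time_average(k):.3f}")
```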
in this paper, we study mathematically the limiting behavior of the stochastic process and the associated time average by giving its stationary measure , and we provide mathematical formulas and numerical solutions , which might be of interest in practical laboratory experiments ( see e.g. ) .our technique uses the process , , which is such that , for some mapping ( see definition [ markovchain ] ) . is a stochastic recursive markov chain , and can be expressed as an additive functional of the trajectory of . in section [ s.stationary ] , we recall a theorem from on the convergence of stochastic recursive chains , which applies in this setting .we give conditions ensuring the existence and uniqueness of a stationary measure , as well as geometric ergodicity . in section [ stationarity ], we consider the case where is exponential of parameter , and show that has a density with respect to lebesgue measure .we furthermore prove in theorem [ ergodic ] that a multiple of solves a second order differential equation with weak singularities .proposition [ diffsolutions ] provides series expansions for , which are necessary to derive properties of near the singularities . in section [ s.numerical ] ,we show numerical solutions , using the series expansions of proposition [ diffsolutions ] to start the numerical integration .we provide an example where , which shows that it can be better to allow fit cells to migrate to the unfit state than to conserve all the fit cells in the fit state in such a switching environment .this is a regime where it is suitable for the colonies to anticipate bad hypothetical future events .we first give some basic results for the differential equation ( [ fit ] ) . the right hand side of ( [ fit ] ) can be factored into , where , , and .then implies that and that the derivative is positive when is in the interval , negative in ] will enter after an almost surely finite time .( however , implies . )we thus restrict our study to the interval .given , we define the mapping such that is the value of the solution of ( [ fit ] ) at time when starting at at time . using separation of variables for ( [ fit ] ), we obtain the relation where we set . given , let denote the time interval the orbit of the dynamical system ( [ fit ] ) needs to join and , , when starting at time at .then [ markovchain ] given , consider the markov chain with values in defined by where the sequence of random variables is i.i.d .distributed according to some law on .this markov chain describes the evolution of , at the instants just before the switches , with .we first recall and adapt results of on the convergence of such markov chains , also called _ stochastic recursive chains _ , see e.g. .the general setting is described by a complete separable metric space , the set of values taken by the markov chain , a family of mappings , indexed by parameters living in some parameter space , and a probability measure on . given an i.i.d .sequence of random elements , , of law , we can consider the markov chain given by .the following theorem gives conditions for the existence and uniqueness of a stationary measure ( theorem 1.1 of ) . in what follows, denotes the law of the markov chain and ] , for constants and with and ; this bound holds for all times and all starting positions , * the constant does not depend on or ; the constant does not depend on , and , where . 
in our setting , is given by and the parameter set is just .the prokhorov distance ] and write ( [ opdiff ] ) in matrix form : [w]^{\rm t}=0.\ ] ] if we write as , we get the lower triangular matrix =\left[\begin{array}{ccccc } \rho(\rho+\alpha_0 - 1)+\beta_0&0&\ldots\\ \rho \alpha_1+\beta_1&(\rho+1)(\rho+\alpha_0)+\beta_0&0&\ldots\\ \rho \alpha_2+\beta_2&(\rho+1)\alpha_1 + \beta_1&(\rho+2)(\rho+1+\alpha_0)+\beta_0&0&\ldots\\ \vdots&\vdots&\vdots&\vdots&\ddots \end{array}\right].\ ] ] a solution =[1,w_1,w_2,\ldots] ]may be calculated by the recursion scheme with these coefficients , the function is a solution of ( [ newdiff ] ) . from the general theory of linear differential equations in the complex planeit follows that is analytic in the disc of radius centered at , but the power series for might have a convergence radius .if is not an integer , another solution , linearly independent of , can be obtained in the same way from .if , however , is an integer , the corresponding matrix has the entry for , and we look in this case for a solution of the form . as is a solution , the terms in containing cancel and the function must satisfy the equation identifying with the infinite row =[1,w_1^{(2)},w_2^{(2)},\ldots]$ ] , we can write this in matrix form [w^{(2)}]^{\rm t}=-c[v_1,v_2,\ldots]^{\rm t}.\ ] ] for the right - hand side one checks easily that for and .therefore we can resolve the inhomogeneous linear system ( [ lindiff2 ] ) in the following way : 1 .we determine for in the same way as .2 . we set and determine the constant by the equation .3 . we determine the coefficients for by the recursion formula we shall not go into further details , for example present concrete formulas expressing the by the , because we do nt really need the solution of ( [ gendiff ] ) in our case , as we have shown in the proof of theorem [ ergodic ] . in order to find fundamental solutions near the singularity , we can apply the same method once more , but using the variable transformation one easily checks that in this case the indicial equation is and that therefore the two characteristic exponents at are we obtain thus the second fundamental system of solutions and . 3 ( 1998 ) ._ ergodicity and stability of stochastic processes _ , wiley series in probability and statistics , new - york .( 1960 ) . the strong law of large numbers for a class of markov chains ._ , * 31 , * 801803 . iterated random functions ._ siam review _ , * 41 , * 4576 . _higher transcendental functions . _bateman manuscript project .mcgraw - hill ( 2000 ) _ solving ordinary differential equations i , nonstiff problems _ , springer series in computational mathematics .second edition ._ analysis fr physiker und ingenieure _ , springer verlag .( 1973 ) birth , death and migration processes ._ biometrika _ , * 59 , * 4969 .( 1991 ) _ modelling biological populations in space and time _ , cambridge studies in mathematical biology .cambridge university press .( 2004 ) stochastic gene expression in fluctuating environments _ genetics _ , * 167 , * 523530 . | we study a stochastic model proposed recently in the genetic literature to explain the heterogeneity of cell populations or of gene products . cells are located in two colonies , whose sizes fluctuate as birth with migration processes in switching environment . we prove that there is a range of parameters where heterogeneity induces a larger mean fitness . _ keywords : _ gene expression , recursive chain , ergodic , stationary measure |
identified the phenomenon of shear dispersion in which a passive scalar , e.g. a chemical pollutant , released in a pipe poiseuille flow spreads along the pipe according to a diffusion law .the corresponding diffusivity , often termed effective diffusivity to distinguish it from molecular diffusivity , is inversely proportional to molecular diffusivity when the latter is small ( see also * ? ? ?* ; * ? ? ?this effective diffusivity is associated with a random walk along the pipe that results from the random sampling of the poiseuille flow by molecular brownian motion across the pipe .the diffusive description of this random walk , and the corresponding gaussian profile of the scalar concentration , of course only apply on time scales that are much longer than the lagrangian correlation time scale .shear dispersion is a striking example of a broad class of phenomena in which the interaction between fluid motion and brownian motion leads to a strong enhancement of dispersion and to effective diffusivities that are orders of magnitude larger than molecular diffusivity .the importance of these phenomena in applications , in particular industrial , biological and environmental applications , is obvious .this has motivated studies of effective diffusivity in many different flows ( see * ? ? ? * for a review ) .these include spatially periodic flows which can be analysed using the method of homogenisation .this method , which exploits the separation between the ( small ) scale of the flow and the ( large ) scale of the scalar field that emerges in the long - time limit , has proved highly valuable : it applies to more complicated flows , including time - dependent and random flows , and provides a unifying framework for methods used earlier .shear dispersion , in particular , can be regarded as a special case of homogenisation applied to periodic flows , where cells repeat in the along pipe direction and the flow in each cell is simple poiseuille flow . in the large literature on shear dispersion, efforts have been made to overcome the restriction to large times that underlies the diffusive approximation , and improved asymptotic estimates that capture some of the early - time behaviour have been obtained ( see for a review and for more recent results ) . for periodic flows , because the effective diffusivity is more difficult to compute , the focus has mainly remained on the derivation of asymptotic estimates and bounds , in particular in the limit of small molecular diffusivity ( e.g. * ? ? ?* ; * ? ? ?here we consider a different aspect .the characterisation of dispersion in the long - time limit by an effective diffusivity and hence by a gaussian scalar distribution holds only close to the centre of mass of the distribution : the results of homogenisation are in essence a manifestation of the central - limit theorem and apply only to particles displaced from the mean by distances .our aim is to go beyond this and describe the concentration far from the mean . 
to achieve this , we derive large - deviation estimates for the concentration , that is , we derive the rate function in an approximation of the form for the scalar concentration at position and time . [ figure [ fig : pdfcouette ] : cross - stream averaged concentration ( top panel ) and its logarithm ( bottom panel ) in a couette flow , at four successive times ( from left to right ; curves offset for clarity ) ; monte carlo results ( symbols ) are compared with the large - deviation and diffusive predictions ( solid and dashed lines ) . ] large - deviation theory extends the central - limit theorem and applies to numerous probabilistic problems ( e.g. * ? ? ? * ; * ? ? ? * ) . when applied to the stochastic differential equations governing the motion of fluid particles advected and diffused in a fluid flow , it naturally yields an improved approximation to the scalar concentration ( interpreted as a particle - position probability function , cf . ) . this approximation is valid for distances from the mean that are rather than and therefore captures the tails of the distribution . these are typically non - gaussian and not adequately represented by the diffusive approximation . this is illustrated in figure [ fig : pdfcouette ] by the example of dispersion in a plane couette flow , one of the shear flows considered in detail in this paper . the top panel shows the profile along the flow of the cross - stream averaged concentration at four successive times in the case of small molecular diffusivity . the figure compares the averaged concentration obtained numerically using a monte carlo simulation ( symbols ) with the gaussian , diffusive approximation ( dashed lines ) and the large - deviation approximation derived in 23 ( solid lines ) . the units of and have been chosen so that the maximum flow velocity and ( taylor ) effective diffusivity are both . the inadequacy of the diffusive approximation in describing the tails of the concentration and the superiority of the large - deviation approximation are apparent in the top panel for the earliest profile . they are obvious for all the profiles in the bottom panel , which displays the results using a logarithmic scale for . this emphasises the tails of to reveal how the diffusive prediction overestimates dispersion and to demonstrate the effectiveness of the large - deviation approximation . we note that while large deviation formally applies for , it appears here remarkably accurate for moderate . ( the discrepancies between large - deviation and monte carlo results for are mainly attributable to the limitations of the straightforward monte carlo method used here and are much reduced with the more sophisticated methods discussed in 3 . ) as the couette - flow example illustrates , large - deviation theory provides estimates of the low scalar concentrations in the tails , where the diffusive approximation fails . this makes it relevant to a range of applications in which low concentrations matter . examples include the prediction of the first time at which the concentration of a pollutant released in the environment exceeds a low safety threshold , and the quantification of the impact of stirring on chemical reactions in a fluid .
in such examples, there is a strong sensitivity of the response ( physiological or chemical ) to low scalar concentrations that makes the logarithm of the concentration , and hence the rate function , highly relevant quantities .this broad observation can be made precise for the certain classes of chemical reactions . for f - kpp reactions ( e.g.* ) , the combination of diffusion and reaction leads to the formation of concentration fronts that propagate at a speed that turns out to be controlled by the large - deviation statistics of the dispersion and given explicitly in terms of the rate function ( ; see also , ch .7 , , ch . 2 , and ) .the present paper starts in [ sec : formulation ] with a relatively general treatment of the large - deviation theory of dispersion which applies to time - independent periodic flows and to shear flows .the key result is a family of eigenvalue problems parameterised by a variable .the principal eigenvalue , , is the legendre transform of the rate function .these eigenvalue problems can be thought of as generalised cell problems in that they resemble and extend the cell problem that appears when homegenization is used to compute effective diffusivities . in [ sec : ldev][sec : prob ] we present two alternative derivations of the the eigenvalue problems : the first is a direct asymptotic method that treats the large - deviation form of the concentration as an ansatz ( see * ? ? ?* ) ; the second follows the standard probabilistic approach based on the ellis grtner theorem and considers the cumulant generating function of the particle position ( e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?we then discuss the relation between large deviation and homogenisation ( [ sec : hom ] ) .homogenisation , and the corresponding diffusive approximation , are shown to be recovered when the eigenvalue problems yielding are solved perturbatively for small up to errors .carrying out the perturbation expansion to higher orders provides a systematic way of improving on the diffusive approximation ; in the case of shear dispersion , this recovers earlier results .the rest of the paper is devoted to dispersion in specific shear and periodic flows .we compute the functions and for the classical couette and poiseuille flows in [ sec : shear ] by solving the relevant one - dimensional eigenvalue problem numerically .we also obtain asymptotic results for the concentration at small and large distances from the centre of mass . while the first limit recovers the well - known expression for the effective diffusivity of shear flows , the second captures the finite propagation speed that exists when diffusion along the pipe is neglected .this provides a transparent example of the limitations of the diffusive approximation .section [ sec : cell ] is devoted to a standard example of periodic flow , the two - dimensional cellular flow with streamfunction .the numerical solution of the corresponding eigenvalue problems for specific values of the pclet number ( measuring the relative strength of advection and diffusion ) reveals interesting features of the dispersion , such as anisotropy , that are not captured in the diffusive approximation . using a regular perturbation expansion, we derive explicit results in the limit of small .we examine the opposite , large - pclet - number limit in a companion paper ( * ? ? ?* hereafter part ii ) .we conclude the paper with a discussion in [ sec : disc ] . 
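Since the rate function is obtained from the principal eigenvalue by a Legendre transform, it is worth noting that this post-processing step is only a few lines once f has been tabulated on a grid of q values, and that the small-q curvature then gives back the effective diffusivity of the diffusive approximation. The sketch below illustrates the one-dimensional case; the convex f used in the demo is a placeholder for the computed eigenvalue, not a formula from the text.

```python
import numpy as np

def legendre_transform(q, f):
    """For a convex, differentiable f sampled on a grid, return the map xi = f'(q)
    and the rate function g(xi) = q xi - f(q) evaluated along that map (1-D case)."""
    xi = np.gradient(f, q, edge_order=2)
    return xi, q * xi - f

def diffusive_parameters(q, f):
    """Drift xi* = f'(0) and effective diffusivity f''(0)/2 by central differences
    around the grid point closest to q = 0."""
    i = int(np.argmin(np.abs(q)))
    h = q[i + 1] - q[i]
    return (f[i + 1] - f[i - 1]) / (2 * h), (f[i + 1] - 2 * f[i] + f[i - 1]) / (2 * h**2)

# demo with a placeholder f: quadratic near 0, linear growth at large |q|
q = np.linspace(-3.0, 3.0, 601)
f = np.sqrt(1.0 + 4.0 * q**2) - 1.0
xi, g = legendre_transform(q, f)
print("xi* and kappa_eff:", diffusive_parameters(q, f))        # expect (0, 2)
print("range of xi reached (finite transport speed):", xi.min(), xi.max())
```

The linear large-|q| behaviour of the placeholder f produces steep branches of g at finite xi, mimicking the near-finite propagation speed discussed later for shear flows.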
throughout the present paper and part ii, we verify the predictions of large - deviation theory against direct monte carlo simulations of particle dispersion .this is not without challenges since this requires estimating the tails of distributions which are associated with rare events and are , by definition , difficult to sample .we have therefore used importance sampling and implemented two methods that are applicable broadly .these are described in appendix [ sec : motecarlo ] .two other appendices are devoted to technical details of certain asymptotic limits .we start with the advection diffusion equation for the concentration of a passive scalar . using a characteristic spatial scale as reference length and the corresponding diffusive time scale , where is the molecular diffusivity , as a reference time, this equation can be written in the non - dimensional form [ eqn : ad - dif ] _ t c + c = ^2 c , where is the pclet number . here is the typical magnitude of the velocity field , which is assumed to be time independent , , and divergence free , . equation ( [ eqn : ad - dif ] ) can be considered as the fokker planck equation associated with the stochastic differential equation ( sde ) which governs the position of fluid particles , [ eqn : sde ] = ( ) t + , where denotes a brownian motion . in this interpretation and with , the initial condition for the concentration is and the concentration at later times can then be thought of as the transition probability for a particle to move from at to at .we focus on this initial condition and use the notation when the dependence on needs to be made explicit . in this paperwe consider two somewhat different flow configurations .the first , relevant to taylor dispersion , corresponds to parallel shear flows , with unidirectional and varying in the cross - flow direction only , and a domain that is bounded in this direction .the concentration then satisfies a no - flux condition at the boundary .the second configuration corresponds to a periodic in an unbounded domain . in both cases ,our interest is in the dispersion in the unbounded directions of the domain .the shear - flow configuration can essentially be regarded as a particular case of the more general periodic - flow configuration , with the domain extending over only one period in the streamwise direction and no - flux boundary conditions replacing periodicity conditions .because of this , we consider the two configurations together when developing the general large - deviation approach in the rest of this section .any ambiguity that may arise as a result will be clarified in [ sec : shear ] and [ sec : cell ] when we apply the approach separately to shear flows and to two - dimensional periodic flows and obtain explicit results .mixed configurations , in which the flow is periodic in certain directions and bounded in others , could also be treated with no essential changes .we are interested in the form of for . under the assumption that , the solution to ( [ eqn : ad - dif ] ) can be sought as the expansion [ eqn : expc ] c(,t|_0 ) = t^-d/2 ^-t g ( ) ( _ 0 ( , ) + t^-1 _ 1 ( , ) + ) , = ( -_0)/t , where is the number of spatial dimensions .this can be considered to be a wkb expansion with as large parameter .the leading - order approximation [ eqn : largedevi ] c(,t|_0 ) ~t^-d/2 ( , ) ^-t g ( ) , has the characteristic large - deviation form in which is the cramr or rate function ( e.g. * ? ? ?* ; * ? ? 
?* and references therein ) .the conservation of total mass the spatial integral of imposes that [ eqn : g0 ] _ g ( ) = 0 and explains the presence of the prefactor in ( [ eqn : largedevi ] ) , as an application of laplace s method shows .note that we concentrate on this leading - order approximation throughout and hence omit the subscript from .introducing the expansion ( [ eqn : expc ] ) into ( [ eqn : ad - dif ] ) and retaining only the leading order terms gives ( _ g - g ) = ^2 - ( + 2 _ g ) + ( _ g + reduces to [ eqn : eig1 ] ^2 - ( + 2 ) + ( + ||^2 ) = f ( ) , where can be regarded as a parameter .this can be rewritten compactly as [ eqn : expq ] ^ ( ^2 - ) ( ^- ) = f ( ) , in which the form of the operator on the left - hand side makes transparent the connection to the advection diffusion operator .the function satisfies no - flux boundary conditions when impermeable boundaries are present or periodic boundary conditions in the case of unbounded domains with periodic .equation ( [ eqn : eig1 ] ) is central to this paper . together with its associated boundary conditions, it gives a family of eigenvalue problems for parameterised by , with as the eigenvalue .solving these eigenvalue problems ( numerically in general ) provides as the principal eigenvalue , that is , the eigenvalue with largest real part . the rate function then recovered by noting from ( [ eqn : legendre ] ) that and are related by a legendre transform [ eqn : legendre1 ] f ( ) = _ ( - g ( ) ) g()= _ ( - f ( ) ) .the fact that the critical points of are suprema and the convexity of can be deduced from the probabilistic interpretation of discussed below .is differentiable ( e.g. * ? ? ?] it follows that [ eqn : qxi ] = _ f , which gives a one - to - one map between the parameter and the physical variable .the eigenfunction of ( [ eqn : eig1 ] ) associated with can therefore be equivalently thought of as a function of , as in ( [ eqn : largedevi ] ) , or of , as in ( [ eqn : eig1 ] ) .note that the maximum principle can be used to show that is real and that is sign definite ( e.g. * ? ? ?this is consistent with the asymptotics ( [ eqn : largedevi ] ) and the observation that the concentration is positive for all time if it is initially positive . to summarise , solving the eigenvalue problem ( [ eqn : eig1 ] ) for arbitrary and performing a legendre transform of the principal eigenvalue yields the large- approximation ( [ eqn : largedevi ] ) of the concentration .this approximation is valid for and thus , as discussed below , extends the standard diffusive approximation which requires .the eigenvalue problem ( [ eqn : eig1 ] ) can be thought of as a generalised cell problem since , as discussed in [ sec : hom ] , it generalises the cell problem of homogenisation theory . derive this eigenvalue problem as part of a floquet bloch theory for linear equations with periodic coefficients and term it ` shifted cell problem ' ( see also , and 4 below ) .an alternative view of the problem considers the moment generating function [ eqn : genfun ] w(,,t)= ^ , ( 0)=for the position of the fluid particles satisfying ( [ eqn : sde ] ) . here denotes the expectation over the brownian process in ( [ eqn : sde ] ) .the generating function obeys the backward kolmogorov equation [ eqn : backkol ] _ t w = w + ^2 w , w(,,0)=^ ( e.g. * ? ? ?* ; * ? ? 
?a solution can be sought in the form [ eqn : w ] w(,,t)=^+ f ( ) t ^ ( , ) , where the function remains to be determined but will shortly be identified with that in ( [ eqn : legendre ] ) .introducing ( [ eqn : w ] ) into ( [ eqn : backkol ] ) leads to [ eqn : eig2 ] ^2 ^+ ( + 2 ) ^+ ( + | |^2 ) ^= f ( ) ^ , with no - flux or periodic boundary conditions .this corresponds to a family of eigenvalue problems , again parameterised by , which are the adjoints of those in ( [ eqn : eig1 ] ) , and hence have the same eigenvalues and in particular the same principal eigenvalue , justifying the notation in ( [ eqn : w ] ) .this eigenvalue controls for . as a result, it can alternatively be defined by [ eqn : f ] f ( ) = _ t ^(t ) and interpreted as the limit as of the cumulant generating function scaled by .this function is convex by definition .the relationship between the large- asymptotics of encoded in and that of can be made obvious .noting from the definition ( [ eqn : genfun ] ) that is the legendre transform with respect to of with the variable dual to , we apply laplace s method to obtain where denotes the asymptotic equivalence of the logarithms as and we use ( [ eqn : largedevi ] ) to write . from ( [ eqn : w ] )we obtain the first part of ( [ eqn : legendre1 ] ) . under the assumption of differentiability of , which ensures that is convex , the second part follows , allowing the computation of the rate function . the argument used in this subsection , which relies on laplace s method to establish a connection between rate function and scaled cumulant generating function ,is an instance of the grtner ellis theorem , a fundamental result of large - deviation theory which extends cramr s treatment of the sum of independent random numbers ( see , e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?rigorous results for a problem very similar to that defined above can be found in ( * ? ? ?* ch . 7 ) .it may be worth contrasting the large - time ( ) large deviations discussed in this paper , with the small - noise ( ) large deviations developed by freidlin & wentzell ( see * ? ? 
?* ) : while for small noise a single ( maximum - likelihood or instanton ) trajectory controls the rate function , this is not generally the case for large time .as we discuss in the case of shear flows in 3 , it is only for and sufficiently large that can be expressed in terms of single trajectory and that the two forms of large deviations intersect .some properties of and are useful to infer properties of the dispersion directly from without the need to carry out the legendre transform explicity .as noted , and are convex .therefore , from ( [ eqn : qxi ] ) , increasing correspond to increasing , and can be thought of as a proxy for the more physical variable .it is clear from ( [ eqn : f ] ) that ; correspondingly , _f(0)= _ * , defines which , by ( [ eqn : legendre1 ] ) , minimizes .( [ eqn : largedevi ] ) then indicates that the maximum of and its centre of mass are located at .qualitatively the legendre transform implies that a slow growth of away from its minimum corresponds to a rapid growth of and vice versa .in particular , linear asymptotes for , say as in the one - dimensional case , correspond to vertical asymptotes for , as .this implies that vanishes for , reflecting a finite maximum transport speed for the scalar .exactly linear asymptotes do not arise for because the eigenvalue problem ( [ eqn : eig1 ] ) for has the simple solution which corresponds to a purely diffusive behaviour .however , for large , there can be a range of values of for which is approximately linear and a finite transport speed controls scalar dispersion .much of the literature on scalar dispersion focuses on the computation of an effective diffusivity governing the dispersion for and . in this approximation , ( [ eqn : ad - dif ] ) reduces to the diffusion equation [ eqn : effdiff ] _ t c + c = ( c ) , where is the spatial average of , and is an effective diffusivity tensor .alternatively , and can be obtained from the particle statistics using [ eqn : effdiff2 ] _ t = _ t ( - t ) ( - t ) = .the form of has been derived for a variety of flows using several essentially equivalent methods , starting with taylor s work on shear flows . in the last 20 years ,homogenisation , as reviewed in and , has become the systematic method of choice .the diffusive approximation ( [ eqn : effdiff ] ) can be recovered from the more general large deviation results : since the assumption implies that and hence that , we can expand according to f()= _ * + _ f + o(||^3 ) , where is the hessian of evaluated at . taking the legendretransform gives g ( ) ~ ( - _ * ) _ f^-1 ( - _ * ) . in this approximationthe concentration is c(,t|_0 ) ^-(- _ * t ) _f^-1 ( - _ * t)/(2 t ) corresponding to the solution of ( [ eqn : effdiff ] ) with [ eqn : effdiff3 ] = _ * = _ f/2 .this result also follows from ( [ eqn : effdiff2 ] ) noting that the mean and covariances that appear on the left - hand sides are given by the first and second derivatives with respect to of the cumulant generating function evaluated .since the diffusive approximation is recovered from the large - deviation results by an expansion for small , it can be expected that the method of homogenisation is equivalent to the perturbative solution of the eigenvalue problem ( [ eqn : eig1 ] ) or ( [ eqn : eig2 ] ) .this is plainly the case .consider the periodic - flow configuration and assume that for simplicity .expanding = 1 + || _ 1 + ||^2 _ 2 + f = || _ 1 + ||^2 _ 2 + , and introducing this into ( [ eqn : eig1 ] ) yields at , where is a unit vector . 
averaging this equationgives that .the solution is then written as in terms of the periodic , zero - average solution of the so - called cell problem [ eqn : cellprob ] ^2 - = . ( see * ? ? ? * ) . at order , the eigenvalue problem reduces to averaging gives where the second equalities follows after some manipulations using ( [ eqn : cellprob ] ) ( see * ? ? ? * for details ) .this corresponds to an effective diffusivity with components which is the standard homogenisation result .an analogous computation detailed in appendix [ sec : exp ] shows how the homogenisation results for shear flows are recovered from the large - deviation calculation .the perturbative solution of the eigenvalue problem ( [ eqn : eig1 ] ) offers a route for the systematic improvement of the diffusive approximation .such improvements , which have been derived for shear flows by , and others ( see * ? ? ?* for a review ) , extend the diffusion equation ( [ eqn : effdiff ] ) to include higher - order spatial derivatives and increase the accuracy of the approximation for .they lead to effective equations of the form [ eqn : gendiff ] _ t c + c = _ ij _ ij c + _ ijk^(3)_ijkc + _ ijkl^(4 ) _ ijlk c + , where summation over repeated indices is understood and we have introduced higher - order effective tensors , etc .the behaviour of the large - deviation function as encodes all these tensors .this can be deduced from the large - deviation form ( [ eqn : largedevi ] ) of the concentration which implies that and . combining theseformally leads to the effective equation [ eqn : efff ] _ t c = f(- ) c. comparison with ( [ eqn : gendiff ] ) shows that the various effective tensors that appear are given as derivatives of at .hence they can be computed by continuing the perturbative solution of the eigenvalue problem ( [ eqn : eig1 ] ) to higher orders in .this is demonstrated to for shear flows in appendix [ sec : exp ] .another kind of improvement captures finite - time effects , specifically the fact that the mean and variance of the particle position have corrections to their linear growth which depend on initial conditions .these corrections have been computed for some shear flows and termed ` initial displacement ' and ` variance deficit ' .although we do not consider them further in what follows , it can noted that eq .( [ eqn : backkol ] ) for the moment generating function is exact .its solution for finite time can be expressed as a series of the form , where and denote the complete set of eigenvalues and eigenfunctions of ( [ eqn : eig2 ] ) .the constants can be determined from the initial condition of the concentration .it is clear , then , that the first 2 terms in the taylor expansion of , where the mode corresponds to the eigenvalue , determine the initial displacement and variance deficit ; the other eigenvalues contribute to exponentially small corrections . in the rest of the paper , we apply the results of this section to several specific shear and periodic flows .we start with the case of shear flows for which the eigenvalue problems ( [ eqn : eig1 ] ) and ( [ eqn : eig2 ] ) simplify considerably .consider the advection by a parallel shear flow in two dimensions , in a channel of width corresponding to for the dimensionless coordinate . without loss of generality ( exploiting a suitable galilean transformation as necessary ) the velocity can be assumed to satisfy [ eqn : zeroav ] u = _-1 ^ 1 u(y ) y = 0 . 
because it is the longitudinal dispersion that is of interest , we modify ( [ eqn : largedevi ] ) andtake the large - deviation form of the concentration to be [ eqn : cdisp ] c(,t ) ~t^-1/2 ( y , ) ^-t g ( ) , = ^-1 x / t , assuming .similarly , we write the moment generating function as [ eqn : w - disp ] w(q,,t ) = ^^-1 q x ^^-1 q x + f(q ) t ^(y ) . note that and depend only on the longitudinal variables and and that can be taken -independent because of the -independence of the flow .the factors are introduced in ( [ eqn : cdisp])([eqn : w - disp ] ) for convenience : they lead to a legendre pair of functions and that are independent of in the limit , at least for . the eigenvalue problem ( [ eqn : eig1 ] ) then reduces to the schrdinger form [ eqn : eig - disp ] + ( q u(y ) + ^-2 q^2 ) = f(q ) .this one - dimensional eigenvalue problem is completed by the no - flux boundary conditions [ eqn : bc - disp ] ( -1)=(1)=0 .note that the operator in ( [ eqn : eig - disp ] ) is self adjoint and hence the same equation arises for the eigenvalue problem ( [ eqn : eig2 ] ) for associated with the moment generating function .note also that ( [ eqn : eig - disp ] ) can be derived more directly using the feynman kac formula . to see this , write ( [ eqn : sde ] ) explicitly as [ eqn : sdeshear ] x = u(y ) t + w_1 ,y = w_2 , and note that . the generating function ( [ eqn : w - disp ] )then becomes using the feynman kac formula ( e.g. * ? ? ?* ) , is seen to satisfy and hence , for , to depend on as with the principal eigenvalue in ( [ eqn : eig - disp ] ) .alternatively , ( [ eqn : eig - disp ] ) is obtained when seeking normal - mode solutions of the form to the advection diffusion equation ( [ eqn : ad - dif ] ) provided that the identification and is made .the large - deviation form of is then recovered by applying the steepest - descent method to the normal - mode expansion of .the large - deviation approach makes it clear that the saddle point in the plane is on the imaginary axis with a purely imaginary associated frequency .below we solve ( [ eqn : w - disp])([eqn : bc - disp ] ) numerically for some classical shear flows . several general remarks can already be made .first , the term proportional to in ( [ eqn : eig - disp ] ) is associated with longitudinal ( molecular ) diffusion . for , it can be neglected for , leading to the simpler eigenvalue problem [ eqn : eig - disp2 ] + q u(y ) = f(q ) which makes clear that and hence are independent of in the limit with .the large - deviation form of can be written in terms of dimensional variables and as [ eqn : shearscale ] c(_*,t _ * ) ^-a^-2 t _ * g(x_*/(ut _ * ) ) , and its range of validity as and . inwhat follows , we mostly concentrate on the limit and solve ( [ eqn : eig - disp2 ] ) rather than ( [ eqn : eig - disp ] ) : the effect of the neglected longitudinal diffusion on is straightforward , since it simply adds , but the corresponding change in is somewhat more complicated .it is nonetheless a simple matter to estimate the size of for which the neglect of longitudinal diffusivity ceases to be a good approximation .second , the perturbative solution of eigenvalue problem ( [ eqn : eig - disp ] ) for , provides an effective diffusivity as sketched in [ sec : hom ] . 
in terms of , the dimensional effective diffusivity is expressed from ( [ eqn : shearscale ] ) as [ eqn : effdiff4 ] _ * = f(0 ) , and is inversely proportional to the molecular diffusivity in the limit .the perturbative calculation carried out in appendix [ sec : exp ] gives [ eqn : fshear ] f(0 ) = ( _ -1^y u(y ) y ) ^2 . and recovers the explicit form of as obtained using homogenisation ( e.g. * ? ? ?* ; * ? ? ?the first of the corrections to the diffusive approximation of and is also computed in appendix [ sec : exp ] .third , the asymptotics of ( [ eqn : eig - disp2 ] ) indicates that tends to as , where denote the maximum and minimum velocities in the channel .this can be seen by noting that is the lowest eigenvalue of a schrdinger operator which , in the semiclassical limit , is given by the minimum of the potential ( e.g. * ? ? ?the implication , as discussed in [ sec : prob ] , is that as . physically, this corresponds to the fact that fluid particles have longitudinal velocities in the range ] .this is only an approximation of course : when longitudinal molecular diffusion is taken into account , there is no limit on the propagation speed .it is readily seen that the term becomes comparable to in for and that the rate function is approximately the diffusive for near ( ) or larger ( smaller ) .this form of can also be shown to arise from an application of the small - noise large - deviation theory and is controlled by a single maximum - likelihood trajectory .( this applies only when is sufficiently large : the dimensional expression ( [ eqn : shearscale ] ) makes this clear , with an argument of the exponential that scales like whereas the small - noise large deviation necessarily leads to a scaling , corresponding to a factor with our non - dimensionalisation . )finally , we note that the eigenfunctions , where the dependence is inferred from the -dependence using , have a simple interpretation . for amount of scalar at for can be approximated as [ eqn : cond ] _t^c(x , y , t ) x ( , y ) ^-t g( ) , since , by the convexity of , the integral is dominated by the contribution of the endpoint . therefore gives the scalar distribution across the shear flow of particles with average speed greater than .similarly , for , gives the distribution of particles with speed less than .we now examine classical shear flows , starting with the plane couette flow [ eqn : couette ] u(y)=y .the dispersion in this flow is illustrated in figure [ fig : pdfcouette ] .the figure shows how the diffusive and large - deviation approximations provide a good approximation in the core of the scalar distribution and how only large deviation captures the tails .figure [ fig : pdfcouette ] does not resolve the tails of with sufficient detail to assess the validity of the large - deviation approximation fully , however . 
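A straightforward Monte Carlo of the particle dynamics makes these remarks concrete. The sketch below integrates the Langevin equations for plane Couette flow u(y) = y in |y| < 1 with the non-dimensionalisation used in this paper (molecular diffusivity one, velocity multiplied by Pe), reflects particles at the walls, and prints the variance-based estimate of the effective diffusivity together with the fraction of particles that have outrun the maximal advection distance Pe t. The parameter values are illustrative, and no importance sampling is used, so the far tails are not resolved; that limitation is addressed by the methods of the importance-sampling appendix.

```python
import numpy as np

rng = np.random.default_rng(2)

def couette_positions(pe, T, n, dt=2e-3):
    """Euler-Maruyama integration of dx = Pe u(y) dt + sqrt(2) dW1, dy = sqrt(2) dW2
    for plane Couette flow u(y) = y in |y| < 1, with reflecting walls; returns x(T)."""
    x = np.zeros(n)
    y = rng.uniform(-1.0, 1.0, n)
    s = np.sqrt(2.0 * dt)
    for _ in range(int(round(T / dt))):
        x += pe * y * dt + s * rng.standard_normal(n)
        y += s * rng.standard_normal(n)
        y = np.where(y > 1.0, 2.0 - y, y)      # reflect at y = 1
        y = np.where(y < -1.0, -2.0 - y, y)    # reflect at y = -1
    return x

pe, n = 20.0, 10_000
for T in (5.0, 10.0, 20.0):
    x = couette_positions(pe, T, n)
    print(f"T = {T:4.1f}   var(x)/(2T) = {x.var()/(2*T):7.2f}   "
          f"fraction with |x| > Pe*T = {np.mean(np.abs(x) > pe*T):.1e}")
```

The variance-based ratio approaches the Taylor-enhanced effective diffusivity once T exceeds the cross-channel diffusion time, while the essentially empty region |x| > Pe t reflects the near-finite transport speed just discussed.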
inwhat follows , we test systematically the large - deviation prediction for , defined as [ eqn : f - disp ] f(q ) = _ t ^^-1 q x(t ) with our shear - flow scaling , by comparing the value obtained by solving the eigenvalue problem ( [ eqn : eig - disp ] ) for a range of with careful monte carlo estimates .the eigenvalue problem is solved using a finite - difference scheme .( an exact solution can be written in terms of airy functions , but it is not particularly illuminating ) .the monte carlo estimates approximate the right - hand side of ( [ eqn : f - disp ] ) as an average over a large number of solutions of ( [ eqn : sdeshear ] ) .however , a straightforward implementation does not provide a reliable estimate for except for small values of .this is because for moderate to large is controlled by rare realisations which are not sampled satisfactorily . to remedy this , it is essential to use an importance - sampling technique which concentrates the computational effort on these realisations .for the results reported in this paper , we have implemented a version of grassberger s pruning - and - cloning technique which we describe in appendix [ sec : resamp ] .results for the plane couette flow are displayed in the leftmost panels of figure [ fig : shearall ] .the top panel shows the eigenvalue and monte carlo approximations of along with asymptotic approximations valid for small and large .the small- approximation for is found from ( [ eqn : fshear ] ) as [ eqn : smallqcouette ] f(q ) ~ q^2 q 0 .the large- approximation is obtained by noting that for , the solution to ( [ eqn : eig - disp2 ] ) is localised in boundary layers near .concentrating on , we introduce and into ( [ eqn : eig - disp2 ] ) . to leading order , this gives - y = , with solution decaying as .imposing the boundary condition at gives the equation for .hence we have [ eqn : largeqcouette ] f(q ) ~|q| - 1.019 |q|^2/3 |q| , using symmetry to deal with .the top left panel of figure [ fig : shearall ] confirms the validity of the eigenvalue calculation and of the asymptotic estimates . in the case of the estimates , a constantis added to ( [ eqn : largeqcouette ] ) to ensure a good match ; with this correction , the asymptotic formula appears to be accurate for as small as , say . 
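For reference, a self-contained finite-difference version of this eigenvalue computation is sketched below: it discretises d²/dy² + q u(y) on [-1, 1] with no-flux (Neumann) boundary conditions for the Couette profile, neglecting longitudinal diffusion as in ( [ eqn : eig - disp2 ] ), and takes the largest eigenvalue as f(q). The ratio f(q)/q² printed for small q exposes the quadratic (Taylor) regime, and the last column is the large-q asymptote quoted above, up to the additive constant mentioned in the text; the grid size and q values are illustrative choices.

```python
import numpy as np

def f_of_q(q, u, n=400):
    """Largest eigenvalue of d^2/dy^2 + q u(y) on [-1, 1] with Neumann conditions
    (second-order finite differences; ghost-point treatment of the boundaries)."""
    y = np.linspace(-1.0, 1.0, n)
    h = y[1] - y[0]
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = -2.0 / h**2 + q * u(y[i])
        if i > 0:
            A[i, i - 1] = 1.0 / h**2
        if i < n - 1:
            A[i, i + 1] = 1.0 / h**2
    A[0, 1] = 2.0 / h**2        # phi'(-1) = 0
    A[-1, -2] = 2.0 / h**2      # phi'(+1) = 0
    return np.max(np.linalg.eigvals(A).real)

u = lambda y: y                  # plane Couette flow
for q in (0.1, 1.0, 5.0, 20.0, 50.0):
    fq = f_of_q(q, u)
    print(f"q = {q:5.1f}   f(q) = {fq:9.4f}   f(q)/q^2 = {fq/q**2:7.4f}   "
          f"|q| - 1.019 |q|^(2/3) = {abs(q) - 1.019*abs(q)**(2/3):9.4f}")
```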
the dispersive approximation corresponding to the parabola ( [ eqn : smallqcouette ] ) overestimates for all , indicating that this approximation overestimates the speed of dispersion or equivalently the magnitude of the tails of the distribution .the rate function is shown in the second row of figure [ fig : shearall ] .the solid curve is obtained by legendre transforming the function computed by numerical solution of the eigenvalue problem .this is compared with direct monte carlo estimates .again , it is crucial to use importance sampling to obtain a reliable estimate of for not small .we have chosen to integrate a modified dynamics in which particles , instead of simply diffusing in the -direction , also experience of drift towards the wall at ( or ) .a better sampling is obtained because the wall regions control for large ; the method is described in appendix [ sec : girs ] .the figure also shows the asymptotic approximations for deduced from ( [ eqn : smallqcouette ] ) and ( [ eqn : largeqcouette ] ) by legendre transform and given by [ eqn : asxicouette ] g ( ) ~ ^2 0 g ( ) ~ 1 .the match between the values of derived from the eigenvalue problem and those obtained by monte carlo sampling provides a direct check on the validity of the large - deviation theory .the discrepancy between the exact and its diffusive approximation confirms that diffusion overestimates the dispersion speed , as inferred already from the plot of . the finite support of the concentration distribution for $ ] , arising from the neglect of longitudinal molecular diffusion , is also hinted at by the large slopes of for .the large- approximation to ( with term fixed by inspection ) is seen to be accurate for and could be combined with the small approximation to provide a satisfactory uniform approximation . the third panel on the left of figure [ fig : shearall ] shows the map between that arises as part of the legendre transform .this identifies the location which control the corresponding exponential moment for large .finally , the fourth panel shows profiles of the eigenfunctions of ( [ eqn : eig - disp ] ) for several values of . according to ( [ eqn : cond ] ) ,these give the structure of the concentration profile for larger than .thus , for instance , the eigenfunction for approximately corresponds to ( see third panel ) . as and hence increase ( or decrease )the profile becomes more and more localised in the region of maximum ( or minimum ) velocity , that is , near ( ) .the eigenfunctions for finite are to be contrasted with the standard ( homogenisation ) results on taylor dispersion which correspond to eigenfunctions that are small , perturbations to the uniform eigenfunction . [ cols="^,^,^,^ " , ] the eigenfunctions of ( [ eqn : eig1 ] ) shown in figure [ fig : efunctionpe250 ] for three different values of illustrate different regimes of dispersion that arise at increasingly larger distances from the scalar - release point . for small and hence for small , is almost uniform : a gentle gradient in the cell interiors is compensated by a rapid change in boundary layers that appear around the separatrices in agreement with the homogenisation treatment . 
for larger and , inside the cell is no longer close to uniform ; instead , it is approximately constant along streamlines but varies across streamlines , from small values at the centre to large values near the separatrices .again , boundary layers around the separatrices ensure periodicity .finally , for large and , is close to zero in the cell interiors and the scalar is confined within boundary layers .this qualitative description of the eigenfunctions is consistent with the evolution of the scalar field shown in figure [ fig : simu250 ] ; it is supported by the asymptotics results reported in part ii .this paper discusses the statistics of passive scalars or particles dispersing in fluids under the combined action of advection and molecular diffusion .it shows how large - deviation theory provides an approximation to the scalar concentration or particle - position pdf in the large - time limit .this approximation , expressed in terms of the rate function , is valid in the tail of the distribution as well as in the core ; it considerably generalises the more usual diffusive approximation which characterises the dispersion by a single effective - diffusivity tensor .the rate function is deduced from the solution of the generalised cell problem ( [ eqn : eig1 ] ) , a one or two - parameter family of eigenvalue problems that generalise the cell problem solved when computing the effective diffusivity using the method of homogenisation .the application to shear flows reveals features of the dispersion that are not captured by the standard theory of shear dispersion initiated by .in particular , it shows that the diffusive approximation dramatically overestimates scalar concentrations far away from the centre of mass .the reason for this is that the mechanism underlying shear dispersion the interaction between shear and cross - stream molecular diffusion leads to along - flow dispersion with a finite speed , namely the maximum flow speed .the non - zero concentrations beyond the limits imposed by this finite speed are entirely attributable to molecular diffusion and thus controlled by molecular rather than effective diffusivity . at intermediate distances from the centre of mass , however ,the non - diffusive effects can in some cases increase and in some cases decrease dispersion .this can be detected in some of the results for standard shear flows displayed in figure [ fig : shearall ] or be deduced from the order - by - order corrections to the diffusive approximation discussed in [ sec : hom ] .our analysis of spatially periodic flows and , in particular , of the classical cellular flow further demonstrates the benefits of large - deviation theory over homogenisation and the resulting diffusive approximation .the anisotropy of the dispersion in this flow , for instance , although a clear consequence of the streamline arrangement , is overlooked by the diffusive approximation but quantified by large deviation . 
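For the cellular flow, the single effective-diffusivity tensor of the diffusive approximation can be computed by solving the cell problem numerically. The sketch below takes the streamfunction sin(x) sin(y) as a representative cellular flow (an assumption: the text's streamfunction is not reproduced here), writes the corrector equation as lap(chi) - Pe u.grad(chi) = Pe u_x and solves it by a simple Picard iteration in Fourier space, which converges only for moderate Pe; the x-direction effective diffusivity is then 1 + <|grad chi|^2> in units of the molecular value. Sign and normalisation conventions differ between references, so this is a sketch rather than a transcription of the paper's formulas, and the anisotropy of the rate function discussed above is by construction invisible at this level.

```python
import numpy as np

def cellular_kappa_xx(pe, n=64, tol=1e-10, max_iter=5000):
    """Effective diffusivity (x direction, units of the molecular value) for the
    cellular flow psi = sin x sin y, from the cell problem
    lap(chi) - Pe u.grad(chi) = Pe u_x solved by Picard iteration in Fourier space."""
    xs = 2.0 * np.pi * np.arange(n) / n
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    ux = np.sin(X) * np.cos(Y)            # u = (psi_y, -psi_x), divergence free
    uy = -np.cos(X) * np.sin(Y)
    k = np.fft.fftfreq(n, d=1.0 / n)      # integer wavenumbers
    KX, KY = np.meshgrid(k, k, indexing="ij")
    lap = -(KX**2 + KY**2)
    inv_lap = np.zeros_like(lap)
    inv_lap[lap != 0] = 1.0 / lap[lap != 0]

    def gradient(field):
        fh = np.fft.fft2(field)
        return (np.real(np.fft.ifft2(1j * KX * fh)),
                np.real(np.fft.ifft2(1j * KY * fh)))

    chi = np.zeros((n, n))
    for _ in range(max_iter):
        gx, gy = gradient(chi)
        rhs = pe * (ux * gx + uy * gy) + pe * ux
        chi_new = np.real(np.fft.ifft2(inv_lap * np.fft.fft2(rhs)))
        if np.max(np.abs(chi_new - chi)) < tol:
            chi = chi_new
            break
        chi = chi_new
    gx, gy = gradient(chi)
    return 1.0 + np.mean(gx**2 + gy**2)

for pe in (0.5, 1.0, 2.0):
    # under these conventions the small-Pe enhancement is Pe^2 / 8 for this flow
    print(f"Pe = {pe:3.1f}   kappa_xx = {cellular_kappa_xx(pe):.4f}   "
          f"small-Pe estimate = {1 + pe**2 / 8:.4f}")
```

For larger Pe the Picard iteration should be replaced by a proper linear solver; the point of the sketch is only to show how the cell problem enters the diffusive approximation that large deviation generalises.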
as for shear flows , there is also a finite speed effect for the dispersion in cellular flow ; this is more subtle and is elucidated in part ii which devoted to a detailed analysis to the large- limit .the differences between the diffusive and large - deviation approximations for the scalar concentration are significant at large enough distances away from the centre of mass of the scalar .since the concentration at such distances is small , large deviation applied to problems involving purely passive scalars is of practical importance in situations where low concentrations matter , as would be the case , for instance , for very toxic chemicals . in such applicationsthe logarithm of the concentration is often a relevant measure of the chemical s impact ; it is read off from the rate function since .as mentioned in 1 , for scalars that are reacting , the properties of dispersion at large distances embodied in can be critical in determining the main features of the scalar distribution .this was made explicit in the work of and which relates the speed of propagation of fronts for scalars experiencing f - kpp - type reactions to the rate function characterising passive dispersion .following from this relationship , the results of the present paper and of part ii can be used to predict front speeds in a range of shear and periodic flows .we will report elsewhere the novel predictions that can be obtained in this manner .we conclude by remarking that the large - deviation treatment of scalar dispersion can be extended to a class of flows much broader than that considered in the present paper .dispersion in time - periodic flows , random flows and turbulent flows can also be characterised by a rate function to improve on the approximation provided by effective diffusivity . in the time - periodic casean extension of the theory discussed in 2 is straightforward : the eigenfunction in ( [ eqn : largedevi ] ) should depend on as well as on and , leading to an additional term in the eigenvalue problem ( [ eqn : eig1 ] ) and to the requirement that be time periodic which determines the eigenvalue . in the random case , under the assumption of homogeneous and stationary statistics for , is determined by the analogous requirement that , a random function , be homogeneous and stationary .implementing this requirement is not straightforward , however , and monte carlo methods with importance sampling of the types described in appendix b may be best suited for the computation of the rate function .* acknowledgments .* jv acknowledges support from grant ep / i028072/1 from the uk engineering and physical sciences research council .it follows from the scaled large - deviation form of for shear flows ( [ eqn : cdisp ] ) that in these expressions , is related to by and factors describing the error in the wkb - like expansion ( [ eqn : cdisp ] ) are omitted .thus if we write [ eqn : exp ] f(q ) ~_n=1^n _ n q^n , an equation for follows in the form the solution to this equation gives for a form similar to ( [ eqn : cdisp ] ) with approximated by the legendre transform of the -term taylor expansion of at .in particular , truncating at gives the dispersive approximation with effective diffusivity ( [ eqn : effdiff ] ) .the perturbative solution of ( [ eqn : eig - disp ] ) is straightforward : introducing ( [ eqn : exp ] ) and into ( [ eqn : eig - disp ] ) and omitting the term in gives at the first three orders , [ eqn:3eqns ] = _ 1 - u , = _ 2 + _ 1 _ 1 - u _ 1 =_ 3 + _ 2 _ 1 + _ 1 _ 2 - u _ 2 . 
integrating the first equation and using ( [ eqn : zeroav ] ) gives and = - _ -1^y u(y ) y. an explicit expression for follows , which can be chosen such that .integrating the second equation in ( [ eqn:3eqns ] ) and using the above gives [ eqn : alpha2 ] _ 2 = u _ 1 = ( _ -1^y u(y ) y ) ^2 . up to the factor , this is the effective diffusivity of taylor and homogenisation theory .the function can then computed explicitly and the condition imposed .finally , integrating the third equation in ( [ eqn:3eqns ] ) gives [ eqn : alpha3 ] _3 = u _ 2 = u _ 1 ^ 2 , in agreement with .note that the analogue of ( [ eqn : alpha2 ] ) for pipe flows is [ eqn : alpha2ax ] _ 2 = 2 _ 0 ^ 1 ( _ 0^r r u(r ) r ) ^2 .we test the theoretical results by estimating the cumulant generating function from monte carlo simulations .this relies on solving ( [ eqn : sde ] ) for an ensemble of trajectories , then computing [ eqn : samp ] w_k(t ) = _k=1^k w^(k)(t ) , w^(k)(t ) = ^^(k)(t ) , for fixed . since as , for and large . when is small , this method provides a good estimate of with moderately large , say or . for of order one or large ,obtaining even a crude estimate of requires an exceedingly large number of realisations .this is because the cumulant generating function is determined by exponentially rare , hence difficult to sample , realisations whose weights are exponentially larger than those of typical realisations . to estimate accurately with a reasonable number of realisations ,it is necessary to use an importance - sampling method which concentrates the computational efforts on realisations that dominate ( [ eqn : samp ] ) .we have adopted a simple method based on grassberger s pruning - and - cloning technique ( see also * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) which we now describe .every few time steps in the numerical integration of ( [ eqn : sde ] ) , the current weight of each realisation is compared to the average . if , where is a parameter of the method ( typically chosen as or ) , the realisation is cloned : an additional realisation is created and integrated forward from the initial condition .the two clones subsequently follow different trajectories , for because they experience different brownian motions .the statistics of are left unchanged provided that the weight of the cloned realisations is divided by , that is , the weights in ( [ eqn : samp ] ) are multiplied by additional factors of for each cloning experienced by realisation . if , on the other hand , the realisation is pruned : it is killed with probability and , if surviving , its weight is multiplied by . 
to keep the number of realisations constant ,random realisations are either cloned or killed .we have implemented a slight extension of the method described in which the number of clones for realisations with , is .the resampling steps make the method very efficient , and the results reported in the paper typically required a few minutes of computation on a modest desktop computer .crucial to this efficiency is the fact that the cloning - pruning process tailors the ensemble of realisations to a particular value of by selecting those which dominate .the rate function can be estimated directly by monte carlo simulations , using a binning procedure to approximate .this is of course highly inefficient for the parts of away from its minimum since these are controlled by exponentially rare realisations which are poorly sampled .one way of remedying this is to integrate a modified dynamics following the importance - sampling technique discussed in . for shear flows ,we have adopted the following approach .the modified dynamics , denoted by tilde , is given by [ eqn : sdemod ] x = u(y ) t + w_1 , y = r(y ) t + w_2 , instead of ( [ eqn : sdeshear ] ) . here is a function chosen so that the distribution of better samples the regions where is large ( or small ) which control for away from .girsanov s formula relates averages under the original dynamics ( [ eqn : sde ] ) to averages under this modified dynamics according to .thus can be approximated by integrating numerically ( [ eqn : sdemod ] ) for an ensemble of trajectories and using a discretised version of the relation this result is used for to estimate the tails of and hence the form of for large with a much better sampling than achieved with the original dynamics . for the numerical results reported in [ sec : couette][sec : poiseuille ] , we have used to efficiently sample the portion of controlled by trajectories that remain localised near the wall at ( leading to anomalously large for couette flow and anomalously small for poiseuille flow ) , and to sample trajectories localised near the maximum of the plane poiseuille flow . the value of the parameter was chosen by trial - and - error to obtain the best representation of a portion of the curve . a similar modified dynamics for both and used in the case of the pipe poiseuille flow in [ sec : pipe ] .in the limit , the eigenvalue problem ( [ eqn : eig1 ] ) can be solved perturbatively by introducing the expansions of the eigenfunctions and eigenvalue into ( [ eqn : eig1 ] ) .the leading - order , , equation is solved for and which reduces the equation to on integrating over a period , the left - hand side vanishes , leading to .the solution is then found in the form [ eqn : phi1smallpe ] _ 1 = a x y + b x y + c x y + d x y , where the constants and are readily computed . integrating the equation over a period leads to the eigenvalue correction substituting ( [ eqn : phi1smallpe ] ) and taking the explicit form of the constants into account yields [ eqn : f2smallpe ] f_2 = . | the dispersion of a passive scalar in a fluid through the combined action of advection and molecular diffusion is often described as a diffusive process , with an effective diffusivity that is enhanced compared to the molecular value . however , this description fails to capture the tails of the scalar concentration distribution in initial - value problems . 
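A compact illustration of why resampling is needed, in the Couette setting used throughout: the naive estimator defined above is compared with an interacting-particle (cloning) scheme in which the weights exp(q dx) are accumulated and the walkers are periodically resampled in proportion to them, the normalising factors being absorbed into the estimate of f(q). This is a simplified multinomial-resampling variant, not the threshold-based pruning-and-cloning implementation described in the text; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def step(y, pe, dt, s):
    """One Euler-Maruyama step for Couette flow u(y) = y; returns (dx, new y)."""
    dx = pe * y * dt + s * rng.standard_normal(y.size)
    y = y + s * rng.standard_normal(y.size)
    y = np.where(y > 1.0, 2.0 - y, y)
    y = np.where(y < -1.0, -2.0 - y, y)
    return dx, y

def f_naive(q, pe=20.0, T=10.0, n=10_000, dt=2e-3):
    """(1/T) log mean exp(q X(T)) over independent trajectories; unreliable for large q."""
    x, y = np.zeros(n), rng.uniform(-1.0, 1.0, n)
    s = np.sqrt(2.0 * dt)
    for _ in range(int(T / dt)):
        dx, y = step(y, pe, dt, s)
        x += dx
    m = (q * x).max()
    return (m + np.log(np.mean(np.exp(q * x - m)))) / T

def f_cloning(q, pe=20.0, T=10.0, n=10_000, dt=2e-3, resample_every=50):
    """Same quantity estimated with periodic weight-proportional resampling."""
    y = rng.uniform(-1.0, 1.0, n)
    logw, log_norm = np.zeros(n), 0.0
    s = np.sqrt(2.0 * dt)
    n_steps = int(T / dt)
    for k in range(1, n_steps + 1):
        dx, y = step(y, pe, dt, s)
        logw += q * dx
        if k % resample_every == 0 or k == n_steps:
            m = logw.max()
            w = np.exp(logw - m)
            log_norm += m + np.log(w.mean())     # factor contributed by this block
            idx = rng.choice(n, size=n, p=w / w.sum())
            y = y[idx]
            logw[:] = 0.0
    return log_norm / T

for q in (0.1, 0.5, 1.0):
    print(f"q = {q:4.1f}   naive: {f_naive(q):7.3f}   cloning: {f_cloning(q):7.3f}")
```

The two estimates agree for small q, while for larger q the naive average is dominated by a handful of rare trajectories and degrades; the resampling keeps the computational effort on the realisations that matter.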
to remedy this , we develop a large - deviation theory of scalar dispersion that provides an approximation to the scalar concentration valid at much larger distances away from the centre of mass , specifically distances that are rather than , where is the time from the scalar release . the theory centres on the calculation of a rate function characterising the large - time form of the scalar concentration . this function is deduced from the solution of a one - parameter family of eigenvalue problems which we derive using two alternative approaches , one asymptotic , the other probabilistic . we emphasise the connection between the large - deviation theory and the homogenisation theory that is often used to compute effective diffusivities : a perturbative solution of the eigenvalue problems in the appropriate limit reduces at leading order to the cell problem of homogenisation theory . we consider two classes of flows in some detail : shear flows and periodic flows with closed streamlines ( cellular flows ) . in both cases , large deviation generalises classical results on effective diffusivity and captures new phenomena relevant to the tails of the scalar distribution . these include approximately finite dispersion speeds arising at large pclet number ( corresponding to small molecular diffusivity ) and , for two - dimensional cellular flows , anisotropic dispersion . explicit asymptotic results are obtained for shear flows in the limit of large . ( a companion paper , part ii , is devoted to the large- asymptotic treatment of cellular flows . ) the predictions of large - deviation theory are compared with monte carlo simulations that estimate the tails of concentration accurately using importance sampling . |
the random walk model of price changes in financial time series has been so durable because it is nearly correct . at first approximation , the difference between real price changes and the random walk model is too small to be detected using traditional time series analysis . more precisely , when looking at large samples of data , some features appear that break the random walk approximation . for example , the statistics of price returns at small time scales is not gaussian but governed by non - extensive statistics . we can also detect long - range correlations in the absolute returns , which means that persistent behaviors exist that are not embedded in the random walk model ; this can be seen as a consequence of the non - extensive statistics . more explicitly , the non - extensive formalism provides an expression for the probability density function of price returns at a given time scale : p_q(x) = \frac{1}{z_q}\left\{1-(1-q)\beta x^2\right\}_+^{\frac{1}{1-q}} where q is a real parameter representing the degree of non - extensivity ( q = 1 in the gaussian limit ) , z_q is a normalization constant and \beta is a scale parameter . also , \beta^{-1} is proportional to the variance of the distribution . in the expression of p_q , the subindex + indicates that the quantity is set to zero whenever the expression inside the brackets is non - positive . in general , for real markets , using large samples of data , the index q can be found to range from to . intuitively , q quantifies the degree of anomalous diffusion resulting from the underlying interactions among financial trades . under certain approximations , regarding a free diffusion process , we can relate q to an anomalous - diffusion exponent : super - diffusion occurs for . in fig . [ 5minret ] , we illustrate this behavior on the return distributions of the euro future contract , sampled in five - minute units . the similarity in shape is observed for two different years , 2002 , when the euro contract was moving up , and 2005 , when it was globally moving down . these shapes correspond effectively to . however , deviations from the gaussian limit can not be detected at a local level , when we observe the market in a short window of time . our first statement is thus still valid : at first order of observation , prices in financial markets behave randomly and it remains impossible to predict whether the next price movement will be up or down . in the following , we show that this difference between real markets and random walks , as small as it is , is detectable using modern statistical analysis with hypothesis testing , even when we observe the market locally . in particular , it is detectable once we build a trading system on the basis of multivariate analysis and hypothesis testing . indeed , tools of statistical physics have been proven to be efficient in many areas , like extracting the average properties of a macroscopic system from its microscopic dynamics , even if the latter is only approximately known . the same holds for financial systems . even though it is difficult or almost impossible to write down the microscopic equation of motion that drives prices at each instant , it is possible to extract relevant statistical information that can be used to take decisions at a local level . in a first part , we exemplify this issue on the high - frequency behavior of the euro future contract ( ec ) . we show that we can infer the non - random content of the ec 's erratic behavior using a multivariate analysis embedded in a trading algorithm .
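To make the distribution above concrete, the short sketch below evaluates the q-exponential density, normalises it numerically (playing the role of 1/z_q) and contrasts its tails with a gaussian of the same scale parameter; the value q = 1.5 is an illustrative choice only, not a fit to the euro future data.

```python
import numpy as np

def q_gaussian(x, q, beta):
    """Unnormalised Tsallis density {1 - (1-q) beta x^2}_+^(1/(1-q)), q != 1."""
    arg = 1.0 - (1.0 - q) * beta * x**2
    out = np.zeros_like(x, dtype=float)
    mask = arg > 0
    out[mask] = arg[mask] ** (1.0 / (1.0 - q))
    return out

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
q, beta = 1.5, 1.0                     # illustrative parameters
p_q = q_gaussian(x, q, beta)
p_q /= p_q.sum() * dx                  # numerical normalisation
p_gauss = np.exp(-beta * x**2)
p_gauss /= p_gauss.sum() * dx          # the q -> 1 limit of the same family

for xv in (0.0, 2.0, 5.0):
    i = int(np.argmin(np.abs(x - xv)))
    print(f"x = {xv:3.1f}   q-gaussian: {p_q[i]:.3e}   gaussian: {p_gauss[i]:.3e}")
```

The power-law tails of the q > 1 density are orders of magnitude heavier than the gaussian ones a few scale units away from the centre, which is the kind of fat tail that the five-minute return distributions exhibit.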
in the second and third parts , we examine different markets , largely uncorrelated to the euro future , namely the dax and cacao future contracts ( labeled as fdax and cc ) . the same procedure is followed using a trading system , based on the same ingredients . for the system running on the cc , we use the same system as built on the euro future . we show that similar and good results can be obtained on this variety of markets and we conclude that this is evidence that some invariants , as encoded in our system , have been identified . we use a five - minute sampling of the ec time series , from january 2000 till august 2011 , which makes 839k quotes that we use to build the trading system . we keep only the close of each quote . this large sample of data points is necessary to infer statistical properties with a high confidence level , as shown in the following . also , in the context of this analysis , the fine tuning of the time series with a five - minute resolution is useful to focus on possible intermittent behavior of the series at small scales ( five minutes ) , which could disappear at larger scales . a typical quote of the ec is like . the unit of the last digit is what we call a basis point . for example , we consider that a price movement from to corresponds to a price change of basis points . more precisely , if we buy the contract at time t1 ( on the quote q1 ) at and sell this contract at time t2 ( on the quote q2 ) at , then this trade corresponds to a gain of basis points ( without fees ) . to keep the procedure as close to reality as possible we consider fees of two times the slippage , which means that this trade is counted in our approach as a trade of basis point ( net of fees ) . a fundamental issue in the analysis is to break the data sample into three parts , which we call in - sample , out - sample and live - sample . the decomposition is done as follows : * 2000 - 2007 : in - sample * 2008 - 2009 : out - sample * 2010 - 2011 : live - sample what is the interest of this decomposition of the data series ? the idea is that we intend to build a trading system on this series . this means that we intend to design an algorithm that will take decisions like buy or sell 1 ec contract at a given quote . this decision at a given quote will be based on multivariate analysis , as mentioned in the beginning . in order to proceed this way , we need a data sample on which the algorithm is built and all parameters of the algorithm are fitted . therefore , this data sample needs to be large in order to be relevant statistically . this sample is called in - sample ( i ) and is defined as the period 2000 - 2007 . the second sample , called out - sample ( ii ) , is used as a validation stage . all algorithms built on ( i ) are obviously expected to give satisfactory results on ( i ) .
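the three - way split just described can be sketched as follows ; the representation of the series as ( timestamp , close ) pairs is an assumption , while the year boundaries are the ones given in the text for the ec series .

```python
from datetime import datetime

# illustrative split of a 5-minute close series into the three samples
def split_samples(quotes):
    in_sample, out_sample, live_sample = [], [], []
    for ts, close in quotes:
        if ts.year <= 2007:            # 2000-2007: in-sample (i)
            in_sample.append((ts, close))
        elif ts.year <= 2009:          # 2008-2009: out-sample (ii)
            out_sample.append((ts, close))
        else:                          # 2010-2011: live-sample (iii)
            live_sample.append((ts, close))
    return in_sample, out_sample, live_sample

quotes = [(datetime(2003, 5, 6, 14, 35), 1.1325),
          (datetime(2008, 9, 15, 9, 0), 1.4234),
          (datetime(2011, 3, 1, 10, 5), 1.3801)]
i, ii, iii = split_samples(quotes)
print(len(i), len(ii), len(iii))   # -> 1 1 1
```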
however , as the parameters of the model are fitted on the sample ( i ) , there is no guarantee that the model will behave properly on another data sample . if it does so , this means that the algorithm is not a pure artifact and contains a part of the real dynamics of the market . this is the purpose of the sample ( ii ) , defined as the period 2008 - 2009 . if the trading system built on ( i ) fails on ( ii ) , it is rejected and another algorithm is designed . note that we have other intermediate validation stages to make the full process more robust : we come back to this point later in the article . also , note that there is no guarantee at this level that what we describe in this paragraph is possible . finally , once we have obtained an algorithm that works on ( ii ) and satisfies our robustness tests , if any , we observe it on what we call the live - sample ( iii ) , defined as the period 2010 - 2011 . our building process is designed to guarantee at this step the good functioning of the trading system , and that is what we show in the following . note that if we can drive the analysis to this last step and if it works , it is a clear proof of our claim of the previous part on a specific example ( ec ) : the difference between real markets and random walks , as small as it is , is detectable using modern statistical multivariate analysis . the multivariate approach refers to the number of parameters introduced in the definition of triggers for trade decisions along the ec series . the basic elements of the algorithm design are quite simple . the gross features of a trend - following strategy are exposed in . let us note the value of the price series at time . an exponential moving average of a given memory length can be defined on this time series as : from being initially with no position , a trend - following system buys one share when reaches a given value and stays long until hits the value , at which point the system sells back and takes the opposite position , and so on . additional complexity can easily be added to this mechanism by defining intermediate thresholds that break positions taken by the system . the trade distribution for this simple theoretical system is given in fig . [ theo ] . obviously , there is no possibility with such a simple algorithm to reconstruct a profitable strategy over ten years of high frequency data . however , we can use the basic idea of this mechanism , namely trend - following , in building a more complex architecture . we use four different memory lengths and consider crossings of these exponential moving averages as potential triggers for trade decisions . not all moving averages are used for each decision . the choice is based on ranges of volatilities . indeed , we have observed that there are some transition domains in the volatility of prices where it is preferable not to trade or to branch more stringent triggers . an important idea in our structure is also a set of exit conditions based on extreme conditions in profit , either positive or negative . these last conditions also depend on the time window on which the non - nominal cumulative profit is realized .
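a minimal sketch of the single - average version of this mechanism is given below ; the actual system uses four memory lengths , volatility - dependent triggers and extra exit conditions that are not reproduced here , and the memory length and threshold values are illustrative assumptions .

```python
def ema(prices, memory):
    """exponential moving average with decay set by `memory` (in bars)."""
    alpha = 2.0 / (memory + 1.0)
    out, m = [], prices[0]
    for p in prices:
        m = alpha * p + (1.0 - alpha) * m
        out.append(m)
    return out

def trend_follow(prices, memory=20, threshold=0.0010):
    """flip long/short when price - ema crosses +/- threshold; returns positions."""
    m = ema(prices, memory)
    position, positions = 0, []
    for p, avg in zip(prices, m):
        if position <= 0 and p - avg > threshold:
            position = 1          # go (or stay) long on an upward breakout
        elif position >= 0 and avg - p > threshold:
            position = -1         # go (or stay) short on a downward breakout
        positions.append(position)
    return positions
```

crossings of two such averages with different memories play the role of the triggers mentioned above .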
on fundamental grounds , our system architecture is just a refinement of a basic trend - following strategy . in addition , the system has learned how to play with volatility ranges to trigger decisions and how to protect over - profits realized , for example , in highly volatile periods . therefore , if we can show that this strategy leads to profitable results ( net of fees ) , it will be a proof of the validity of the trend - following hypothesis on the market , taking into account multivariate tests to activate the trend follower . let us note that the use of moving averages is a powerful experimental method to access the non - trivial statistical texture of a time series . if we consider 2 standard moving averages of lengths and , with , then the density of crossing points of the 2 averages is given by : ^{h-1} where is the hurst exponent , which characterizes the persistence or anti - persistence of the data series . we have parameters optimized on the in - sample ( i ) . the optimization is performed in order to achieve the best sharpe ratio . results are shown in fig . [ in ] . we present the behavior with time of the ec contract itself as well as the cumulative equity of the designed trading system ( expressed in basis points , net of fees ) . we observe the nice behavior of the equity , increasing with time , which shows that the strategy is profitable and coherent with respect to different market regimes . the bottom plot in fig . [ in ] also corresponds to the running of the trading system , but this time on the randomized in - sample . more precisely , we have added to each quote value of the data series ( i ) a random number that ranges between and times the slippage of the ec contract . then we run the trading system on this series , which leads to the bottom plot of fig . [ in ] . this randomization is necessary as we do not want the trading system to depend on the point - to - point correlation , and also the model must be flexible enough to absorb distortions of the data series . this is what we observe in fig . [ in ] ( bottom ) : the system is robust against randomization of the data series . the degradation observed on the overall profit is not dramatic and the equity remains quite reasonable . note that all designed systems that failed at this stage have been rejected . before considering the out - sample stage , we have an essential intermediate step of validation of the trading system . to ensure that the system is robust , we need more than the randomization of the data series . we need to distort the strategy itself in many ways : for example , force an exit of a given trade at a given time , randomly skip the execution of some trades , delay the execution of orders by several quotes , execute an order but at a wrong price ( with a prejudice for the trading system ) , multiply the fees ( slippage ) by a factor of 2 , 3 or 4 , etc . thus , we have a list of stress tests and , for each case , the trading system is run and a result is obtained . all this must be done on the original data series ( i ) and on its randomized version .
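a sketch of the quote randomization and of the per - scenario bookkeeping described above is given here ; the slippage value , the noise amplitude of a few slippages and the per - trade sharpe definition are assumptions , and run_strategy stands for the trading system , which is not specified in this sketch .

```python
import random
import statistics

SLIPPAGE = 0.0001   # one basis point of the quote, used here as the noise unit

def randomize_quotes(closes, amplitude=3):
    """add independent uniform noise of +/- `amplitude` slippages to every quote."""
    return [c + random.uniform(-amplitude, amplitude) * SLIPPAGE for c in closes]

def sharpe_ratio(trade_returns):
    """mean over standard deviation of the per-trade returns (no annualisation)."""
    if len(trade_returns) < 2:
        return 0.0
    s = statistics.stdev(trade_returns)
    return statistics.mean(trade_returns) / s if s > 0 else 0.0

# run_strategy is assumed to map a close series to the list of per-trade returns
def stress_sharpes(closes, run_strategy, n_draws=20):
    return [sharpe_ratio(run_strategy(randomize_quotes(closes)))
            for _ in range(n_draws)]
```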
in all cases , we must observe that the system is stable and robust . this is shown in fig . [ stress ] , where we present the sharpe ratio for all stress tests considered . we do not provide the equity in each configuration . we summarize each case by one entry in fig . [ stress ] , as a value of the sharpe ratio for the case under study . the idea is that the robustness is ensured if we do not observe pathological values in the sharpe ratios , even for the most extreme stress tests . this is what we observe in fig . [ stress ] , with an average value of and a rms of . in all configurations , the model stays reasonable . by this method , we have also shown that the trading system does not depend on the fine tuning of any of the fitted parameters . otherwise , a few stress tests would have failed badly . on the contrary , our strategy depends weakly on any of its inputs , which gives a lot of flexibility on all variables of the system while always obtaining a profitable result . at this step , it is not unreasonable to claim that we have designed a robust algorithm . however , a further validation stage using the out - sample ( ii ) is crucial . this is a decisive test as we are running on new data that the system does not know , in the sense that the parameters have been fitted on another set of data . in principle the parameters are robust as we have already explored many configurations for the data series and the system . however , the out - sample test will kill all systems that still have some elements of over - fitting in their construction . indeed , such systems fail to give good results when running on the sample ( ii ) and are rejected . this is what happens for most of the systems that can be designed if the input ideas do not carry decisive features of the internal dynamics of the time series . this is why it is not an easy task and many attempts are needed before converging towards acceptable solutions . in fig . [ out ] we show the result for the trading system described above . we observe a correct behavior of the equity , which definitively qualifies this system . we interpret this as clear evidence that the time series of the ec exhibits features of trend - following , under certain conditions , as encoded in our trading system . finally , in fig . [ live ] , we check the result on the live - sample period ( iii ) , in 2010 - 2011 . here , we do not expect any failure , otherwise the full process described above would have to be rejected . effectively , we observe a nice behavior of the cumulative equity ( net of fees ) , in good agreement with what has been designed on the in - sample . this confirms our statement above on the dynamical content of the trading algorithm we have presented here . in order to illustrate very simply the gross features of the model , we present two distributions in fig . [ dist ] . we show the trade return spectrum ( fig . [ dist]-left ) , in which we recognize a typical trend - following system , reminiscent of the standard behavior plotted in fig . [ theo ] . we also observe in fig . [ dist ] ( right ) that the system is effectively working at high frequency , with an average duration of trades of minutes . from the above discussion , we know that our system is robust against a variation of the sampling of the data , for example from 5 minutes to 10 minutes . also , as the average duration of trades is minutes for the nominal system , it makes sense to move the system from 5 minutes to 10 minutes data sampling and check the results .
in order to make the change in an optimal way , we have rescaled some parameters such that we have a perfect homotopy between the construction at 5 and 10 minutes sampling . results on the live sample of data ( iii ) are shown in fig . [ livefinal ] . we observe the good behavior , in accordance ( homotopy ) with fig . [ live ] , as expected . a final comment is in order concerning the stress tests and robustness analysis . this study statistically ensures the reasonable functioning of the system whatever the market regimes and trading conditions . we can see the system as an unfolding procedure , transforming the price series into a trade series . while we obviously do not control the price series , we do have control over how the trade series develops , regardless of the behavior of the price series . this is what we have shown above . this is a decisive element in the construction of the model . the feedback from the cumulative equity itself is an element of the strategy reconstruction , together with the basic conditions of the model as described earlier . let us add that this is a clear advantage we get using this kind of approach for decoding the markets to a certain extent . the idea is to use the same system , as built in section [ ec ] , on the dax future ( fdax ) series . if we use exactly the same system as defined for the euro future , we obtain a sharpe ratio of over 10 years of data . this is reasonable , but it makes sense to re - optimize some parameters on the fdax series . in the updating process , some conditions are also re - examined and modified according to what the data ( in - sample ) requires . we again use a five - minute sampling for the fdax time series , from january 1999 till august 2011 , which makes 460k quotes of data . we keep only the close of each quote . as in section [ ec ] , this large sample of data points is necessary to infer statistical properties with a high confidence level . a typical quote of the fdax is like . the unit of the last digit is what we call a basis point . for example , we consider that a price movement from to corresponds to a price change of basis points . more precisely , if we buy the contract at time t1 ( on the quote q1 ) at and sell this contract at time t2 ( on the quote q2 ) at , then this trade corresponds to a gain of basis points ( without fees ) . to keep the procedure as close to reality as possible we consider fees of two times the slippage , which means that this trade is counted in our approach as a trade of basis point ( net of fees ) . we follow the analysis process detailed previously . then , 3 samples of data are defined : * 1999 - 2006 : in - sample * 2007 - 2009 : out - sample * 2010 - 2011 : live - sample as we start the analysis of the fdax in 1999 , we end the in - sample in 2006 and not 2007 as was done for the ec time series . as mentioned above , the basic elements of the algorithm design are the same as the ones used for the ec time series in section [ ec ] . the main differences concern the treatment of volatility ranges . there is a stronger focus on this issue for the fdax model . as always , parameters are optimized on the in - sample ( i ) ; the system is then validated on ( ii ) and observed on ( iii ) .
the optimization is performed in order to achieve the best sharpe ratio on ( i ) . results are summarized in fig . [ in2 ] . we present the behavior with time of the fdax contract itself as well as the cumulative equity of the designed trading system ( expressed in basis points , net of fees ) for the three data samples defined above . we observe the nice behavior of the equity , increasing with time , which shows that the strategy is profitable and coherent with respect to different market regimes . we also observe that the out - sample ( ii ) validates the good behavior of the strategy , which is confirmed on the recent period 2010 - 2011 . as explained in section [ ec ] , we have also guaranteed the robustness of the algorithm using a battery of stress tests . in all cases , we must observe that the system is stable and robust . this is shown in fig . [ stress2 ] , where we present the sharpe ratio for all stress tests considered . as in section [ ec ] , the idea is that the robustness is ensured if we do not observe pathological values in the sharpe ratios , even for the most extreme stress tests . we interpret the good result obtained in fig . [ in2 ] as clear evidence that the time series of the fdax exhibits features of trend - following , under certain other conditions , as encoded in our trading system and as already observed on the ec time series . this will be confirmed again on the cacao future in the next section . finally , in fig . [ live2 ] , we illustrate explicitly the result on the live - sample period ( iii ) , in 2010 - 2011 . effectively , we identify a nice behavior of the cumulative equity ( net of fees ) , in good agreement with what has been designed on the in - sample . this confirms our statement above on the dynamical content of the trading algorithm we have presented here . in order to illustrate very simply the gross features of the model , we present two distributions in fig . [ dist2 ] . we show the trade return spectrum ( fig . [ dist2]-left ) , in which we recognize a typical trend - following shape . we also observe in fig . [ dist2 ] ( right ) that the system is effectively working at high frequency , with an average duration of trades of less than minutes . for the cacao future ( cc ) , we have at our disposal the data series ranging only from 2003 till mid-2010 . we have then chosen to run exactly the ec system on this index , with fees for each trade always equal to 2 times the slippage . we do not perform any further optimization . results are presented in fig . [ cacao ] . we observe a reasonable equity curve , which proves that the system is functioning correctly on this index , uncorrelated to the ec and fdax . this confirms the message of this article that the trading system defined above contains elements of invariants of financial markets . tools of statistical physics have been proven to be efficient in many scientific areas .
in a similar way for financial time series , knowing that the difference between real markets and random walks is very small , a modern statistical multivariate analysis can help to extract this difference . this is what is encoded in trading systems . we have shown how to achieve the construction of such a system on the euro future contract at high frequency . a typical element of the dynamics of this system is then accessible , namely the trend - following content involved in a more complex architecture on volatilities . then , we have produced other examples on completely different markets , largely uncorrelated to the euro future , the dax and cacao future contracts . the same procedure is followed using similar seed ideas and technical inputs . we have shown that similar results can then be obtained and we conclude that this is evidence that some invariants , as encoded in our system , have been identified on very different markets explored over a 10-year period of time . one essential point in our process is that trading models , like the one used in our approach , are highly sensitive to non - linear relations in price series . this comes with the multivariate data analysis . in this article , we have also focused the discussion on the necessity of a deep robustness analysis to ensure the validity of the overall construction . an immediate question can be raised concerning the rationale behind this content . our observation is universal in the sense that the same algorithm , for example on the ec , is running on more than 10 years of data , where the monetary policy has changed several times . thus , our approach is not tied to a particular regime of interest rates . there are certainly herding behaviors at the origin of the values of the parameters encoded in our system . these herding phases may appear with strengths governed by certain fear levels , corresponding to volatility domains . also , in some circumstances , nothing special can be said . finally , a global explanation of the rationale behind a given trading system is very complex and probably not unique . this is beyond the scope of this article . see ideas in . in this article , we have completed a purely experimental analysis . the concept of invariance comes with the observation that we extract similar seed features from largely uncorrelated financial time series . this is a first step , rooted exclusively in data . c. tsallis , braz . j. phys . * 29 * ( 1999 ) 1 . d. prato and c. tsallis , phys . rev . e * 60 * ( 1999 ) 2398 . a. rapisarda , a. pluchino and c. tsallis , arxiv : cond - mat/0601409 . c. amsler _ et al . _ [ particle data group ] , _ phys . lett . _ * b667 * ( 2008 ) 1 ; available at pdg.lbl.gov . | for the pedestrian observer , financial markets look completely random with erratic and uncontrollable behavior . to a large extent , this is correct . at first approximation the difference between real price changes and the random walk model is too small to be detected using traditional time series analysis . however , we show in the following that this difference between real financial time series and random walks , as small as it is , is detectable using modern statistical multivariate analysis , with several triggers encoded in trading systems . this kind of analysis is based on methods widely used in nuclear physics , with large samples of data and advanced statistical inference .
considering the movements of the euro future contract at high frequency , we show that a part of the non - random content of this series can be inferred , namely the trend - following content depending on volatility ranges . of course , this is not a general proof of statistical inference , as we focus on one particular example and the generality of the process can not be claimed . therefore , we produce other examples on completely different markets , largely uncorrelated to the euro future , namely the dax and cacao future contracts . the same procedure is followed using a trading system , based on the same ingredients . we show that similar results can be obtained and we conclude that this is evidence that some invariants , as encoded in our system , have been identified . they provide a kind of quantification of the non - random content of the financial markets explored over a 10-year period of time . about the non - random content of financial markets . laurent schoeffel , cea saclay , irfu / spp , 91191 gif / yvette cedex , france |
as one of the three clock - synchronization algorithms studied for wireless sensor networks ( wsns ) under unknown delay , leng and wu proposed the generalization of the maximum - likelihood - like estimator ( mlle ) proposed by noh _ et al_. . to overcome the drawback of the mlle that it can utilize only the time stamps in the first and the last of message exchanges , they extend the gap between two subtracting time stamps from to a range of ] , in eq .( 33 ) of is now simplified as follows : k .- l .noh , q. m. chaudhari , e. serpedin , and b. w. suter , `` novel clock phase offset and skew estimation using two - way timing message exchanges for wireless sensor networks , '' _ ieee trans ._ , vol .55 , no . 4 ,pp . 766777 , apr . | the generalization of the maximum - likelihood - like estimator for clock skew by leng and wu in the above paper is erroneous because the correlation of the noise components in the model is not taken into account in the derivation of the maximum likelihood estimator , its performance bound , and the optimal selection of the gap between two subtracting time stamps . this comment investigates the issue of noise correlation in the model and provides the range of the gap for which the maximum likelihood estimator and its performance bound are valid and corrects the optimal selection of the gap based on the provided range . clock synchronization , two - way message exchanges , maximum likelihood estimation . |
consider a heat conductor having ( positive ) constant initial temperature while its boundary is constantly kept at zero temperature .this physical situation can be described by the following initial - boundary value problem for heat equation : here the _ heat conductor _ is a bounded domain in the euclidean space with lipschitz boundary and denotes the normalized temperature of the conductor at a point and time a _ hot spot _ is a point at which the temperature attains its maximum at each given time that is such that if is convex ( in this case is said a _ convex body _ ) , it is well - known by a result of brascamp and lieb that is concave in for every and this , together with the analyticity of in implies that for every there is a unique point at which the gradient of vanishes ( see also ) . the aim of this paper is to give quantitative information on the hot spot s location in a convex body. a description of the evolution with time of the hot spot can be found in ; we summarize it here for the reader s convenience .a classical result of varadhan s tells us where is located for small times : since ( here is the distance of from ) , we have that where and is the _ inradius _ of . in particular, we have that for large times instead , we know that must be close to the maximum point of the first dirichlet eigenfunction of indeed , denoting with the eigenvalue corresponding to , we have that converges to locally in as goes to ; therefore ( see ) while it is relatively easy to locate the set by geometrical means , does not give much information : locating either or has more or less the same difficulty . in this paper , we shall develop geometrical means to estimate the location of ( or ) , based on two kinds of arguments .the former is somehow reminiscent of the proof of the maximum principle of alexandrov , bakelmann and pucci and of some ideas contained in , concerning properties of solutions of the monge - amp ` ere equation .the estimates obtained in this way are applicable to any open bounded set , not necessarily convex .let be a bounded open set and denote by the closure of its convex hull ; we shall prove the following inequality ( see theorem [ th : nonconvessi ] ) : ^n}\,.\ ] ] here , is the diameter of and is a constant , depending only on the dimension , for which we will give the precise expression ; observe that the quantity is scale invariant .when is convex , more explicit bounds can be derived ; for instance , the following one relates the distance of from to the inradius and the diameter : where again is a constant depending only on ( see theorem [ th : conv ] for its expression ) .we point out that the so called _ santal point _ of always satisfies , hence this can also be used to locate such a point ( see section [ sec : polar ] and remark [ santalo ] ) .the latter argument relies instead on the following idea from .let be the unit sphere in for and define the hyperplane and the two half - spaces ( here the symbol denotes the usual scalar product in ) .suppose has non - empty intersection with the interior of the conductor and set then _ if the reflection of with respect to the hyperplane lies in then can not contain any critical point of _ this is a simple consequence of _ alexandrov s reflection principle _ based on hopf s boundary point lemma ( see section [ sec : alexandrov ] for details ) .based on this remark , for a convex body we can define a ( convex ) set the _ heart _ of such that for every ( in fact , we will prove that can not contain the hot spot for any ) 
.the heart of is easily obtained as the set as we shall see , the two methods have their advantages and drawbacks , but they are , in a sense , complementary . on the one hand , while inequalities and are quite rough in the case in which has some simmetry ( e.g. they do not allow to precisely locate even when is a ball ) , by the second argument , the problem of locating is quite trivial ; on the other hand , while in some cases ( e.g. when has no simmetries or contains some flat parts , as example [ exa : joint ] explains ) , we can not exclude that the heart of extends up to the boundary of estimates and turn out to be useful to quantitatively bound away from thus , we believe that a joint use of both of them provides a very useful method to locate or . studies on the problem of locating can also be found in : there , by arguments different from ours and for the two - dimensional case , the location of is estimated within a distance comparable to the inradius , uniformly for arbitrarily large diameter . in section[ sec : folding ] we shall relate to a function of the direction the _ maximal folding function _ and we will construct ways to characterize it .we will also connect to the fourier transform of the characteristic function of : this should have some interest from a numerical point of view .finally , in section [ sec : algoritmo ] , we will present an algorithm to compute when is a polyhedron : based on this algorithm , we shall present some numerical computations .in this section , if not otherwise specified , is a bounded open set and we denote by the closure of the convex hull of .notice that is a convex body , that is a compact convex set , with non empty interior . inwhat follows , denotes the -dimensional lebesgue measure of a set and the -dimensional hausdorff measure of its boundary ; also , will be the volume of the unit ball in we recall here some notations from .the _ gauge function of centered at a point _ is the function defined by observe that we have for every ; in particular , if then is 1-homogeneous .we set so that is the convex function whose graph is the cone projecting from the point .it is also useful to recall the definition of the _ support function _ of that is as it is easily seen , is a -homogeneous convex function ; viceversa , to any convex -homogeneous function it corresponds exactly one convex body whose support function is ( refer to , for instance ) .the _ polar set of with respect to _ is the convex set coinciding with the unit ball of the `` norm '' . ] centered at , that is if is in the interior of , then is compact .observe that this can be equivalently defined as we also recall that for every convex body the function ] thus , is readily obtained by observing that the function of into the braces is increasing and hence bounded below by we are now ready to prove the first quantitative estimate on the location of : this will result from a combination of the previous lemma and .[ th : nonconvessi ] under the same assumptions of theorem [ thmestimx0 ] , we have that in particular , the following estimate holds true : applying lemma [ lmestimk * ] with and corollary [ cor : estimxinfty ] gives : thus , easily follows by observing that . finally , using the isodiametric inequality ^n\!\!,\ ] ] in conjunction with ,we show the validity of . 
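before turning to the eigenvalue - based refinements below , a crude finite - difference illustration of the evolution of the hot spot described in the introduction may help fix ideas ; the grid size , time step , number of steps and the particular convex domain ( a disk cut by a line ) are illustrative assumptions and are not part of the estimates proved here .

```python
import numpy as np

# explicit finite differences for u_t = laplacian(u), with u = 1 at t = 0 inside a
# convex conductor (a disk cut by a line) and u = 0 on the grounded boundary;
# at a few times we print the location of the maximum of u, i.e. the hot spot.
n, L = 151, 1.5
h = 2.0 * L / (n - 1)
x = np.linspace(-L, L, n)
X, Y = np.meshgrid(x, x, indexing="ij")
inside = (X ** 2 + Y ** 2 < 1.0) & (Y > -0.3)       # a non-symmetric convex domain

u = np.where(inside, 1.0, 0.0)
dt = 0.2 * h ** 2                                    # stable explicit time step
for step in range(1, 4001):
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / h ** 2
    u += dt * lap
    u[~inside] = 0.0                                 # dirichlet (grounded) boundary
    if step % 1000 == 0:
        i, j = np.unravel_index(np.argmax(u), u.shape)
        print(f"t = {step * dt:.3f}  hot spot ~ ({X[i, j]:+.2f}, {Y[i, j]:+.2f})")
```

by the symmetry in the horizontal direction the maximum stays on the vertical axis and only its height moves with time , consistently with the spherical - cap example discussed later in this section .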
estimates and involve the first eigenvalue , which in general is not easy to compute explicitly ; when is convex , we can estimate from above by means of basic geometric quantities , thus providing an easily computable lower bound on .this is the content of the following theorem , which represents the main contribution of this section .[ th : conv ] if is convex , then ,\ ] ] where is the inradius of , denotes the first dirichlet eigenvalue of in the unit ball and is the isoperimetric ratio of . in particular, the following bound from below on holds true .\ ] ] the proof of readily follows by combining to the following upper bound on proved in ( * ? ? ?* theorem 2 ) . using and the two inequalities ^{n-1},\ ] ]we end up with . observe that using and the inequality which follows from the monotonicity and scaling properties of , we can infer ,\ ] ] thus providing a lower bound which is strictly greater than , as long as the ratio is strictly smaller than and .inequality is in fact a corollary of a sharper inequality holding for starshaped sets .we can then give a refinement of which holds in this larger class : to this end , we borrow some notations from .a set is said to be _ strictly starshaped _ with respect to a point if it is starshaped with respect to and if its support function centered at , i.e. is uniformly positive , that is .let be a strictly starshaped set with locally lipschitz boundary , as in we define where denotes surface measure on according to this notation , ( * ? ? ?* theorem 3 ) states : arguing as in theorem [ th : conv ] and using in place of gives the following estimate .[ th : ambiguo ] let be a strictly starhaped set with locally lipschitz boundary and denote by the closure of the convex hull of .then we remark that is sharper and more general than , and it is at the same time more explicit than , in the sense that , differently from the number can be computed directly from the support function ( which exactly determines a convex set ) . [ santalo ] it is worth noticing that the santal point of always satisfies ( as well as for every ) , then it satisfies all the estimates we proved for in this section .in particular , theorem [ th : conv ] ( or theorem [ th : ambiguo ] ) can be used as well to estimate the location of the santal point of a convex set .in this section , for the reader s convenience , we recall some relevant facts about _ aleksandrov s symmetry principle , _ which has been extensively used in many situations and with various generalizations ( see for a good reference ) . for let and be the sets defined in and . also , define a linear transformation by the matrix : where is the kronecker symbol and the are the components of .then the application defined by represents the reflection with respect to as already mentioned , if is a subset of we set [ th : nohotspot ] let be a bounded domain in with lipschitz continuous boundary and suppose the hyperplane defined by has non - empty intersection with assume that if is not symmetric with respect to , then does not contain any ( spatial ) critical point of the solution of . 
for and function is well - defined and is such that hence in by the strong maximum principle for parabolic operators ( see ) .since on we obtain that on it ( is in fact the interior normal unit vector ) , by hopf s boundary lemma for parabolic operators .we conclude by noticing that on with the same arguments and a little more work , one can extend this result to more general situations , involving nonlinearities both for elliptic and parabolic operators . as an example , here we present the following result .[ th : nocritical ] let and satisfy the same assumptions as those of proposition [ th : nohotspot ] ; in particular suppose that let be a solution of class of the system : where is a locally lipschitz continuous function .if is not symmetric with respect to , then does not contain any critical point of the proof runs similarly to that of proposition [ th : nohotspot ] ; the relevant changes follow .the function defined for satisfies the conditions : where the function defined by is bounded by the lipschitz constant of in the interval .$ ] hence in by the arguments used in .let then and the strong maximum principle can be applied to obtain that in the conclusion then follows as before by hopf s boundary lemma .an immediate consequence of this theorem is the following result .[ coro : nohotspot ] let and satisfy the same assumptions as those of proposition [ th : nohotspot ] .let be the first ( positive ) eigenfunction of with homogeneous dirichlet boundary conditions .if is not symmetric with respect to , then does not contain any critical point of what follows , we shall assume that is a convex body , that is a compact convex set with non - empty interior .occasionally , we will suppose that is of class i.e. a set whose boundary is an -dimensional submanifold of of class we are interested in determining the function given by which will be called the _maximal folding function _ of defines in turn a subset of the _ heart _ of as of course , is a closed convex subset of observe that can be bounded below and above by means of the support functions of and the following results motivate our interest on and [ prop : heart ] let be a convex body . * the hot spot of the point and any limit point of always belong to moreover , and must fall in the interior of whenever this is non - empty . *the center of mass of always belongs to the heart of * if is strictly convex , the incenter of belongs to * let if there exist independent directions such that then in particular , if then reduces to and the hot spot of is stationary .* let then items ( i ) and ( iv ) follow by observing that , for the set is contained in and is symmetric with respect to hence , \ dy\end{aligned}\ ] ] and the last term is non - negative , vanishing if and only if is -symmetric . items ( ii ) and ( iii ) are easy consequences of proposition [ th : nohotspot ] and corollary [ coro : nohotspot ] .for a fixed let us define which is non negative , thanks to ( ii ) . 
then and for every such that .hence thus taking the maximum as varies on we obtain .informations on convex heat conductors with a stationary hot spot can be found in .formula deserves some comments : observe that for every fixed , the minimum problem inside the braces amounts to finding a direction close to , so to maximize , and such that at the same time we can fold as much as possible , so to minimize the difference .we conclude this subsection by an example that shows how the simultaneous application of proposition [ prop : heart ] and the results of section 2 substantially benefits the problem of locating [ exa : joint ] let us consider a spherical cap with .thanks to the simmetry of , it is easily seen that its heart is given by which is a vertical segment touching the boundary at the point . in particular , by this method we can not exclude that the hotspot ( or the point ) is on the boundary . however , we can now use the results of section [ sec : polar ] , to further sharpen this estimate on the location of : indeed , applying theorem [ th : conv ] , we get ,\ ] ] where we used that and . the following theorem whose proof can be found in ( * ?* theorem 5.7 ) guarantees that , for a regular set ( not necessarily convex ) , the maximal folding function is never trivial .[ th : fraenkel ] let be a bounded open ( not necessarily convex ) subset of with boundary , and denote by the convex hull of for every , there exists such that , for every in the interval we have : 1 . ; 2 . , for every .unfortunately , the previous result is just qualitative and does not give any quantitative information about the maximal folding function .moreover , notice that the assumption on can not be dropped , even in the case of a convex domain : think of the spherical cap in example [ exa : joint ] , for which we have ..3 cm in order to compute we need some more definitions .we set and for every we define the segment then , we denote by the projection operator on , that is the application defined by and , for in the set the _ shadow of in the direction _ we define : we say that a convex body is _ -strictly convex _ if does not contain any segment parallel to if is -strictly convex , then for every ( equivalently ) such that the normal to at is orthogonal to , the set degenerates to the singleton we point out that is a convex function on , while is concave ; moreover , if we set we have that and , as soon as is -strictly convex , where obviously denotes the graph of the relevant functions .[ th : brasco ] let be a convex body . for the function given by then observe that let since then , for every point with and we have that in particular , for we obtain that and hence thus , if maximizes by taking we see that for every and therefore , and hence if we now remember that , for a convex domain , the quantity is the _ width of in the direction _ , we immediately get a nice consequence of the previous theorem .[ th : corewidth ] let be a convex body. then we have the following estimate for the width of in the direction : we first observe that , so that and yields then , from the definition of width , using ( [ supp_core ] ) , ( [ lambda ] ) and ( [ -lambda ] ) , we get thus concluding the proof . 
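for a convex polygon , the maximal folding function and the heart can be approximated numerically by brute force , in the spirit of the construction above ; the test polygon , the number of directions and the sampling grid below are illustrative , containment of the reflected cap is checked on its vertices ( which suffices for a convex body ) , and the folding threshold is located by bisection , relying on the monotonicity of the folding property in the level t .

```python
import numpy as np

def inside_convex(poly, p, tol=1e-9):
    """poly: (n,2) counter-clockwise vertices of a convex polygon."""
    a, b = poly, np.roll(poly, -1, axis=0)
    edge, rel = b - a, p - a
    cross = edge[:, 0] * rel[:, 1] - edge[:, 1] * rel[:, 0]
    return np.all(cross >= -tol)

def clip_cap(poly, omega, t):
    """sutherland-hodgman clip of poly against the half-plane {x . omega >= t}."""
    out = []
    for i in range(len(poly)):
        p, q = poly[i], poly[(i + 1) % len(poly)]
        dp, dq = p @ omega - t, q @ omega - t
        if dp >= 0:
            out.append(p)
        if dp * dq < 0:                         # the edge crosses the fold line
            out.append(p + (q - p) * dp / (dp - dq))
    return np.array(out)

def folding_ok(poly, omega, t):
    cap = clip_cap(poly, omega, t)
    if len(cap) == 0:
        return True
    reflected = cap - 2.0 * (cap @ omega - t)[:, None] * omega
    return all(inside_convex(poly, r, tol=1e-7) for r in reflected)

def maximal_folding(poly, omega, iters=60):
    """smallest fold level t for which the reflected cap stays inside poly."""
    proj = poly @ omega
    lo, hi = proj.min(), proj.max()
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if folding_ok(poly, omega, mid):
            hi = mid
        else:
            lo = mid
    return hi

# example: a non-symmetric convex quadrilateral
K = np.array([[0.0, 0.0], [3.0, 0.0], [2.5, 1.5], [0.5, 2.0]])
angles = np.linspace(0.0, 2 * np.pi, 90, endpoint=False)
lams = [maximal_folding(K, np.array([np.cos(a), np.sin(a)])) for a in angles]
# the heart is (approximately) the set of points of K below every fold level
grid = np.array([[x, y] for x in np.linspace(0, 3, 61) for y in np.linspace(0, 2, 41)])
in_heart = [p for p in grid if inside_convex(K, p) and
            all(p @ np.array([np.cos(a), np.sin(a)]) <= l + 1e-6
                for a, l in zip(angles, lams))]
print(len(in_heart), "grid points in the approximate heart")
```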
in general , inequality is strict .for example , in consider the ellipse given by with .the function can be easily computed in this case : for every we get the set is the image of a _ quadrifolium _ ( a _ rhodonea _ with petals ) by the mapping .thus , for example , by choosing the direction the right - hand side of equals while clearly the left - hand side is zero since , due to the simmetries of .this example also highlights the interest of the quantity , which can be seen as a measure of the lack of symmetry of in the direction of .the function in can be explicitly computed by the use of the fourier transform : this is the content of the next result .[ th : fourier ] let be a convex body and for , let be the function defined in . where denotes the fourier transform of the characteristic function of and differentiation in the direction for and we write and with , and .by fubini s theorem we compute for we then obtain : \ , e^{-iy\cdot\eta}dy.\end{aligned}\ ] ] therefore , by the inversion formula for the fourier transform , we have : by , we also obtain that and hence formula follows from and at once .if is a polygon , can be explicitly computed in terms of the vertices of .let be a ( convex ) polygon with vertices we assume that are ordered counterclockwise and we set rewriting as a boundary integral ( see ) by means of the divergence theorem , we have that where is the exterior normal to the -th side of also , is easily computed from the previous expression : ^ 2}\\ & + { \displaystyle}\frac{i}{|\eta|^2}\,{\sum\limits}_{j=1}^n |p_{j+1}-p_j|(\nu_j\cdot\eta)\ , \frac{(p_{j+1}\cdot{\omega})\,e^{-ip_{j+1}\cdot\eta}-(p_j\cdot{\omega})\,e^{-ip_j\cdot\eta}}{(p_{j+1}-p_j)\cdot\eta}. \end{split}\ ] ] we conclude this section by presenting some necessary conditions for the optimality of to this aim , we first state and prove an easy technical result for the subdifferential of a function .[ th : tech ] let be a convex open set .let and be a convex and , respectively , a concave function from to . if attains its maximum at a point , then it is clear that both and are non - empty .since is a maximum point , we get and hence for every and if then we have as a consequence of the definitions of and , we have that implies that belongs to the boundary of we are now in a position to state a necessary optimality condition .[ th : necessary ] let be a convex body and .suppose that attains its maximum at a point that is set and for every denote by its reflection with respect to the optimal hyperplane , that is then : 1 .if , we have where denotes the _ normal cone _ of at a point ; 2 . if , there holds where we can suppose for simplicity that we first suppose that is an interior point .since by its very definition is the sum of a convex function and a concave one , by lemma [ th : tech ] let us now set the reflection of in the optimal hyperplane is and for then , since , implies .if is on the boundary of then or may be empty : this is clearly the case if or have some vertical parts .observe that actually we have the following possibilities : 1 . ; 2 . .if ( 1 ) holds , then at every , the convex body has only supporting hyperplanes parallel to : these are invariant with respect to the action of , so that their reflections are supporting hyperplanes for at and formula or easily follows . if ( 2 ) holds , we have and let us call .then and we have , so that by observing that , follows . 
under the same notations of theorem [ th : necessary ] , if admits a ( unique ) unit normal at the point , then it admits a unit normal at the point too and in particular , if we have .it is sufficient to observe that in this case and hence is a consequence of or . using theorem [ th : necessary ], we obtain an interesting upper bound on the maximal folding function for a strictly convex domain , in terms of its support function .if is strictly convex , then for every first observe that thanks to the of the support function , the maximization problem in can be equivalently settled in .the strict convexity of implies that ( see ) .moreover , for every , with .thus , with the same notations as in theorem [ th : necessary ] , for every and using the condition , we also get ; on the other hand , and , which implies the following hence we can conclude by simply applying theorem [ th : brasco ] .if is symmetric with respect to a hyperplane orthogonal to , then equality holds in and both quantities equal .otherwise , in general inequality is strict as figure informs us . the intersection of the two straight lines corresponds to the dark dot corresponds to notice that , following an argument similar to that of the proof of proposition [ paolo ] , we can in fact give a precise characterization of the maximal folding function in terms of the support function .precisely the following holds where if is not strictly convex ( and then is not ) the above formula still remains valid , up to suitably interpreting the gradient of as the subdifferential .if is a convex polyhedron , then the conclusions of theorem [ th : brasco ] can be improved : roughly speaking , we can discretize the optimization problem ( [ lambda ] ) , by only visiting the projections of the vertices of on we begin with the following general result . in other words ,if touches " from the interior at and is contained in the interior of some segment on the boundary of , then the boundary of must contain all the segment at the same . indeed , let be a support hyperplane to at and denote by the half - space delimited by and containing ; then is also a support hyperplane to at and .thus , while this implies since is not an endpoint of , and hence . now , let be a convex polyhedron .if is not a vertex of then belongs to the relative interior of some -dimensional facet of with and hence it belongs to the relative interior of a segment with ( at least ) one end at some vertex of and in two examples .in the first picture observe that , by means of , we also know that is at a positive ( and computable ) distance from the boundary of ,title="fig : " ] and in two examples . in the first picture observe that , by means of , we also know that is at a positive ( and computable ) distance from the boundary of ,title="fig : " ] in case ( ii ) , the -dimensional facet of containing must be orthogonal to the hyperplane however , the same argument used for case ( i ) can easily be worked out in the first author was partially supported by the european research council under fp7 , advanced grant n. 226234 `` analytic techniques for geometric and functional inequalities '' , while the second and third authors were partially supported by the prin - miur grant `` aspetti geometrici delle equazioni alle derivate parziali e questioni connesse '' .h. l. brascamp , e. h. lieb , _ on extensions of brunn - minkowski and pr`ekopa - leindler theorems , including inequalities for log - concave functions , and with an application to the diffusion equation _ , j. 
funct .( 1976 ) , 366389 .m. chamberland , d. siegel , _ convex domains with stationary hot spots _20 ( 1997 ) 11631169 .l. e. fraenkel , introduction to maximum principles and symmetry in elliptic problems , cambridge university press 2000 .p. freitas , d. krejcirk , _ a sharp upper bound for the first dirichlet eigenvalue and the growth of the isoperimetric constant of convex domains _ , proc .american math .( 2008 ) , 29973006 .b. gidas , w. m. ni , l. nirenberg , _ symmetry and related properties via the maximum principle _ ,68 ( 1979 ) , no .3 , 209243 .d. grieser , d. jerison , _ the size of the first eigenfunction on a convex planar domain _ , j. amer .soc . 11 ( 1998 ) , 4172 .r. gulliver , n.b .willms , _ a conjectured heat flow problem , _ in solutions , siam review 37 , ( 1995 ) 100104 . c. e. gutierrez , the monge - ampre equation , progress in nonlinear differential equations and their applications , 44 .birkhuser boston , inc . ,boston , ma , 2001 . c. s. herz , _ fourier transforms related to convex sets , _ ann .75 ( 1962 ) 8192 .b. kawohl , _ a conjectured heat flow problem , _ in solutions , siam review 37 ( 1995 ) 104105 .m. s. klamkin , _ a conjectured heat flow problem , _ in problems , siam review 36 ( 1994 ) 107. n. korevaar , _ convex solutions to nonlinear elliptic and parabolic boundary value problems , _ indiana univ .j. 32 ( 1983 ) , 603614 .r. magnanini , s. sakaguchi , _ on heat conductors with a stationary hot spot _ ,ann . mat .pura appl . 183( 2004 ) , 123 .r. magnanini , s. sakaguchi , _ polygonal heat conductors with a stationary hot spot _, j. anal . math . 105( 2008 ) , 118 .m. h. protter , h. f. weinberger , maximum principles in differential equations , prentice - hall , englewood cliffs , n. j. , 1967 .s. sakaguchi , _ behavior of spatial critical points and zeros of solutions of diffusion equations , _ selected papers on differential equations and analysis , 1531 , amer . math .ser . 2 , 215 , amer .soc . , providence , ri , 2005 .j. serrin , _ a symmetry problem in potential theory , _ arch .rational mech .43 ( 1971 ) , 304318 .r. schneider , convex bodies : the brunn - minkowski theory , cambridge university press 1993 .g. talenti , _ some estimates of solutions to monge - ampre type equations in dimension two _ , ann . scuola norm .( 1981 ) , 183230 . s. r. s. varadhan , _ on the behaviour of the fundamental solution of the heat equation with variable coefficients , _ comm .pure appl .20 , ( 1967 ) , 431455 . | we investigate the location of the ( unique ) hot spot in a convex heat conductor with unitary initial temperature and with boundary grounded at zero temperature . we present two methods to locate the hot spot : the former is based on ideas related to the alexandrov - bakelmann - pucci maximum principle and monge - amp ` ere equations ; the latter relies on alexandrov s reflection principle . we then show how such a problem can be simplified in case the conductor is a polyhedron . finally , we present some numerical computations . |
device - to - device ( d2d ) communications have emerged as a promising paradigm for 3gpp lte - advanced ( lte - a ) networks , which provide mobile wireless connectivity , reconfigurable architectures , as well as various wireless applications ( e.g. , network gaming , social content sharing and vehicular networking ) for a better user experience . with d2d communications , nearby devices in a cellular network can communicate with each other directly , bypassing the base stations . conventional d2d communications commonly refer to direct information exchanges among devices in human - to - human and machine - to - machine communications , without the involvement of wireless operators . however , conventional d2d technologies can not provide efficient interference management , security control and quality - of - service guarantees . recently , there has been a trend towards operator - controlled d2d communications , to facilitate profit making for operators as well as a better user experience for devices . this paper considers a multi - hop lte - a network consisting of devices deployed by multiple operators . in this network , an efficient approach to improve the end - to - end throughput is to enable cooperative sharing of idle devices among the operators for multi - path routing . the cooperation can increase throughput for the devices because a cooperative relay may lead to substantially improved network capacity . accordingly , a larger amount of user traffic demand can be supported , which will lead to higher aggregated revenue for operators . in this cooperation , each operator needs to decide on which operators to cooperate with to maximize its profit . then , given the cooperation behavior of the operators , the devices from cooperative operators need to decide on which devices to cooperate with to maximize throughput . we refer to the formation of this interrelated operator cooperation and device cooperation as a hierarchical cooperation problem , which is the main focus of this paper . the hierarchical cooperation gives rise to two major concerns . firstly , what is the stable coalitional structure desirable for all operators , so that no operator is willing to leave its coalition ? secondly , what is the stable network structure for cooperative devices to perform multi - path routing ? this paper addresses these two concerns by formulating a layered game framework to model the lte - a network , with operators and devices being the players in the upper and lower layer , respectively . previous work has also considered game - theoretic frameworks with hierarchies / layers , e.g. , in cognitive radio networks , in two - tier femtocell networks and in wrns . however , most of these works considered a competitive relationship between different layers , which belongs to the stackelberg game concept . in the proposed layered game framework , different layers interact cooperatively to improve the benefit of each other . we adopt the concepts of an extended recursive core and a nash network as the solutions for the proposed games in the upper layer and lower layer , respectively .
to the best of our knowledge ,this is the first work to introduce the application of extended recursive core in wireless communications .we consider an lte - a network consisting of a number of devices belonging to multiple operators .we denote the set of operators as , and the set of devices of operator as .the operators are willing to form overlapping coalitions to maximize their individual profits .an overlapping coalitional structure for a number of operators can be defined in a cover function as the set which is a collection of non - empty subsets of such that and the sub - coalitions could overlap with each other . is the total number of coalitions in collection .let denote the set of coalitions that operator belongs to . for multiple access at every hop, we consider an ofdma - based transmission . in an operator coalition , each relay device not only needs to support internal flow transmission demand , but also can serve as a relay for other devices from cooperative operators . due to the limited transmission power of each device , multi - hop relaying is adopted to route flow sessions from source devices to destination devices .since single path routing is too restrictive for satisfying traffic demand , we assume each flow session can be split for multi - path routing if necessary .figure [ model ] illustrates an example for the studied system model and the corresponding notations . in this example, the lte - a network is composed of devices deployed by different operators , i.e. , , , and . from fig .[ model ] , there is a multi - path flow sourced from to and a single - path flow sourced from to . from the chosen links , the final coalitional structure of the operators is .let denote the set of flow sessions from operator . denote the set of nodes of flow .the source and destination device of flow is represented as and , respectively .we denote the link between two devices and as . the channel gain on a link can be obtained from : where is the antenna related constant , is the path loss exponent , and is the distance between devices and . denotes the data rate on link attributed to a flow session .since , for d2d communications , a flow session from a source device may traverse multiple relay devices to reach its destination device , we consider the following two cases about a device .\1 ) if a device is the source or destination of flow session , i.e. , or , then where is the aggregated rate of flow session , and denotes the set of devices having direct link with .\2 ) if a device is an intermediate relay device of flow session , i.e. , and , then let denote the capacity of link .the aggregated data rate on each link can not exceed the link s capacity . thus we have the following constraint , let denote the maximal rate a flow session that is available on link , with the constraints in ( [ source ] ) , ( [ intermediate ] ) and ( [ rate ] ) , and the maximal aggregated rate of a flow session .we have .the aggregated rate of flow session is constrained by where is the rate demand of flow session .we formulate a game - theoretic framework , referred to as the layered coalitional game ( lcg ) , to model the decision - making process of hierarchical cooperation between the operators and devices .both operators and devices are assumed to be self - interested and rational .the operators aim to maximize their individual utility , while the devices attempt to maximize their end - to - end throughput with the help of relay devices from cooperative operators . 
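a small sketch of the maximal aggregated rate of a single flow under link - capacity constraints of this kind is given below ; the pathloss / shannon capacity model , the transmit power , the noise level and the 45 m range are illustrative assumptions ( the paper s own capacity expression is not reproduced here ) , and networkx s max - flow routine performs the multi - path splitting .

```python
import itertools
import networkx as nx
import numpy as np

rng = np.random.default_rng(1)
positions = {i: rng.uniform(0.0, 100.0, size=2) for i in range(8)}   # device locations (m)

def link_capacity(d, k=1.0, alpha=3.5, p_tx=1.0, noise=1e-9, bw=1.0):
    """illustrative capacity: shannon rate with a k * d^-alpha pathloss gain."""
    snr = p_tx * k * d ** (-alpha) / noise
    return bw * np.log2(1.0 + snr)

G = nx.DiGraph()
G.add_nodes_from(positions)
for i, j in itertools.permutations(positions, 2):
    d = np.linalg.norm(positions[i] - positions[j])
    if d < 45.0:                                    # devices within mutual range
        G.add_edge(i, j, capacity=link_capacity(d))

# maximal aggregated rate of one flow session when it can be split over several paths
value, flow = nx.maximum_flow(G, 0, 7, capacity="capacity")
print(f"max aggregated rate of flow 0 -> 7: {value:.2f} (rate units)")
```

the flow returned by the routine satisfies the per - link capacity constraint and the conservation constraints at the intermediate relays for this single flow session .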
in this lcg , the operators need to decide on the coalitional structure and the devices need to make decisions to form a relay network structure , both in distributed ways , with the aim of improving their utilities and payoffs , respectively . as both operators and devices only have limited information at their own layer , information exchange between the two layers is required . the operators need to collect the payoff information from the devices to make the decision on coalitional structure formation . the decision of the operators will then be provided to the devices for the purpose of network structure formation . in this regard , there could be multiple interactions between the operator layer and the device layer . recognizing the behavior of operators and devices , we propose to use the overlapping coalition formation game ( cfg ) to model the behavior of operators and the coalitional graphical game ( cgg ) to characterize the interaction among devices , which will be introduced in section iii - b and section iii - c , respectively . since operators and devices have different objectives and concerns during the cooperation , our next step is to define the objective functions that capture the incentives for operators and devices . for a given operator coalition , we define the payoff of a device from operator , which performs flow transmission , as the end - to - end throughput of the flow from device to device , which is expressed as follows , where . a relay device aims to help a device on transmission to improve its throughput the most . therefore , to evaluate how much the relay device can help to improve the throughput of the device , we define the payoff of the relay device as the difference between the throughput of the device with the help of device and that without the help of device , which can be expressed as follows : [ rdevice ] v(\{j\}) = v^{(j)}(\{i\}) - v^{(/j)}(\{i\}) , where v^{(j)}(\{i\}) represents the payoff of the device with the help of device ( the sum of the per - link rates f_{(i,j)}(l^{(h)}) over its flow sessions and links ) and v^{(/j)}(\{i\}) represents the payoff of the device without the help of device . in the proposed lcg , operators are allowed to form overlapping coalitions to share their idle devices as relays with the aim of improving the aggregated throughput . through multi - path routing , each operator aims to improve the aggregated throughput as much as possible . thus , in return , each operator will be rewarded with higher revenue for providing the flows that meet the customer demand . we define the individual utility of the operator as the profit , i.e. , revenue minus the cost of devices in transmitting and forwarding a packet ( e.g. , due to energy consumption ) . let denote the rewarded revenue per unit throughput achieved per unit time , and denote the operation cost per device per unit time . in this case , we assume . then , given a partition , for an operator without cooperation , i.e. , , the utility function can be calculated as ( [ operator ] ) . the first term and the second term on the right side of ( [ operator ] ) represent the aggregated revenue for operator and the total device costs , respectively . while cooperation can lead to profit improvement for operators , it may also incur inherent coordination costs , such as packet overhead . let denote the coalition cost incurred by operator for being in coalition . the objective of operator cooperation through cooperative sharing of devices is to maximize their aggregated profit , i.e.
, revenue is subtracted by device operation cost and coalition cost . given the partition , we define the utility of an operator coalition as the profit of the coalition as follows , an overlapping cfg is formulated among operators whose interests are to satisfy its internal flows with as less cost as possible . due to interference ,the utility of any operator is affected by not only the behavior of others in the same coalition , but also that of operators from other coalitions .thus , the considered operator coalitional game is in a partition form since the aggregated utility of a coalition depends on the coalitional structure of all the operators in the network .we introduce the framework of an overlapping cfg in partition form with non - transferable utility to model the cooperation among operators .an overlapping cfg in partition form with non - transferable utility ( ntu ) is defined by a pair where is the set of players , and is a value function that maps every partition and every coalition to a real number that represents the total utility ( profit ) that players in coalition can obtain .the _ strategy _ of an operator is to form the coalitions to improve its individual utility defined by ( [ operator ] ) .note that different from non - overlapping cfg where players have to cooperate with all others in the same coalition and each player only stays in one coalition , in an overlapping cfg , each player is able to join multiple different coalitions .the solution of the overlapping cfg is the stable overlapping coalitional structure for operators , under which no one will deviate .to this end , we adopt the concept of _ extended recursive core _ , referred to as , as the solution . is an extended solution of coalition formation game , which allows coalitions to overlap , accounting for externalities across coalitions . in the proposed game ,the externalities are represented by the inter - coalition interference between devices .deviation is a key notion for the definition of the . as a consequence of deviation, a new partition will be formed .therefore , the deviation is equivalent to the formation of a new partition . in the proposed operator coalitional game ,let partition move to by deviation .* complete deviation : if there exists and such that for all , for all coalition such that , then the player set performs complete deviation .the players are called complete deviators . *partial deviation : if there exists containing only overlapping players such that for all , for all , then the player set performs partial deviation .the players are called partial deviators . in an overlapping cfg in a partition form ,if a coalition of players performs a complete deviation or partial deviation , this may affect the payoffs of the remaining players .for the remaining players , we then define the _ residual game _ as following : let be an overlapping cfg in partition form . if a subset of players has already organized themselves into a certain partition .a residual game is an overlapping cfg in a partition form defined on a set of players .the players in are called residuals .the residual game is an overlapping cfg in partition form on its own . in the residual game ,the players react to the deviation only on the set of remaining players including partial deviators which can play further deviation . 
given two payoff vectors , if , , and , , we write . let denote an _ outcome _ of the game , where is a utility vector resulting from a partition . let denote the of game , and denote the set of all the possible partitions of . can be found inductively in four main steps . 1 ) _ trivial game _ : given a coalitional game , the of a coalitional game with is composed of the only outcome with the trivial partition : . 2 ) _ inductive assumption _ : given the for each game with players , the _ assumption _ about the residual game is defined as follows : 3 ) _ dominance _ : an outcome is dominated via a coalition if for at least one there exists an outcome such that . 4 ) _ generation _ : the of a game of players is the set of undominated outcomes . the concept of dominance expresses that , given a current partition and the respective payoff vector * x * , an undominated coalition represents a deviation from in such a way that the reached outcome is more rewarding for the players in coalition , compared to . thus , can be seen as a set of partitions under which the players cooperate in self - organized overlapping coalitions that provide them with the highest payoff . during the coalition formation process of the operator coalitional game , in order to reach an outcome that lies in , we let each operator iteratively join and leave coalitions to ensure a maximum payoff ( i.e. , an undominated outcome ) . to prevent loops , we introduce a variable pair _ history _ for each operator to record all the coalitions that it has ever joined and the corresponding utility . if the new coalition for operator to join has already been recorded and the utility that it is about to get is the same as in _ history _ , then operator will maintain its current coalition set even if its utility would improve . once operator changes its coalition set , the new coalition set is included in _ history _ . we propose the coalition formation algorithm for operators in * algorithm 1 * to reach the stable coalitional structures in . * initial state * + in the starting network , the operators are partitioned by with non - cooperative operators . * coalition formation process * + * phase 1 * _ network discovery _ + devices from the same operators perform the _ dynamic virtual link formation _ algorithm specified in * algorithm 2*. based on the information feedback from the device layer , each operator calculates the corresponding utility in the non - cooperative case . * phase 2 * _ coalition formation _ the operators play their strategies sequentially in a random order . * repeat * 1 ) each operator sequentially engages in pairwise negotiations with another operator to identify a potential cooperator . during this process , the devices from the operator pair perform the _ dynamic virtual link formation _ algorithm . based on the information feedback , each operator calculates its potential utility . 2 ) based on the potential utility information and _ history _ , the pair of operators decides to form a new coalition if it ensures a utility improvement . 3 ) the operators that already have cooperation with any operator in update their utility . based on the updated utility and _ history _ , they perform a deviation with the operator(s ) in if it leads to utility improvement . * until * any further coalition formation does not result in utility improvement of at least one operator , i.e.
, convergence to a stable partition in the . * phase 3 * _ cooperative sharing _ the operators share their relay devices with cooperative operators for multi - path routing according to the final network graph . the convergence of the operator coalitional game is guaranteed due to the fact that the total number of possible partitions with overlapping coalitions is finite , the transition from one partition to another leads to an increase of individual utility , and the game contains a mechanism that prevents the operators from re - visiting a previously formed coalitional structure . as a result , each cooperation buildup and breakup will lead to a new partition . as the number of partitions that can be visited is finite , the game is guaranteed to reach a final partition , under which the utility of each operator can no longer be increased . furthermore , the last partition lies in the because , during the coalition formation process , only the partitions that bring an improvement of individual utility for each operator are formed , and in the final convergent partition there are no dominated coalitions from which the operators would be better off by deviating . in the cgg , the source devices need to play a _ transmission strategy _ , while the relay devices need to play not only a transmission strategy but also a _ relay strategy _ . the transmission strategy for a source device is to send a link establishment proposal to a relay which can help to improve its payoff defined in ( [ device ] ) . the relay strategy for a relay device is to accept or reject a link proposal from a transmitting device if this increases its payoff defined in ( [ rdevice ] ) . once the relay device accepts the link establishment proposal , it will need to play a transmission strategy like a source device . we consider that a device can have multiple incoming and outgoing flows simultaneously , constrained by ( 2 ) , ( 3 ) and ( 4 ) , and the maximum transmit power is limited to . we denote the strategy space which consists of all the strategies of device . when device plays strategy , while all other devices maintain their current strategies denoted by a vector , the resulting network graph is denoted by . all the devices are considered to be myopic in the sense that each device responds to improve its payoff given the current strategies of the other devices . that is , each device plays myopically without foresight of the future evolution of the network . based on this , we define the concept of _ best response _ for devices as follows : a strategy is a best response for a device , if , . based on the concept of best response , we introduce the myopic dynamics algorithm for the proposed cgg shown in * algorithm 2*. we define an iteration as a round of myopic plays during which each device chooses to play its current best response sequentially in a random order with the aim of maximizing its payoff given the current strategies of the others . the dynamic virtual link formation process may consist of one or more iterations , as the best strategy of each device may change over time . all the devices play their best strategies based on their flow demands . each source device has a certain self - generated flow demand , while the flow demand of a relay device is equal to its incoming throughput .
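the myopic dynamics just described reduce to a simple loop : in each iteration every device , in turn , switches to a best response if that strictly improves its payoff , and the process stops when no device can improve unilaterally . the c sketch below illustrates this loop on a toy congestion - style payoff ; the payoff function , the number of devices and the strategy sets are stand - ins invented for illustration , whereas the actual payoffs in the paper are the throughput expressions ( [ device ] ) and ( [ rdevice ] ) .

....
#include <stdio.h>

#define N_DEVICES    6
#define N_STRATEGIES 4

/* toy stand-in payoff : devices sharing the same "relay" split its benefit ,
   so fewer devices on a strategy means a larger payoff . this only gives the
   myopic loop below something concrete to converge on . */
static double payoff(int d, int s, const int strat[N_DEVICES])
{
    int load = 1;                                   /* device d itself */
    for (int k = 0; k < N_DEVICES; ++k)
        if (k != d && strat[k] == s) load++;
    return 1.0 / load;
}

int main(void)
{
    int strat[N_DEVICES] = {0};  /* initial state : everyone transmits directly */
    int changed = 1, iter = 0;

    while (changed) {            /* one iteration = a round of myopic plays */
        changed = 0;
        for (int d = 0; d < N_DEVICES; ++d) {
            int best = strat[d];
            double best_val = payoff(d, strat[d], strat);
            for (int s = 0; s < N_STRATEGIES; ++s) {
                double v = payoff(d, s, strat);
                if (v > best_val) { best_val = v; best = s; }
            }
            if (best != strat[d]) { strat[d] = best; changed = 1; }
        }
        iter++;
    }
    printf("no device can improve unilaterally after %d iterations\n", iter);
    return 0;
}
....

when the loop exits , no device can improve its payoff by a unilateral change in its strategy , which is exactly the stopping condition of * algorithm 2 * .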
to meet the flow demand ,the device on transmission may propose a link establishment proposal to multiple relay devices .the iteration stops when either all the flow demands of devices are satisfied or none of the devices can unilaterally change its strategy to improve its payoff . in other words ,when the algorithm converges , it results in a network in which none of the devices can unilaterally improve its throughput .this is referred to as a nash network , which gives the stability concept of the final network structure . specifically , the nash network is defined as follows .[ nash ] a network graph with denoting the set of all notes , i.e. devices , and denoting the set of all edges , i.e , links between pairs of devices , is the nash network in its strategy space , , if no device can improve its payoff by a unilateral change in its strategy . in the nash network , all the links are chosen based on the best responses of devices and are thus in the nash equilibrium . in a network with finite number of devices ,the final network structure resulting from the proposed cgg is a nash network . *initial state * + in the starting network , each source device transmits directly to its destination device .* network structure formation process * + * repeat * * phase 1 * _ dynamic virtual link replacement _ the devices play their strategies sequentially in a random order .* repeat * 1 ) during each iteration , every device on transmis- sion performs pairwise negotiation with other idle devices from cooperative operators and calculates its potential payoff improvement under cooperation .2 ) after negotiation , each device plays its best response , based on its flow rate demand . *until * none of the devices can further improve its payoff by a unilateral change in its strategy. * phase 2 * _ feedback _ each device sends the information about the link back to its operator for a coalition formation decision . * until * acceptance of the convergent network structure by all the operators which perform * algorithm 1 * , i.e. , * algorithm 1 * converges . *phase 3 * _ multi - hop routing _ all the source and relay devices perform multi - path routing according to the final network structure .for simulation , we consider an lte - a network with a tdd - ofdma scheme , locating within a area . the bandwidth available in this networkis .the maximum transmit power of each device is .we calculate the capacity of each link according to shannon capacity .the noise level is . for the wireless propagation , as in , we set the path loss exponent and antenna related constant .four operators are considered in this lte - a networks . for each operator , there is one internal flow session . for each flow session ,the source device and destination device are randomly selected .the number of devices from each operator is varied from to .the results presented in this section are averaged over times of run with random location of devices .we first examine the convergence speed of the cgg at the device layer . to this end , we set the coalition cost to be . in this case , the grand coalition is always one of the stable coalitional structures .hence , the simulation with this setting is equivalent to performing the cgg given the coalitional structure of grand coalition at the operator layer .each flow is assigned with a random rate requirement within kb / s$ ] . 
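for the simulation setup above , the per - link capacity follows from the shannon formula applied to the path - loss channel gain . the c sketch below shows that computation ; the numeric constants ( bandwidth , transmit power , noise power , antenna - related constant and path - loss exponent ) are placeholders , since the concrete values used in the paper are not recoverable from the text here .

....
#include <math.h>
#include <stdio.h>

/* assumed placeholder values , not the paper's actual parameters */
#define BANDWIDTH_HZ   5.0e6    /* system bandwidth               */
#define NOISE_W        1.0e-13  /* noise power                    */
#define TX_POWER_W     0.1      /* maximum transmit power         */
#define ANTENNA_CONST  1.0e-4   /* antenna-related constant       */
#define PATHLOSS_EXP   3.0      /* path-loss exponent             */

/* shannon capacity of a link :  c = b * log2(1 + p * g / n0) ,
   with the channel gain taken from a log-distance path-loss model . */
double link_capacity(double distance_m)
{
    double gain = ANTENNA_CONST * pow(distance_m, -PATHLOSS_EXP);
    double snr  = TX_POWER_W * gain / NOISE_W;
    return BANDWIDTH_HZ * log2(1.0 + snr);
}

int main(void)
{
    for (double d = 10.0; d <= 40.0; d += 10.0)
        printf("d = %4.0f m : c = %.2f mbit/s\n", d, link_capacity(d) / 1e6);
    return 0;
}
....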
in fig .[ average_iteration ] , we show the average and the maximum number of iterations required till convergence of the algorithm from the initial network structure ( i.e. , direct transmission ) , as a function of the number of devices. as expected , with the increase of the number of devices , more interactions among devices is required for the cgg to converge . observing from fig .[ average_iteration ] , we find that when the total number of devices varies from to , the average and maximum numbers of iterations vary from and to and , respectively .thus , on average , the convergence speed of the algorithm is satisfactory .we then examine the proposed lcg in a lte - a network with uniform distribution of devices .the revenue obtained in a time unit for successfully transmitting a unit throughput ( i.e. , ) is .the operation cost is considered to be the power consumption cost .for each device to transmit or forward flow in a time unit , the cost is _ per _ watt .we assume that when the operators make cooperation with another , a fixed cost is incurred to both of them .thus , the total coalition cost afforded by operator for being in coalition is .the total coalition cost for operator can be calculated by .we set . in fig .[ average_utility ] , we evaluate the efficiency of the proposed lcg by presenting the average aggregated utility of all operators achieved as a function of the number of total devices . for comparison , we also study an lcg variant ( labeled as lcg - variant " ) which substitutes the proposed cfg at the operator layer with a coalition formation game only enabling non - overlapping coalitions .we use the _ recursive core _ as the stability concept for the partition of operators and adopt a solution similar to the merge - only algorithm proposed in to find a stable partition .moreover , we use the result of non - cooperative case ( labeled as non - cooperative " ) , in which the operators work independently with each individual performing cgg among its own devices , as the lower bound performance. we can observe from the figure that cooperation brings significant performance gains over non - cooperative case .the proposed lcg employing overlapping cfg outperforms the lcg variant with non - overlapping cfg .this is because the overlapping cfg allows more freedom in coalitional structure formation for potential utility improvement .in addition , we can observe that the performance gap between lcg and the lcg variant increases with the number of devices in the network .the hierarchical cooperation in lte - a networks is a promising solution for high speed data transmission and wide - area coverage . in this paper , we have presented a game - theoretic framework to model the hierarchical cooperation problem .specifically , we have proposed a layered coalitional game ( lcg ) to model the cooperation behavior among the operators and devices in the different layers of lte - a networks .the concept of extended recursive core has been advocated as the solution of stable coalitional structures .we proposed an overlapping coalition formation game for operators to find a stable coalitional structure lies in the extended recursive core that benefits all the cooperative operators . while , a coalitional graphical game has been introduced for devices to form the stable network structures for multi - path routing .numerical results have shown that the proposed lcg yields notable gains relative to both the non - cooperative case and a lcg variant . 
the future work will characterize the performance gap between the proposed lcg and the optimal solutions obtained by centralized approaches .this work was supported in part by singapore moe tier 1 ( rg18/13 and rg33/12 ) , k. doppler , m. rinne , c. wijting , c. ribeiro , and k. hugl , device - to - device communications as an underlay to lte - advanced networks , " _ ieee communication magazine _ , vol .42 - 49 , dec . 2009 .y. xiao , g. bi , d. niyato and l. a. dasilva , a hierarchical game theoretic framework for cognitive radio networks , " _ ieee journal on selected areas in communications - cognitive radio series _ ,2053 - 2069 , november 2012 .x. kang , r. zhang , and m. motani , price - based resource allocation for spectrum - sharing femtocell networks : a stackelberg game approach , " _ ieee journal on selected areas in communications _ ,538 - 549 , 2012 .f. meshkati , h. v. poor , s. c. schwartz , and n. b. mandayam , an energy - efficient approach to power control and receiver design in wireless data networks " , _ ieee transaction on communication _ , vol .1885 - 1894 , november 2005 .s. guruacharya , d. niyato , m. bennis , and d. i. kim , dynamic coalition formation for network mimo in small cell networks , _ ieee transaction on wireless communication _ , vol .5360 - 5372 , oct . | device - to - device ( d2d ) communications , which allow direct communication among mobile devices , have been proposed as an enabler of local services in 3gpp lte - advanced ( lte - a ) cellular networks . this work investigates a hierarchical lte - a network framework consisting of multiple d2d operators at the upper layer and a group of devices at the lower layer . we propose a cooperative model that allows the operators to improve their utility in terms of revenue by sharing their devices , and the devices to improve their payoff in terms of end - to - end throughput by collaboratively performing multi - path routing . to help understanding the interaction among operators and devices , we present a game - theoretic framework to model the cooperation behavior , and further , we propose a layered coalitional game ( lcg ) to address the decision making problems among them . specifically , the cooperation of operators is modeled as an overlapping coalition formation game ( cfg ) in a partition form , in which operators should form a stable coalitional structure . moreover , the cooperation of devices is modeled as a coalitional graphical game ( cgg ) , in which devices establish links among each other to form a stable network structure for multi - path routing . we adopt the extended recursive core , and nash network , as the stability concept for the proposed cfg and cgg , respectively . numerical results demonstrate that the proposed lcg yields notable gains compared to both the non - cooperative case and a lcg variant and achieves good convergence speed . _ keywords- _ d2d communications , lte - advanced network , layered coalitional game , coalitional structure formation , multi - path routing , extended recursive core , coalitional graphical game , nash network . |
in the past decade quadcopters have been studied due to their relatively simple fabrication in comparison to other aerial vehicles , which turns them into ideal platforms for modeling , simulation and implementation of control algorithms . the fact that they are unmanned vehicles naturally invites developers to explore tasks that require a high degree of autonomy . + past works such as have set the base for developing quadcopter platforms , from their construction to the automation techniques necessary to control the highly non - linear dynamics that characterize these vehicles . + the scope of quadcopter technology has changed over the years . the cost and size have been reduced , and it is now a platform affordable for a broad public , from researchers to hobbyists . but beyond the economic revenue these vehicles generate , the manufacturers are searching for more autonomy , longer flight time , high data processing capabilities and adaptation to changing environments , hence the active research on quadcopters . + a fairly new type of quadcopter is the so - called `` nanoquad '' , of considerably low size and weight , which makes it an ideal platform for indoor usage . the project proposed here considers the study of a commercial nanoquadcopter platform called `` crazyflie 2.0 '' , developed by the bitcraze company . weighing only 27 grams and having 9.2 cm of length and width , this nanoquad has rapidly become one of the preferred platforms for quadcopter research . + for indoor control of quadcopters , different localization techniques can be employed ; for example , the vicon motion capture system is one of the preferred systems for precise localization and it has been used widely in recent quadcopter studies . a recent low - cost technology based on ultra - wide band radio modules has proven effective for indoor localization in robotic systems and especially in quadcopters . its low cost is inviting developers to create their own implementations and the system is getting more precise and robust . in a few words , the system measures the distance between two ultra - wide band modules , normally called anchor and tag , by measuring the time of flight of an electromagnetic wave . thus , by the simple relationship between time , distance and velocity ( in this case , the speed of light ) , the distance can be easily determined . by having at least three anchors constantly calculating the distance between them and a certain tag , a triangulation allows the position of the tag in space to be calculated , knowing beforehand the fixed position of each anchor with respect to a frame . + the uwb system can be implemented using a two - way ranging protocol or a one - way ranging protocol . in two - way ranging , the tag communicates with each anchor individually , following a sequence to go through all the anchors and calculate each distance . on the other hand , in one - way ranging the tag constantly broadcasts messages that are received by every anchor and , by precisely synchronising the clocks of the anchors , the distances between each of them and the tag are calculated . one - way ranging is particularly useful for multi - robot localization applications as there is no bottleneck in the number of tags the system can support .
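the ranging and triangulation steps just described amount to two small computations : converting a measured time of flight into a distance via the speed of light , and intersecting the range circles of the anchors . the c sketch below illustrates the idea in 2d for three anchors ; the anchor positions , the example time - of - flight values and the helper names are invented for illustration and are not the actual crazyflie / decawave implementation .

....
#include <math.h>
#include <stdio.h>

#define C_LIGHT 299792458.0            /* speed of light , m/s */

/* two-way ranging : distance from the measured time of flight . */
double tof_to_distance(double tof_s) { return C_LIGHT * tof_s; }

/* 2-d trilateration from three anchors at known positions (xi, yi) and the
   measured ranges ri , by solving the two linear equations obtained when the
   first circle equation is subtracted from the other two . */
int trilaterate(const double x[3], const double y[3], const double r[3],
                double *px, double *py)
{
    double a1 = 2.0*(x[1]-x[0]), b1 = 2.0*(y[1]-y[0]);
    double c1 = r[0]*r[0]-r[1]*r[1] - x[0]*x[0]+x[1]*x[1] - y[0]*y[0]+y[1]*y[1];
    double a2 = 2.0*(x[2]-x[0]), b2 = 2.0*(y[2]-y[0]);
    double c2 = r[0]*r[0]-r[2]*r[2] - x[0]*x[0]+x[2]*x[2] - y[0]*y[0]+y[2]*y[2];
    double det = a1*b2 - a2*b1;
    if (fabs(det) < 1e-12) return 0;   /* anchors are collinear : no fix */
    *px = (c1*b2 - c2*b1) / det;
    *py = (a1*c2 - a2*c1) / det;
    return 1;
}

int main(void)
{
    double ax[3] = {0.0, 4.0, 0.0}, ay[3] = {0.0, 0.0, 4.0};   /* anchors , m */
    double r[3]  = { tof_to_distance(4.7e-9),                   /* ~1.4 m */
                     tof_to_distance(10.5e-9),                  /* ~3.1 m */
                     tof_to_distance(10.5e-9) };
    double x, y;
    if (trilaterate(ax, ay, r, &x, &y))
        printf("tag at approximately (%.2f, %.2f) m\n", x, y);
    return 0;
}
....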
in particular , for this project the two - way ranging system developed in was used to test the control loop behavior using different localization techniques .this system was developed using the decawave dmw1000 ultra - wide band module which offers an accuracy of 10 - 20 centimeters in distance measurements .the main objectives of the research project were : 1 . develop the mathematical model that describes the dynamics of the crazyflie 2.0 quadcopter .2 . create a simulation environment for testing position and trajectory tracking control algorithms .implement , test and compare different control architectures .4 . evaluate the performance of a low - cost uwb - based localization system when integrated in the control loop .a set of small milestones were defined to help achieve the main objectives of the project : 1 .investigate past works to identify the physical and aerodynamical parameters of the crazyflie 2.0 . 2 .linearize the quadcopter s dynamics around hover state .3 . study and identify the control architecture inside the crazyflie s firmware .4 . design , simulate and implement an off - board position controller using data from the vicon positioning system .5 . conceive a second control system , from simulation to implementation , to track more demanding trajectories .compare the performance of both controllers with in - flight data .compare the performance of the lqt controller using both the vicon and the uwb systems .in this section a mathematical model of the crazyflie 2.0 is proposed .this study was the basis on which the simulation environment was built and an important component in the design of controllers .thus , it was important to dedicate enough time to understand how the system works and identify correctly some physical parameters that were relevant for the simulation to be useful in the real case scenario .before any dynamic study of the quadcopter begins , it is necessary to define the coordinate frames of the body of the quadcopter ( non - inertial frame ) as well as the inertial frame , also called `` world frame '' , which in the case of this project refers to the coordinate frame set by the external positioning system ( vicon / uwb ) . following the conventions set by the `` bitcraze '' company when designing their quadcopter , as seen in the body - fixed frameis defined . body - fixed frame and inertial frame . ] in the aeronautic systems , a popular axes convention is to define a positive altitude downwards , the y axis pointing towards the east and the x axis pointing towards the true north .these types of frames are called ned frames ( north , east , down ) .it was decided to follow the convention used in the crazyflie 2.0 firmware , meaning a positive altitude upwards , which defines an enu frame ( east , north , up ) .another remark is that the origin of the body - fixed frame matches with the center of gravity of the quadcopter .+ another important remark is knowing the flight configuration of the quadcopter as there are two of them : configuration `` + '' or configuration `` x '' .the difference between them is the orientation of the x - y frame in terms of the arms of the quadcopter , as shown in taken from the manufacturer s website and modified accordingly .`` + '' configuration at the left and `` x '' configuration at the right . 
in the modern conceptions of quadcopters the `` x '' configuration is preferred over the `` + '' configuration , mainly because in `` x '' it is easier to add camera functionality , as the quadcopter 's arms will not interfere with the captured images . by default the crazyflie 2.0 is in x mode , so for the rest of this project and during the mathematical modeling it will be considered that the quadcopter is in this configuration . the dynamic equations of the quadcopter proposed here take into account certain physical properties that are not necessarily perfectly valid in the real platform used in this work , but they are good approximations that greatly simplify the study and comprehension of this type of vehicle . here are the hypotheses : 1 . the quadcopter is a rigid body that can not be deformed , thus it is possible to use the well - known dynamic equations of a rigid body . 2 . the quadcopter is symmetrical in its geometry , mass and propulsion system . 3 . the mass is constant ( i.e. , its derivative is 0 ) . the classic mechanical laws of motion are valid in inertial systems , so to be able to translate these equations into the body frame it is necessary to define a rigid transformation matrix from the inertial frame to the body - fixed frame , in which only the rotational part is meaningful to the discussion and is given by three successive rotations : first a rotation of an angle around the axis , then a rotation of an angle around the intermediate axis and finally a rotation of an angle around the intermediate axis . once these three rotations are calculated , the resulting transformation matrix is defined in ( [ eq : euler ] ) , where , and represent the roll , pitch and yaw angles of the quadcopter 's body . the corresponding figure shows the direction of said angles in the crazyflie 2.0 body - fixed frame defined previously ( caption : euler angles in the quadcopter 's body ) . the notation convention used during the mathematical analysis of the quadcopter 's dynamics is exhibited in table [ tab : notation ] ( notation for vectors and states ) , where the state variables are defined . the experimental data confirms the robustness of the control system while using a positioning system with around 100 times greater standard deviation noise than the initial system . the tuning of the kalman filter to adapt to this new source of noise was vital to ensure a good compromise between filtering and precision in the estimations .
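the velocity estimation mentioned above can be pictured with a one - axis constant - velocity kalman filter that fuses noisy external position measurements : the predict step propagates position with the estimated velocity , and the update step corrects both with the position innovation , so the measurement variance is the knob that is retuned when switching from the vicon to the noisier uwb data . the c sketch below is only a generic illustration of that idea , with made - up noise values and names ; it is not the filter actually running in the crazyflie work .

....
#include <stdio.h>

/* one-axis constant-velocity kalman filter : state x = [position, velocity] ,
   measurement z = noisy position . */
typedef struct {
    double x[2];      /* state estimate [p, v]      */
    double P[2][2];   /* estimate covariance        */
    double q;         /* process noise intensity    */
    double r;         /* measurement noise variance */
} kf1d;

void kf_predict(kf1d *f, double dt)
{
    /* x <- F x  with  F = [[1, dt], [0, 1]] */
    f->x[0] += dt * f->x[1];
    /* P <- F P F' + Q , Q from a white-acceleration model */
    double p00 = f->P[0][0] + dt*(f->P[1][0] + f->P[0][1]) + dt*dt*f->P[1][1];
    double p01 = f->P[0][1] + dt*f->P[1][1];
    double p10 = f->P[1][0] + dt*f->P[1][1];
    double p11 = f->P[1][1];
    f->P[0][0] = p00 + f->q * dt*dt*dt*dt / 4.0;
    f->P[0][1] = p01 + f->q * dt*dt*dt / 2.0;
    f->P[1][0] = p10 + f->q * dt*dt*dt / 2.0;
    f->P[1][1] = p11 + f->q * dt*dt;
}

void kf_update(kf1d *f, double z)
{
    /* H = [1 0] : only the position is measured */
    double s  = f->P[0][0] + f->r;
    double k0 = f->P[0][0] / s, k1 = f->P[1][0] / s;
    double innov = z - f->x[0];
    f->x[0] += k0 * innov;
    f->x[1] += k1 * innov;
    double p00 = (1.0 - k0) * f->P[0][0];
    double p01 = (1.0 - k0) * f->P[0][1];
    double p10 = f->P[1][0] - k1 * f->P[0][0];
    double p11 = f->P[1][1] - k1 * f->P[0][1];
    f->P[0][0] = p00; f->P[0][1] = p01; f->P[1][0] = p10; f->P[1][1] = p11;
}

int main(void)
{
    kf1d f = { {0.0, 0.0}, {{1.0, 0.0}, {0.0, 1.0}}, 5.0, 0.02 * 0.02 };
    double meas[] = {0.00, 0.05, 0.11, 0.14, 0.21, 0.24};  /* fake noisy positions */
    for (int k = 0; k < 6; ++k) {
        kf_predict(&f, 0.1);      /* 10 hz position updates assumed */
        kf_update(&f, meas[k]);
    }
    printf("p = %.3f m, v = %.3f m/s\n", f.x[0], f.x[1]);
    return 0;
}
....

raising the measurement variance r ( and possibly the process noise q ) is the kind of retuning referred to above when the position source changes from the vicon to the uwb system .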
even though there is a clear advantage in using more precise technology , such as the vicon , the system could still track the desired trajectories with an acceptable degree of precision while using a cheaper system such as the uwb . + the fact that the position integral gains for the x and y positions had to be lowered while using the uwb system was a necessary compromise to ensure fewer oscillations while following the desired trajectory . such a compromise did not arise while using the vicon system , with which the quadcopter remained stable and smoothly followed the trajectories for a wide range of gain values . + if the control system could only use the uwb system , then a more detailed study of how to compensate the different sources of noise and biases should be made . for instance , the uwb system loses precision when the tag is close to one of the anchors , or if the tag is facing away from one of the anchors . all these subtleties , if taken into account while designing the control system , could lead to a better performance than the one obtained and presented in this work . + finally , the video found in shows a summary of the project 's simulation and experimental results . the study set out to explore the dynamics of an open source nanoquadcopter named crazyflie 2.0 , as well as creating a simulation environment for control design and then testing it on the real platform . this type of unmanned aerial vehicle is becoming the preferred platform for testing control algorithms of diverse natures , thus the inherent importance of conceiving a mathematical model of the vehicle that can predict , to some extent , how the system will evolve over time . hence , the project started with a modeling of the nanoquadcopter and an identification of certain physical parameters , based on previous work . working in parallel with the literature and the quadcopter 's embedded firmware was the main key in describing the system behavior just as it is on the real platform , an important milestone for future work as the dynamics of a system is the heart of every simulation environment . + the second phase of the project was building the simulation that served as the first test - bench of the proposed control architectures . using both the non - linear dynamics and the linearised state space realisation of the system , the simulation created is a solid testing environment to conceive all types of control systems . it was incredibly useful during the first stages of the project to get a better understanding of how the system worked . in addition , the simulation was used for designing both the pid position controller and the lqt trajectory tracker . + an important conclusion is that the initial belief that all dynamics were decoupled , as suggested during the linear modeling , was not entirely true in the non - linear system . as observed in the simulations , there exists some interference between movements that , for instance , does not allow the quadcopter to describe a perfectly straight line trajectory when there is more than one movement involved ( a yaw rotation for example ) .
+ the position pid tracker was tested for time - varying trajectories , such as circles and helices .even though the system could described these trajectories , there were some drawbacks and performance issues , for example not getting fast enough to the desired points which lead to errors in the desired trajectory .the fact that the task at hand was managed by a position tracker and not a trajectory tracker was the main reason of these discrepancies . to address the deficiencies of the pid controller ,a new control system was conceived using the lqt algorithm , which proved to have interesting characteristics while following step responses , mainly that it started moving before the command was asked in order to reduce the tracking error .the feature was possible thanks to the off - line calculation of the algorithm and the knowledge of the trajectory beforehand .+ the comparisons between the pid and the lqt controller indicate a clear superiority of the lqt in terms of reducing the trajectory tracking error , specially in the more demanding trajectories , in which the lqt algorithm reduced up to 4 times the rms errors obtained with the pid controller .directly related to the better tracking , the lqt incurred in higher levels of control effort than the pid , but it also eliminated the great command peaks seen in the motor time plots of the pid , thus getting rid of the undesired motor saturations that could lead to unstable states .+ there are two main drawbacks of the lqt algorithm with respect to the pid : the first one is the inability to specify trajectories for the heading ( yaw angle ) and the second one is the need to know the trajectory before its execution .taking in account these shortcomings , it is proposed as future work for this research to incorporate a method to control the yaw angle while keeping the good performance in the lqt algorithm , the author proposes a gain - scheduling method being the yaw angle the scheduling variable as a possible solution for this problem . as for the second drawback of the lqt algorithm , more researchshould be directed towards an on - line implementation thus making the controller useful in more complex tasks such as planning and execution missions in real - time .+ the gui created for trajectory generation proved to be a valuable asset to quickly test different types of trajectories , with varying difficulty .but the tool can be improved by adding physical constraints to the trajectory generation , as to assure the trajectory is feasible for the quadcopter to follow .future work in this area should explore feasible trajectory generation as proposed in works such as .+ the simulation versus experimental comparative time plots show that the simulation environment developed in this project was accurate to some extent , serving its purpose as a useful design tool for the controllers synthesized , but it had its limitations mainly due to unmodeled phenomena , which lead to the need of introducing high integral gains in the controllers to compensate the model errors and other perturbations of the system . as future work, it is suggested a more thorough model identification for the quadcopter , for example using numerical methods such as the closed - loop `` black box '' identification proposed in . 
+ the kalman filter approach for estimating the linear velocities from the position data proved to be successful using both the vicon and the uwb , specially with the latter in which the data had 100 times greater standard deviation noise .the vicon versus uwb experiments suggest that in both cases the lqt tracked the desired position , but with obvious different levels of smoothness and precision .even though both performances were satisfactory in terms of the scope of this work , future research into improving the control system while using the uwb position system would be ideal .starting from an identification of different sources of added noise and biases of the uwb system , upto different filtering techniques that are more appropriate than the classic kalman filter proposed in this project are the author s recommendations to improve the control system performance .+ this work represents a solid base for future research using this platform , with enough explanation in the calculus for newcomers in the area to understand the basic functioning of the system .the simulation environment was developed in a fashion that corresponds exactly with the equations shown in the mathematical model , which helps in the quick understanding of how everything works and saves time in comprehending an otherwise complex system , plus it is easily customizable for future users to develop their own controllers .the project successfully fulfilled its ultimate goal of characterizing the provided quadcopter platform and doing all the steps needed to develop an efficient control system for trajectory tracking .10 hanna , w. ( 2014 ) . _ modelling and control of an unmanned aerial vehicle _( b.eng thesis , charles darwin university ) .subramanian , g. p. ( 2015 ) ._ nonlinear control strategies for quadrotors and cubesats _thesis , university of illinois at urbana - champaign ) .greitzer , e. m. , spakovszky , z. s. , & waitz , i. a. ( 2006 ) ._ thermodynamics and propulsion . _ mechanical engineering , mit .corke , p. ( 2011 ) ._ robotics , vision and control : fundamental algorithms in matlab ( vol ._ springer .hartman , d. , landis , k. , mehrer , m. , moreno , s. , & kim , j.(2014 ) _ quadcopter dynamic modeling and simulation ( quadsim ) v1.00 _ ( senior design project , drexel university ) hoenig , w. , milanes , c. , scaria , l. , phan , t. , bolas , m. , & ayanian , n. ( 2015 ) ._ mixed reality for robotics_. in intelligent robots and systems ( iros ) , 2015 ieee / rsj international conference on ( pp .5382 - 5387 ) .ieee elruby , a. y. , el - khatib , m. m. , el - amary , n. h. , & hashad , a. i. ( 2012 ) ._ dynamic modeling and control of quadrotor vehicle_. in fifteenth international conference on applied mechanics and mechanical engineering , amme ( vol .karwoski , k. ( 2011 ) ._ quadrocopter control design and flight operation_. ( internship final report , nasa usrp ) sonnevend , i. ( 2010 ) . _analysis and model based control of a quadrotor helicopter . _( bsc diploma work , pter pzmny catholic university , faculty of information technology , budapest , hungary ( supervisor : g. szederknyi ) ) habib , m. k. , abdelaal , w. g. a. , & saad , m. s. ( 2014 ) ._ dynamic modeling and control of a quadrotor using linear and nonlinear approaches_. ( m.s .thesis , the american university in cairo ) .landry , b. ( 2015 ) . _ planning and control for quadrotor flight through cluttered environments _( master s degree thesis , massachusetts institute of technology ) .dunkley , o. , engel , j. , sturm , j. 
, & cremers , d. ( 2014 ) ._ visual - inertial navigation for a camera - equipped 25 g nano - quadrotor_. in iros2014 aerial open source robotics workshop .xu , d. , wang , l. , li , g. , & guo , l. ( 2012 , august ) ._ modeling and trajectory tracking control of a quad - rotor uav . _ in proceedings of the 2012 international conference on computer application and system modeling .atlantis press .meyer , j. , sendobry , a. , kohlbrecher , s. , klingauf , u. , & von stryk , o. ( 2012 ) ._ comprehensive simulation of quadrotor uavs using ros and gazebo ._ in simulation , modeling , and programming for autonomous robots ( pp .400 - 411 ) .springer berlin heidelberg .suimez , e. c. ( 2014 ) .trajectory tracking of a quadrotor unmanned aerial vehicle ( uav ) via attitude and position control ( master s degree thesis , middle east technical university ) .oh , s. m. ( 2012 ) ._ modeling and control of a quad - rotor helicopter_. ( m.s .thesis , university of florida ) pounds , p. e. i. ( 2007 ) ._ design , construction and control of a large quadrotor micro air vehicle . _( doctoral dissertation , australian national university . ) tamami , n. , pitowarno , e. , & astawa , i. g. p. ( 2014 ) . _ proportional derivative active force control for x configuration quadcopter . _journal of mechatronics , electrical power , and vehicular technology , 5(2 ) , 67 - 74 .roo , m. ( 2015 ) ._ optimal event handling by multiple uavs . _( m.s . report , university of twente ) lehnert , c. , & corke , p. ( 2013 ) ._ av - design and implementation of an open source micro quadrotor_. ac on robotics and automation , eds .sabatino , f.(2015 ) ._ quadrotor control : modeling , nonlinear control design , and simulation . _( master s degree project , kth royal institute of technology ) .kader , s. a. , el - henawy , a. e. , & oda , a. n. ( 2014 ) ._ quadcopter system modeling and autopilot synthesis . _ in international journal of engineering research and technology ( vol .3 , no . 11 ( november-2014 ) ) .esrsa publications .naidu , d. s. ( 2002)._optimal control systems . _ crc press .mathworks(2015)._state estimation using time - varying kalman filter ._ retrieved may 16 , 2016 , from http://www.mathworks.com/help/control/getstart/estimating-states-of-time-varying-systems-using-kalman-filters.html hoffmann , g. m. , waslander , s. l. , & tomlin , c. j. ( 2008)._quadrotor helicopter trajectory tracking control ._ in aiaa guidance , navigation and control conference and exhibit ( pp .1 - 14 ) .mueller , m. w. , & dandrea , r. ( 2013)._a model predictive controller for quadrocopter state interception ._ in european control conference ( pp. 1383 - 1389 ) .mu , s. , zeng , y. , & wu , p. ( 2008)._multivariable control of anaerobic reactor by using external recirculation and bypass ratio ._ journal of chemical technology and biotechnology , 83(6 ) , 892 - 903 .huang , h. , hoffmann , g. m. , waslander , s. l. , & tomlin , c. j. ( 2009)._aerodynamics and control of autonomous quadrotor helicopters in aggressive maneuvering ._ in robotics and automation , 2009 .ieee international conference on ( pp .3277 - 3282 ) .sujit , p. b. , saripalli , s. , & sousa , j. b. ( 2014)._unmanned aerial vehicle path following : a survey and analysis of algorithms for fixed - wing unmanned aerial vehicles_. ieee control systems , 34(1 ) , 42 - 59 .bouabdallah , s. , noth , a. , & siegwart , r. ( 2004)._pid vs lq control techniques applied to an indoor micro quadrotor_. in intelligent robots and systems , 2004.(iros 2004 ) . 
proceedings .2004 ieee / rsj international conference on ( vol .2451 - 2456 ) .peraire , j. , & widnall , s. ( 2009 ) _ lecture l28 - 3d rigid body dynamics_. mit opencourseware ,dynamics fall 2009 .available online : http://ocw.mit.edu .bitcraze(2015 ) ._ crazyflie 2.0 assembly instructions_. retrieved august 3 , 2016 , from https://wiki.bitcraze.io/projects:crazyflie2:userguide:assembly bitcraze(2015 ) ._ crazyflie 2.0 main page_. retrieved august 5 , 2016 . from https://www.bitcraze.io/crazyflie-2/ vicon motion systems ( 2015 ) ._ vicon motion capture system main page_. retrieved august 5 , from https://www.vicon.com/ mueller , m. w. , hamer , m. , & dandrea , r. ( 2015 ) ._ fusing ultra - wideband range measurements with accelerometers and rate gyroscopes for quadrocopter state estimation_. in 2015 ieee international conference on robotics and automation ( icra ) ( pp .1730 - 1736 ) .rafrafi , w. , & le ny , j. ( 2016 ) ._ intgration dun systme radio bande ultra - large pour la navigation de robots mobiles_. ( master s degree thesis , cole polytechnique de montral ) .decawave(2015 ) ._ scensor dwm1000 module product page_. retrieved august 11 , from http://www.decawave.com/products/dwm1000-module luis , c. ( 2016 ) ._ trajectory tracking of a crazyflie 2.0 nanoquadcopter _ [ video file ] . retrieved august 13 , from https://youtu.be/c-sxovcyhjqthe firmware used during this project was `` release 2016.02 '' found in https://github.com/bitcraze/crazyflie-release/releases , with the following changes : .... # ifdef quad_formation_x int16_t r = control->roll / 2.0f ; int16_t p = control->pitch / 2.0f ; motorpower.m1 = limitthrust(control->thrust - r - p - control->yaw ) ; motorpower.m2 = limitthrust(control->thrust - r + p + control->yaw ) ; motorpower.m3 = limitthrust(control->thrust + r + p - control->yaw ) ; motorpower.m4 = limitthrust(control->thrust + r - p + control->yaw ) ; .... .... attitudecontrollercorrectattitudepid(state->attitude.roll , -state->attitude.pitch , state->attitude.yaw , setpoint->attitude.roll , setpoint->attitude.pitch , attitudedesired.yaw , & ratedesired.roll , & ratedesired.pitch , & ratedesired.yaw ) ; //bypass attitude controller if rate mode active if ( setpoint->mode.roll = = modevelocity ) { ratedesired.roll = setpoint->attituderate.roll ; } if ( setpoint->mode.pitch = = modevelocity ) { ratedesired.pitch = setpoint->attituderate.pitch ; } if ( setpoint->mode.yaw = = modevelocity ) { ratedesired.yaw = setpoint->attituderate.yaw ; } | the primary purpose of this study is to investigate the system modeling of a nanoquadcopter as well as designing position and trajectory control algorithms , with the ultimate goal of testing the system both in simulation and on a real platform . the open source nanoquadcopter platform named crazyflie 2.0 was chosen for the project . the first phase consisted in the development of a mathematical model that describes the dynamics of the quadcopter . secondly , a simulation environment was created to design two different control architectures : cascaded pid position tracker and lqt trajectory tracker . finally , the implementation phase consisted in testing the controllers on the chosen platform and comparing their performance in trajectory tracking . our simulations agreed with the experimental results , and further refinement of the model is proposed as future work through closed - loop model identification techniques . 
the results show that the lqt controller performed better at tracking trajectories , with rms errors in position up to four times smaller than those obtained with the pid . lqt control effort was greater , but eliminated the high control peaks that induced motor saturation in the pid controller . the lqt controller was also tested using an ultra - wide band two - way ranging system , and comparisons with the more precise vicon system indicate that the controller could track a trajectory in both cases despite the difference in noise levels between the two systems . école polytechnique de montréal , electrical engineering department , automation section . luis , c. , & le ny , j. ( august , 2016 ) . _ design of a trajectory tracking controller for a nanoquadcopter _ . technical report , mobile robotics and autonomous systems laboratory , polytechnique montréal . author : carlos luis . supervisor : jérôme le ny .
multicarrier communications have recently attracted much attention in wireless and mobile applications .the orthogonal frequency division multiplexing ( ofdm ) has been employed as a multiplexing and a multiple access technique in a variety of wireless communication standards such as ieee802.11 wireless lan , ieee802.16 mobile wimax , and 3gpp - lte .also , the multicarrier code - division multiple access ( mc - cdma ) , a combined scheme of ofdm and cdma , has been proposed to enjoy the benefits of ofdm and cdma by allocating the spread data symbols to subcarriers .the popularity of multicarrier communications is mainly due to the robustness to multipath fading channels and the efficient hardware implementation employing fast fourier transform ( fft ) techniques .however , multicarrier communications have the major drawback of the high peak - to - average power ratio ( papr ) of transmitted signals , which may nullify all the potential benefits .a number of techniques have been developed for papr reduction of ofdm signals .in particular , a constructive and theoretical approach is to employ a coding scheme that provides low papr and good error correction capability for transmitted ofdm signals .the golay complementary sequences , which belong to a coset of the first - order reed - muller code , are a good example of the coding scheme .paterson also discussed several coding schemes for papr reduction of _ multicode _ cdma . in , he summarized the algebraic coding approaches for peak power control in ofdm and multicode cdma . for a summary of the other papr reduction techniques for ofdm ,see . to reduce the papr of multicarrier cdma ( mc - cdma ) signals , on the other hand , numerous studies have been focused on the power characteristics of spreading sequences .ochiai and imai presented statistical results of the papr in downlink mc - cdma , where multiple users are supported by walsh - hadamard or golay complementary spreading sequences . considering a single user mc - cdma , popovi presented the basic criteria for the selection of spreading sequences by studying the crest factors ( cf ) of various binary and polyphase sequences .similar studies can be found in with multiple access interference ( mai ) minimization . in mc - cdma supporting multiple users or code channels , the crest factors of various spreading sequences have been compared in and , where the walsh - hadamard spreading sequences showed the best papr properties , provided that a large number of spreading sequences are combined for the transmitted mc - cdma signals .more studies can be found in on the papr of various spreading sequences in mc - cdma .if mc - cdma assigns multiple spreading sequences to a single user , the multicode mc - cdma can be equivalently treated as the _ spread ofdm _ ( s - ofdm ) , where a data symbol of the user is spread across a set of subcarriers to enjoy frequency diversity .if the number of used spreading sequences is large , the walsh - hadamard spread ofdm can be viewed as a papr reducing scheme , compared to a conventional ofdm .also , an error correction code may be applied prior to walsh - hadamard spreading for improving the error rate performance or controlling the peak power of s - ofdm . to the best of our knowledge ,most of the efforts on papr reduction of mc - cdma and s - ofdm have been verified mainly by statistical experiments , not by thorough theoretical analysis . 
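before turning to the system model , it helps to recall what the papr actually measures : the ratio of the peak instantaneous power of the multicarrier signal to its average power , usually evaluated on an oversampled time grid . the c sketch below computes this quantity for a generic bpsk - loaded multicarrier symbol with a naive inverse dft ( an ifft would be used in practice ) ; it is a generic illustration of the papr metric , not the coded mc - cdma signal of this paper , and the subcarrier count and oversampling factor are arbitrary choices .

....
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define N_SUB 64          /* number of subcarriers (arbitrary) */
#define OVS   4           /* oversampling factor (arbitrary)   */

/* papr ( in db ) of one multicarrier symbol carrying the bpsk values a[k] ,
   evaluated with a naive oversampled inverse dft . */
double papr_db(const double a[N_SUB])
{
    int nt = OVS * N_SUB;
    double peak = 0.0, avg = 0.0;
    for (int t = 0; t < nt; ++t) {
        double re = 0.0, im = 0.0;
        for (int k = 0; k < N_SUB; ++k) {
            double phi = 2.0 * M_PI * k * t / (double)nt;
            re += a[k] * cos(phi);
            im += a[k] * sin(phi);
        }
        double p = re * re + im * im;
        if (p > peak) peak = p;
        avg += p;
    }
    avg /= nt;
    return 10.0 * log10(peak / avg);
}

int main(void)
{
    double a[N_SUB];
    for (int k = 0; k < N_SUB; ++k) a[k] = 1.0;               /* worst-case word */
    printf("all-ones word : papr = %.2f db\n", papr_db(a));

    srand(1);                                                  /* a random bpsk word */
    for (int k = 0; k < N_SUB; ++k) a[k] = (rand() & 1) ? 1.0 : -1.0;
    printf("random word   : papr = %.2f db\n", papr_db(a));
    return 0;
}
....

the all - ones word reaches the worst - case papr of a factor equal to the number of subcarriers ( about 18 db for 64 subcarriers ) , which is exactly the kind of peak that coding - based peak power control aims to avoid .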
through the experiments ,the papr of the multicarrier signals has been statistically observed , but it has never been addressed whether it is theoretically bounded . in this paper , we propose a binary reed - muller coded mc - cdma system and study its papr properties . in the coded mc - cdma , the information data multiplexed from users is encoded by a reed - muller subcode and the codeword is then fully - loaded to walsh - hadamard spreading sequences .the coding scheme plays a role of reducing the papr of transmitted mc - cdma signals as well as providing the error correction capability .we first establish the polynomial representation of a coded mc - cdma signal for theoretical analysis of the papr . a recursive construction of boolean functions is then presented for the reed - muller subcodes , where the papr of the mc - cdma signal encoded by the subcode is proven to be theoretically bounded .the author of pointed out that the construction is equivalent to type - iii sequences in where he made a general and mathematical study for boolean functions with bounded papr , not considering the application to mc - cdma .we also discuss a connection between the code rate of the subcode and the maximum papr .simulation results show that the papr of the coded mc - cdma signal is not only theoretically bounded , but also statistically reduced .in particular , the coded mc - cdma solves the major papr problem of uncoded mc - cdma by dramatically reducing its papr for the small number of users . in conclusion, the reed - muller codes can be effectively utilized for peak power control in mc - cdma with small and moderate numbers of users , subcarriers , and spreading factors .the papr properties of the coded mc - cdma equivalently address those of the reed - muller coded s - ofdm which supports multiple data from a single user .throughout this paper , mc - cdma abbreviates _ multicarrier _ cdma not multicode cdma .this paper discusses a coded mc - cdma system employing binary codewords , binary spreading sequences , and bpsk modulation .hence , we focus our description on binary cases .the following notations will be used throughout this paper .* is a spreading factor or spreading sequence length . * and are the actual and the maximum numbers of users supported by a coded mc - cdma system , respectively , where . in the rest of this paper, we will use the context of _ users _ , where the users can be treated as data bits of a single user in s - ofdm or multicode mc - cdma .* is the codeword length of a code , where .* is the number of information bits that each user transmits in an ofdm symbol .* denotes the -bit information of the user , , while denotes the -bit uncoded data multiplexed from users and zero - tailed at the spreading process , .note .* denotes the coded output of by a code at the spreading process .note for .* denotes the bpsk modulation output of that experiences the spreading process .hence , . * denotes the -chip spreading sequence assigned for the coded bit of , while is a set of the spreading chips across all spreading sequences , where . 
is a orthogonal spreading matrix with , where is the row vector and is the column vector .* denotes the output data of length of the spreading process .figure [ fig : coded_sys ] illustrates a coded mc - cdma transmitter proposed in this paper .assume that users access to the coded mc - cdma system , where the user , , is actively transmitting the -bit information over an ofdm symbol .the information bit of is multiplexed across all s and then if , zeros are attached to form the -bit uncoded data , .then , is encoded by a code to generate the -bit codeword and its bpsk modulation . the coded bit of then spread by the -bit spreading sequence , , where a pair of the spreading sequences is mutually orthogonal . at the spreading process, the spread bits of length are linearly combined over spreading sequences to produce , where each element of can take an arbitrary value .obviously , the spreading process is equivalent to a _ transform _ of by the orthogonal spreading matrix , i.e. , , where is the row vector of .the blocks of the spread data of length experience an block interleaver for frequency diversity , and total bits are allocated to subcarriers by inverse fft ( ifft ) .the mc - cdma receiver accomplishes the reverse operation to recover the original information for the user , where the despreading process is equivalent to a transform by , the transpose of . from figure[ fig : coded_sys ] , the baseband transmission signal over an ofdm symbol duration is given by where .note that if the zero - tail processes and the encoders are removed from figure [ fig : coded_sys ] , then with is equivalent to a conventional uncoded mc - cdma signal in .the normalization factor is used in ( [ eq : s_t ] ) to ensure that the average power of is equal to that of the uncoded mc - cdma signal for users , which will be shown in section ii - d .let be a binary vector where , .a _ boolean function _ is defined by where and is obtained by a binary representation of , .note that the addition in a boolean function is computed modulo- . in ( [ eq : gbool ] ), the order of the monomial with nonzero is given by , and the highest order of the monomials with nonzero s is called the _ ( algebraic ) degree _ of the boolean function .associated with a boolean function , a binary codeword of length is defined by where . in other words ,the _ associated _ codeword of length is obtained by the boolean function while runs through to in the increasing order .the _ - order reed - muller code _ is defined by a set of binary codewords of length where each codeword is generated by a boolean function of degree at most . in other words ,each codeword in is the associated codeword of length in ( [ eq : assoc_gen ] ) where the boolean function has the degree of at most .the - order reed - muller code has the dimension of and the minimum hamming distance of . for more details on boolean functions and reed - muller codes ,see .the _ walsh - hadamard matrix _ is recursively constructed by ] denotes the ensemble average . using the orthogonality of spreading sequences ,the approach made in implies that the average power of the mc - cdma signal in ( [ eq : s_t ] ) is determined by = \frac{w}{k } \cdot \sum_{n=0 } ^{n-1 } \sum_{l=0 } ^{l-1 } \sum_{k=0 } ^{k-1 }|d_n ^{(k)}|^2 |c_l ^{(k)}|^2 .\ ] ] in particular , if and has the unit energy , i.e. 
, , then ( [ eq : avg_pwr ] ) becomes = \frac{w}{k } \cdot n \cdot k = nw\ ] ] which is equal to the average power of an uncoded mc - cdma signal where each of users transmits the -bit information over an ofdm symbol duration . in the following ,we define a polynomial _ associated with _ , similar to .[ def : assoc_poly ] in general , the coded mc - cdma signal in ( [ eq : s_t ] ) has a form of , where is the number of subcarriers over an ofdm symbol and takes an arbitrary value . with , the _associated _ polynomial is defined by from ( [ eq : avg_pwr2 ] ) and ( [ eq : s_z_org ] ) , the papr of is translated into establish the polynomial representation of a coded mc - cdma signal by presenting the associated polynomial introduced in definition [ def : assoc_poly ] . for simplicity , we first study the polynomial representation for , where each user transmits a single information bit with a single spreading process in an ofdm symbol . the general representation with is then discussed .with , a coded mc - cdma signal is denoted by then , the polynomial representation of is established by the following theorem .[ th : matrix_s ] the polynomial associated with in ( [ eq : s_t_0 ] ) is given by where .in particular , if and is a walsh - hadamard matrix , i.e. , , then where is the walsh - hadamard transform of . _ proof . _ in ( [ eq : s_t_0 ] ) , let where . then , with , the associated polynomial is then given by , which derives ( [ eq : s_z ] ) .if , then ( [ eq : s_z_had ] ) is immediate . [ co : s_z_sqr ] with , if a coded mc - cdma signal has the papr of at most , then from ( [ eq : s_z_papr ] ) . also , we have from ( [ eq : s_z ] ) where with . in ( [ eq : u_0 ] ) , if , the spread output is the walsh - hadamard transform of with a scaling factor in a walsh - hadamard spread mc - cdma , where .a similar property has been noticed in a spreading process of multicode cdma .in addition , has the following structure .[ lemma : cz ] let be a walsh - hadamard matrix and .then , where , where and .note that if , or if . _ proof . _ if , then thus , ( [ eq : cz ] ) is true for .assume ( [ eq : cz ] ) also holds for , i.e. , where .from the recursive construction of , we have where .thus , from ( [ eq : m_k-1 ] ) and ( [ eq : m_k ] ) , from ( [ eq : m1 ] ) and ( [ eq : m_k_true ] ) , ( [ eq : cz ] ) is true by induction . in definition 6 of , parker and tellambura defined , a set of normalized complex sequences of length by a tensor product , where .in fact , in lemma [ lemma : cz ] is a special case of with and , . in ( [ eq : s_t_0 ] ) , replacing by leads us to and its associated polynomial , i.e. , where . obviously , is also a coded mc - cdma signal where is loaded with a single spreading process over an ofdm symbol . in particular , if , then the spreading process is equivalent to the walsh - hadamard transform ( wht ) , which enables the efficient implementation of spreading and despreading processes .[ th : s_z_n ] with in ( [ eq : s_n_2 ] ) , the associated polynomial of a coded mc - cdma signal in ( [ eq : s_t ] ) is determined by in other words , is a polynomial obtained by _ interleaving _ s for . _ proof . _ in ( [ eq : s_t ] ) , for a given , is a signal assigned to the subcarriers while runs through to . from ( [ eq : s_n ] ) , it is straightforward that the associated polynomial of is given by where . 
compared to ( [ eq : s_n_2 ] ) , , and thus the associated polynomial is [ rm : cor1 ] since is a coded mc - cdma signal with a single spreading process over an ofdm symbol , corollary [ co : s_z_sqr ] is also valid for . precisely , if has the papr of at most , then and for , where with . using its associated polynomial in theorem [ th : s_z_n ] , we determine the papr bound of a coded mc - cdma signal with .[ th : papr_s_n ] in ( [ eq : s_n_2 ] ) , assume the maximum papr of is , i.e. , .then , the coded mc - cdma signal in ( [ eq : s_t ] ) has the papr of at most , i.e. , _ proof ._ from the associated polynomial in ( [ eq : s_z_n ] ) , where . from remark [ rm : cor1 ] , implies for every .thus , .therefore , the papr of is bounded by although the proof is straightforward and the bound seems not so tight , theorem [ th : papr_s_n ] gives us an insight that the maximum papr of coded mc - cdma signals increases as each user transmits more data bits ( ) in an ofdm symbol .therefore , should be as small as possible to remove the probability that the mc - cdma signal has the high papr .in this section , we develop a variety of subcodes of for a coding scheme of a coded mc - cdma in figure [ fig : coded_sys ] , where the codeword of length is associated with a boolean function of degree .we assume that the -bit codeword is _ fully - loaded _ to all the available walsh - hadamard spreading sequences of length , so .we analyze the papr properties of the fully - loaded , reed - muller coded , and walsh - hadamard spread mc - cdma signals .first of all , we study the papr for . then , the papr for is investigated . in the mc - cdma system with , where is a codeword of a reed - muller subcode . in this section ,we denote for simplicity .let be a codeword of the first - order reed - muller code .when it is employed as a coding scheme in a coded mc - cdma , the dimension is and the codeword length is . each codeword is associated with a boolean function of where the addition is computed modulo- . the papr of the mc - cdma signal encoded by a codeword in is determined in the following .[ th : papr_1 ] with and , let be a walsh - hadamard spread mc - cdma signal encoded by . then, the papr of is _ proof . _ from theorem[ th : matrix_s ] , the associated polynomial of is given by where is the walsh - hadamard transform of , and . by definition , where in ( [ eq : bool_1 ] ) , . from ( [ eq : exp_sum ] ) , where for given s .therefore , . for any ,the papr of is therefore from theorem [ th : papr_1 ] , we see that the first - order reed - muller code is a simple and effective coding scheme that provides the uniform power for the coded mc - cdma signals . however , it has a relatively low code rate , which vanishes as the code length increases . therefore , we need to develop high - rate coding schemes at the expense of the papr increases . 
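the constant-envelope property behind theorem [ th : papr_1 ] is easy to check numerically. the sketch below is only an illustration in python / numpy; the bit-ordering convention used to index the boolean variables and the particular affine function are our own assumptions, not fixed by the text. it builds the bpsk-modulated codeword of a first-order reed-muller boolean function and verifies that its walsh-hadamard transform has exactly one nonzero entry, so the fully loaded spread signal collapses to a single subcarrier tone with papr equal to 1 (0 db).

```python
import numpy as np

def hadamard(m):
    # recursive construction of the 2^m x 2^m walsh-hadamard matrix
    H = np.array([[1]])
    for _ in range(m):
        H = np.block([[H, H], [H, -H]])
    return H

def rm1_codeword(const, lin):
    # codeword of the first-order reed-muller code associated with the affine
    # boolean function f(x_1,...,x_m) = const + sum_i lin[i]*x_i (mod 2);
    # bit i of the running index l plays the role of x_{i+1} (assumed ordering)
    m = len(lin)
    return np.array([(const + sum(lin[i] * ((l >> i) & 1) for i in range(m))) % 2
                     for l in range(2 ** m)])

m = 4
c = rm1_codeword(1, [1, 0, 1, 0])    # f(x) = 1 + x_1 + x_3
b = (-1.0) ** c                      # bpsk modulation: 0 -> +1, 1 -> -1
d = hadamard(m) @ b                  # walsh-hadamard transform of the modulated codeword
print(np.count_nonzero(d), np.abs(d).max())   # one nonzero entry of magnitude 2^m
```

since all the energy sits on a single spreading sequence, the transmitted waveform is one complex exponential of constant magnitude, in agreement with the theorem; the price is the vanishing code rate discussed above.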
from a seed pair of codes ,we present how to recursively construct a new code using the associated boolean functions .we also analyze the papr of the coded mc - cdma signals .[ th : recur ] let and be boolean functions of variables , where and are the codewords of length associated with and , respectively .assume that the code rate of each code is and , respectively .let and be the coded mc - cdma signals encoded by and , respectively , each of which has a form of ( [ eq : s_t_0 ] ) where and .assume that each signal has the papr of at most .consider a boolean function of variables defined by then , a codeword of length associated with has the code rate .let be a coded mc - cdma signal of ( [ eq : s_t_0 ] ) encoded by , where and .then , the papr of is _ proof . _ obviously , and , respectively .thus , the number of codewords is and the code rate is . in particular , if , then we keep the code rate while the codeword length doubles .let and be the bpsk modulation outputs of length from and , respectively . from ( [ eq : recur ] ) , it is straightforward that and , where ` ' denotes a concatenation .then , the associated polynomial is determined by where , , , and .let and .then , ( [ eq : recur_s_z ] ) becomes replacing the boolean function by leads us to the change of the above associated polynomial to , i.e. , then , latexmath:[\[\label{eq : recur_s_z_sum } if and have the papr of at most , then corollary [ co : s_z_sqr ] implies and , respectively .thus , ) and ( [ eq : b_f_b_g ] ) , where from the definition of and .thus , the papr of is the recursive construction of a boolean function has been originally discussed in for the papr of multicode cdma . in theorem[ th : recur ] , we showed that the construction of ( [ eq : recur ] ) also provides the bounded papr for multicarrier cdma .in general , if there is a seed code of length and size , then we can construct a new code of length and size by _ concatenating _ a pair of codewords from the seed .if the papr of each coded mc - cdma signal for the seed is at most , then each coded mc - cdma signal encoded by the new code provides the papr of at most .if each codeword in is associated with a boolean function of degree at most , then is a subcode of defined by a boolean function of degree , where the minimum hamming distance of is at least .construction [ cst : gen_code ] summarizes a recursive code construction for the application to mc - cdma .[ cst : gen_code ] for positive integers and , , let and , where . starting with and ,the boolean function of degree is constructed by the successive recursions of while runs through to . in ( [ eq : b_r_bool ] ) , the boolean function has the same form as , but may have different coefficients .let be a codeword of , associated with a boolean function of .then , has total codewords through the recursions . in a walsh - hadamard spread mc -cdma with and , the papr of encoded by is at most from theorems [ th : papr_1 ] and [ th : recur ] .finally , the code parameters of are summarized as follows . * dimension and code length , * code rate , * minimum hamming distance , * .first of all , we present a specific code example of length through a single recursion , where a pair of codewords in is employed as the seed .[ cst : b2 ] let be a codeword of that is associated with a boolean function defined by in a walsh - hadamard spread mc - cdma with and , the code parameters including the papr of a coded mc - cdma signal encoded by are summarized as follows . 
* dimension and code length , * code rate , * minimum hamming distance , * . with a single recursion ( ) in construction[ cst : gen_code ] , ( [ eq : bool_2 ] ) is straightforward by where and .[ rem : b2 ] by generalizing ( [ eq : bool_2 ] ) to we obtain a code associated with .obviously , the coding scheme in construction [ cst : b2 ] is a special case of for .it is not so hard to prove that a coded mc - cdma signal encoded by a codeword associated with has the papr of at most for any .while runs through to , we have distinct codewords in , more than the number of codewords in .if we compare the code rates of and , however , the code rate difference of is very small and approaches to as increases .meanwhile , the encoding and the decoding complexities of the generalized coding scheme are obviously larger than those of . for the little contribution to the code rate and the increase of the complexities from , we therefore consider as our coding scheme for low papr . by employing the coding scheme , the walsh - hadamard spread mc - cdma system with and is able to support maximum users or information bits from a single user in an ofdm symbol , providing the papr of at most .[ cst : b3 ] let a boolean function be defined by where .let be a codeword of that is associated with . in a walsh - hadamard spread mc- cdma with and , the code parameters including the papr of a coded mc - cdma signal encoded by are summarized as * dimension and code length , * code rate , * minimum hamming distance , * .similar to construction [ cst : b2 ] , the boolean function is immediate from the twice recursions of ( [ eq : recur ] ) for and with . with the coding scheme , the walsh - hadamard spread mc - cdma system with and is able to support maximum users or information bits from a single user in an ofdm symbol , providing the papr of at most .table [ tb : gen_recur ] lists the parameters of of length from construction [ cst : gen_code ] , where and .it elucidates a connection between the code rates of and the maximum papr of mc - cdma signals encoded by .obviously , we obtain a high rate reed - muller subcode at the cost of high papr for the coded mc - cdma signals ..the parameters of of length for some s [ cols="^,^,^,^,^ " , ] [ tb : code_rate ] in figure [ fig : coded_sys ] , the reed - muller subcode introduced in construction [ cst : gen_code ] is applied at each spreading process for encoding the information from a single or multiple users in the reed - muller coded and walsh - hadamard spread mc - cdma transmitter ( ) .precisely , a reed - muller subcode encodes a -bit input data at the spreading process , , to produce a codeword of length , which goes through walsh - hadamard spreading , interleaving , and ifft in the sequel . in the encoding process , the codeword is obtained by where is the generator matrix of , where .the recursion of boolean functions in ( [ eq : b_r_bool ] ) equivalently derives the recursion of generator matrices of where of length and is a matrix . by elementary row operations ,it is equivalent to while runs through to , the generator matrix is constructed by the recursions of ( [ eq : g_r_org ] ) or ( [ eq : g_r ] ) , where the initial matrix is the generator matrix of given by for the notations s of the generator matrix of the first - order reed - muller codes , see . in particular , we are able to determine and directly from the boolean expressions in ( [ eq : bool_2 ] ) and ( [ eq : bool_3 ] ) , respectively .the generator matrices of and are and matrices , respectively . 
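at the codeword level, the recursion of construction [ cst : gen_code ] is simply a concatenation rule and can be enumerated directly. the following sketch is illustrative only (python; the seed code, the recursion depth and the bit ordering are our own choices): each recursion step concatenates ordered pairs of codewords from the previous level, so the length doubles while the number of codewords is squared, matching the counts stated in theorem [ th : recur ]. the generator matrices written out next encode the same construction in matrix form.

```python
import numpy as np
from itertools import product

def rm1_code(m):
    # all 2^(m+1) codewords of the first-order reed-muller code of length 2^m
    words = []
    for const, *lin in product([0, 1], repeat=m + 1):
        words.append(np.array([(const + sum(lin[i] * ((l >> i) & 1) for i in range(m))) % 2
                               for l in range(2 ** m)]))
    return words

def recursion_step(code):
    # one recursion at the codeword level: the codeword associated with the
    # combined boolean function is the concatenation of the two seed codewords
    return [np.concatenate([u, v]) for u in code for v in code]

seed = rm1_code(2)                  # 8 codewords of length 4
code = seed
for _ in range(2):                  # two recursions: length 16, 8^4 codewords
    code = recursion_step(code)
print(len(seed), len(code), len(code[0]))
```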
with each of length , we have note that and in ( [ eq : g_23 ] ) have the different orders of rows with those generated by the recursions of ( [ eq : g_r_org ] ) or ( [ eq : g_r ] ) . in this paper , we use and in ( [ eq : g_23 ] ) .let . then, the generator matrices of and are and matrices , respectively . we briefly introduce some decoding techniques for reed - muller subcodes .the first - order reed - muller code can be decoded by the _ fast hadamard transform ( fht ) _ technique described in . in general ,the - order reed - muller code is decoded by the _ reed decoding algorithm _ . in particular ,if we consider or as a supercode of the union of cosets of , then we can accomplish the soft decision decoding by removing each possible coset representative from the received codeword and then applying the fht . for the encoding and decoding of golay complementary sequences ,see . in what follows, we discuss the papr of coded mc - cdma signals in a general case of .we restrict our attention to a walsh - hadamard spread mc - cdma employing or that provides the acceptable code rate as well as the low papr for the coded mc - cdma signals .we show that the maximum papr depends on the actual number of users supported by the mc - cdma .[ th : user ] assume that is employed in a walsh - hadamard spread mc - cdma system in figure [ fig : coded_sys ] , where .the maximum papr of the coded mc - cdma signal is then determined by similarly , if the system employs , then the maximum papr of is _ proof . _ in theorem [ th : papr_s_n ] , it is easy to see that , where and are given in ( [ eq : s_t_0 ] ) and ( [ eq : s_n_2 ] ) , respectively. therefore , , and when , , and are employed as the coding scheme , respectively . in of ( [ eq : g_23 ] ) , if , the first rows participate in the encoding process , while the other rows are ignored by zero tailing . sincea linear combination of the first rows generates a codeword of , it is obvious that if , then from theorems [ th : papr_s_n ] and [ th : papr_1 ] .if , on the other hand , from construction [ cst : b2 ] and theorem [ th : papr_s_n ] .therefore , ( [ eq : papr2_w ] ) is true for .similar to this approach , ( [ eq : papr3_w ] ) is also true for from the generator matrix of ( [ eq : g_23 ] ) and theorem [ th : papr_s_n ] . in general , if is employed as the coding scheme , the walsh - hadamard spread mc - cdma signals have the papr of at most from construction [ cst : gen_code ] and theorem [ th : papr_s_n ] . however , theorem [ th : user ] is not true for the mc - cdma signals if the generator matrix is recursively constructed by ( [ eq : g_r_org ] ) or ( [ eq : g_r ] ) .we need to reorder the rows of to achieve the maximum papr depending on the number of actual users as in theorem [ th : user ] . in section iv - b , we developed various reed - muller subcodes of length to control the peak power of mc - cdma signals with subcarriers in a systematic way .in fact , we may employ the coding scheme for a codeword of length , where , which encodes , a concatenation of uncoded data block .then , the codeword covers the entire subcarriers to control the peak power and to ultimately reduce the maximum papr of the coded mc - cdma signal . in this case , however , the code rate may be dramatically reduced for such a long codeword because the coding scheme is a subcode of the reed - muller code .this also enlightens a connection between the code rates and the maximum papr of coded mc - cdma signals . 
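the fht-based decoder for the first-order part mentioned above admits a very compact implementation. the sketch below is a hedged illustration (python / numpy; the bit-ordering convention, the noise level and the test function are our own assumptions): a soft-decision decoder picks the fast walsh-hadamard transform coefficient of largest magnitude, whose index gives the linear coefficients and whose sign gives the constant term. for the higher-order subcodes the same transform can be reused after stripping each candidate coset representative, as indicated above.

```python
import numpy as np

def fwht(x):
    # iterative fast walsh-hadamard transform in natural (hadamard) ordering
    y = np.array(x, dtype=float)
    h = 1
    while h < len(y):
        for i in range(0, len(y), 2 * h):
            a, b = y[i:i + h].copy(), y[i + h:i + 2 * h].copy()
            y[i:i + h], y[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return y

def decode_rm1(received):
    # soft-decision decoding of the first-order reed-muller code from a real
    # observation of the bpsk codeword (0 -> +1, 1 -> -1)
    t = fwht(received)
    j = int(np.argmax(np.abs(t)))
    m = int(np.log2(len(received)))
    lin = [(j >> i) & 1 for i in range(m)]      # estimated linear coefficients
    const = 0 if t[j] > 0 else 1                # estimated constant term
    return const, lin

rng = np.random.default_rng(1)
m = 3
codeword = np.array([(1 + ((l >> 1) & 1)) % 2 for l in range(2 ** m)])   # f = 1 + x_2
received = (-1.0) ** codeword + 0.4 * rng.standard_normal(2 ** m)
print(decode_rm1(received))    # expected: (1, [0, 1, 0])
```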
virtually treating a single user s data as multiple users one ,a coded mc - cdma system can be considered as an equivalent _ spread ofdm _ , where the single user s data is spread across a set of subcarriers to enjoy frequency diversity . by applying the coding schemes introduced in this section, the spread ofdm additionally has the benefits of low papr and good error correction capability .this section provides simulation results to confirm our theoretical analysis and presents some discussions on statistical results of papr of mc - cdma signals .the papr properties of a reed - muller coded and walsh - hadamard spread mc - cdma system are compared to those of a pair of uncoded systems . in the uncoded systems , the one employs walsh - hadamard ( wh ) spreading sequences ,while the other uses golay complementary ( gc ) spreading sequences each of which forms a row of a recursively constructed golay complementary spreading matrix . in our coded mc - cdma , we employ as the coding scheme which we believe is a good coding solution providing the acceptable code rates , the moderate complexity , and the low papr for the coded mc - cdma signals . for a fair comparison , we assume that all the mc - cdma systems transmit the same number of information bits in an ofdm symbol from active users .if the uncoded systems transmit information bits in an ofdm symbol , our coded system of code rate then needs to transmit _ coded _bits for the transmission of information bits . therefore, while the uncoded ones have subcarriers , the coded system needs to use subcarriers in an ofdm symbol , where is a spreading factor used in the uncoded systems . in the following ,our simulations employ of code rate or , where the coded mc - cdma uses or subcarriers in an ofdm symbol by employing the spreading sequences of length or . in our simulations , we measure the discrete - time papr of each mc - cdma signal from the idft ( inverse discrete fourier transform ) of the oversampling factor .also , we statistically measure the papr over ofdm symbols for randomly generated information bits .in figures [ fig : ccpd12 ] and [ fig : papr12 ] , users access to each mc - cdma system to transmit information bits per each user in an ofdm symbol .the uncoded mc - cdma systems use the spreading factor and subcarriers , where maximum users are supported , i.e. , . as the coded mc - cdma system also needs to support up to users ,we choose a code as its coding scheme , where , , and .thus , our coded mc - cdma uses the spreading factor and subcarriers to transmit information bits in an ofdm symbol from the active users . in the coded mc - cdma system ,each -bit codeword is fully - loaded to all the available walsh - hadamard spreading sequences regardless of . on the other hand ,the uncoded mc - cdma systems assign the spreading sequences to users _ on demand _ , so they are fully - loaded only if . or information bits in an ofdm symbol .the code rate of the coded mc - cdma is .,scaledwidth=100.0% ] figure [ fig : ccpd12 ] shows the complementary cumulative distribution functions ( ccdf ) of of each mc - cdma signal for and .it reveals that the coded mc - cdma is superior to the others when the number of active users is small .precisely , if , it reduces the papr achieving by more than db , compared to the uncoded systems .moreover , theorem [ th : user ] ensures that there exists no coded mc - cdma signal with papr db for , which implies that the coded mc - cdma also outperforms the uncoded ones in theoretical aspects . 
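the discrete-time papr measurement and the ccdf statistics underlying these figures can be reproduced along the following lines. the sketch is a toy reimplementation (python / numpy); the oversampling factor, the ensemble size and the fully loaded uncoded walsh-hadamard example are our own assumptions, not the exact simulation parameters behind the figures.

```python
import numpy as np

def papr_db(carriers, oversample=4):
    # discrete-time papr of one multicarrier symbol: evaluate the associated
    # polynomial on an oversampled grid of the unit circle via a zero-padded idft
    a = np.asarray(carriers, dtype=complex)
    x = np.fft.ifft(a, n=oversample * len(a)) * oversample * len(a)
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def ccdf(papr_samples, grid):
    # empirical complementary cdf pr(papr > gamma) over an ensemble of symbols
    s = np.asarray(papr_samples)
    return np.array([(s > g).mean() for g in grid])

# toy ensemble: fully loaded uncoded bpsk data spread by a walsh-hadamard matrix (16 x 16)
rng = np.random.default_rng(0)
H = np.array([[1]])
for _ in range(4):
    H = np.block([[H, H], [H, -H]])
samples = [papr_db(H.T @ rng.choice([-1.0, 1.0], size=16)) for _ in range(2000)]
print(ccdf(samples, grid=[4.0, 6.0, 8.0, 10.0]))
```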
if , the coded mc - cdma has almost the same papr as the uncoded walsh - hadamard spread mc - cdma for achieving . even in this case, it is theoretically guaranteed that no coded mc - cdma signal has db , which may not be true in the uncoded systems .figure [ fig : ccpd12 ] also shows that most of the coded mc - cdma signals in the statistical experiments have much smaller papr than the theoretical maximum predicted by theorem [ th : user ] . and active number of users for mc - cdma systems where .the code rate of the coded mc - cdma is .,scaledwidth=100.0% ] figure [ fig : papr12 ] displays the papr of each mc - cdma achieving according to the number of active users .it is well known that the uncoded walsh - hadamard ( wh ) spread mc - cdma shows the high papr when the number of active users is small .the papr then decreases as the number of users increases .on the other hand , the uncoded golay complementary ( gc ) spread mc - cdma has the low papr for the small number of users . however , the papr gets higher than that of the uncoded walsh - hadamard spread mc - cdma as the number of users increases .figure [ fig : papr12 ] shows that the coded mc - cdma is a good alternative to the two uncoded systems by providing the smallest papr for almost all user numbers .moreover , theorem [ th : user ] assures that the maximum papr of the coded system is theoretically limited to db for , db for , and db for , respectively , where figure [ fig : papr12 ] provides the numerical evidences .therefore , it is theoretically guaranteed that there exists no coded mc - cdma signal with the papr higher than the maximum values for each user , which is not generally true in the uncoded systems .the theoretical and statistical results show that the coded mc - cdma dramatically reduces its papr for the small number of users , which effectively solves the high papr problem in the uncoded mc - cdma .figure [ fig : papr12 ] also shows that if the number of active users is large ( ) , the statistical papr is much smaller than the theoretical maximum predicted by theorem [ th : user ] . as a result, we claim that the coded mc - cdma provides the best statistical and theoretical solution for papr reduction for any number of users . in figures [ fig : ccpd1 ] and [ fig : papr1 ] , the mc - cdma systems support users where each user transmits information bits in an ofdm symbol .the uncoded mc - cdma systems use the spreading factor and subcarriers to transmit information bits in an ofdm symbol . to support up to users , the coded mc - cdma system employs , a code with and . in this case , is used as a mapping scheme mentioned in remark [ rem : rm_map ] . in our mc - cdma , each -bit uncoded data , , is _ transformed _ by the reed - muller mapping scheme for papr reduction .thus , it uses the spreading factor and subcarriers to transmit data bits in an ofdm symbol , which is the same as the uncoded systems .however , note that each -bit codeword of our mc - cdma is fully - loaded to all the available spreading sequences of length regardless of . 
or information bits in an ofdm symbol employing the spreading factor and subcarriers .the code rate of the coded mc - cdma is .,scaledwidth=100.0% ] figure [ fig : ccpd1 ] shows the results of of each mc - cdma for and .we observed that if , our reed - muller mapped mc - cdma system reduces the papr by about db to achieve , compared to the uncoded walsh - hadamard ( wh ) spread mc - cdma employing the same spreading factor and the same number of subcarriers .moreover , theorem [ th : user ] ensures that our system has no signal with papr db for .note that theorem [ th : user ] determines the maximum papr of db ( if ) , db ( if ) , and db ( if ) , respectively .thus , even if the statistical papr property of our mc - cdma is almost identical to that of the uncoded walsh - hadamard spread mc - cdma for , our system has no probability of signals with the papr higher than db , which is however unclear in the uncoded systems . and active number of users for mc - cdma systems where .all the mc - cdma systems use the spreading factor and subcarriers .the code rate of the coded mc - cdma is .,scaledwidth=100.0% ] similar to figure [ fig : papr12 ] , figure [ fig : papr1 ] displays the papr of each mc - cdma achieving according to the number of users .it shows that our mc - cdma provides the smallest papr for almost all user numbers .also , it numerically confirms that the theoretical maximums of papr in theorem [ th : user ] hold for each user number .figure [ fig : papr1 ] shows that if the number of active users is large , the statistical papr is much smaller than the theoretical maximum predicted by theorem [ th : user ] .for the small number of users , on the other hand , the coded mc - cdma solves the high papr problem of the uncoded mc - cdma by dramatically reducing its papr .along with figure [ fig : papr12 ] , the coded mc - cdma can be the best statistical and theoretical solution for papr reduction for any number of active users regardless of its code rate .a drawback of the reed - muller mapped mc - cdma is that it could be employed only for a small spreading factor due to the demapping complexity at the receiver .this paper has presented a coded mc - cdma system where the information data is encoded by a reed - muller subcode for the sake of papr reduction . 
in the system ,the codeword is then fully - loaded to walsh - hadamard spreading sequences , where the spreading and the despreading processes are efficiently implemented by the walsh - hadamard transform ( wht ) .we have established the polynomial representation of a coded mc - cdma signal for theoretical analysis of the papr .we have then developed a recursive construction of the reed - muller subcodes which provide the transmitted mc - cdma signals with the bounded papr as well as the error correction capability .we have also investigated a theoretical connection between the code rates and the maximum papr in the coded mc - cdma .simulation results showed that the papr of the coded mc - cdma signal is not only theoretically bounded , but also statistically reduced by the reed - muller coding schemes .in particular , it turned out that the coded mc - cdma could solve the papr problem of uncoded mc - cdma by dramatically reducing its papr for the small number of users .finally , the theoretical and statistical studies exhibited that the reed - muller subcodes are effective coding schemes for peak power control in mc - cdma with small and moderate numbers of users , subcarriers , and spreading factors .we believe this work gives us theoretical insights for papr reduction of mc - cdma and s - ofdm by means of an error correction coding .ieee standard 802.11 - 2007 , ieee standard for information technology - local and metropolitan area networks - specific requirements , part 11 - wireless lan medium access control ( mac ) and physical layer ( phy ) specifications .ieee802.16e-2005 , ieee standard for local and metropolitan area networks , part 16 - air interface for fixed and mobile broadband wireless access systems , amendment 2 : physical and medium access control layers for combined fixed and mobile operation in licensed bands .k. fazel , s. kaiser , and m. schnell , `` a flexible and high performance celluar mobile communications systems based on orthogonal multicarrer ssma , '' _ wireless personal communications _ , vol . 2 , pp . 121 - 144 , 1995 .a. e. jones , t. a. wilkinson , and s. k. barton , `` block coding scheme for reduction of peak to mean envelope power ratio of multicarrier communication schemes , '' _ electron ._ , vol . 30 , pp . 2098 - 2099 , 1994 .d. a. wiegandt , z. wu , and c. r. nassar , `` high - throughput , high - performance ofdm via pseudo - orthogonal carrier interferometry spreading codes , '' _ ieee trans ._ , vol . 51 , no . 7 , pp . 1123 - 1134 , july 2003 . m. g. parker , `` close encounters with boolean functions of three different kinds , '' _2nd international castle meeting on coding theory and applications _ , valladolid , september 2008 . also available at _ lecture notes in computer science ( lncs ) _ vol . 5228 , pp .137 - 153 , 2008 .m. g. parker and c. tellambura , `` golay - davis - jedwab complementary sequences and rudin - shapiro constructions , '' _ manuscript .available in `` http://www.ii.uib.no//constabent2.pdf '' .j. h. conway and n. j. a. sloane , `` soft decoding techniques for codes and lattices , including the golay codea and the leech lattice , '' _ ieee trans . inform . theory _it-32 , no . 1 ,41 - 50 , jan . 1986 | reed - muller codes are studied for peak power control in multicarrier code - division multiple access ( mc - cdma ) communication systems . 
in a coded mc - cdma system , the information data multiplexed from users is encoded by a reed - muller subcode and the codeword is fully - loaded to walsh - hadamard spreading sequences . the polynomial representation of a coded mc - cdma signal is established for theoretical analysis of the peak - to - average power ratio ( papr ) . the reed - muller subcodes are defined in a recursive way by the boolean functions providing the transmitted mc - cdma signals with the bounded papr as well as the error correction capability . a connection between the code rates and the maximum papr is theoretically investigated in the coded mc - cdma . simulation results present the statistical evidence that the papr of the coded mc - cdma signal is not only theoretically bounded , but also statistically reduced . in particular , the coded mc - cdma solves the major papr problem of uncoded mc - cdma by dramatically reducing its papr for the small number of users . finally , the theoretical and statistical studies show that the reed - muller subcodes are effective coding schemes for peak power control in mc - cdma with small and moderate numbers of users , subcarriers , and spreading factors . boolean functions , multicarrier code - division multiple access ( mc - cdma ) , orthogonal frequency - division multiplexing ( ofdm ) , peak - to - average power ratio ( papr ) , reed - muller codes , spreading sequences , walsh - hadamard sequences , walsh - hadamard transform . |
within the last decades , model predictive control ( mpc ) has grown mature for both linear and nonlinear systems , see , e.g. , or .although analytically and numerically challenging , the method itself is attractive due to its simplicity : in a first step , a new measurement of the current system state is obtained which is thereafter used to compute an optimal control over a finite optimization horizon . in the third and last step , a portion of this control is applied to the process and the entire problem is shifted forward in time rendering the scheme to be iteratively applicable .stability of the mpc closed loop can be shown by imposing endpoint constraints , lyapunov type terminal costs or terminal regions , cf . and . here , we study mpc schemes without these ingredients for which stability and , in addition , bounds on the required horizon length can be deduced , both for linear and nonlinear systems , cf . and .we follow the recent approach from extending to continuous time systems which not only guarantees stability but also reveals an estimate on the degree of suboptimality with respect to the optimal controller on an infinite horizon . in this work ,we show how the essential assumption needed to apply the methodology proposed in can be practically verified . then , based on observations drawn from numerical computations , implementable mpc algorithms with variable control horizons are developed which allow for smaller optimization horizons while maintaining stability or a desired performance bound . to overcome the lack of robustness implied by prolonging the control horizon and , thus , staying in open loop for longer time intervals , conditions are presented which ensure that the control loop can be closed more often .similar ideas were introduced in for a discrete time setting .last , the computational effort is further reduced by introducing slack which allows to violate our main stability condition a relaxed lyapunov inequality temporarily , cf . .the paper is organized as follows : in section [ section : preliminaries ] the problem formulation is given . in the ensuing section [ section : stability and performance bounds ] ,we summarize stability results from and propose a technique to verify the key assumption which is illustrated by an example .thereafter , we present algorithms which allow for shortening the optimization horizon by using time varying control horizons . before drawing conclusions , section [ section : slack stability ] contains results on how stabilitymay be guaranteed by using weaker stability conditions .let and denote the set of natural and real numbers respectively and the euclidean norm on , . a continuous function is called class -function if it satisfies , is strictly increasing and unbounded .a continuous function is said to be of class if for each we have that holds , and for each the condition is satisfied . within this work we consider nonlinear time invariant control systems where and denote the state and control at time .constraints can be included via suitable subsets and of the state and control space , respectively .we denote a state trajectory which emanates from the initial state and is subject to the control function by . in the presence of constraints ,a control function is called admissible for on the interval if the corresponding solution exists and satisfies , \quad \ ; \text{and } \ ;\quad \u(t ) \in \bu , t \in [ 0,t).\end{aligned}\ ] ] the set of these admissible control functions is denoted by . 
for an infinite time interval , called admissible for if holds for each and the respective set is denoted by . for systemwe assume an equilibrium to exist , i.e. holds .our goal is to design a feedback control law such that the resulting closed loop is asymptotically stable with respect to , i.e. there exists such that , , holds for all where denotes the closed loop trajectory induced by .the stabilization task is to be accomplished in an optimal fashion which is measured by a cost functional . to this end, we introduce the continuous running cost satisfying then , for a given state , the cost of an admissible control is the computation of a corresponding minimizer is , in general , computationally hard due to the curse of dimensionality , cf .hence , we use model predictive control ( mpc ) to approximately solve this task .the central idea of mpc is to truncate the infinite horizon , i.e. to compute a minimizer of the cost functional which can be done efficiently using discretization methods and nonlinear optimization algorithms , see , e.g. , or ( * ? ? ?* chapter 10 ) .furthermore , we define the corresponding optimal value function , . to obtain an infinite horizon control ,only the first portion of the computed minimizer is applied , i.e. we define the feedback law via for the so called control horizon .last , the optimal control problem is shifted forward in time which renders algorithm [ preliminaries : alg : mpc ] to be iteratively applicable . given : 1 .measure the current state 2 . compute a minimizer of and define the mpc feedback law via 3 .implement , shift the horizon forward in time by and goto ( 1 ) the closed loop state trajectory emanating from the initial state subject to the mpc feedback law from algorithm [ preliminaries : alg : mpc ] is denoted by .furthermore , denotes the control function obtained by concatenating the applied pieces of control functions , i.e. the resulting mpc closed loop cost are given by note that we tacitly assume that problem is solvable for all and the minimum is attained in each step of algorithm [ preliminaries : alg : mpc ] . for a detailed discussion of feasibility issueswe refer to ( * ? ? ?* chapter 8) .due to the truncation of the infinite horizon , stability and optimality properties of the optimal control may be lost . yet , stability can be shown if the optimization horizon is sufficiently long , cf .additionally , an optimization horizon length can be determined for which both asymptotic stability as well as a performance bound on the mpc closed loop in comparison to the infinite horizon control law hold .[ stability and performance bounds : thm : stability ] suppose a control horizon and a monotone bounded function satisfying for all to be given .if is chosen such that holds for \left [ 1 - e^{-\int_{t-\delta}^t b(t)^{-1 } dt } \right ] } , % \label{\hspace*{-0.1cm}\alpha_{t , \delta } : = 1 - \frac{e^{-\int_\delta^t b(t)^{-1 } dt}}{\left [ 1- e^{- \int_\delta^t b(t)^{-1 } dt } \right ] \left [ e^{\int_{t-\delta}^t b(t)^{-1 } dt } -1 \right ] } , \end{aligned}\ ] ] then the relaxed lyapunov inequality as well as the performance estimate are satisfied for all .if , additionally , there exist functions , such that hold for all , then the mpc closed loop is asymptotically stable for horizon length . 
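before discussing the theorem further, the receding-horizon loop of algorithm [ preliminaries : alg : mpc ] itself can be illustrated in a few lines. the sketch below is only a toy (python with scipy): a scalar plant, an assumed quadratic stage cost, an explicit euler discretization and a generic direct-shooting optimizer stand in for the actual model and ocp solver; none of these choices are taken from the text.

```python
import numpy as np
from scipy.optimize import minimize

h, T, delta = 0.05, 1.0, 0.2              # step size, optimization horizon, control horizon
M, mc = round(T / h), round(delta / h)

def rollout(x0, u):                        # explicit euler simulation of x' = x + u
    xs = [x0]
    for uk in u:
        xs.append(xs[-1] + h * (xs[-1] + uk))
    return np.array(xs)

def J(u, x0):                              # finite-horizon cost with l(x,u) = x^2 + 0.1 u^2
    xs = rollout(x0, u)
    return h * np.sum(xs[:-1] ** 2 + 0.1 * np.asarray(u) ** 2)

x, u_guess, closed_loop = 2.0, np.zeros(M), [2.0]
for _ in range(25):
    u_star = minimize(J, u_guess, args=(x,)).x        # step (2): open-loop minimizer
    x = rollout(x, u_star[:mc])[-1]                   # step (3): apply only the first portion
    closed_loop.append(x)
    u_guess = np.append(u_star[mc:], np.zeros(mc))    # shift: warm start for the next problem
print(np.round(closed_loop, 3))            # the closed-loop state contracts towards the origin
```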
a detailed proof of theorem [ stability and performance bounds : thm : stability ] is given in .we point out that for a given control horizon and a desired performance specification on the mpc closed loop , there always exists an optimization horizon such that is satisfied , cf . . interpreting exemplarily, the choice corresponds to asymptotic stability of the mpc closed loop whereas limits the cost of the mpc control to double the cost of the infinite horizon control .the crucial assumption which has to be verified in order to apply theorem [ stability and performance bounds : thm : stability ] is the growth condition . here, we demonstrate this by means of the following example : [ stability and performance : ex : generator ] consider the system dynamics of a synchronous generator given by with constants , , , , and , cf . . the equilibrium we wish to stabilizeis located at and the running costs are defined as where the parameter is used to penalize the taken control effort . due to physical considerations and are restricted to the interval ] .note that is only computed on a `` sufficiently '' large interval ] with , , and discretization stepsize , , and focus on the level set whose convex hull satisfies all physical constraints , see figure [ preliminaries : fig : set of initial values ] for an illustration of ..,scaledwidth=42.0% ] then , formula enables us to determine an optimization horizon such that the relaxed lyapunov inequality holds with . for the computed function this methodology yields asymptotic stability of the mpc closed loop for optimization horizon .note that the presented method verifies condition for control functions , and allows to conclude asymptotic stability of the closed loop via theorem [ stability and performance bounds : thm : stability ] , cf .* remark 2.7 ) .in section [ section : stability and performance bounds ] we showed how to ensure asymptotic stability for the proposed mpc scheme for a given . in this section, we investigate the impact of on the required optimization horizon length .considering example [ stability and performance : ex : generator ] , we compute , , and for , , cf . figure [ increasing the control horizon : fig : horizons]a . here, holds for whereas this stability criterion holds for significantly shorter optimization horizons if the control horizon is chosen equal to , that is for .hence , enlarging the control horizon seems to induce an improved performance index .\a ) for varying and different choices of .b ) development of depending on the control horizon .,title="fig:",scaledwidth=21.65% ] b ) for varying and different choices of .b ) development of depending on the control horizon .,title="fig:",scaledwidth=21.65% ] this numerical result motivates to investigate the influence of on . to this end, we fix the optimization horizon and compute for , cf .figure [ increasing the control horizon : fig : horizons]b , which leads to the following observations : firstly , a symmetry property seems to hold , i.e. .secondly , the performance estimates appear to increase up to the symmetry axis . both properties have been shown for systems which are exponentially controllable in terms of their stage costs , i.e. for an overshoot constant and a decay rate , cf . . using the computed function instead of exponential decaywe obtain and therefore better horizon estimates as shown in for a discrete time setting . 
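once a growth bound for the finite-horizon optimal value function is available, the performance index from formula can be evaluated by simple quadrature and the smallest stabilizing horizon read off. the sketch below is a hedged illustration (python with scipy): since the printed formula is partly garbled in this rendering, we use the variant that is symmetric under exchanging the control horizon with its complement, consistent with the corollary stated below, and the bound itself is taken from an assumed exponential-controllability estimate rather than from the generator example.

```python
import numpy as np
from scipy.integrate import quad

def alpha_estimate(T, delta, B):
    # evaluate alpha_{T,delta} from the growth bound B(t) by numerical quadrature
    I1 = quad(lambda t: 1.0 / B(t), delta, T)[0]
    I2 = quad(lambda t: 1.0 / B(t), T - delta, T)[0]
    return 1.0 - np.exp(-I1) / ((1.0 - np.exp(-I1)) * (np.exp(I2) - 1.0))

# assumed exponential controllability with overshoot C = 2 and decay rate lam = 1,
# giving the growth bound B(t) = C * (1 - exp(-lam * t)) / lam
C, lam = 2.0, 1.0
B = lambda t: C * (1.0 - np.exp(-lam * t)) / lam
for T in (1.5, 3.0, 6.0):
    print(T, round(alpha_estimate(T, delta=0.5, B=B), 3))
# alpha turns positive once T is large enough; the smallest such T is the
# optimization horizon for which asymptotic stability can be guaranteed
```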
despite the more general setting, symmetry still follows directly from formula .[ increasing the control horizon : cor : symmetry ] the performance estimate given by formula satisfies for , i.e. is symmetric with symmetry axis . unlike symmetry , we conjecture that there exists a counterexample negating monotonicity even if satisfies the additional condition ( * ? ? ?* inequality ( 21 ) ) which is , however , violated in example [ stability and performance : ex : generator ] .for such an example holds but is not monotone on ] . if the horizon length is increased to , figure [ increasing the control horizon : fig : horizons]a shows that for any chosen ] with .such a setting naturally arises in the context of digital control for sampled data systems with zero order hold .yet , we like to note that algorithm [ increasing the control horizon : alg : control horizon ] is not limited to the digital control case .additionally , we like to stress that monotonicity of in is the center of this algorithm . given : , with and 1 .measure the current state 2 .set and compute a minimizer of and .+ do 1 . if : set according to exit strategy and goto ( 3 ) 2 .set , and compute 3 .compute , i.e. + while 3 . implement , shift the horizon forward in time by and goto ( 1 ) algorithm [ increasing the control horizon : alg : control horizon ] combines two aspects : the improved performance estimates obtained for larger control horizons , and the inherent robustness resulting from using a feedback control law which benefits from updating the control law as often as possible . in order to illustrate this claimlet us consider the numerical example [ stability and performance : ex : generator ] again . from figure[ increasing the control horizon : fig : horizons]b we observed that asymptotic stability of the mpc closed loop can be shown for .indeed , algorithm [ increasing the control horizon : alg : control horizon ] only uses in each step independent of the chosen initial condition .hence , mpc with constant control horizon is performed safeguarded by our theoretically obtained estimates .consequently , no exit strategy is needed since step ( 2a ) is excluded .we compute by equation for all in order to test out the limits of algorithm [ increasing the control horizon : alg : control horizon ] . here, is the smallest optimization horizon such that is satisfied for all .however , using algorithm [ increasing the control horizon : alg : control horizon ] allows to ensure this conditions for for varying control horizon .figure [ increasing the control horizon : fig : ex : n5 ] shows the sets of states for which computed by is less than zero for the cases and .again , we see that if we allow for larger control horizons , then the performance bound increases , i.e. the set of state vectors for which stability can not be guaranteed shrinks . hence , for the considered example [ stability and performance : ex : generator ] algorithm [ increasing the control horizon : alg : control horizon ] allows to drastically reduce the horizon while maintaining asymptotic stability of the closed loop .\a ) for which stability can not be guaranteed for , with ( a ) and ( b ) .,title="fig:",scaledwidth=21.65% ] b ) for which stability can not be guaranteed for , with ( a ) and ( b ) .,title="fig:",scaledwidth=21.65% ] the downside of considering potentially large control horizons is the possible lack of robustness in case of disturbances . 
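a minimal executable version of this variable-control-horizon rule, reusing the toy plant of the earlier sketch, is given below (python with scipy; the candidate horizons, the margin and all model data are illustrative assumptions). for each measured state it enlarges the control horizon until the relaxed lyapunov inequality holds with the desired margin and falls back to the largest candidate as a crude exit strategy. in this toy setting the smallest candidate already passes the test at every step, mirroring the behaviour reported for the generator example above.

```python
import numpy as np
from scipy.optimize import minimize

h, T, alpha_bar = 0.05, 1.0, 0.1
M = round(T / h)

def rollout(x0, u):                              # explicit euler simulation of x' = x + u
    xs = [x0]
    for uk in u:
        xs.append(xs[-1] + h * (xs[-1] + uk))
    return np.array(xs)

def stage(xs, u):                                # integrated stage cost per euler step
    return h * (xs[:-1] ** 2 + 0.1 * np.asarray(u) ** 2)

def value(x0):                                   # finite-horizon optimal value and minimizer
    res = minimize(lambda u: stage(rollout(x0, u), u).sum(), np.zeros(M))
    return res.fun, res.x

def adaptive_step(x, deltas=(0.1, 0.2, 0.4)):
    VT, u = value(x)
    xs = rollout(x, u)
    cost = stage(xs, u)
    for d in deltas:                             # enlarge the control horizon if needed
        k = round(d / h)
        if value(xs[k])[0] <= VT - alpha_bar * cost[:k].sum():
            return d, xs[k]                      # relaxed lyapunov inequality satisfied
    return deltas[-1], xs[round(deltas[-1] / h)] # exit strategy: largest candidate horizon

x = 2.0
for _ in range(6):
    d, x = adaptive_step(x)
    print(d, round(float(x), 3))
```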
utilizing the introduced partition , we can perform an update of the feedback law at time via where we extended the notation of the open loop optimal control to to indicate which initial state is considered .such an update can be applied whenever the following lyapunov type update condition holds , see also for the discrete time setting . [ increasing the control horizon : thm : update ]let , and a partition , , of $ ] with be given .for suppose holds with for some .let be a minimizer of for some .if additionally holds , then the respective mpc feedback law can be modified by and the lower bound on the degree of suboptimality is locally maintained .we like to stress that if the stabilizing partition index is known , then proposition [ increasing the control horizon : thm : update ] allows for iterative updates of the feedback until this index is reached .hence , only step ( 3 ) of algorithm [ increasing the control horizon : alg : control horizon ] needs to be adapted , see algorithm [ increasing the control horizon : alg : update ] for a possible implementation . 1 .for do 1 .implement 2 .compute 3 .if holds : update via + shift the horizon forward in time by and goto ( 1 )usually , the relaxed lyapunov inequality is tight for only a few points in the state space .hence , if a closed loop trajectory visits a point for which is not an equality , then we can compute the occuring slack along the closed loop via this slack can be used to weaken the requirement by considering the closed loop instead of the open loop solution . for simplicity of exposition , we formulate the following result using constant , yet the conclusion also holds in the context of time varying control horizons as in algorithm [ increasing the control horizon : alg : control horizon ] .[ slack stability : thm : convergence ] consider an admissible feedback law , an initial value and to be given . furthermore , suppose there exist functions , such that and hold for all .if additionally from converges for tending to infinity , then the mpc closed loop trajectory with initial value behaves like an asymptotically stable solution .furthermore , the following performance estimate holds * proof * : let the limit of for be denoted by .then , for given , there exists a time instant such that holds for all . as a consequence ,we obtain since as well as hold , this inequality implies hence , boundedness of the integral on the right hand side can be concluded .now , due to positivity and continuity of we have . in turn, the latter ensures and , thus , for approaching infinity .inequality is shown by using in combination with to obtain then , since , the assertion follows .note that within theorem [ slack stability : thm : convergence ] we did not assume semipositivity but convergence of to conclude stability . here , we like to stress that implies a suboptimality index . clearly, both the stability and performance result shown in theorem [ slack stability : thm : convergence ] can be extended to assertions for all if converges for every or a uniform lower bound can be found , i.e. . apart from its theoretical impact , is also meaningful at runtime of the mpc algorithm .for instance , the condition can be checked at each time instant , , instead of , cf . .this is particularly useful since accumulated slack can be used in order to compensate local violations of , i.e. 
weakening the stability condition , as long as the overall performance is still satisfactory .if occurs within the mpc algorithm , the slack can also be used to form an exit strategy . to this end, we denote the performance of the mpc closed loop until time by now , if but , then stability is still maintained , yet the current performance index is worse than the desired bound .again we consider example [ stability and performance : ex : generator ] and analyze the mpc closed loop for with and .if we choose the initial value , then we observe from figure [ slack stability : fig : alpha ] that the local estimate drops below zero for , i.e. stability can not be guaranteed for .yet , computing according to shows that the relaxed lyapunov inequality is satisfied after two steps of the mpc algorithm .hence , using the slack information incorporated in allows to conclude asymptotic stability . and for and along a specific closed loop solution.,scaledwidth=42.0% ]we have shown a methodology to verify the assumptions introduced in under which stability of the mpc closed loop without terminal constraints can be guaranteed .additionally we presented an algorithmic approach for varying control horizons which allows to reduce the optimization horizon length .last , we provided robustification methods via update and slack based rules .this work was supported by the dfg priority research program 1305 `` control theory of digitally networked dynamical systems '' , grant .gr1569/12 - 1 , and by the german federal ministry of education and research ( bmbf ) project `` snimored '' , grant no .03ms633 g .p. giselsson .daptive nonlinear model predictive control with suboptimality and stability guarantees . in _ proceedings of the 49th conference on decision and control , atlanta , ga ,usa _ , pages 36443649 , 2010 .s. s. keerthi and e. g. gilbert .optimal infinite horizon feedback laws for a general class of constrained discrete - time systems : stability and moving horizon approximations ._ journal of optimization theory and applications _ , 57:0 265293 , 1988 . | : the stability analysis of model predictive control schemes without terminal constraints and/or costs has attracted considerable attention during the last years . we pursue a recently proposed approach which can be used to determine a suitable optimization horizon length for nonlinear control systems governed by ordinary differential equations . in this context , we firstly show how the essential growth assumption involved in this methodology can be derived and demonstrate this technique by means of a numerical example . secondly , inspired by corresponding results , we develop an algorithm which allows to reduce the required optimization horizon length while maintaining asymptotic stability or a desired performance bound . last , this basic algorithm is further elaborated in order to enhance its robustness properties . lyapunov stability ; predictive control ; state feedback ; suboptimal control |
`` resource / service as a utility '' once a dream , is now a reality we are living with .utility computing is not a new concept , but rather it has quite a long history . among the earliest referencesis by john mccarthy .last two decades of information technology ( it ) development has witnessed the specific efforts done to make this statement of john mccarthy a reality .utility computing is providing basics for the current day resource utilization .cluster , grids and now cloud computing have made this vision a reality .e - collaborations also called _ virtual organizations _ have been evolved with the technological and paradigm shift .cluster computing offered more centralized resource pool , while grid computing remaind in need via hardware and computation cycles offerings to the scientific community .grid computing models observed a deadlock after the introduction of cloud computing concepts .this situation was a result of missing economic business models , although some work was done on this issue even by the authors .based on pay - as - you - use criteria , cloud computing is still in early stage . however , the economic component of cloud computing is a central focus of actual research .research efforts are going on to establish the basis of cloud computing as every - thing - as - a - service paradigm .infrastructure providing resources as a utility must be dynamic , scalable and reliable .orchestration of resources across the globe , named as virtual organization ( vo)/virtual enterprize ( ve ) has been extensively deployed to achieve this target .change in the hardware and software technology , computing paradigms algorithm and procedures , incorporation of knowledge rather information and data , made the concepts of vo vague .though vo had been created utilizing the best technology known to that time , but the success was short lived .there are three main issues , which has to be considered in order to understand : * advancement in hardware / software technology .* birth of new computing paradigms . * changed nature of resources and requirements from end user .we are living in the age of transformation .a paradigm shift is one that effects the society as a whole . according to peter drucker such transformation place over fifty to sixty - year periods . in his book `` post - capitalist society '' , he outlines three earlier periods of dramatic changes in the western world . *the rise of medieval craft guilds and urban centuries . long distance trade ( thirteenth - century europe ) . * the renaissance period of gutenburg s printing press and lutheran reformation ( 1455 - 1519 ) . * the industrial revolution , starting with watt s steam engine ( 1776 - 1815 ) .existing technologies and paradigms do not vanish with the birth of new concepts rather they adopt what is positive and remove what is not required .technology and paradigm used to form vo have also faced this transformation .for example networking , distributed computing , cluster computing , grid computing , utility computing and now cloud computing , all are related and are improvements of the existing concepts .when technology changes or improves , paradigm needs an upgrade too .new methods and algorithms are created to support the hardware .another main factor is the requirements from the user community .the user community puts a demand on the technology and computing paradigm and they evolve accordingly .`` resources / services as a utility '' is main theme of collaboration . 
to achieve the goal(s ) , organizations and individuals gather all the resources available .the spectrum of availability has covered the whole globe . today , time and space are not a limit due to information and communication technology ( ict ) advancements .this revolution has an impact on the resources types .initial collaborations offered only storage and downloading ( p2p networks ) , computing cycles and storage space ( grid computing and cluster computing ) .main focus remained at hardware and software sharing , but vos for scientific research initiated another requirement , i.e. need of a human expert to guide the beginners in the said domain .expert becomes an integral part of collaborations . also , the two way contribution ( duplex ) motivated us to review and categorize the resource in the vicinity of vo .the categorization we presented is also vigilant to depict the general pattern of resources in any domain .vo is the right place for both technology and computing paradigm to merge and achieve the objectives . in the past two decadescollaborative computing has remained main concern of technology produced .optimization of time and heterogeneous resources by building vo is the key point of today s research directions .vision of a vo has evolved with the networking and distributed computing concepts .research community recognizes vo with different names , e.g. collaboratories , e - science or e - research , distributed work groups or virtual teams , virtual environments , virtual enterprize and online communities .initially , focus was to improve business by utilizing the ability to gather resources which are scattered across the dimensions of time , space and structure . with the advent of modern technology, vo has encompassed almost all fields of life .we can say that every human will be soon part of a vo .vo s concepts need to be revisited with this evolution in general .vos have been visioned from the business perspective in early 1990s .pervasiveness of technology and improvements in computing paradigms has extended the domain of vo to cover all the areas where individuals and organizations meet to achieve some goal ( formal virtual organization ) or without any specific common objective , e.g. social networks ( informal vo ) . to the best of our knowledge ,till now there are no standard procedures or patterns for how vo should be created and evolve to accommodate the changes in its integral parts or entities .lack of standards for vo motivated us to provide a standard vision of e - collaboration incorporating both paradigm and technology shift as a reference architecture ( ra ) to achieve common objective(s ) in any domain .our research efforts also introduced new concepts regarding resources and stakeholder of a vo . to provide a standard for vo , we consider the existing technologies and paradigms .service oriented architecture ( soa ) , web 2.0 and web 3.0 are the underlying technological platform , and computing paradigms include utility computing and cloud computing . during our research processwe studied the existing infrastructures available for vo . utilizing electronic collaborations for achieving common goalsis a tradition rather a requirement .distributed resources are gathered using an infrastructure and are exploited to obtain the said results . 
in it world such collaborationis known as vo .idea is to provide resources as a utility to the end user .service - oriented infrastructures need to act dynamically to fulfil the demands from organizations and businesses .we encountered the following addressable issues : * does existing electronic collaboration approaches follow a standard ? * can we define patterns without predefined standards ? * does existing infrastructures fulfil the requirements of participating entities ? *are the existing infrastructures dynamic and adaptable to the rapidly updating it and business world ? * can we design a generic platform to integrate resources from multiple domains using essential and optional parts ?our research aims to answer these questions .vo s creation process lacks standards / patterns / methods .we analyzed existing vos and the process of their creation through available documentation .we found the following answers to the above questions : * currently , there exist no specific standards for building vo or e - collaboration . *existing infrastructures are modified for specific domain needs and can not be generalized to all the domains . since, existing technology is used without following any standard for creating a vo , it is hard to foresee incoming demands from the participants .* we require a generic platform to integrate resources from a single domain or multiple domains . * defining a generic platform on the basis of everything - as - a - service ( xaas) concept is a solution .definition of participating components as : * * essential parts * * optional parts we present the following solutions for these obstacles : * generalized patterns for building a vo . * defining components of a vo . * providing new definitions and examples of _ resources _ and _ stakeholder _ in different domains and justifying them in real world . * presenting a reference architecture for virtual organization ( ravo ) which can be applied as a starting point for any community ( belonging to a single or multiple domains ) to collaborate . the layout of the paper is as follows : in the next section [ sec : vo ] we introduce the notion of virtual organization . in section [ sec : creatingvo ]we present the principal steps for building a virtual organization , which we map to the process of our blueprint design .the generic reference architecture for virtual organizations , which we call ravo , is laid out in section [ sec : ravo ] . for justification of our approach a proof - of - concept implementation based on ravo is presented in section [ sec : n2sky ] .finally the paper is closed with a conclusion of the findings .a virtual organization is a non - physical communication model with the purpose is of a common goal .it is built up from people and organizations with respect to geographical limits and nature .additionally , a virtual organization provides typically a collection of logical and physical resources distributed across the globe . from a conceptual point of view a _ virtual organization _ resembles a detailed non - physical problems solving environment .many definitions have been presented and many different terms arose , e.g. 
collaboratories , e - science or e - research , distributed work groups or virtual teams , virtual environments , and online communities .a virtual organization can comprise a group of individuals whose members and resources may be dispersed geographically and institutionally , yet who function as a coherent unit through the use of a cyber infrastructure .virtual organizations are typically enabled by and provide shared and , in most cases , real - time access to centralized or distributed resources , such as community specific tools , applications , data , instruments , and experimental operations , which was a key research areas of the authors in the past .the different types of virtual organizations depend upon mode of operation , goals they achieve and life span for which they exists .regardless of the objectives , virtual organizations possess some common characteristics .virtual organizations provide distributed access across space and time .structures and processes running a virtual organization are dynamic .email , video conferencing , telepresence , awareness , social computing and group management tools are used to enable collaboration among the participants .operational organizations are supported by simulation , database and analytical services . in daily lifewe come across many virtual organizations in terms of social networks as well ( e.g. facebook , myspace ) .it can be phrased that soon every human on this earth will considered to be a part of some virtual organization serving its purpose in the said organization .generally , a virtual organization provides a global problem solving platform .it is difficult to specify or restrict the domain for which they are serving .some advantageous roles played by virtual organizations are facilitator of access ( birn , lead , nanohub ) , cabig , lhc computing grid , enhancer of problem solving processes ( teragrid ) and key to competitiveness ( geon ) .virtual organizations have served in the field of earthquake engineering ( southern california earthquake center ( scec ) ) , cancer research ( cancer biomedical informatics grid ( cabig ) ) , climate research ( earth system grid ) , high - energy physics ( large hadron collider ) , and computer science .other communities are now forming virtual organizations to study system - level science .these virtual organizations are addressing problems that are too large and complex for any individual or institution to tackle alone .it simply is not possible to assemble at a single location all of the expertise required to design a modern accelerator , to understand cancer , or to predict the likelihood of future earthquakes .virtual organizations allow humanity to tackle previously intractable problems .the creation of a virtual organization is time consuming and should be a well planned activity . in this sectionwe will discuss virtual organization and technology from different perspectives .both aspects are required to support each other .technology provides the basic infrastructure for a virtual organization to exist . a virtual organization in turn places demands on information technology and shapes the evolution of technology . for the last decadethe virtual organization is one of the most discussed collaboration environment ; but still no standards exist . 
from this discussions we assume a step wise approach which is helpful in the creation of virtual organization .it can be separated in two phases which are detailed below .the definition of a virtual organization starts with a series of questions , which are very critical in order to proceed .these questions ( qx ) are listed in the following : * q1 : why to form a virtual organization ?what are the reasons of an organization to create a virtual organization ?* q2 : what is the motivation behind participation ? why should other persons , institutes , service providers , etc. want to participate in a virtual organization ?* q3 : what services are offered by a virtual organization ? * q4 : how are these services fared ? what is the type of the resources / business model ?* q5 : who are the intended users ? who will eventually use and get benefited from this virtual organization ? * q6 : what is the life of ( membership of ) a virtual organization ?are temporal alliance or permanent participation expected ?based on these q&a activity it is necessary to identify the building blocks of a virtual organization .gannon has identified main components of a virtual organization .these components are * _ common interest . _ the reason to form a virtual organization , * _ users . _ the participants of a virtual organization , * _ tools and services ._ this is a crucial part of a virtual organization , which maintains the overall working environment and saves the existing patterns to be reused in order to reduce time to solve similar problems .a virtual organization requires a collection of shared analysis tools ( e.g. visualization tools and provenance tools ) .tools can be integrated into specific virtual organization work flows and can be shared and reused .they are used to collect data and publish results . * _ data . _a virtual organization contains two types of data , generally categorized as meta data and operational data that is being operated by tools .the component identification process provides the basic building blocks .the designer of a virtual organization can decide what to be improved and further included in the design process .also , each component can be given a unique definition by the designer in context of a virtual organization being created .detailed information about creation , management and application area of virtual organizations is available in .according to gerrit muller there are two simultaneous trends , * increasing complexity , scope and size of the system of interest , its context and the organizations creating system . 
*increasing dynamics and integration : shorter time market , more interoperability , rapid changes and adaption in the field , in a highly competitive market , for example cost and performance pressure .these trends form basis for our proposed ra as well .vos are developed as distributed system at multiple locations , by multiple entities , consisting of multiple applications by multiple vendors , merging multiple domains for providing solutions to multiple problems .ra comes in scene where the multiplicity reaches a critical mass triggering a need to facilitate product creation and life cycle support in this distributed open world .we detail the ravo in the subsequent sections .we define ravo as `` an open source template that does not only depict the architectural patterns and terminology , but also defines the boundaries where heterogeneous resources from different domains merge collaboratively into a common framework '' .a ra has a life span and is dependent on the target architecture and possibly other ras . as guideline for our effortwe closely analyzed the ra presented by shaman ( european commission , ict-216736 ) , gerrit muller and nexof .ravo provides * a common lexicon and taxonomy . * a common ( architectural ) vision .* modularization and complementary context . * a layered approach(bottom - up ) .a common vision facilitates the participating entities to work as a team to achieve their decided goals .modularization helps to integrate different domains thereby decreasing the efforts and context information make the dynamic nature of the architecture consistent .we aim for developing a ra which allows for new forms of it infrastructure coping with new collaborative processing paradigms , as grid computing and cloud computing .thus we have to deliver an environment to allow for the new _ internet of services and things _ accommodating the novel service stack , as iaas , paas and saas .architecture is classified into different layers according to the service each layer provides .layered architecture is chosen because it helps to group different components ( logical and physical ) according to the degree of relatedness and required functionality .ravo is based on spi model .layered approach is used to achieve the goal of providing all the resources as a service .layers are distributed into 3 broad categories , iaas , paas , saas figure [ fig : csvo ] presents the framework for vo using the spi model .the layers are distributed into 3 broad categories , iaas , paas and saas thus resulting in xaas .[ [ section ] ] in context of ravo , saas is composed of a service layer .it contains domain specific applications ( dsa ) accessible by all users .dsas are the combination of several user interfaces and business models found in the vo layer .users , who only use the platform to solve their domain specific problems and do not contribute to the vo , find an entry point at this layer .* _ service layer _ : it has open source , downloadable software , categorized in domains .the service layer packages several services provided by the vo layer to be subscribable entities .these entities include generic functionality to query information from the problem domain as well as the means to perform data mining on the compound data created or provided by the combination of the services .[ [ section-1 ] ] in ravo two layers , namely vo layer and abstract layer , cover paas . 
* _ abstract layer _ : this layer is composed of essential tools which enable the whole framework to be exploited in a dynamic manner .the set of tools consist of provenance , workflow , graphical tools and any other domain specific tools which are used to enhance the reuse of the resources for a diverse set of problem solving activity .each tool provides its own functionality , its own user interface description , as well as an abstract api ( identical for each tool ) to access the resource in factory layer . *_ vo layer _ : this layer is the entry point for user .it provides the realization of the user interface description and defines a business model on top of the abstract layer to set usage cost according to usage statistics .participating entities can agree on a usage model and build a cost trust for selling their resources . in context of vo ,contributor / subject users ( who not only use the resources offered by a vo but also contribute to the vo ) are authorized to access the system on this layer .all have access to the system on paas layer .[ [ section-2 ] ] in ravo logical and physical resources are considered to be the part of iaas .this part consists of two sub - layers in ravo : factory layer and infrastructure enabler layer .only users with administrative rights have access to this layer . * _ factory layer _ :belongs to the iaas category and contains resources for ravo .resources are described as physical and logical resources .physical resources comprise of hardware devices for storage and computation cycles in a distributed manner .logical resources contain expert s knowledge that supports the problem solution activity thereby reducing time to reach the specified goal . * _ infrastructure enabler layer _ : allows access to the resources provided by the factory layer .it consists of protocols , procedures and methods to contact the desired resources for a problem solving activity .it acts as a glue or medium to reach the desired resources based on user request .[ [ section-3 ] ] all layers are providing their functionality in a pure service - oriented manner so we can say that ravo is xaas .vos is a broad category of distributed systems .it is envisioned as a combined effort of multiple entities ( organizations , people , hw , sw ) for achieving a goal .building ra for vo is effective in many ways .ravo forms basis for vos belonging to any domain .it improves the effectiveness by managing synergy , providing guidance for collaboration , generic framework , managing and sharing the architectural patterns .interoperability is the most critical aspect of collaborative computing and vos main feature .it determines the usability , performance and dependability of user level applications .integration cost and time are also important factors in context of interoperability .ravo supports interoperability by defining a negotiation model / trust for the participating entities thereby supporting the effective re - use of patterns .many ra focuses the technical architecture only .according to saf meeting conclusion , a ra should address , * technical architecture .* business architecture .* customer context .ravo well addresses these three aspects .it presents a technical architecture specifying the must participating modules , apis , protocols and platform to support vo .ravo offers a business model which is open according to the participating entities conditions for resource sharing .business model and customer context overlap .ravo explicitly defines roles of participating entities 
as _ subject _ , _ consumer _ , _ producer _ and _ administrators_. elaboration of roles makes it easy to dynamically update the business model as an entity changes the role .ravo supports feedback from the participating entities which is helpful in improving and maintaining the existing ra .these concepts are already detailed in ravo section .ra is a perceived image of existing technologies .designing ra is a challenging job because it needs sufficient proof to justify its need in the said context .ravo focuses on vos . to the best of our knowledge, there is no standard pattern or framework which can be used to create a vo from scratch .our vision is to provide the vo community a complete framework for identifying main components and abstract a life cycle to create vo from scratch .it grasps knowledge from existing structures such as nexof and shamans ( european commission , ict-216736 ) .guidelines are used to modify the requirements into an ra which supports creation , dynamic evolution and maintenance of a vo .viewpoint is defined as a specification of the conventions for constructing and using a view . a pattern or template from which to develop individual views by establishing the purposes and audience for a view and the techniques for its creation and analysis .view is a representation or description of the entire system from a single perspective .stakeholder is the viewer , who perceives the system according to her role .viewpoint has a name , stakeholders addressed by it and concerns to be addressed by the viewpoint , and the language , modeling techniques or analytical methods to be used in constructing a view based on the viewpoint . according to these definitions , viewpoints extracted from the concerns of the stakeholders are shown in figure [ fig : vp ] .these viewpoints are detailed in the following sections .stakeholders collaborate to form a vo .all participants of a vo have an objective ( personal or organizational ) to achieve via this collaboration .sub - viewpoints are , * domain definition : depends on the type of problem solution , target domain can be one or multiple .thus , stakeholders can be from one or multiple domains .* participation level : participation can be individual or at organizational . * duration : stakeholder remain part of the vo according to the membership duration agreed upon among the collaborative entities .it can vary depending on the type of vo , partial or permanent ( either participation is required for a specific part or throughout ) and a business model in case of profitable organizations . * types of contribution : it is decided by the role assigned to a specific stakeholder in the context of a vo . 
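the stakeholder viewpoint and its sub - viewpoints listed above are essentially a small record structure . purely as an illustration , they could be encoded as follows ; all identifiers and enumeration values are our own choices , not part of ravo .

```python
# hypothetical encoding of the stakeholder viewpoint; all identifiers and
# enumeration values are our own choices, not part of the ravo text.
from dataclasses import dataclass, field
from enum import Enum

class Participation(Enum):
    INDIVIDUAL = "individual"
    ORGANIZATIONAL = "organizational"

class Duration(Enum):
    PARTIAL = "partial"        # needed only for a specific part of the activity
    PERMANENT = "permanent"    # participates throughout the life of the vo

@dataclass
class StakeholderViewpoint:
    name: str
    domains: list                      # one or several target problem domains
    participation: Participation
    duration: Duration
    contribution_roles: list = field(default_factory=list)

# example: an expert taking part individually in a single domain
expert = StakeholderViewpoint(
    name="domain expert",
    domains=["neural networks"],
    participation=Participation.INDIVIDUAL,
    duration=Duration.PARTIAL,
    contribution_roles=["subject"],    # consumes and contributes at the same time
)
```

the ` subject ' role in the example refers to the stakeholder category discussed in the text , i.e. a participant who is consumer and producer at the same time .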
this viewpoint formulates the boundaries of a vo and its participants .all the participants must clearly put their requirement and goals while building collaboration .these requirements should reflect any assumption made on the architecture and the respective requirements stemming from the assumptions .once requirements are defined ( in form of a list , catalogue ) , vo has a vision to achieve and targets are set accordingly .trust governance viewpoint is very important to any collaboration specially for vos .keeping stakeholders and resources glued together to achieve a target is only achievable via strong trust .following sub - viewpoints are defined in this context : * trust / policy formation : experts and planners from participating entities prepare an agreed upon policy / model / contract .this policy defines the rules for participating and leaving vo , contributing and consuming resources , penalties for violation and measures to keep consistent and just to all the stakeholders .* objective catalogue : this viewpoint provides the list of all the contracts and agreements in a documented form necessary for authentication , authorization and stakeholder management . * reviewing policy : due to dynamic nature of vo , the policies and contracts are reviewed to be in the accordance of change in requirements , technological updates , removal and entrance of participants .* business perspective : this viewpoint is optional depending upon the type of vo .profitable vo have a business model for metering , billing in addition to authentication and user management .this view point provides details of participating components for realizing the vo on technological and system level .it is further divided into 4 sub - viewpoints which are briefed here as , * data : this viewpoint aims to depict the types of data utilized in collaboration .two broad identifications are found as _ meta data _ and _ operational data_. problem nature , domain and participating entities decide on the data source and security in collaborative efforts .data and relationship between different components can be represented using table , mat , uml class diagrams , activity diagrams and component diagrams .* applications and tools : this viewpoint describes the list of running applications and tools utilized in a problem solving activity .this view can be further divided according to requirement .roles of stakeholders also decide the access to different available tools and applications at multiple levels ( interface , infrastructure , platform and so on ) .distribution and relationship among applications , tools and components can be shown using uml component diagram .* resources : this viewpoint explains the list of resources ( table , list ) , their owners , availability , usage cost ( in case of profitable organization ) and access rights .we have to sub - viewpoints : * * subject : an important viewpoint which defines stakeholder which consume and contribute to the resources simultaneously . * * enabler : this viewpoint details the stakholders which are related to deployment , configuration , monitoring and lifestyle management .roles assigned in this viewpoint are developers , administrators , business providers , planners and experts . 
* log catalogs : this viewpoint keep track of activities which are carried out during problem solving activities .dynamic collaboration environments need to this record for the feedback and improvement .this viewpoint lists the best available technology currently deployed .if new technology is employed which is not listed then it should be added to the list later .it is very helpful keeping vo consistent with the upcoming demands from business and user requirements and advancement in new computing paradigms and methods .platforms used for collaboration have remained in a constant up - gradation .choice must be made on technology by giving weighage to qos , security , cost effective and timely solution to the end user .an important sub - viewpoint of technological aspect is virtualization .it provides the way to reuse hardware cost , respond dynamically and maximize resource utilization and easy relocation .virtualization viewpoint deals with logical resources rather than physical resources .all these viewpoints are shown in the diagram figure [ fig : vp ] these viewpoints can be represented using lists , tables , uml tools , and other requirement specification tools available .they are also extendable and organizations can add any further categories according to their goals .ravo is composed of multiple layers and each layer provides a set of components which are the building block of a vo .selection of these building blocks is subjected to various aspects ( i.e. life span , nature ( dynamic or static ) , type , formal , informal and so on ) .we define interfaces for these components by specifying parameters ( mandatory and optional ) , methods and necessary conditions for their executions .vo needs to keep specific information in general , when created .it possesses some characteristics , ( e.g. unique i d , date created , description about purpose domain etc ) .it also requires to maintain information about participating organizations and individuals .it is a must to maintain and update the information about the resource providers in a vo .organization offering resources , time period for which resources are made available and access rights are potential characteristics .[ [ query - interface . ] ] query interface .+ + + + + + + + + + + + + + + + ravo proposes query interface as a mandatory component at service layer .user is facilitated with remote or desktop access .query interface enables user to search for their problem solution in knowledge base .knowledge base contains history of problems solved previously . on successful query useris provided with appropriate output . in case of no matching solution ,query is processed and problem solutions is provided to user and knowledge base is updated .query interface must provide login facility , identify the query type , check for existing solutions and must maintain a tolerable response time .[ [ domain - specific - application - dsa . ] ] domain specific application ( dsa ) .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + dsa is a mandatory component dsa provide user with the ability to either download the applications and run on their own systems or use them at vo platform for problem solution .the range of applications depends on the domain and type of vo .stakeholder can share their applications paid or non - paid basis .sharing of application can be conditional ( e.g. 
fully or partially paid in case of profitable organization ) .information maintained about dsa must include how it is accessible ( online or offline ) , access rights and cost ( as defined in the contract / business model ) . [ [ data - mining - tools . ] ] data mining tools .+ + + + + + + + + + + + + + + + + + data mining tools are an optional component of ravo .they are a must for analytical and scientific research based vos .interface for data mining tools include tool description , access rights and manul / help .paas layer is composed of two layers , namely 3-vo layer and 2-abstract layer .component specification is detailed below .[ [ vo - trust . ] ] vo trust .+ + + + + + + + + vo layer consists of two main mandatory components .vo trust is the most important of all components .it is formed by combining different modules and performs multiple tasks .it is responsible for _ authentication _ and _ authorization _ of vo members .authorization is done on the basis of _ roles _ defined in the _ contract / business model_. vo trust have a mandatory emphcontract which consists of policies to achieve the goals of vo . in profitable or partially profitable vos business model is also mandatory component of vo trust . in ravo business modelis optional and depends on the type of vo , however contract is mandatory .access rights are defined in contract or business model .different methods are available to define the access rights . organization models and access rights are comprehended in . according tothe authors access rights might be subjected to _ organizational _ and _ direct _ change .all components of vo trust are synchronized to maintain the vo .each component is assigned a specific task and output of one component provides input to the other component .vo trust has a _ resource information _ component that acts like a _registry_. it keeps necessary details about all the resources available in vo . [ [ user - interface . ] ] user interface .+ + + + + + + + + + + + + + + user interface is a mandatory component of vo layer .it provides access to the platform services offered by vo .user is authenticated and authorized using login option . after authorization, user can formulate different queries and perform actions .these facilities are realized using a web portal .[ [ workflow - tools . ] ] workflow tools .+ + + + + + + + + + + + + + + abstract layer is a sub layer of paas layer .it includes different components .workflow tools is a mandatory component of this layer .workflow management is a critical aspect of a vo in any domain .it supports provenance management which plays vital role in monitoring and maintaining a vo .workflow can be interpreted in different forms ( e.g. 
graphical , textual , source code ) .interpretation mode is chosen on the level of audience a vo possess .workflow tools keep track of all the processes active in vo .process management can be included as a sub component of a workflow tools .dynamic adaption of in - process workflow is an essential part of any workflow management system .classification of approaches along their strength and limitations used for dynamic adaption in workflow systems are detailed in .flexibility criteria in process management to handle the foreseen and unforeseen behaviors are categorized in .workflow tools allow user to define workflows for a problem solving activity .the participants responsible at each stage of this activity are notified and are responsible for delivering the promised results .workflows are reusable and reduce redundancy and time for similar problems .information maintained consist of workflow i d , type , status , access rights , how it interprets the results and process management .workflows are used by provenance management to track the problem solving activity on user demand .[ [ provenance - tools . ] ] provenance tools .+ + + + + + + + + + + + + + + + + with the advent of financial computing systems , as well as of data - intensive scientific collaborations , the source of data items , and the computations performed during the incident processing workflows have gained increasing importance .provenance of a resource is a record that describes entities and processes involved in producing and delivering or otherwise influencing that resource . in a vo ,provenance forms a critical foundation for enabling trust , reproduction and autentication .provenance assertions are a form of contextual metadata and can themselves become important records with their own provenance .provenance tools are mandatory and included in abstract layer of ravo .provenance management is dependent on authorization , query management and workflow management .[ [ graphical - interface . ] ] graphical interface .+ + + + + + + + + + + + + + + + + + + + graphical interface is a mandatory component of abstract layer .it facilitates users to perform different task in vo web portal .it provides an understandable interface to interact with the vo .[ [ resource - management . ] ] resource management .+ + + + + + + + + + + + + + + + + + + + resource management is a mandatory component of abstract layer .it provides a mechanism to select and aggregate resources for a problem solving activity . depending upon the underlying technology ,vo developers can deploy different resource management tools .necessary information maintained depends on the resource type and interest of participating entities .basic information includes resource s unique identification , categorization as logical or physical , owner information , access rights and costs etc .iaas layer is composed of infrastructure enabler layer and factory layer . this layer from the fabric of ravo .all the resources are avaialble in factory layer and are exploited through infrastructure enabler layer . [[ infrastructure - enabler . 
] ] infrastructure enabler .+ + + + + + + + + + + + + + + + + + + + + + + this module is depending on the underlying technology .qos , service level agreement ( sla ) , security , fault tolerance and disaster management are most important issues specifically in clouds .these aspects have to be implemented on the bases of terms and conditions presented by participating entities .financial aspect is another limitation for the implementation of these modules .any other desired aspects can be added to extend the infrastructure enabler layer .components are dependent on the decision of the developers .ravo identifies least basic and gives developers an open end to use them as mandatory or optional in their target vo .[ [ resource - catalogue . ] ] resource catalogue .+ + + + + + + + + + + + + + + + + + + this module is part of factory layer but not explicitly shown in ravo .it acts like a database for the resources .vo developers can include it at any layer according to their needs .ravo keeps it at the factory layer as a mandatory component .it contains information about resource management .[ [ expert . ] ] expert .+ + + + + + + expert represents the logical resource in ravo factory layer .an expert plays important role in problem solving activity .expert can be contacted online during the problem solving process or she can be accessed offline .vo must maintain detailed information about expert so that this feature can be fully exploited .[ [ data - service . ] ] data service .+ + + + + + + + + + + + + data services is a mandatory component of factory layer .it represents the physical resource in ravo .data stores are important scientific and research based vos .[ [ computational - services . ] ] computational services .+ + + + + + + + + + + + + + + + + + + + + + + computational services are mandatory component of factory layer .they also form the physical resources offered by a vo .as proof - of - concept of our approach we used ravo as a design blueprint for implementing a cloud based vo for neural network research , namely , n2sky .we based the development of n2sky on the blueprint provided by ravo and produced a concrete instance out of our proposed standard .this section compares n2sky with ravo to reveal the process of creation of n2sky .the comparison justifies and proves how ravo supported different development phases of n2sky .we explain n2sky as an instantiation of ravo but with concrete components .we divide this comparison in 3 levels .first , _ requirement analysis phase _ that defined boundaries of n2sky .second , _ component identification phase _ which made it easy to identify the components of n2sky and also choose between optional and mandatory components .third , _ implementation phase _ that reveals how technology independence , xaas and layered distribution of components made it helpful to implement the system .the stakeholders envisioned in ravo are also implemented as part of n2sky . 
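before turning to the n2sky instantiation , the component specification of the preceding subsections can be collected into a small sketch . the class names and the helper function are ours ; the mandatory / optional flags restate the text , and where the text leaves the status open ( e.g. for the expert resource ) the flag below is only our reading .

```python
# illustrative summary of the ravo layers and components described above;
# identifiers are ours, the mandatory / optional flags restate the text.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    mandatory: bool
    note: str = ""

@dataclass
class Layer:
    name: str
    category: str              # "saas", "paas" or "iaas"
    components: list = field(default_factory=list)

ravo_layers = [
    Layer("service layer", "saas", [
        Component("query interface", True),
        Component("domain specific application", True),
        Component("data mining tools", False),
    ]),
    Layer("vo layer", "paas", [
        Component("vo trust", True, "contract mandatory, business model optional"),
        Component("user interface", True),
    ]),
    Layer("abstract layer", "paas", [
        Component("workflow tools", True),
        Component("provenance tools", True),
        Component("graphical interface", True),
        Component("resource management", True),
    ]),
    Layer("infrastructure enabler layer", "iaas", [
        Component("infrastructure enabler", True, "qos, sla, security left to developers"),
    ]),
    Layer("factory layer", "iaas", [
        Component("resource catalogue", True),
        Component("expert", True, "logical resource (status our reading)"),
        Component("data service", True, "physical resource"),
        Component("computational services", True, "physical resource"),
    ]),
]

def mandatory_components(layers):
    """components a minimal vo instance has to provide."""
    return [(layer.name, c.name)
            for layer in layers for c in layer.components if c.mandatory]
```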
in the previous section we detailed a series of questions which must be answered by the responsible authorities for creating a vo .n2sky utilizes this pattern for defining the requirements boundary of the system .these questions are answered in detail in an interview by engaged software engineering experts , whic are presented in section [ sec : evaluation ] .n2sky is a layered architecture instantiated from ravo .the n2sky is shown in figure [ fig : n2s ] .n2sky is also presented as an xaas , based on cloud spi model .it consists of 3 layers , namely saas , paas and iaas .these layers have sublayers similar to ravo .each layer has some components which are either mandatory or optional depending upon their participation in vo .figure [ fig : csvo ] shows ravo framework .a detailed , tabular comparison of ravo and n2sky components is given in figure [ fig : comp ] .section [ sec : ravo ] presented interface specification for components . here , we analyze how these interface specifications were used in n2sky .we compare the underlying framework ravo with its instantiation as n2sky , in a top - down fashion .we start with saas layer .saas layer of ravo consists of optional and mandatory components .choice of components and decision on their status ( mandatory and optional ) is open for the developers .the inclusion of components is dependent on the requirement definition by the stakeholders .saas layer has one layer , named service layer .here tables are included for the sake of comparison . * query interface : ravo proposes query interface as a mandatory component at service layer . in n2sky , query interface is also included as a mandatory component . *domain specific application ( dsa ) : dsa is a mandatory component .n2sky has a simulation service but at neural network layer ( sub layer of paas ) .n2sky includes dsa as nn specific applications .n2sky is planned to include nn specific applications .the simulation service provides the creation , training and simulation of neural objects which in turn are instances of nn paradigms .currently , simulation services are provided at nn layer of n2sky . * data mining tools : data mining tools are an optional component of ravo .n2sky has not included this option .n2sky also has one layer , named service layer ( similar to ravo ) .extended components included at service layer in n2sky are : * web portal : n2sky web portal is a mandatory component .* smaprtphone app .* hosted ui .paas layer is composed of two layers , namely vo layer and 2-abstract layer .component specification is detailed below . in n2sky paasconsists of 3-neural network layer and 2-abstract layer .[ [ vo - layer - comparison - with-3-neural - network - layer . ] ] vo layer comparison with 3-neural network layer .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + in ravo vo layer has the following components : * vo trust : mandatory component of vo , which is responsible for enabling resources , defining policies to achieve a goal .it has several components and is extendable according to the need and requirement of stakeholders .n2sky has distributed trust component in to different modules . in n2sky ,neural network layer has a management service component to serve the purpose .other components are available at abstract layer namely , business model with slas and accounting . 
* user interface : user interface is a mandatory component for solving problem utilizing vo paas utility .it provides an interface to interact with the vo .n2sky also realizes this component as a part of web portal .extended component of n2sky : * hosted component : provides and interface for components hosting platform .* simulation service : already described in service layer comparison .it is a mandatory component that is part of neural network layer of n2sky .[ [ sub : alcompared ] ] abstract layer comparison .+ + + + + + + + + + + + + + + + + + + + + + + + + + ravo and n2sky both have this sub layer named 2-abstract layer. components of these layer in ravo and n2sky are compared . * resource management : resource management is a mandatory component of abstract layer .it provides a mechanism to select and aggregate resources for a problem solving activity . depending upon the underlying technology ,vo developers can deploy different resource management tools . in n2sky resource managementis achieved via mandatory registry component .* workflow tools : n2sky also has a workflow system under development . * provenance tools : provenance tools are proposed in ravo but they are not included in n2sky . * graphical interface : a mandatory components which facilitates interaction with vo easier and helps user to get results in an understandable format. it also assists user in formulating queries and browsing in vo environment . in n2sky graphical interfaceis implemented as a web portal described earlier .extended components supporting vo trust ( as proposed in ravo ) functionality :* controlling and accounting : this component along with sla component serves as a business model . in ravo business modelis optional .* usermanagement .* access control .* annotation service .* knowledge management : it refers to expert s knowledge of ravo defined at factory level .* component hosting platform .iaas layer is composed of 1-infrastructure enabler layer and 0-factory layer . this layer froms the fabric of ravo .all the resources are available in factory layer and are exploited through infrastructure enabler layer .infrastructure enabler layer in ravo brings an open choice for the developers for underlying technology .qos , service level agreement ( sla ) , security , fault tolerance and disaster management are aspects to be considered in particular .further extension can be done by developers .n2sky also have an infrastructure enabler layer .it contains following components .* data archive : implemented as a mandatory component of n2sky .* component replication service .factory level of ravo is also instantiated in n2sky .it has following components in ravo * resource catalogue : resource catalogue module is an extension of resource management component .it is a mandatory components .it keeps information about resources which is of interest to vo . in n2skythis task is achieved by registry component . *computational services : ravo offers computational services as a mandatory component . in n2sky this component is realized by component replication service .it is a mandatory component which act as n2sky paradigm archive service . * data services : this component of ravois realized by n2sky as a part of infrastructure enabler layer . *expert s knowledge : n2sky implements this component of ravo as knowledge management as a subcomponent of abstract layer . 
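as a short usage note on the comparison just given , the ravo - to - n2sky realisation can also be written down as a plain mapping . the dictionary structure is only illustrative ; the n2sky service names on the right - hand side follow the text .

```python
# illustrative reading of the ravo-to-n2sky comparison as a mapping;
# the structure is ours, the n2sky service names follow the text.
ravo_to_n2sky = {
    "query interface": "query interface (service layer, mandatory)",
    "domain specific application": "simulation service (neural network layer)",
    "data mining tools": None,                 # optional in ravo, not included in n2sky
    "vo trust": "management service + sla / accounting components",
    "user interface": "web portal",
    "workflow tools": "workflow system (under development)",
    "provenance tools": None,                  # proposed in ravo, not included in n2sky
    "graphical interface": "web portal",
    "resource management": "registry component",
    "resource catalogue": "registry component",
    "computational services": "component replication service",
    "data services": "data archive (infrastructure enabler layer)",
    "expert's knowledge": "knowledge management (abstract layer)",
}

not_realised = sorted(k for k, v in ravo_to_n2sky.items() if v is None)
# -> ['data mining tools', 'provenance tools']
```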
based on the experiences drawn from development of n2skywe could derive the following list of findings : * ravo provides strong theoretical grounds to clear the vision of vo developers and participants before they start building a community .* requirement analysis and component identification phases enable developers to list mandatory and optional components .the purpose is twofold .first , must parts of the vo are confirmed .second , optional parts leave room for future requirements and upgrades .* ravo framework is flexible and generic .components at different layers are moved or integrated with other parts as it eases the developing process .* ravo is technology independent and it gives freedom of choosing any suitable tools and programming languages . *ravo emphasis on providing graphical interface to ease the end user so that they can communicate and formulate their queries easily .the interface should not be complicated that only professionals can interact .* stakeholders and their roles are important to understand . pattern developed for ravo are used here extensively to design a business model for n2sky . for further justification of our approach also an informal evaluation was conducted .hereby two software developers ( which we name a and b ) , who were involved in the final development of voci , were interviewed .interview partner a is a senior researcher at our research group .his expertise include soas ( including work on mobile devices ) , with a specific focus on process aware information systems .he developed a light - weight modular process engine to fully support external monitoring and intervention .he further published in the field of restful service description , composition and evolution .he was interviewed regarding ravo as a staring point for developers in nn domain .interview partner b was a master student at university of vienna , chose ravo as the template to develop a cloud - based virtual organization for nns called n2sky .erwin mann has experience in implementing service - oriented architectures ( soas ) , service orchestration , to create workflows and in porting such systems to cloud - based environment .n2sky brings together both nn paradigm developers and users who deal with problems that are beyond conventional computing possibilities .n2sky provides a standardized description language for describing neural objects ( instances of neural paradigms ) called vinnsl .furthermore n2sky provides a business model for researchers and students but also for any interested customer .s core process is the simulation service including creation , training and evaluating of neural objects in a distributed manner in the cloud .both researcher gave their opinion after critical analysis of ravo .researcher a s abstraction of ravo in terms of `` q&a '' is helpful for the developers of vo in any domain .researcher b has applied ravo and developing an instance in the domain of nn .we deduce the following statements from this evaluation .* ravo best fits needs of community for developing a vo from scratch . 
* ravo supports evolution of existing systems in to a vo .n2sky is an example of such evolution .* ravo is presented in a layered fashion , with a choice of mandatory and optional components .layered approach make it easy to distribute the components in different layers and also developers are not bound to choose the exact distribution .ravo is a flexible and extendable framework .developer can change the components and move them to any desired layer .for example in n2sky , components have been moved to different layer as compared to ravo . * ravo is not technology dependent .both the researchers described their alternative choices which establishes the technological independence of ravo . * categorization of resources into logical and physical is a new dimension for vo developers .inclusion of human expertise as a resource supports the demanding nature of problem solving ability , thereby increasing the level of trust in users .* ravo presented a new concept of stakeholder , _subject_. a unique idea of how a stakeholder can become a resource in a vo .being consumer and producer at the same time is difficult to implement .ravo make it easier by introducing the stakeholder categorization .* ravo foresees a business model which is introduced in n2sky as a mandatory component .stakeholder s roles are integrated in business model to set the usage and cost policy . the interviews and the derived results are detailed in .the strong response from research community , motivated us to design a reference architecture for virtual organization ( ravo ) .reference architectures are system specific and provide a level of detail required to translate the required capabilities as derived from missions , operational concepts , and operational architecture views , into projects within a capable package , which will increase system quality and decrease system development costs .we developed a service - stack pattern to start a virtual organization from scratch and also presented a design blueprint for collaborative environments .emphasis is on flexible and simple interface for interaction from the user perspective .it supports addition of tools and application to the virtual organization environment .we give guidelines for building a domain specific virtual organization for computational intelligence community and can extend it according to our requirements .t. weishupl , f. donno , e. schikuta , h. stockinger , and h. wanek , `` business in the grid : big project , '' in _ grid economics & business models ( gecon 2005 ) of global grid forum _ , vol . 13 , ( seoul , korea ) , ggf , 2005 . t. weishupl and e. schikuta , `` towards the merger of grid and economy , '' in _ international workshop on agents and autonomic computing and grid enabled virtual organizations ( aac - gevo04 ) at the 3rd international conference on grid and cooperative computing ( gcc04 ) _ , vol . 3252/2004 of _ lecture notes in computer science _ , ( wuhan , china ) , p. 563570, springer berlin / heidelberg , 2004 .t. weishupl , c. witzany , and e. schikuta , `` gset : trust management and secure accounting for business in the grid , '' in _6th ieee international symposium on cluster computing and the grid ( ccgrid06 ) _ , ( singapore ) , p. 349356, ieee computer society , 2006 .w. mach , b. pittl , and e. 
schikuta , `` a business rules driven framework for consumer provider contracting of web services , '' in _15th international conference on information integration and web - based applications & services ( iiwas2013 ) _ , december 2013 .w. mach and e. schikuta , `` a generic negotiation and re - negotiation framework for consumer - provider contracting of web services , '' in _14th international conference on information integration and web - based applications & services ( iiwas2012 ) _ , ( bali , indonesia ) , acm , dec . 2012 .c. kesselman , i. foster , j. cummings , k. a. lawrence , and t. finholt , `` beyond being there : a blueprint for advancing the design , development , and evaluation of virtual organizations . , '' tech .may 2008 .e. schikuta and t. frle , `` vipios islands : utilizing i / o resources on distributed clusters , '' in _15th international conference on parallel and distributed computing systems ( pdcs02 ) _ , ( louisville , ky , usa ) , isca , 2002 .e. schikuta , t. frle , and h. wanek , `` vipios : the vienna parallel input / output system , '' in _ 4th international euro - par conference _ ,1470/1998 of _ lecture notes in computer science _ , ( southampton , uk ) , p. 953958, springer berlin / heidelberg , 1998 .p. brezany , t. mck , and e. schikuta , `` a software architecture for massively parallel input - output , '' in _ third international workshop applied parallel computing industrial computation and optimization ( para96 ) _ ( j. wasniewski , j. dongarra , k. madsen , and d. olesen , eds . ) , vol .1184 of _ lecture notes in computer science _ , ( lyngby , denmark ) , p. 8596, springer berlin / heidelberg , 1996 .10.1007/3 - 540 - 62095 - 8_10 .g. klimeck , m. mclennan , m. mannino , m. korkusinski , c. heitzinger , r. kennell , and s. clark , `` nemo 3-d and nanohub : bridging research and education , '' in _ nanotechnology , 2006 .ieee - nano 2006 .sixth ieee conference on _ , vol . 2 ,pp . 441444 , ieee , 2006 .c. catlett , w. allcock , p. andrews , r. aydt , r. bair , n. balac , b. banister , t. barker , m. bartelt , p. beckman , __ , `` teragrid : analysis of organization , system architecture , and middleware enabling new types of applications , '' _ hpc and grids in action , amsterdam _ , 2007 .u. nambiar , b. ludaescher , k. lin , and c. baru , `` the geon portal : accelerating knowledge discovery in the geosciences , '' in _ proceedings of the 8th annual acm international workshop on web information and data management _ , pp . 8390 , acm , 2006 .i. foster , `` service - oriented science : scaling escience impact , '' in _iat 06 : proceedings of the ieee / wic / acm international conference on intelligent agent technology _ , ( washington , dc , usa ) , pp . 910 , ieee computer society , 2006 .a. a. huqqani , p. beran , x. li , and e. s. ., `` n2cloud : cloud based neural network simulation application , '' in _ in proceedings of the international joint conference on neural networks 2010 ( wcci 2010 ) , barcelona , spain _ , 2010 . s. rinderle - ma and m. reichert , `` managing the life cycle of access rules in ceosis , '' in _enterprise distributed object computing conference , 2008 .12th international ieee _ , pp . 257266 , ieee , 2008 .r. hasan , r. sion , and m. winslett , `` introducing secure provenance : problems and challenges , '' in _ proceedings of the 2007 acm workshop on storage security and survivability _ , storagess 07 , ( new york , ny , usa ) , pp . 1318 , acm , 2007 .i. u. haq , e. schikuta , i. brandic , a. paschke , and h. 
boley , `` sla validation of service value chains , '' in _9th international conference on grid and cloud computing ( gcc10 ) _ , ( nanjing , jiangsu , china ) , p. 308313, ieee computer society , 2010 .i. u. haq , i. brandic , and e. schikuta , `` sla validation in layered cloud infrastructures , '' in _ economics of grids , clouds , systems , and services , 7th international workshop , gecon10 _ , vol .6296 of _ lecture notes in computer science _ , ( ischia , italy ) , p. 153164, springer berlin / heidelberg , 2010 .i. u. haq , a. a. huqqani , and e. schikuta , `` a conceptual model for aggregation and validation of slas in business value networks , '' in _ 3rd international conference on adaptive business information systems ( abis09 ) _ , ( leipzig , germany ) , mar .2009 . i. u. u. haq , a. paschke , e. schikuta , and h. boley , `` rule - based workflow validation of hierarchical service level agreements , '' in _ workshops at the grid and pervasive computing conference ( gpc09 ) _ , p. 96103, ieee computer society , 2009 .i. u. haq , a. huqqani , and e. schikuta , `` aggregating hierarchical service level agreements in business value networks , '' in _7th international conference on business process management ( bpm09 ) _ , ( ulm , germany ) , p. 176192, springer - verlag , 2009 . | `` united we stand , divided we fall '' is a well known saying . we are living in the era of virtual collaborations . advancement on conceptual and technological level has enhanced the way people communicate . everything - as - a - service once a dream , now becoming a reality . problem nature has also been changed over the time . today , e - collaborations are applied to all the domains possible . extensive data and computing resources are in need and assistance from human experts is also becoming essential . this puts a great responsibility on information technology ( it ) researchers and developers to provide generic platforms where user can easily communicate and solve their problems . to realize this concept , distributed computing has offered many paradigms , e.g. cluster , grid , cloud computing . virtual organization ( vo ) is a logical orchestration of globally dispersed resources to achieve common goals . existing paradigms and technology are used to form virtual organization , but lack of standards remained a critical issue for last two decades . our research endeavor focuses on developing a design blueprint for virtual organization building process . the proposed standardization process is a two phase activity . first phase provides requirement analysis and the second phase presents a reference architecture for virtual organization ( ravo ) . this form of standardization is chosen to accommodate both technological and paradigm shift . we categorize our efforts in two parts . first part consists of a pattern to identify the requirements and components of a virtual organization . second part details a generic framework based on the concept of everything - as - a - service . |
this article is concerned with a programme that has as its goal the development of a theory of quantum space-time. in this programme, an outline of which will be given in more detail shortly, an important role is played by certain higher-dimensional analogues of spinors and twistors. it will be useful to begin, therefore, by remarking that there are two distinct notions of how one extends the concept of spinor into higher dimensions. this fundamental dichotomy arises in association with the fact that in four-dimensional space-time there is a local isomorphism between the lorentz group and the spin transformation group. in higher dimensions, however, this relation breaks down and as a consequence we are left with two concepts of spinors: one for the group , and one for the group . the spinors associated with , where we allow also for various possible signatures in the quadratic form defining these orthogonal or pseudo-orthogonal transformations when we specialise to the real subgroup with , are the so-called cartan spinors. the study of cartan spinors has a long and interesting history, and there is a beautiful geometry associated with these spinors. there are also various specific cases of great interest: for example, the cartan spinors associated with the group are penrose's twistors; and the cartan spinors associated with are intimately linked with the cayley numbers (octonions) and the exceptional lie groups. there are also a number of interesting connections between cartan spinors and massless fields in higher dimensions. the spinors associated with , which are usually now called `hyperspinors', have the advantage of being more directly linked with quantum mechanics. in fact, we shall show later that a naturally relativistic model for hyperspin arises when one considers `multiplets' of two-component spinors, i.e. expressions of the form and , where are standard spinor indices, and is an `internal' index. in the general case ( ) we then think of as an element of the tensor product space , where is the complex vector space of two-component spinors, and is an infinite-dimensional complex hilbert space. there is also a link, arising through a further extension of this idea, between hyperspinor theory and the theory of multi-twistor (hypertwistor) systems. indeed, we find that the theory of hyperspin constitutes a natural starting place for building up a theory of quantum geometry or, as we shall call it here, _quantum space-time_. in summary, we shall be taking the left-hand path in the following diagram: [diagram: the left-hand path leads to hyperspinors, the right-hand path to cartan spinors.] the hyperspinor route has the virtue that the resulting space-time has a rich causal structure associated with it, and as a consequence is unusually well-positioned to form the geometrical basis of a physical theory. to start, let us review briefly the role of two-component spinors in the description of four-dimensional minkowskian space-time geometry. in what follows we use bold upright roman letters to denote two-component spinor indices, and we adopt the standard conventions for the algebra of two-component spinors.
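To make the 'multiplet' picture of a hyperspinor concrete, one can model such an object numerically as a 2 x m complex array, with m a finite stand-in for the dimension of the internal hilbert space; a minimal sketch in Python, with all values chosen purely for illustration:

import numpy as np

m = 3                                           # finite stand-in for the internal dimension (illustrative)
spinor = np.array([1.0 + 0.5j, 0.2 - 1.0j])     # an ordinary two-component spinor
internal = np.array([0.3 + 0.1j, -0.7j, 1.0])   # a vector in the internal space

# a hyperspinor of the 'multiplet' type: an element of the tensor product of the
# two-component spin space with the internal space, stored as psi[A, a]
psi = np.outer(spinor, internal)

# a general hyperspinor is any 2 x m complex array, not necessarily of product
# form; complex conjugation carries it to the conjugate (primed) hyperspin space
psi_bar = psi.conj()
print(psi.shape)    # (2, 3)

With this picture in mind, the two-component review that follows supplies the n = 2 building block.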
then we have the following correspondence between two - by - two hermitian matrices and space - time points , relative to some origin : more explicitly , in a standard basis this correspondence is given by we then have the fundamental relation from which it follows that two - component spinors are connected both with quantum mechanics and with the causal structure of space - time .it is a peculiar aspect of relativistic physics that there is this link between ( a ) the spin degrees of freedom of spin one - half particles , and ( b ) the causal geometry of four - dimensional space - time .let us pursue this idea now in a little more detail , and then extend it to higher dimensions .for the interval between a pair of points and in minkowski space - time we write from which it follows that where is the antisymmetric spinor .hence if we adopt the standard ` index clumping ' convention and write , , and so on , according to which a pair of spinor indices , one primed and the other unprimed , corresponds to a lower case space - time vector index , then we can write for the corresponding squared space - time interval , and thus we are able to identity as the metric of minkowski space .there are essentially three different situations that can arise for the interval , each of which represents a certain level of degeneracy .the first case is ; the second case is and ; and the third case is .each of these cases gives rise to a canonical form for the interval , with various sub - cases , which can be summarised as follows : * : * : * : _ causal structure of four - dimensional space - time_. the canonical form of the spinor decomposition of the four - vector associated with a space - time interval determines its causal properties . ] here it is understood that in case ( iii ) the spinors and do not coincide in direction .it is interesting to note that once the canonical form for is specified , then so is the causal relationship that it determines on space - time .this correspondence is illustrated in figure [ fig:1 ] .on the other hand , the specification of does not completely determine the spinors and .in general there is some freedom , and this is expressed by a group of transformations .in particular , if is null , then this freedom is the phase shift , and the relevant group is .if is time - like , then the group is , and if is space - like , the group is .the terminology ` hyperspinor ' is due to finkelstein . essentially the same concept ( although introduced for different purposes )is also touched on in .the idea of a hyperspinor is a simple one we replace the two - component spinors associated with four - dimensional space - time with -component spinors .thus we can regard hyperspin space as the vector space with some extra structure .in particular , in addition to the original hyperspin space we have three other vector spaces the dual hyperspin space , the complex conjugate hyperspin space , and the dual complex conjugate hyperspin space . 
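The causal classification just described can be checked numerically. The sketch below assumes the usual pauli-matrix form of the correspondence between a space-time vector and a 2 x 2 hermitian matrix; the exact normalisation used in the text is not reproduced here, so the 1/sqrt(2) factor is an assumption, but the classification by the signs of the determinant and the trace does not depend on it:

import numpy as np

def to_spinor_matrix(t, x, y, z):
    # correspondence between a space-time vector and a 2x2 hermitian matrix;
    # the 1/sqrt(2) normalisation is an assumption and does not affect the signs used below
    return np.array([[t + z, x + 1j * y],
                     [x - 1j * y, t - z]]) / np.sqrt(2)

def classify_interval(t, x, y, z, tol=1e-12):
    m = to_spinor_matrix(t, x, y, z)
    det = np.linalg.det(m).real     # proportional to t^2 - x^2 - y^2 - z^2
    tr = np.trace(m).real           # sign gives the time orientation
    if det > tol:
        return 'future time-like' if tr > 0 else 'past time-like'
    if det < -tol:
        return 'space-like'
    if abs(tr) > tol:
        return 'future null' if tr > 0 else 'past null'
    return 'zero'

print(classify_interval(1, 0, 0, 1))    # future null      (a single spinor term)
print(classify_interval(2, 1, 0, 0))    # future time-like (two independent spinor terms)
print(classify_interval(0, 1, 1, 0))    # space-like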
the theory of hyperspin has been pursued by a number of authors , and the material we describe here builds on various aspects of this work .let us write and , respectively , for the complex -dimensional vector spaces of unprimed and primed hyperspinors .for hyperspinors we use italic indices to distinguish them from the boldface indices used exclusively for two - component spinors .it is assumed that and are related by an anti - linear isomorphism under the operation of complex conjugation .thus if , then under complex conjugation we have , where .the dual spaces associated with and are denoted and , respectively . if and , then their inner product is denoted .likewise if and then their inner product is .we also introduce the totally antisymmetric hyperspinors of rank associated with the spaces , , , and .these will be denoted , , , and , respectively .the choice of these antisymmetric hyperspinors is unique up to an overall scale factor .once a choice has been made for , then the other epsilon hyperspinors are determined by the relations where is the complex conjugate of .now let denote the skew tensor product space . using an analogous notation, we introduce the spaces , , and for each .then once the epsilon hyperspinors have been fixed we have a collection of maps of the form as a consequence , a wide range of algebraic theorems can be formulated , which are useful in calculations .for example , if then must be of the form for some .now we introduce the complex matrix space .an element is said to be _ real _ if it satisfies the ( weak ) hermitian property , where is the complex conjugate of .we shall have more to say about weak versus strong hermiticity conditions in relation to the idea of symmetry breaking .we denote the vector space of real elements of by .the elements of constitute what we call the real quantum space - time of dimension .we then regard as the complexification of .many problems in are best first approached as problems in , and hence sometimes although we refer to our operations are actually carried out in .let and be points in , and write for the corresponding separation vector , which is independent of the choice of origin . using the index - clumping convention we set , , , and for the separation of and in we write .there is a natural causal structure induced on such intervals by the so - called ` chronometric tensor ' . making use of the index - clumping convention, we define this fundamental tensor ( introduced by finkelstein ) by the following basic relation : the chronometric tensor , which is of rank , is totally symmetric and is nondegenerate in the sense that for any vector the condition implies .we say that and in have a ` degenerate ' separation if the chronometric form vanishes for .degenerate separation is equivalent to the vanishing of the determinant of the matrix , that is , the hyperspin space has dimension , this reduces to the usual condition for and to be null - separated in minkowski space . for , however , the situation is more complicated since there are various degrees of degeneracy that can arise between two points of quantum space - time , of which ` nullness ' ( in the minkowskian sense ) is only the most extreme . 
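Since degenerate separations are exactly those with vanishing determinant, a simple numerical stand-in for the chronometric form is det(r) itself (the proper normalisation involves the epsilon hyperspinors and is not reproduced here), and the different degrees of degeneracy are graded by the rank of the separation matrix. A minimal sketch, with the dimension and the random seed chosen purely for illustration:

import numpy as np

def degeneracy_rank(r, tol=1e-10):
    # degenerate separations have det(r) = 0; the rank of r grades the degrees
    # of degeneracy, rank one being the most extreme ('null') case and full
    # rank n the nondegenerate case
    return np.linalg.matrix_rank(r, tol=tol)

def chronometric_form(r):
    # assumption: up to a conventional normalisation the chronometric form of a
    # separation agrees with det(r), so it vanishes exactly for degenerate r
    return np.linalg.det(r).real

n = 3
rng = np.random.default_rng(0)

alpha = rng.standard_normal(n) + 1j * rng.standard_normal(n)
r_null = np.outer(alpha, alpha.conj())              # rank one: maximally degenerate
print(degeneracy_rank(r_null), chronometric_form(r_null))   # 1, ~0

betas = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
r_timelike = sum(np.outer(b, b.conj()) for b in betas)      # generic full-rank separation
print(degeneracy_rank(r_timelike))                  # 3: nondegenerate
print(chronometric_form(r_timelike) > 0)            # True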
as an example , consider .in this case the quantum space - time has dimension nine , and the chronometric form is given by the different possibilities that can arise for the separation vector are as follows : * : * and : * and : * and : when the separation of two points of quantum space - time is degenerate , we define the ` degree ' of degeneracy by the rank of the matrix .null separation is the case for which the degeneracy is of the first degree , i.e. where is of rank one , and thus satisfies a system of quadratic relations of the following form : or equivalently this implies that can be expressed in the ` null ' form for some . in the case of degeneracy of the second degree , is of rank two and satisfies a set of cubic relations given by or equivalently in this situation can be put into one of the following three canonical forms : * , * , * . in case ( a ) , lies to the future of , and can be thought of as a degenerate future - pointing time - like vector . in case( b ) , can be thought of as a degenerate space - like separation . in case ( c ), lies to the past of , and is a degenerate past - pointing time - like vector .a similar analysis can be applied to degenerate separations of other ` intermediate ' degrees .if the determinant of the -by- weakly hermitian matrix is nonvanishing , and is thus of maximal rank , then the chronometric form is nonvanishing . in that casethe matrix can be represented in the following canonical form : with the presence of nonvanishing terms , where the hyperspinors are all linearly independent .let us write for the numbers of plus and minus signs appearing in the canonical form for the matrix given above .we call the ` signature ' of .the hyperspinors are determined by the specification of only up to an overall unitary ( or pseudo - unitary ) transformation of the form where , and the signature is nevertheless an invariant of . in the cases for which the signature is or we say that is future - pointing time - like or past - pointing time - like , respectively .then recalling the definition ( [ eq:10 ] ) for the associated chronometric form , we define the ` proper time interval ' between the events and by the formula in the case we then recover the standard minkowskian proper - time interval between the given events .a remarkable feature of the causal structure of quantum space - time is that the essential physical features of the causal structure of minkowski space are preserved . in particular, the space of future - pointing time - like vectors forms a convex cone .the same is true when we consider the structure of the associated momentum space , from which it follows that we can also give a good definition of what is meant by ` positive energy ' .now suppose that defines a smooth curve in for \subset{\mathbb r} ] .the resulting one - parameter family of surfaces determined by has the property that each is a 3-sphere . for a cosmologywe require in this case the relevant family of 4-planes is given by ( [ eq:102 ] ) with , the overall scale of being unimportant .for we set with as above in the case ._ friedmann - robertson - walker cosmologies_. in the case depicted here the chronology of the frw cosmology is generated by a pencil of projective 4-planes hinged on an axis in .the axis is chosen so that it does not impinge on any of the real points of the space - time . as a consequence the constant - time hypersurfaces in the cosmological model are compact . 
] in each case we can consider the ` axis ' obtained by intersecting the elements of the given family of 4-planes . in the case , the axis itself does not intersect the associated real space - time , and as a consequence the resulting hypersurfaces of constant time are topologically 3-spheres ( see figure [ fig:10 ] ) . in the casethe axis ` touches ' the space - time at a point ( given by ) common to all of the intersection spaces .if we remove this point ( or treat it as a point at infinity ) , then the resulting constant - time surfaces are each topologically . in the case ,the common intersection region is a 2-sphere , and as a consequence an ` open ' cosmological model results in this case as well ._ chronological foliation of an hypercosmology_. the compactified quantum space - time is a real submanifold of the complex projective space .a pencil of -hyperplanes hinged on a complex axis intersects the space - time in a set of hypersurfaces .depending on the reality structure of the pencil , and its real subparameterisation , a variety of different possibilities can emerge for the global structure of the hypersurface family . ]thus we see that the algebraic geometry of twistor theory gives us an essentially ` unified ' point of view over the various standard cosmological models this approach can be pursued at greater length , giving rise to a geometrical characterisation of the different types of situations that can occur , depending , in particular , on the global structure and topology of the space - time , on the equation of state of the fluid representation in the energy tensor , and on the type of cosmological constant ( if any ) in the model .much more detail , along with specific examples for various choices of the equation of state , can be found in .it is interesting to note that more or less the same state of affairs prevails in higher dimensions ( see figure [ fig:11 ] for an example of a sixteen - dimensional class of ` quantum cosmologies ' analogous to the friedmann - robertson - walker models ) . in other words ,the choice of structure at infinity gives rise to various possible global structures for the quantum space - time , and in particular , to a chronometric form that is in general not flat , thus making a cosmological model . in the case of a standard four - dimensional cosmological model based on einstein stheory , the existence of structure at ( or ` beyond ' ) infinity has a bearing on the geometry of space - time alone . in the case of a quantum cosmology ,however , the structure at infinity also has implications for microscopic physics .for instance , whereas in the four - dimensional de sitter cosmology the relevant structure at infinity contains the ` invariant ' information of one dimensional constant ( the cosmological constant ) , in the higher - dimensional situation there are in general a number of such constants that may emerge as geometrical invariants of the theory . 
thus within a single geometric framework one has the scope for introducing structure ( or what amounts to the same thing the breaking of symmetry ) both on a global or cosmological scale , as well as on the microscopic scales of distance , time , and energy associated with the phenomenology of elementary particles .one might say that in these models the structure at infinity is playing the role of the higgs fields .one can even envisage the possibility of explaining , in a purely geometrical language , the basis of the remarkable coincidences involving various fundamental constants of nature that have puzzled physicists for many decades .as a prelude to our discussion of the idea of symmetry breaking in quantum space - time , we digress briefly to review the notions of weak and strong hermiticity .this material is relevant to the origin of unitarity in quantum mechanics . intuitively speaking, we observe that when the weak hermiticity condition is imposed on a hyperspinor representing a space - time event , then belongs to the real subspace . as such, the hyper - relativistic symmetry of quantum space - time is not affected by the imposition of this condition .if , however , we break the hyper - relativistic symmetry by selecting a preferred time - like direction , then we can speak of a stronger reality condition whereby an isomorphism is established between the primed and unprimed hyperspin spaces .we begin with the weak hermitian property .let denote , as before , an -dimensional complex vector space .we also introduce the associated spaces , , and . in general , there is no natural isomorphism between and .therefore , there is no natural matrix multiplication law or trace operation defined for elements of .nevertheless , certain matrix operations are well defined .for example , the determinant of a generic element is given by the weak hermitian property is also well - defined .in particular , if is the complex conjugate of , then we say that is weakly hermitian if .as we have observed , for many applications , weak hermiticity suffices .now we consider the strong hermitian property . in some situations theremay exist a natural map defined by the context of the particular problem .such a map is called a hermitian correlation . in this case, the complex conjugate of an element determines an element .for any element we define the operations of determinant , matrix multiplication , and trace in the usual manner .the determinant is and the hermitian conjugate of is .the hermitian correlation is given by the choice of a preferred element .then we write where is now called the complex conjugate of .when there is a hermitian correlation , we call the condition the strong hermitian property . thus once we break the relativistic invariance by introducing a preferred element that determines a hermitian correlation , we may carry out specific calculations in that frame . 
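As an illustration of the kind of frame-dependent calculation that becomes available once a preferred time-like element is singled out, and anticipating the two-component example developed in the next paragraph, the following sketch pairs a 2 x 2 hermitian representative of a space-time vector with a chosen unit time-like vector. The convention below carries no 1/sqrt(2) factor, so that the determinant gives the squared minkowski norm directly; this is an illustrative choice rather than the normalisation of the original equations:

import numpy as np

def spin_matrix(t, x, y, z):
    # 2x2 hermitian representative of a space-time vector; without the 1/sqrt(2)
    # factor, det gives t^2 - x^2 - y^2 - z^2 directly (illustrative convention)
    return np.array([[t + z, x + 1j * y],
                     [x - 1j * y, t - z]])

def minkowski_inner(a_mat, b_mat):
    # polarisation of the determinant: g(a, b) = (tr(A) tr(B) - tr(AB)) / 2
    return 0.5 * (np.trace(a_mat) * np.trace(b_mat) - np.trace(a_mat @ b_mat)).real

T = spin_matrix(1.0, 0.0, 0.0, 0.0)      # a preferred unit time-like vector (the identity matrix)
X = spin_matrix(2.0, 0.3, -0.4, 1.2)

time_component = minkowski_inner(T, X)   # plays the role of the trace taken with respect to T -> 2.0
norm_squared = minkowski_inner(X, X)     # equals det(X) = t^2 - x^2 - y^2 - z^2

# eigenvalues of X are t +/- |r|, so two vectors are isospectral with respect to
# this choice of T precisely when they share the same t and the same |r|, i.e.
# when they lie on a common sphere in the constant-time hyperplane defined by T
eigenvalues = np.linalg.eigvalsh(X)
r_length = np.sqrt(0.3**2 + 0.4**2 + 1.2**2)
assert np.allclose(eigenvalues, [2.0 - r_length, 2.0 + r_length])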
to gain a better understanding of this ,consider an event in complex minkowski space defined by its separation from the origin .the complex conjugate of is , so is real if .let be a fixed time - like vector satisfying .with respect to this choice of , the trace of is defined by once is fixed , we may represent in terms of pauli matrices , according to which admits a matrix representation of the form ( [ eq:2 ] ) .the time variable is then given by and the minkowski metric is given by the determinant ( [ eq:3 ] ) .we can also define a commutator for a pair of space - time position vectors and by setting the geometrical meaning of ( [ eq:113 ] ) is that corresponds to the ordinary 3-space cross - product of the projection of the vectors and onto the space - like hyperplane orthogonal to that passes through the origin .analogously , we can discuss the ` spectral ' properties of vectors with respect to a given choice of and a given choice of origin in space - time .the two eigenvalues of are then given by .thus two such vectors and are isospectral if and only if they lie on a common sphere about the origin lying in the given space - like hyperplane .similar remarks apply to higher - dimensional quantum space - times .now we proceed to introduce a natural mechanism for symmetry breaking that arises in the case of a standard ` flat ' quantum space - time endowed with the canonical structures associated with reality and infinity .we shall make the point in particular that the breaking of symmetry in quantum space - time is intimately linked to the notion of quantum entanglement . according to this point of view the introduction of symmetry - breaking in the early stages of the universecan be understood as a phase transition , or a sequence of phase transitions , the ultimate consequence of which is an approximate disentanglement of a four - dimensional ` classical ' space - time . in practical termsthe breaking of symmetry is represented in our framework by an ` index decomposition ' . in particular ,if the dimension of the hyperspin space is not a prime number , then a natural method of breaking the symmetry arises by consideration of the decomposition of into factors .the specific essential assumption that we shall make at this juncture will be that the dimension of the hyperspin space is _even_. then we write , where , and set where is a standard two - component spinor index , and will be called an ` internal ' index .thus we can write , where is a standard spin space of dimension two , and is a complex vector space of dimension .the breaking of the symmetry then amounts to the fact that we can identify the hyperspin space with the tensor product of these two spaces .we shall assume , moreover , that is endowed with a strong hermitian structure , i.e. we shall assume that there is a canonical anti - linear isomorphism between the complex conjugate of the internal space and the dual space .if , then we write for the complex conjugate of , where .we see therefore that is a complex hilbert space and indeed although here we consider for technical simplicity the case for which is finite , one should have in mind also the general infinite dimensional situation . for the other hyperspin spaces we write respectively .these equivalences preserve the duality between and , and between and ; and at the same time are consistent with the complex conjugation relations between and , and between and . 
hence if then under complex conjugation we have , and if then .in the case of a quantum space - time vector we have a corresponding structure induced by the identification when the quantum space - time vector is real , the weak hermitian structure on is manifested in the form of a standard weak hermitian structure on the two - component spinor index pair , together with a strong hermitian structure on the internal index pair .in other words , the hermitian condition on the space - time vector is given by one consequence of these relations is that we can interpret each point in quantum space - time as being a space - time valued operator .the ordinary classical space - time then ` sits ' inside the quantum space - time in a canonical manner namely , as the locus of those points of quantum space - time that factorise into the product of a space - time point and the identity operator on the internal space : thus , in situations where special relativity is a satisfactory theory , we regard the relevant events as taking place on or in the immediate neighbourhood of this embedding of minkowski space in . this picture can be presented in more geometric terms as follows .the hypertwistor space in the case admits a segr embedding of the form many such embeddings are possible , though they are all equivalent to one another under the action of the overall symmetry group . if the symmetry is broken and one such embedding is selected out , then following the conventions discussed earlier we can introduce homogeneous coordinates and write for the hypertwistor . herethe greek letter denotes an ordinary twistor index and denotes an internal index .these two indices , when clumped together , constitutes a hypertwister index . the segr embedding consists of those points in for which we have a product decomposition of the associated hypertwistor given by the idea of symmetry breaking that we are putting forward here is related to the notion of disentanglement in standard quantum mechanics ( cf .gibbons 1992 ; brody & hughston 2001 ) . that is to say , at the unified levelthe degrees of freedom associated with space - time symmetry are quantum mechanically entangled with the internal degrees of freedom associated with microscopic physics .the phenomena responsible for the breakdown of symmetry are thus analogous to the mechanisms of decoherence through which quantum entanglements are gradually diminished .some readers may raise the objection that surely it is impossible to unify the unitary symmetries of elementary particle phenomenology with the symmetries of space - time ( cf ., e.g. , ). it should be noted , however , that our approach is not to attempt to embed a relativistic symmetry group in a higher - dimensional unitary group , but rather to embed the unitary group in a higher - dimensional relativistic symmetry group .our methodology is consistent with the point of view put forward by penrose that for a coherent unification of general relativity and quantum mechanics , the rules of quantum theory must undergo ` profound modification ' .the compactified complexified quantum space - time can be regarded as the aggregate of projective -planes in .now generically a in will not intersect the segr variety such a generic -plane corresponds to a generic point in .the -planes that correspond to the points of compactified complexified minkowski space can be constructed as follows . 
for each line in consider the subvariety where .for any algebraic variety we define the _ span _ of to be the projective plane spanned by the points of .we say a point in the ambient space lies in the span of the variety if and only if there exist points in for some with the property that lies in the -plane spanned by those points .the dimension of the span of satisfies ; however , the value of depends on the geometry of . the linear span of the points in , for any given , is a -plane .this is the in that represents the point in corresponding to the line in .the aggregate of such special -planes , defined by their intersection properties with the segr variety , constitutes a submanifold of , and this submanifold is compactified complexified minkowski space .thus we see that once the symmetry of quantum space - time has been broken in the particular way we have discussed , then ordinary minkowski space can be identified as a submanifold .let us now consider the implications of our symmetry breaking mechanism for fields defined on quantum space - time .as an example , let be a scalar field on quantum space - time .after we break the symmetry by writing , we consider a taylor expansion of the field around the embedded minkowski space - time .specifically , for such an expansion we have where and therefore , the order zero term has the character of a classical field on minkowski space , and the first order term can be interpreted as a ` multiplet ' of fields , transforming according to the adjoint representation of the internal symmetry group ._ symmetry breaking and matter formation in a hypertwistorial quantum cosmology_. in its earliest stages the universe is highly symmetrical .eventually , symmetry is broken , and a four - dimensional ` classical ' cosmology freezes out , becoming largely disentangled from the reminder of the quantum space - time .the formation of matter during the disentanglement process may not be confined to the four - dimensional subspace .if such a scenario prevails , the bulk of this ` dark material ' is likely to remain outside the four - dimensional subspace , though nevertheless having an impact on its dynamics . ] in this connection we note that the symmetry breaking mechanism that we have proposed here has yet another representation namely , the expression of a hypertwistor as a multi - twistor system , i.e. as a multiplet of penrose twistors .the physical and geometrical characteristics of such -twistor systems have been analysed at great length by a number of authors ( see , e.g. , and references cited therein ) , and it is interesting therefore to see the direct link with hypertwistor theory and quantum space - time geometry .it is fitting also to make a tribute here to the work of zoltan perjs , whose extensive contributions to relativity theory include , in particular , a number of important studies concerning the properties of -twistor systems and their symmetries .it is tempting to speculate that even in a more dynamic context some version of the symmetry breaking mechanism provided here will manifest itself . 
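Returning to the segre embedding discussed above: once homogeneous coordinates are fixed, a hypertwistor can be stored as a 4 x m complex array (an ordinary twistor index together with an internal index), and membership of the segre variety, i.e. the existence of a product decomposition into an ordinary twistor and an internal vector, becomes a rank-one condition. A minimal sketch, with m and all entries chosen for illustration:

import numpy as np

m = 3                                            # internal dimension (illustrative)
rng = np.random.default_rng(2)

z_twistor = rng.standard_normal(4) + 1j * rng.standard_normal(4)    # an ordinary twistor
psi = rng.standard_normal(m) + 1j * rng.standard_normal(m)          # an internal vector

z_product = np.outer(z_twistor, psi)             # a hypertwistor lying on the segre variety
z_generic = rng.standard_normal((4, m)) + 1j * rng.standard_normal((4, m))

def on_segre_variety(z, tol=1e-10):
    # the product decomposition of a hypertwistor into an ordinary twistor and an
    # internal vector exists exactly when the 4 x m array of components has rank one
    return np.linalg.matrix_rank(z, tol=tol) == 1

print(on_segre_variety(z_product))   # True
print(on_segre_variety(z_generic))   # False (for a generic choice)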
in this picturewe would envisage the earliest stages of the universe as being highly symmetrical , rather in the spirit of penrose s weyl curvature hypothesis , with no appreciable distinction between the conventional space - time degrees of freedom and the internal degrees of freedom associated with quantum theory .nevertheless , the causal geometry of the universe remains well defined , and it is interesting to ask whether there might be some scenario within the rather rich causal structure of a quantum space - time that would allow us to account for the so - called ` horizon problem ' . in any event , once symmetry breaking takes place and this may happen in stages , corresponding to a successive factorisation of the underlying hypertwistor space then it makes sense to think of ordinary four - dimensional space - time as becoming more or less disentangled from the rest of the universe , and behaving in a way that is to some extent autonomous .nonetheless , we might reasonably expect its global dynamics , on a cosmological scale , to be affected by the distribution of mass and energy elsewhere in the quantum space - time as well ( see , e.g. , figure [ fig:12 ] ) .the embedding of minkowski space in the quantum space - time given by ( [ eq:11.4 ] ) implies a corresponding embedding of the poincar group in the hyper - poincar group .this can be seen as follows .the standard poincar group in consists of transformations of the form and the hyper - poincar transformations in are of the form with the identification , the general hyper - poincar transformation in the broken symmetry phase can be expressed in the form thus the embedding of the poincar group as a subgroup of the hyper - poincar group is given by bearing this in mind , we now construct a class of maps from the general even - dimensional quantum space - time to minkowski space .it turns out that under rather general physical assumptions such maps are necessarily of the form where is a density matrix . as usual , by a density matrix we mean a positive semi - definite hermitian matrix with unit trace .thus the maps arising here can be regarded as quantum expectations . in particular ,let satisfy the following conditions : ( i ) is linear and maps the origin of to the origin of ; ( ii ) is poincar invariant ; and ( iii ) preserves causal relations. 
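The density-matrix form of the map just described can be sketched numerically. Viewing a point of the quantum space-time as a (2m) x (2m) hermitian matrix with clumped row index (A, a) and column index (A', b), the map to minkowski space contracts the internal indices against a density matrix, so that it acts as a quantum expectation; the index ordering below is an assumption, and the particular density matrix is illustrative. The first check confirms that an embedded classical point, tensored with the identity on the internal space, is returned unchanged because the density matrix has unit trace:

import numpy as np

def expectation_map(r, rho):
    # r:   (2m) x (2m) hermitian matrix, rows/columns clumped as (A, a) / (A', b)
    # rho: m x m density matrix (hermitian, positive semi-definite, unit trace)
    m = r.shape[0] // 2
    r4 = r.reshape(2, m, 2, m)                  # indices [A, a, A', b]
    return np.einsum('ba,iajb->ij', rho, r4)    # contraction of the internal indices (index placement assumed)

m = 3
rho = np.diag([0.5, 0.3, 0.2]).astype(complex)  # an illustrative density matrix on the internal space

# an embedded classical point: x0 tensored with the identity on the internal space
t, x, y, z = 1.0, 0.2, -0.1, 0.4
x0 = np.array([[t + z, x + 1j * y],
               [x - 1j * y, t - z]])
r = np.kron(x0, np.eye(m))

assert np.allclose(expectation_map(r, rho), x0)   # unit trace of rho guarantees this

# for a generic hermitian r the image is again a hermitian 2x2 matrix, i.e. a
# real point of minkowski space; the text's theorem adds that causal separations
# are mapped to causal separations because rho is positive semi-definite
rng = np.random.default_rng(1)
g = rng.standard_normal((2 * m, 2 * m)) + 1j * rng.standard_normal((2 * m, 2 * m))
r_generic = g + g.conj().T
x_image = expectation_map(r_generic, rho)
assert np.allclose(x_image, x_image.conj().T)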
then is given by a density matrix on the internal space .[ theo:2 ] the general linear map from to preserving the origin is given by where is weakly hermitian .now suppose that we subject to a poincar transformation of the form ( [ eq:12.2 ] ) , and require the corresponding transformation of should be of the form ( [ eq:12.1 ] ) .if satisfies these conditions then we shall say that the map is poincar invariant .clearly , poincar invariance holds if and only if for all , for all , and for all .thus we have for all , and for all .equation ( [ eq:12.8 ] ) implies that is of the form for some .then ( [ eq:12.9 ] ) implies that must satisfy the trace condition .finally we require that if and are quantum space - time points with the property that the interval is future - pointing then is also future pointing , where this is the requirement that should be a ` causal ' map .this condition implies that must be positive semi - definite .in particular , if is future - pointing then it must be of the form consider therefore the case for which is null .then we require that the expression should be future - pointing ( or vanish ) for any choice of .in particular , we require that the vector should be future - pointing if is of the form for any choice of and .this means that the inequality holds for all , which shows that is positive semi - definite .since we have shown that the trace of is unity , it follows that is a density matrix .this result shows how the causal structure of quantum space - time is linked with the probabilistic structure of quantum mechanics .the concept of a quantum state emerges when we ask for consistent ways of ` averaging ' over the geometry of quantum space - time in order to obtain a reduced description of physical phenomena in terms of the geometry of minkowski space .we see that a probabilistic interpretation of the map from a general quantum space - time to minkowski space arises as a consequence of elementary causality requirements .we can thus view the space - time events in as representing quantum observables , the expectations of which correspond to points of .dcb gratefully acknowledges financial support from the royal society .the work described here is based , in part , on ideas and suggestions arising in discussions with e. j. brody .the authors are grateful to participants at the xixth max born symposium , institute of theoretical physics , wroclaw , poland , for helpful comments .borowiec , a. 1993 _ g_-structure for hypermanifold , in _ spinors , twistors , clifford algebras and quantum deformations _( sobtka castle , 1992 , z. oziewicz , b. jancewicz , & a. borowiec , eds . ) _ fund .theories phys ._ * 52 * , 75 - 79 , dordrecht : kluwer .hughston , l.p .1990 a remarkable connection between the wave equation and pure spinors in higher dimensions , in _ further advances in twistor theory , vol .i : the penrose transform and its applications _ ( l. j. mason & l. p. hughston , eds . )harlow : longman .hurd , t. r. 1995 cosmological models in , in _ further advances in twistor theory , vol .ii : integrable systems , conformal geometry and gravitation _ ( l. j. mason , l. p. hughston & p. z. kobak , eds . ) harlow : longman .penrose , r. 1995 twistors for cosmological models , in _ further advances in twistor theory , vol .ii : integrable systems , conformal geometry and gravitation _ ( l. j. mason , l. p. hughston & p. z. kobak , eds . ) harlow : longman .pirani , f. a. e. 
1965 introduction to gravitational radiation , in lectures on general relativity : 1964 brandeis summer institute in theoretical physics , vol . 1 ( s. deser & k. w. ford , eds . ) englewood cliffs , nj : prentice - hall . | the purpose of this paper is to present a model of a ` quantum space - time ' in which the global symmetries of space - time are unified in a coherent manner with the internal symmetries associated with the state space of quantum - mechanics . if we take into account the fact that these distinct families of symmetries should in some sense merge and become essentially indistinguishable in the unified regime , our framework may provide an approximate description of or elementary model for the structure of the universe at early times . the quantum elements employed in our characterisation of the geometry of space - time imply that the pseudo - riemannian structure commonly regarded as an essential feature in relativistic theories must be dispensed with . nevertheless , the causal structure and the physical kinematics of quantum space - time are shown to persist in a manner that remains highly analogous to the corresponding features of the classical theory . in the case of the simplest conformally flat cosmological models arising in this framework , the twistorial description of quantum space - time is shown to be effective in characterising the various physical and geometrical properties of the theory . as an example , a sixteen - dimensional analogue of the friedmann - robertson - walker cosmologies is constructed , and its chronological development is analysed in some detail . more generally , whenever the dimension of a quantum space - time is an even perfect square , there exists a canonical way of breaking the global quantum space - time symmetry so that a generic point of quantum space - time can be consistently interpreted as a quantum operator taking values in minkowski space . in this scenario , the breakdown of the fundamental symmetry of the theory is due to a loss of quantum entanglement between space - time and internal quantum degrees of freedom . it is thus possible to show in a certain specific sense that the classical space - time description is an emergent feature arising as a consequence of a quantum averaging over the internal degrees of freedom . the familiar probabilistic features of the quantum state , represented by properties of the density matrix , can then be seen as a by - product of the causal structure of quantum space - time . address = blackett laboratory , imperial college , london sw7 2bz , uk address = department of mathematics , king s college london , the strand , london wc2r 2ls , uk |