measurement of the anisotropy of the cosmic microwave background ( cmb ) is proving to be a powerful cosmological probe .proper statistical treatment of the data likelihood calculation is complicated and time - consuming , and promises to become prohibitively so in the very near future . here, we introduce approximations for this likelihood calculation which allow simple and accurate evaluation after the direct estimation of the power spectrum ( ) from the data . although it is possible to produce constraints on cosmological parameters directly from the data , using the power spectrum as an intermediate step ( _ e.g. _ ) has several advantages .the near - degeneracy of some combinations of cosmological parameters ( _ e.g. _ , ) implies the surfaces of constant likelihood in cosmological parameter space are highly elongated , making it difficult for search algorithms to navigate .the power spectrum space is much simpler than the cosmological parameter space since each multipole moment ( or band of multipole moments ) is usually only weakly dependent on the others , alleviating the search difficulties .although one still has the problem left of estimating nearly degenerate cosmological parameters from the resulting power spectrum constraints , the likelihood given the power spectrum constraints is much easier to compute than the likelihood given the map data .proceeding via the power spectrum also facilitates the calculation of constraints from multiple datasets . without this intermediate step , a joint analysis may often be prohibitively complicated .aspects particular to each experiment ( _ e.g. _ , offset removals , non - trivial chopping strategies ) make implementation of the analysis sufficiently laborious that no one has jointly analyzed more than a handful of datasets in this manner . reducing each dataset to a set of constraints on the power spectrum can serve as a form of data compression which simplifies further analysis .indeed , most studies of cosmological parameter constraints from all , or nearly all , of the recent data have used , as their starting points , published power spectrum constraints , [ ; see also and for joint analyses with other datasets ] .since the power spectrum constraints are usually described with orders of magnitude fewer numbers than the pixelized data , we refer to this compression as `` radical '' . are there any disadvantages to proceeding via the power spectrum ? to answer this question ,let us consider the analysis procedure .most analyses of cmb datasets have assumed the noise and signal to be gaussian random variables , and to date there is no strong evidence to the contrary ( although for a different view , see ) .the simplicity of this model of the data allows for an exact bayesian analysis , which has been performed for almost all datasets individually .the procedure is conceptually straightforward : maximize the probability over the allowed parameter space .most often , we take the prior probability for the parameters to be constant , so this is equivalent to maximizing the likelihood , . 
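as described immediately below , this likelihood is a multivariate gaussian in the pixelized data , with the theory entering only through the covariance matrix . a minimal numerical sketch of a single evaluation ( the function and variable names here are ours , not part of any particular pipeline ) :

```python
import numpy as np

def exact_log_likelihood(d, S, N):
    """Exact Gaussian log-likelihood of pixel data d, given the
    theory-dependent signal covariance S and the noise covariance N."""
    C = S + N                                   # total pixel-pixel covariance
    sign, logdet = np.linalg.slogdet(2.0 * np.pi * C)
    chi2 = d @ np.linalg.solve(C, d)
    return -0.5 * (chi2 + logdet)
```

each such evaluation costs of order n_pix^3 operations , which is what makes a brute-force search over cosmological parameters so expensive for large datasets .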
because we have assumed the noise and signal to be gaussian , this latter is just a multivariate gaussian _ in the data _ ; the theoretical parameters enter into the covariance matrix .fortunately , if the theoretical signal is indeed normally distributed and in addition the signals are statistically isotropic , the power spectrum encodes all of the information about the model , and all of the constraints on the parameters of the theory can be obtained from the probability distribution : _i.e. _ , the likelihood as a function of some ( cosmological ) parameters , , is just the likelihood as a function of the power spectrum determined from those parameters : ) ] is the power spectrum of the full data ( noise plus beam - smoothed signal ) ; we have written it as a different symbol from above to emphasize the inclusion of noise and again use script lettering to refer to quantities multiplied by .now , the likelihood is maximized at and the curvature about this maximum is given by so the error ( defined by the variance ) on a is note that in this expression there is once again indication of a bias if we assume gaussianity : upward fluctuations have larger uncertainty than downward fluctuations .but this is not true for where is defined so that .more precisely , .since is proportional to a constant , our approximation to the likelihood is to take as normally distributed .that is , we approximate ( up to a constant ) where where is the weight matrix ( covariance matrix inverse ) of the , usually taken to be the curvature matrix . we refer to eq .[ eqn : gausslike ] as the offset lognormal distribution of .somewhat more generally we write for some constant , which for the case at hand is .it is illustrative to derive the quantity in a somewhat more abstract fashion .we wish to find a change of variables from to such that the curvature matrix is a constant : that is , we want to find a change of variables such that is not a function of .we immediately know of one such transformation which would seem to do the trick : where the indicates a cholesky decomposition or hermitian square root . in general, this will be a horrendously overdetermined set of equations , equations in unknowns .however , we can solve this equation in general if we take the curvature matrix to be given everywhere by the diagonal form for the simplified experiment we have been discussing ( eq .[ eqn : fullcurv ] ) . in this case , the equations decouple and lose their dependence on the data , becoming ( up to a constant factor ) the solution to this differential equation is just what we expected , with correlation matrix where and . please note that we are calculating a constant correlation matrix ; the in the denominator of this expression should be taken at the peak of the likelihood ( _ i.e. 
_ , the estimated quantities ) .we emphasize that , even for an all - sky continuously and uniformly sampled experiment ( for which eq .[ eqn : fullcurv ] is exact ) , this gaussian form , eq .[ eqn : gausslike ] , is only an approximation , since the curvature matrix is given by eq .[ eqn : fullcurv ] only at the peak .nonetheless we expect it to be a better approximation than a naive gaussian in ( which we note is the limit of the offset lognormal ) .we also note that there is another approximation involved in this expression : by choosing just a single independent change of variable for each we assume that the correlation structure is unchanged as you move away from the likelihood peak .often a very good approximation to the curvature matrix is its ensemble average , the fisher matrix , .below , unless mentioned otherwise , we use the fisher matrix in place of the curvature matrix . however , we will see that in our application to the saskatoon data , the differences between the curvature matrix and fisher matrix can be significant .in this subsection we consider an alternate form for the likelihood function that is sometimes a better approximation than the offset - lognormal form .the approximation is exact in the limit that the observations can be decomposed into modes that are independent , with equal variances .for example , for a switching experiment in which the temperature of pixels are measured , with the same noise at each point , and such that each pixel is far enough from the others that there is no correlation between the points , the likelihood can be written as \ ] ] where is such that and are the pixel temperatures .the independent pixel idealization was very close to the case for the ovro experiment , and , as we show in section [ sec : upperlim ] , the calculated likelihood is well approximated by this equation .the maximum likelihood occurs at a signal amplitude which is related to the data by .if we define , , and then we can rewrite eq .[ eqn : equalindepmode ] in a form that will be useful for relating it to the previous offset- lognormal form : .\end{aligned}\ ] ] note that if we consider only a single then the above form applies to eq .[ eqn : fullskylike ] as well , with and .we know this should be the case since the likelihood of eq .[ eqn : fullskylike ] ( for a single ) is also one for independent modes ( ) with equal variances ( ) .if we fix and for each mode ( _ e.g. _ , band of ) , we refer to this as the `` equal variance approximation . ''also note that the first term in the expansion of eq .[ eqn : equalindepmode2 ] in is , which with the identification , , is the offset - lognormal form .thus when the modes have equal variance and are independent , then the offset lognormal form is simply the first term in a taylor expansion of the equal - variance form .an advantage of the full form is that the asymptotic form linear in for large signal amplitudes holds ( and thus gives a power law rather than exponential decay in ) , whereas the offset lognormal is dominated by the .an advantage of the offset - lognormal form is that it does not require the existence of equal and independent modes .figure [ fig : dmrlike ] shows that for the range of relevance for the likelihoods for dmr , the offset lognormal and the equal / independent variance likelihood approximations are quite close over the dominant 2-sigma falloff from maximum .we have found this to generally be true . for either form ,three quantities need to be specified , the noise - related offset , and . 
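both forms just discussed are trivial to evaluate once those three quantities are in hand . the sketch below , for a single band , writes the equal-variance form in terms of an effective number of modes nu and shows the offset lognormal as its leading ( gaussian-in-z ) term ; the names and the identification sigma_z = 1/sqrt(nu) are illustrative assumptions on our part :

```python
import numpy as np

def equal_variance_loglike(C, C_hat, x, nu):
    """Equal-variance approximation for one band (log-likelihood relative to
    its maximum): nu independent modes of equal variance, noise-related
    offset x, maximum-likelihood band power C_hat."""
    dz = np.log(C + x) - np.log(C_hat + x)
    return -0.5 * nu * (dz + np.exp(-dz) - 1.0)

def offset_lognormal_loglike(C, C_hat, x, nu):
    """Leading term of the Taylor expansion of the above: a Gaussian in
    Z = ln(C + x) with width sigma_Z = 1/sqrt(nu)."""
    dz = np.log(C + x) - np.log(C_hat + x)
    return -0.5 * nu * dz**2
```

for small dz the two agree ; for large C the equal-variance form falls off only linearly in dz , reproducing the power-law tail , while the offset lognormal falls off quadratically in dz .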
given , determined from the maximum likelihood and can be determined from the curvature of the likelihood .one could also specify the amplitudes at three points , _e.g. _ , at the maximum and the places where falls by , the upper and lower one - sigma errors if the distribution were fit on either side by a gaussian . forcing the approximation to pass through these points enforces values of , and . in section [ sec : bands ] , we apply these approximations to power spectrum estimation from current data for which the practice has been to quote a signal amplitude with upper and lower one - sigma errors , say , and . often these are bayesian estimates , determined by choosing a prior probability for and integrating the likelihood. sometimes the points are given , which are slightly easier to implement in fitting for and . since the tail of eq .[ eqn : equalindepmode2 ] is quite pronounced , resulting in a dramatic asymmetry in between the up and down sides of the maximum even in the variable , we have found that just using the second derivative of the likelihood or the fisher matrix approximation to it to fix is not as good as assuming the offset lognormal and requiring that the functional forms match at the upper point .thus , if the error is from the curvature or fisher matrix , then we prefer the choice ^{-1 } \, , \ \sigma_z = { \sigma_{{\cal c}}\over { \widehat{{\cal c}}}+x } = { 1\over \sqrt{{\cal f}^{(z)}}}\ ] ] rather than the curvature form , or the ^{-1} ] , where is the weight per solid angle of the experiment . in terms of the total weight , , of the experiment , . a more detailed approximation for a particular experiment might be possible , but as we will see below, this expression does extremely well in reproducing the full non - gaussian likelihood .we have calculated the maximum - likelihood power spectrum and its error ( fisher ) matrix using the quadratic estimator procedure of . with knowledge of the cobe / dmr beam along with the noise properties of the experiment, we can calculate the necessary quantity . for cobe / dmr, we have an average inverse weight per solid angle of ( equivalent to an rms noise of on pixels ) . with these numbers ,we show the full likelihood in comparison to the `` naive gaussian '' approximation , as well as our offset lognormal ansatz .while the naive gaussian approximation consistently overestimates the likelihood below the peak and underestimates it above the peak , the lognormal form reproduces the full expression extremely well in both regimes .the gaussian form of the offset lognormal form makes using the power spectrum estimates for parameter estimation very simple : we evaluate a in the quantity rather than ( although the model is now nonlinear in the spectral parameters ) .again , we see how well our offset lognormal ansatz performs ; it reproduces the peak and errors on the parameters .in particular , it eliminates the `` cosmic bias '' discussed above , finding essentially the correct amplitude , for each shape probed , unlike the naive gaussian in , which consistently underestimates the amplitude . of course , far from the likelihood peak , even the offset lognormal form misrepresents the detailed likelihood structure since no gaussian correctly represents the softer tails of the real distribution , which goes asymptotically as the power law ; the offset lognormal approximation is asymptotically lognormal with a much steeper descent ; the equal - variance form can in principle reproduce the asymptotic form better. 
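as an aside on the ingredients above , a rough sketch of how the noise offset can be built for an idealized all-sky , uniform-noise mapping experiment with a gaussian beam . the exact prefactors depend on conventions ( in particular the ell(ell+1)/2 pi `` script '' normalization ) , so treat this as illustrative rather than definitive :

```python
import numpy as np

def noise_offset(ell, sigma_pix_uK, omega_pix_sr, fwhm_rad):
    """Approximate noise offset x_ell for an all-sky, uniform-noise map.
    w^{-1} = sigma_pix^2 * Omega_pix is the inverse weight per solid angle,
    B_ell^2 a Gaussian beam, and ell(ell+1)/2pi converts to band-power units.
    (Illustrative only; conventions and prefactors are our assumptions.)"""
    w_inv = sigma_pix_uK**2 * omega_pix_sr           # inverse weight per steradian
    sigma_b = fwhm_rad / np.sqrt(8.0 * np.log(2.0))  # Gaussian beam width
    B2 = np.exp(-ell * (ell + 1) * sigma_b**2)
    return ell * (ell + 1) / (2.0 * np.pi) * w_inv / B2
```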
this behavior can be important for the case of upper limits , _i.e. _ , when the likelihood peak is at .we discuss this special case in section [ sec : upperlim ] .we wish to generalize this procedure to the case of experiments that are not capable of estimating individual multipole moments and/or chopping experiments . by chopping experiments , which have been the norm until very recently ,we mean those that rather than report the temperature at various positions on the sky , report more complicated linear combinations , with a sky signal given by for some beam and switching function ; and are the spherical - harmonic transforms of and the temperature , respectively .this induces a signal correlation matrix given by here , the window function matrix , , generalizes the beam of a mapping experiment and is given by ( this should not be confused with the `` window function , '' given by . ) moreover , for many experiments , the noise structure can be considerably more complicated , and may not be reducible to a simple noise correlation function or power spectrum ( that is , correlations in the noise may not just be functions of the distance between points ) ; instead , we may have to specify a general noise matrix how can we generalize our previous procedure to account for this more complicated correlation structure? we will take the general offset lognormal form of the likelihood , a gaussian in , as our guide .we have already noted that in the case of incomplete sky coverage , or inhomogeneous noise , eq .[ eqn : dotrick ] has no solution .thus we are only searching for a reasonable ansatz to try for .we begin by noting that represents the noise contribution to the error . for the full - sky casethe ratio is the ratio of the signal contribution to the error to the noise contribution to the error : since and .writing in eq .[ eqn : cloverxl ] in terms of fisher matrices allows us to generalize to arbitrary experiments . before writing down the general procedure, we must introduce a little more notation . instead of estimating every , we estimate the binned power spectrum , in bins labeled by .let be the contribution to the signal covariance matrix from bin , _i.e. _ , . the fisher matrix for ( whose inverse gives the covariance matrix for the uncertainty in ) is given by where is the total covariance matrix , .of course , in the limit of no noise , and in the limit of no signal , .. [ eqn : cloverxl ] generalizes to evaluation of the denominator of eq .[ eqn : getxb ] is sometimes difficult , as practical shortcuts in the calculation of the window function matrix may make it singular or give it negative eigenvalues . to avoid this calculation, we sometimes generalize the expression for by noting that for a homogeneously sampled , full - sky map : and therefore replace eq .[ eqn : getxb ] with ( for the all - sky , uniform noise case , thus defined will be independent of ; in a realistic experiment this will no longer hold . in practice, we expect that the correlation matrix at the likelihood peak would give the best value for . ) alternatively , we sometimes use eq .[ eqn : getxb ] but make use of the approximation which is exact for maps in the limit . 
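the fisher matrix that appears here is the standard trace expression , evaluated with the total ( signal plus noise ) covariance . a direct , if slow , implementation :

```python
import numpy as np

def band_fisher(C_total, dC_dbands):
    """Fisher matrix for band powers:
    F_BB' = 1/2 Tr[ C^{-1} dC/dc_B C^{-1} dC/dc_B' ],
    with C_total = S + N and dC_dbands[B] the signal-covariance
    contribution per unit power in bin B."""
    A = [np.linalg.solve(C_total, dC) for dC in dC_dbands]   # C^{-1} dC/dc_B
    nb = len(A)
    F = np.zeros((nb, nb))
    for i in range(nb):
        for j in range(i, nb):
            F[i, j] = F[j, i] = 0.5 * np.trace(A[i] @ A[j])
    return F
```

its inverse approximates the covariance of the band-power estimates ; in the noise-free and signal-free limits it reduces to the two limiting forms quoted in the text .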
to summarize , our offset lognormal ansatz is to take as gaussian distributed , with calculated from eq .[ eqn : getxb2 ] and covariance matrix given by the inverse of alternately , we can use these same quantities in the equal - variance form of eqs .[ eqn : equalindepmode][eqn : chi2approxparams ] .most observational power spectrum constraints to date are reported as `` band - powers '' rather than as estimates of the power in a power - spectrum bin , as we have been assuming above .these band - powers are the result of assuming a given shape for the power spectrum and then using one particular modulation of the data to determine the amplitude . with fixed , the band - power , , is given by with given by the trace of the window function matrix .to find , replace with in eq .[ eqn : getxb2 ] .note that in order to compare this observationally - determined number , , with other theories , specified by different with different shapes and amplitudes , we need a means for calculating the expected value of , , given an arbiratry .it has been often assumed that for arbitrary is also given by the right - hand side of eq .[ eq : bandpowerdef ] .however , this is only strictly true in the case of a diagonal signal covariance matrix .the generalization to the case of non - diagonal signal matrices is discussed in and .nothing we have derived so far restricts us to the likelihood as a function of _ per se _ ; any other measure of amplitude will also have a likelihood in this form .that is , we write a general amplitude as for some arbitrary filter or filters , , and as usual .this filter could be , for example , one designed to make the uncertainties in the uncorrelated , as in the following section .what is the likelihood for this amplitude , rather than at a single ?we first change variables from to as in eq .[ eqn : changevar ] .if we choose window functions that do not overlap in , the inverse fisher matrix then becomes if the original ( ) fisher matrix has the simple form of eq . [ eqn : fullcurv ] , then we see that we can just filter the individual terms in any of the ensuing equations with the same and our ansatz will still hold . explicitly , we would expect the variable \ ] ] to be distributed as a gaussian for the offset lognormal form ; for the equal - variance form the generalization is clear .we can apply this to a particularly useful set of linear combinations , the so - called orthogonal bandpowers .if we have a set of spectral measurements in bands with a weight matrix , we can form a new set of measurements which have a diagonal error matrix by applying a transformation like .the power represents any matrix such as the cholesky decomposition or hermitian square root which satisfies .these linear combinations will have the property that ( note the similarity to the calculations of section [ sec : solution ] ) . 
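the orthogonalization step is a one-line linear-algebra operation once the weight matrix of the band powers is in hand ; any factorization with L^T L = W ( cholesky or the hermitian square root ) will do . a sketch , with our own normalization convention :

```python
import numpy as np
from scipy.linalg import sqrtm

def orthogonalize_bandpowers(q_hat, W):
    """Form linear combinations of the band powers q_hat whose error matrix
    is the identity, using the Hermitian square root of the weight matrix W
    (the Cholesky factor would work equally well)."""
    L = np.real(sqrtm(W))     # any invertible L with L.T @ L = W gives L @ W^{-1} @ L.T = I
    return L @ q_hat, L       # orthogonalized band powers and the transform used
```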
for calculating the `` naive '' quantity , ( hats here refer to observed quantities ) these orthogonalized bands do nt change the results .however , because the error and fisher matrices are both diagonal our likelihood ansatz can be applied very cleanly , since the off - diagonal correlations are zero and so we might expect them to represent the exact shape of the likelihood around the peak more accurately .now , we will take the quantity to be distributed as a gaussian with correlation matrix .this again involves the further approximation that the correlation structure far from the peak remains given by that near the peak ( encoded in the curvature matrix of the distribution at the peak ) . from the previous subsection , we further know that if we have a set of quantities appropriate for approximating the likelihood ( as in eq .[ eqn : getxb2 ] ) , then we should be able to set .( note that we can also use these quantities in the equal - variance approximation , which does not otherwise have a simple multivariate generalization . )we use these orthogonalized bandpower results for the cosmological parameter estimates using the sk data in the following sections .we apply this ansatz to the saskatoon experiment , perhaps the apotheosis of a chopping experiment .the saskatoon data are reported as complicated chopping patterns ( _ i.e. _ , beam patterns , , above ) in a disk of radius about around the north celestial pole . the data were taken over 1993 - 1995 ( although we only use the 1994 - 1995 data ) at an angular resolution of fwhm at approximately 30 ghz and 40 ghz .more details can be found in and . the combination of the beam size , chopping pattern , and sky coverage mean that saskatoon is sensitive to the power spectrum over the range .the saskatoon dataset is calibrated by observations of supernova remnant , cassiopeia a .leitch and collaborators have recently measured the flux and find that the remnant is 5% brighter than the previous best determination .we have renormalized the saskatoon data accordingly .we calculated for this dataset in .we combine these results with the data s noise matrix to calculate the appropriate correlation matrixes ( in this case , the full curvature matrix ) for saskatoon and hence the appropriate ( eq . [ eqn : getxb2 ] ) and thus our approximations to the full likelihood . in figure[ fig : sklike ] , we show the full likelihood , the naive gaussian approximation , and our present offset lognormal and equal - variance forms .again , both approximations reproduce the features of the likelihood function reasonably well , even into the tails of the distribution , certainly better than the gaussian approximation . they seem to do considerably better in the higher- bands ; even in the lower bands , however , the approximations result in a _ wider _ distribution which is preferable to the narrower gaussian and its resultant strong bias .moreover , we have found that we are able to reproduce the shape of the true likelihood essentially perfectly down to better than `` three sigma '' if we simply _ fit _ for the ( but of course this can only be done when we have already calculated the full likelihood precisely what we are trying to avoid ! ) . 
for existing likelihood calculations, this method can provide better results without any new calculations ( see appendix [ app : recipe ] for our recommendations for the reporting of cmb bandpower results for extant , ongoing , and future experiments ) .we also show the usefulness of the offset lognormal form in cosmological parameter determination in figure [ fig : skcontours ] , for which we use the orthogonalized bandpowers discussed above .as expected , it reproduces the overall shape of the likelihood function quite well , although it does better for a fixed shape ( in this case ) .we have found that the shape of the power spectrum used with each bin of can have an impact on the likelihood function evaluated using this ansatz .similarly , a finer binning in will reproduce the full likelihood more accurately .although the maximum - likelihood amplitude at a fixed shape ( ) does not significantly depend on binning or shape , the shape of the likelihood function along the maximum - likelihood ridge changes with finer binning and with the assumed spectral shape .as an aside , we mention several complications that we have noted in the analysis of the saskatoon data . because of the complexity of the saskatoon chopping strategy, we have found that the signal correlation matrix , is not numerically positive definite ; removing the negative eigenvalues can change the value of by as much as 5% in some bins .this should be taken as an estimate of the accuracy of our spectral determinations due to these numerical errors .we have also found that the fisher matrix , which we usually use as an estimate of the ( inverse ) error matrix for the parameters , can differ significantly from the true curvature matrix .this difference can be especially marked in low- bins for which the sample and/or cosmic variance can be considerable , potentially resulting in large fluctuations in this error estimate as well . in the saskatoon plots here, we use the actual curvature matrix in place of the fisher matrix . in a forthcoming paper , we will address these and other issues of implementation of the quadratic estimator for .as we will see in the following , these concerns become less important when combining saskatoon with cobe / dmr , since the results are mostly dependent on the broad - band power probed by each experiment .moreover , we expect that these difficulties are considerably more likely in the case of chopping experiments , for which our expression for , eq .[ eqn : getxb ] is somewhat ad hoc .most future cmb results will be for `` total - power '' ( _ i.e. _ , mapping ) experiments , and the satellites map and planck will be ( nearly ) all - sky , like cobe / dmr , for which the offset lognormal form has proven most excellent . in any case , even with present - day data , our ansatz provides a far better approximation to the full likelihood than a simple gaussian in as was used for some global analyses of current cmb data such as ; ; ;;; ) .one of the problems we hoped to solve with better approximations to the likelihood functions than gaussian was how to treat the valuable data with upper limits or very weak detections . in particular ,the data from ovro and suzie is useful for constraining open universe models with power spectra that do not fall off rapidly enough at high .although the gaussian form does not work well here , the offset lognormal does much better and the form of section [ sec : indmode ] works very well , as is shown for ovro and sp in the top panel of fig .[ fig : ovrolike ] . 
+ the likelihood for the suzie results is also shown .the authors plotted the likelihood for the amplitude for several different models , which we have fit from the published figure . although reported as an upper limit , the likelihood is peaked at positive power , but zero power is only rejected at .we note as an aside that a simple flat bandpower ( ) is not quite sufficient to contain all of the information in the suzie data : the likelihood function changes slightly for models with different shapes , defined as in eq .[ eq : bandpowerdef]most of the physically motivated models ( _ e.g. _ , scdm , , etc . ) have roughly the same bandpower curves , but a flat bandpower gives a slightly different one as shown in the figure , and extreme open models are more similar to the flat - bandpower case than to scdm .we also note that our equal - variance approximation performs slightly better for the flat bandpower , while the scdm model is fit better by the offset lognormal . in any case, we again find that in all of these cases our approximations fit the likelihoods much better than any naive gaussian approach would .as a further example and test of these methods , we can combine the results from saskatoon and cobe / dmr in order to determine cosmological parameters . for this example, we use the orthogonal linear combinations as described in the previous section . in figure[ fig : dmrskcontours ] we show the likelihood contours for standard cdm , varying the scalar slope and amplitude . as before, we see that the naive procedure is biased toward low amplitudes at fixed shape ( ) , but that our new approximation recovers the peak quite well .the full likelihood gives a global maximum at , and our approximation at , while the naive finds it at , outside even the three - sigma contours for the full likelihood .we can also marginalize over either parameter , in which case the full likelihood gives , ; our ansatz gives , ; and the naive gives , .( note that even with the naive we marginalize by explicit integration , since the shape of the likelihood in parameter space is non - gaussian in all cases . )above , we have discussed many different approximations to the likelihood . herewe discuss finding the parameters that maximize this likelihood ( minimize the ) .we then apply our methods to estimating the power in discrete bins of .this application provides another demonstration of the importance of using a better approximation to the likelihood than a gaussian .the likelihood functions above depend on which may in turn depend on other parameters , , which are , _e.g. _ , the physical parameters of a theory . if we write the parameters as we can find the correction , , that minimizes by solving where is the curvature matrix for the parameters . if the were quadratic ( _ i.e. _ , gaussian ) then eq .[ eqn : quadest ] would be exact . otherwise , in most cases , near enough to its minimum , is approximately quadratic and an iterative application of eq .[ eqn : quadest ] converges quite rapidly .the covariance matrix for the uncertainty in the parameters is given by .this is just an approximation to the newton - raphson technique for finding the root of ; a similar techniqe is used in quadratic estimation of .as our worked example here , we parameterize the power spectrum by the power in to bins , . 
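before specializing to the worked example , a schematic of the iteration of eq . [ eqn : quadest ] ; the callables for the gradient and curvature of chi^2 are assumed to be supplied by the user :

```python
import numpy as np

def quadratic_iterate(grad_chi2, curvature, a0, tol=1e-6, max_iter=50):
    """Iterative quadratic (Newton-Raphson-like) minimization of chi^2(a):
    solve F(a) delta = -1/2 grad chi^2(a) and update until the step is small.
    Returns the best-fit parameters and F^{-1} as a covariance estimate."""
    a = np.asarray(a0, dtype=float)
    for _ in range(max_iter):
        F = curvature(a)                            # curvature matrix at a
        delta = np.linalg.solve(F, -0.5 * grad_chi2(a))
        a = a + delta
        if np.max(np.abs(delta)) < tol:
            break
    return a, np.linalg.inv(F)
```

if chi^2 is exactly quadratic the first step lands on the minimum ; otherwise a handful of iterations normally suffices , as noted above .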
within each of the bins ,we assume to be independent of .we have chosen the offset lognormal approximation .the completely describes the model : where is the weight matrix for the band powers .we have modeled the signal contribution to the data , , as an average over the power spectrum , , times a calibration parameter , . for simplicity, we take the prior probability distribution for this parameter to be normally distributed . since the datasets have already been calibrated , the mean of this distribution is at .the calibration parameter index , , is a function of since different power spectrum constraints from the same dataset all share the same calibration uncertainty .we solve simultaneously for the and ; _ i.e. _ , together they form the set of parameters , , in eq .[ eqn : quadest ] . for those experiments reported as band - powers together with the trace of the window function , ,the filter is taken to be note that this is an approximation which neglects some effects of off - diagonal signal correlations .for saskatoon and cobe / dmr , our are themselves estimates of the power in bands . for these casesthe above equation applies , but with set to a constant within the estimated band and zero outside .the estimated bands have different ranges than the target bands . instead of the curvature matrix of eq .[ eqn : chi2curv ] we use an approximation to it that ignores a logarithmic term . includingthis term can cause the curvature matrix to be non - positive definite as the iteration proceeds .the approximation has no influence on our determination of the best fit power spectrum , but does affect the error bars .we have found that the effect is quite small .we now proceed to find the best - fit power spectrum given different assumptions about the value of , binnings of the power spectrum and editings of the data .see appendix [ app : bandpowers ] for a tabulation of the bandpower data we are using . we have determined the only for cobe / dmr , saskatoon , sp89 , ovro7 , suzie and toco .( although not included in table [ tab : bandpowers ] , the highly constraining measurement near by toco98 is also well - fit by the offset lognormal form . ) to test the sensitivity to the unknown we found the minimum- power spectrum assuming the two extremes of ( corresponding to lognormal ) and ( corresponding to gaussian ) .these two power spectra are shown in fig .[ fig : lnclvscl ] . note that both power spectra were derived using our measured values ; only the unknown values were varied .the variation in the results would be much greater if we let these values be at their extremes . inwhat follows the unknown are set to zero . 
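putting the pieces of this worked example together , a sketch of the chi^2 that is minimized ( the dictionary keys , the function name , and the choice to carry the calibration factors as explicit parameters with gaussian priors are our own conventions , not a prescribed interface ) :

```python
import numpy as np

def chi2_bandpowers(q, kappa, sig_cal, band_data):
    """chi^2 over all band-power data in the offset lognormal variable.
    q: target bin powers; kappa: calibration factors, one per experiment,
    with Gaussian priors of width sig_cal centred on 1; band_data: list of
    dicts holding the measured band powers 'C_hat', offsets 'x', weight
    matrix 'M' of the Z variables, filter matrix 'f' mapping target bins to
    observed bands, and experiment index 'iexp'."""
    chi2 = np.sum(((kappa - 1.0) / sig_cal) ** 2)        # calibration prior
    for d in band_data:
        model = kappa[d['iexp']] * (d['f'] @ q)          # predicted band powers
        r = np.log(d['C_hat'] + d['x']) - np.log(model + d['x'])  # Z residual
        chi2 += r @ d['M'] @ r
    return chi2
```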
sometimes when the entire likelihood function is unavailable , enough information is given to allow an estimation of the for individual experiments that give single bandpower estimates .for example , we may be given where the likelihood is a maximum and where the likelihood has fallen by , the latter giving an estimate of asymmetric 1 sigma error bars .we can then estimate the error and the value of in the offset lognormal form from : \label{eq : xs } \, , \end{aligned}\ ] ] where defines .note that is indeterminant for , but this gaussian limit is approached from the large direction .it yields an error on of , the required gaussian error .although the determination of the effective error is quite stable as , this is not so for , where small errors in can lead to big changes .the constraint implies , which can still be violated though .even if this does occur , the lognormal form may still be a reasonable fit if one adjusts and .on the other hand , if the probability is skewed towards small amplitudes , as can happen when there is another signal such as a dust component that has been marginalized , the lognormal fit can never work . most often , the reason we can not get good values of from asymmetric error bars is that the limits quoted in the literature are the bayesian integral 16% and 84% probability ones , with an assumed prior ( _ e.g. _ , uniform in the bandpower amplitude ) .we need to know the likelihood shape to estimate from those .( one could adopt the lognormal and fit for these integral quantities , a step we have not taken . ) for example , we have found that adopting eq .( [ eq : xs ] ) leads to negative values for for tenerife , max4 and max5 values taken from the literature . except for the cases noted, we always use s for the experiments where we have determined them well , those given in table c2 .when we must resort to the eq .( [ eq : xs ] ) procedure to estimate them , we usually set all s to zero if they are negative .for the positive values , we have tested the eq .( [ eq : xs ] ) values with our results and find that it makes little difference in the compressed bandpower results .our results have some sensitivity to binning , partly because the visual representation does not describe the correlation between bins .this is especially so for the last bin for which upper limits play a large role .a procedure often used to control such variations is to introduce a prior probability $ ] for the . in this section , we have implicitly adopted a uniform prior , which is the least prejudiced one to adopt in that only the data decides what the bandpowers should be .however , we have also experimented with other priors , such as a gaussian priors , with and a `` maximum entropy '' prior , .\nonumber\\&&\end{aligned}\ ] ] here is some target value , associated with an assumed prior model , is an adjustable variance in the gaussian case and is an adjustable parameter in the maximum entropy case .for example , the maximum entropy prior is designed to avoid negative , relaxes to where there is little data and ensures some degree of smoothness in the determinations . however , especially in regions where the input shape for is changing rapidly ( _ e.g. _ , the damping tail ) , this procedure can give the wrong impression .given the current state of the data , we prefer to show a few binnings to demonstrate sensitivity. 
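our reading of the eq . ( [ eq : xs ] ) procedure , for a single band reported as a maximum with asymmetric one-sigma excursions , is to require the offset lognormal to pass through both points where ln l has fallen by 1/2 . under that matching condition ( an assumption on our part ) the offset and width follow in closed form :

```python
import numpy as np

def x_from_asymmetric_errors(C_hat, sig_plus, sig_minus):
    """Fit the offset lognormal to a quoted maximum C_hat and the points
    C_hat + sig_plus and C_hat - sig_minus where ln L drops by 1/2.
    Requires sig_plus > sig_minus and can still return a negative x; the
    Gaussian limit sig_plus -> sig_minus sends x to infinity."""
    x = sig_plus * sig_minus / (sig_plus - sig_minus) - C_hat
    sigma_Z = np.log(sig_plus / sig_minus)
    sigma_C = (C_hat + x) * sigma_Z          # effective symmetric error on C
    return x, sigma_Z, sigma_C
```

in the near-gaussian limit sigma_C tends to the average of the two excursions , while x itself becomes very sensitive to small errors in the quoted limits , consistent with the remarks above .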
when the data improves , it will be reasonable to try other priors , for example those that exert penalties if the data does not give continuity in .

table 1 : binned power spectrum estimates ( tab : thepowspec )

  l range        power   standard error   correlation
  2 - 4            721        255            -0.08
  5 - 7            566        205            -0.11
  8 - 10           799        242            -0.08
  11 - 15          842        233            -0.11
  16 - 39          907        313            -0.17
  40 - 99         1027        331            -0.32
  100 - 169       3205        610            -0.16
  170 - 249       6216       1040            -0.15
  250 - 399       2394       1106            -0.62
  400 - 999       3328       1218            -0.31
  1000 - 2999      0.0        783              --

the chi - squared ( of eq . [ eqn : chisq ] ) for the fit in table 1 is 62 for 63 degrees of freedom . thus the scatter of the band powers is consistent with the size of their error bars ; it provides no evidence for contamination , mis - estimation of error bars , or severe non - gaussianity in the probability distribution of the underlying signal . in choosing a particular binning , there is a tradeoff to be made between preserving shape information and reducing both error bars and correlation . from table [ tab : thepowspec ] one can see the extent to which the bins are correlated . the nearest - neighbor bin is by far the dominant off - diagonal term . for this particular binning , all others are ten percent or less , except for the ninth bin - eleventh bin correlation which is 0.2 . the lower bands have the smallest correlations as we would expect from dmr . there are some very strong correlations in the higher bands . fortunately , from fig . [ fig : binning ] we see that some general features are quite robust under different binnings . namely , the spectrum is flat out to or , there is a rise to or and then a drop in power beyond . although the data clearly indicate a peak , it is difficult to locate the position to better than . in the top panel the rise to the doppler peak has been binned more finely than the others . this is a particularly difficult area to resolve with current data : the correlation between the sixth and seventh bins is . we also see , from figure [ fig : editing ] , that the general picture does not depend on one single dataset , though the error bars do get significantly larger when the saskatoon dataset is ignored . also , if we were to ignore ovro , the highest bin would have error bars larger than the graph . the upper limits from suzie and ovro constrain a region of the power spectrum otherwise unconstrained and put some pressure on models with small - scale power , such as open models . due to the high interest in these limits , and the difficulty in interpreting the error bars in these figures ( once again due to their non - gaussianity ) , we have attempted to display their constraints on the spectrum in an additional , independent manner . we do so by using the published bounds on gaussian auto - correlation functions ( gacfs ) . the gacf is given by where is the amplitude of the real - space correlation function at zero lag and is called the gaussian coherence angle . for various choices of the coherence angle , the data were used to set limits on . the curves in fig . [ fig : final ] trace the peak of the gacf with at the 95% confidence upper limit as is varied .
as we have emphasized earlier with the band - powers , it can often be misleading to interpret the covariance matrix of the parameters ( derived from the fisher matrix ) as indicating the 68% confidence region , since the 68% confidence region may extend beyond the region of validity for the quadratic approximation to .even in such a case though , the quadratic procedure still may be useful just for finding the minimum , which might be a good point to begin further investigation of the surface without the quadratic approximation .non - gaussianity can be especially severe when the parameters are cosmological parameters .we have argued that cosmological parameters should be constrained from cmb datasets via an initial step of determining constraints on the power spectrum .these power spectrum constraints can themselves be viewed as a compressed version of the pixelized data .we call this process radical compression since the resulting dataset is orders of magnitude smaller than the original. one must be careful in using this compressed data to take into account the non - gaussian nature of their probability distributions ; ignoring the non - gaussianity while attempting to constrain parameters results in a bias .the offset lognormal and equal variance approximations capture its salient characteristics .they are both specified by the mode , variance and the noise contribution to the variance ( ) .use of these forms allows for a very simple type treatment of the band - power data without the bias .while we have found these approximations to the likelihood functions to be quite adequate for dealing with the data we have explored to date , and have given quite general arguments for why the tails behave as they do , our checks have not been exhaustive .for example , some quoted cmb anisotropy results are skewed to lower rather than higher amplitudes ( perhaps due to fitting out foregrounds or systematics ) , a situation that the offset lognormal can not fit .just as one computes the curvature about the maximum likelihood , so one can consider computing a skewness that would encapsulate such behavior , but we will leave the search for further likelihood function approximations to further exploration . we have shown that the offset lognormal approximation applied to a two - parameter ( and ) family of cdm models works very well .we have also used this form to find the maximum - likelihood binned power spectrum , given the band - power data .the resulting graphs provide a visual representation of the power spectrum constraints that is , in our opinion , far superior to plotting all the band - power data on top of each other . 
the exercise of estimating the binned power spectrum immediately raises the question of how well it would work to estimate cosmological parameters using these as a `` super - radically compressed dataset '' .we plan to pursue this question in future work .although our examples have focused on using our approximations in order to derive parameter constraints from more than one dataset , we believe they may also prove useful for estimating cosmological parameters from single , very powerful datasets , such as those that are expected to come from a number of experiments over the next decade .we must note though that once a dataset has sufficient `` spectral resolving power '' and dynamic range there is another approach that can be used to remove the cosmic bias .this alternative approach was suggested in and and successfully applied to simulated map data in .the idea is to exploit the fact that we expect there to be no fine features in the cmb power spectrum and therefore use some smoothed version of the estimated power spectrum to calculate the fisher matrix .heuristically one expects this to remove the bias , since upward - fluctuating points no longer receive less weight than downward - fluctuating points .although this smoothing technique is quite likely to be successful , we point out that , unlike our ansatz , it relies on an assumption of the smoothness of the power spectrum .one of our main objectives with this paper is to provide observers with a method for presenting their results that will allow efficient combination with the results of others in order to create a joint determination of cosmological parameters .the method is fully described in appendix [ app : recipe ] . in this appendixwe also discuss complications due to overlapping sky coverage and upper limits .ahj would like to thank the members of the combat collaboration , especially p.g .ferreira , s. hanany and j. borrill , for discussions and advice .lk would like to thank andrew hamilton for a useful conversation .ahj acknowledges support by nasa grants nag5 - 6552 and nag5 - 3941 and by nsf cooperative agreement ast-9120005 .bond , j.r . ,efstathiou , g. , and tegmark , m. 1997 , , 291 , l33 .allen , b. et al.1997 , , 79 , 2624 baker , j.c .et al.1998 , , submitted .bartlett , j.g ., douspis , m. , blanchard , a. , le dour , m. , 1999 , , submitted .astro - ph/9903045 .bennett , c.l . ,banday , a.j .gorski , hinshaw , g. , jackson , p.d . , keegstra , p. , kogut , a. , smoot , g.f . , wilkinson , d. , and wright , e.l .1996 , , 464 , l1 , and 4-year cobe / dmr references therein .bond , j.r .1994 , , 74 , 4369 .bond , j.r . andjaffe , a.h .1998a , in microwave background anisotropies , proceedings of the xvi rencontre de moriond , ed .bouchet , f.r .( paris : editions frontieres ) .bond , j.r . andjaffe , a.h .1998b , phil .r. soc .a , to appear .bond , j.r . ,jaffe , a.h . and knox , l.e . 1998 , , in press ; astro - ph/9708203 .bond , j.r ., pogosyan , d. , and souradeep , t. 1998 , class .gravity , in press .bunn , e.f . and sugiyama , n. 1995 , , 446 , 49 .bunn , e.f . and white , m. 1997 , , 480 , 6 .cheng , e.s .et al.1994 , , 422 , l37 .cheng , e.s .et al.1997 , , 488 , l59 .church , s.e .et al.1997 , , 4 84 , 523 .clapp , a.c .et al.1994 , , 433 , l57 .debernardis , p. et al.1994 , , 422 ,l33 . de oliveira - costa , a. et al.1998 , astro - ph/9808045 .devlin , m.j .et al.1994 , , 430 , l1 .devlin , m. et al.1998 , astro - ph/9808043 .efstathiou , g. , bridle , s.l ., lasenby , a.n . , hobson , m.p . 
and ellis , r.s . 1997 , , submitted .ferreira , p.g . ,magueijo , j. , and gorski , k.m .1998 , , accepted ; astro - ph/9803256 .gaier , t. et al.1992 , , 398 , l1 .ganga , k. , page , l. , cheng , e.s . ,meyer , s.s .1994 , , 432 , l15 .griffin , g.s ., nguyn , h.t . , peterson , j.b . , and tucker , g.s . 1998 , in preparation .gutteriez de la cruz , s.m .et al . , , 442 , 10 .gundersen , j.o .et al.1993 , , 413 , l1 .gundersen , j.o .et al.1995 , , 443 , l57 .hamilton , a.j.s . 1997a , astro - ph/9701008 .hamilton , a.j.s .1997b , astro - ph/9701009 .hancock , s. , rocha , g. , lasenby , a.n . , and gutierrez , c.m .1998 , , 294 , l1 .hancock , s. et al.1994 , nature , 367 , 333 .herbig , t. et al.1998 , astro - ph/9808044 .hobson , m.p . and magueijo , j. 1996 , , 283 , 1133 .jaffe , a.h . ,knox , l. , and bond , j.r .1998 , in proceedings of the eighteenth texas symposium on relativistic astrophysics , ed .frieman , j. , olinto , a. , and schramm , d. , in press .jungman , g. , kamionkowski , m. , kosowsky , a. , and spergel , d.n .1996 , , 76 , 1007 .kneissl , r. and smoot , g. 1993 , cobe note 5053 .knox , l. 1999 , , in press .knox , l. 1995 , , 52 , 4307 .knox , l.e . andjaffe , a.h .1998 , in preparation .leitch , e. 1998 , caltech phd . thesis .lim , m.a .et al.1996 , , 469 , l69 .lineweaver , c.h .1997 , astro - ph/9702042 .lineweaver , c.h .1998a , , submitted ; astro - ph/9805326 .lineweaver , c.h . 1998b , astro - ph/9803100 .lineweaver , c.h . 1998c , astro - ph/9801029 .lineweaver , c.h . andbarbosa , d. 1998 , , 496 .lineweaver , c. and smoot , g. 1993 , cobe note 5051 .masi , s. et al.1996 , , 463 , l47 .meinhold , p. and lubin , p. 1989, , 370 , l11 .miller , a.d.et al.1999 , , astro - ph/9906421 .myers , s.t . ,readhead , a.c.s . , and lawrence , c.r . 1993 , , 405 , 8 .netterfield , c.b . ,devlin , m.j ., jarosik , n. , page , l. , and wollack , e.j .1997 , , 474 , 47 ., spergel , d.n . , and hinshaw , g. 1998 , astro - ph/9805339 .platt , s.r . ,kovac , j. , dragovan , m. , peterson , j.b . , and ruhl , j.e . 1997 , , 475 , l1 .ratra , b. et al.1997 , astro - ph/9710270 .ruhl , j.e .et al.1995 , , 453 , l1 .schuster , j. et al.1993 , , 412 , l47 .scott , p.f.s .et al.1996 , , 461 , l1 .seljak , u. 1997 , astro - ph/9710269 .tanaka , s.t .et al.1996 , , 468 , l81 .tegmark , m. 1997 , , 55 , 5895 ; astro - ph/9611174 .tegmark , m. 1999 , , 514 , l69 .tegmark , m. and hamilton , a. 1998 , in proceedings of the eighteenth texas symposium on relativistic astrophysics , ed .frieman , j. , olinto , a. , and schramm , d. , in press .tegmark , m. , taylor , a. , and heavens , a. 1997 , , 480 , 22 .torbet , e. et al.1999 , astro - ph/9905100 .tucker , g.s . ,griffin , g.s ., nguyn , h.t . , and peterson , j.b . 1993 , , 419 , l45 .tucker , g.s . , gush , h.p . ,halpern , m. , shinkoda , i. , and towlson , w. 1997 , , 475 , l73 .watson , r. et al.1992 , nature , 357 660 .wilson , g. et al.1999 , astro - ph/9902047 .wollack , e.j . ,devlin , m.j ., jarosik , n. , netterfield , c.b . , page , l. , and wilkinson , d. 1997 , , 476 , 440 .we start with the full - sky likelihood , either in the form of eq .[ eqn : fullskylike ] or in terms of , eq . [ eqn : equalindepmode2 ] .we transform this form to signal - to - noise eigenmodes ( _ e.g. _ , ;; ) . 
here, the data in the signal - to - noise eigenmode basis are , with diagonal covariance , we allow the amplitude , , for the signal contribution to vary , since the eigenmodes only depends upon the shape of the signal covariance , itself dependent on the input power spectrum . if we define the signal - to - noise transformation with power spectrum , then eq .[ eq : snmodedef ] is valid for all .the eigenvalue for mode is , with units of ( signal - to - noise) . because the gaussian variables , , are statistically independent , the likelihood is made up of independent contributions , \ , , \label{eqn : stonlike}\end{aligned}\ ] ] where a `` hat '' refers to the quantity at the likelihood maximum .introducing the number of modes with a given value of , , and the cumulative number of modes , this can be written in the suggestive form \nonumber \\ & & + { \chi^2_\lambda\over 1 + { \widehat\sigma^2_{\rm th } } \lambda } \left(e^{-\left[z(\lambda ) - { \widehat z}(\lambda)\right]}-1\right)\bigg]\ , .\label{eqn : dgstonlike}\end{aligned}\ ] ] here is the per degree of freedom for modes of the . on average , approaches ,in which case the form is a sum of terms like eq .[ eqn : equalindepmode ] .the variables are analogous to the form we have been using if is interpreted as a special case of the offset .the integral is to be interpreted in the stieltjes sense , as a sum over the discrete spectrum , .consider what happens asymptotically with increasing .the modes with contribute , so is the leading behavior .as goes up , more eigenmodes may contribute , decreases faster , modifying the tail .we recommend reporting future ( and , if possible , past ) cmb results in a form that will render them amenable to this `` radically - compressed '' analysis .thus , experimenters and phenomenologists ought to provide estimates of * , the power spectrum in appropriate bins ; * , the curvature or covariance matrix of the power spectrum estimates ; and * , the quantity such that is approximately distributed as a gaussian .we here provide an outline of the steps needed to provide the appropriate information .current listings of publicly - available results will be posted at .please contact the authors to have results included .+ 1 ) _ divide the power spectrum into discrete bins ._ to prevent significant loss of shape information , the bins should not be too large .however , there may be a problem with making the bins too small .the closer we are to the case of well - determined , independent bins , the better our ansatz is expected to work .thus bins should be large enough to keep relative error bars smaller than 100% and bin to bin correlations small .the shape - dependence resulting from coarse bins can be reduced by the use of a filter function for each band ( see ) , as was done with the msam ( medium scale anisotropy measurement ) three - year data .+ 2 ) _ find the power in each bin that maximizes the likelihood and evaluate the curvature matrix at this point ._ this can be done using your favorite likelihood search algorithm . for cobe / dmr and saskatoonwe have used the iterative scheme described in .our current implementation does not include a transformation to s / n - eigenmode space . 
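the signal-to-noise eigenmode construction used in this appendix can be sketched as follows : whiten the data by the noise , diagonalize the whitened signal covariance for an assumed fiducial spectrum shape , and read off mode amplitudes whose covariance is sigma_th^2 lambda_k + 1 ( the variable names below are ours ) :

```python
import numpy as np
from scipy.linalg import eigh, sqrtm, inv

def sn_eigenmodes(d, S_shape, N):
    """Signal-to-noise eigenmode transform: returns mode amplitudes xi_k and
    eigenvalues lambda_k such that, for a signal amplitude sigma_th^2, the
    xi_k are independent with variance sigma_th^2 * lambda_k + 1."""
    N_half_inv = inv(np.real(sqrtm(N)))      # noise 'whitening' matrix
    M = N_half_inv @ S_shape @ N_half_inv.T  # whitened fiducial signal covariance
    lam, R = eigh(M)                         # eigenvalues and eigenvectors
    xi = R.T @ (N_half_inv @ d)
    return xi, lam
```

only the modes with the largest eigenvalues carry significant information , which is what makes the compression proposed for upper-limit experiments in the caveats below practical .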
in a forthcoming paper , we will provide detailed information on the implementation of this quadratic estimator and an appropriate sample set of programs .+ 3 ) _ estimate for each of the bins ._ if the likelihood is calculated explicitly , this can simply be done by numerical fitting to the functional form of our ansatz , eqs .[ eqn : gausslike ] and following , or eq . [ eqn : bandxb ] . if the likelihood peak is determined by the iterative quadratic scheme or some other method which also calculates the curvature matrix ( or , less - preferably , fisher matrix ) , the appropriate formulae from sec .[ sec : solution ] ( for total - power mapping experiments ) or sec .[ sec : chop ] ( eq .[ eqn : getxb2 ] for chopping experiments ) .+ 4 ) do not alter the curvature matrix by folding in the calibration uncertainty in any way . report the calibration uncertainty separately .1 ) _ overlapping sky coverage . _ power spectrum constraints will be correlated if they are from datasets with overlapping sky coverage and sensitivity to similar angular scales .we have no general theory of these correlations . proper combination of overlapping datasets appears to require a joint likelihood analysis to produce their combined constraints on the power spectrum .2 ) _ upper limits . _ for datasets that can only provide an upper limit to the power spectrum amplitude , a simple option would be to calculate the full likelihood directly and simply fit to one of the two forms for but with a negative or very small ( and such that ) .this is what we have done for fig .[ fig : ovrolike ] which demonstrates that both of our approximate forms work fairly well , especially the full form of section [ sec : indmode ] .although the results in fig . [ fig : ovrolike ] look quite impressive , they say nothing about how well the window function tells us which regions of the power spectrum are being constrained by the data . in other words , does the trace of the window function make a good filter function ?therefore , we present the following alternative method for reporting upper limits which includes a prescription for creation of a filter function . the data can be reported as amplitudes of signal - to - noise eigenmodes and their eigenvalues ( see appendix [ app : snmode ] ) .one need only report the modes with the largest eigenvalues .the number of modes that it is necessary to report is likely to be quite small .the likelihood of , where , is then : \ ] ] where is the amplitude of the mode , is its eigenvalue , and is the power spectrum used to define the s / n - modes .of course , we want the likelihood to be a function of , _e.g. _ , a binned power spectrum .it is , via : where we have assumed a flat power spectrum ( ) , is related to the window function as in eq .[ eqn : win2filt ] or , better , derived from the fisher matrix as described in .it is straightforward to calculate the derivatives of this with respect to in order to combine upper limits with detections and perform the search procedure described in section [ sec : bands ] .lrrrrl firs & 425 & 927.8 & 440.7 & ? ?& ; + tenerife595 & 1328 & 1164 & 705.9 & ? ? & + bam & 3190 & 870.3 & 478.5 & ? ? & + sp91 - 6225 - 63 & 3598 & 892.2 & 382.8 & ? ? & ; + sp94 - 62 - 4ch & 3395 & 837.5 & 384.6 & ? ? & + sp94 - 62 - 3ch & 40114 & 1632 & 584.5 & ? ? & + pyth96i - ii - iii & 5299 & 2916 & 1351 & ? ? & ; + pyth96iii & 132237 & 3364 & 1565 & ? ? & ; + qmap_q & 79143 & 2704 & 520.0 & ? ? & + qmap_ka1 & 60101 & 2209 & 604.5 & ? ? 
& + qmap_ka2 & 99153 & 3481 & 760.5 & ? ? & + toco97_3 & 4581 & 1600 & 720 & 500 & + toco97_4 & 70108 & 2040 & 600 & 100 & + toco97_5 & 90138 & 4900 & 850 & 0 & + toco97_6 & 135180 & 7850 & 1300 & 0 & + toco97_7 & 170237 & 7170 & 1300 & 3000 & + sp89 & 87247 & 0.0 & 1459 & 1830 & + argo & 69144 & 1060 & 613.0 & ? ? & ; + max4av & 89249 & 2586 & 876.9 & ? ? & ; + max5av & 89249 & 1511 & 573.8 & ? ? & ; + ovro22 & 362759 & 3127 & 813.1 & ? ? & + cat1 &349473 & 2583 & 1512 & ? ? & + cat2 & 559709 & 2401 & 1584 & ? ? & + cat1 - 98 & 349473 & 3937 & 2322 & 15700 & + cat2 - 98 & 559709 & 0.0 & 5031 & 15700 & + ovro & 11472425 & 72.4 & 380.3 & 367 & + suzie & 13663000 & 354.3 & 753.4 & 122 & the numbers in table [ tab : bandpowers ] were used to form part of the weight matrix in eq .[ eqn : chisq ] : where is from the `` standard error '' column of the table .these standard errors are derived from published likelihood maxima ( ) , 68% confidence upper limits ( ) and lower limits ( ) . since the upward and downward excursions from the mode to the upper and lower limits are usually different, there is some freedom in assigning a single standard error .we define as an average of these excursions : /2.\ ] ] if the published number is linear instead of quadratic , then , etc .and the above equation still applies .we have also tried producing from averaging the inverse square of the upward and downward deviations , and found no significant difference in the results ( power in bands changes by less than 10% of the error bar ) .we also found not much difference in the results depending on how we treated calibration uncertainty .most experiments report their upper and lower limits with calibration uncertainty included . only for saskatoon ,msam , qmap , toco97 and toco98have we included calibration uncertainty by treating it as an independent parameter ( in eq .[ eqn : chisq ] ) .missing from the table are detections from the white dish experiment .the white dish dataset was compressed to two band - power detections with sensitivity in the range to .a recent reanalysis , results in upper limits which are sufficiently loose that including them would make no difference in our power spectrum determination .both these analyses use only a small subset of the available data ; a complete analysis will probably provide detections .
|
powerful constraints on theories can already be inferred from _ existing _ cmb anisotropy data . but performing an exact analysis of available data is a complicated task and may become prohibitively so for upcoming experiments with pixels . we present a method for approximating the likelihood that takes power spectrum constraints , _ e.g. _ , `` band - powers '' , as inputs . we identify a bias which results if one approximates the probability distribution of the band - power errors as gaussian as is the usual practice . this bias can be eliminated by using specific approximations to the non - gaussian form for the distribution specified by three parameters ( the maximum likelihood or mode , curvature or variance , and a third quantity ) . we advocate the calculation of this third quantity by experimenters , to be presented along with the maximum - likelihood band - power and variance . we use this non - gaussian form to estimate the power spectrum of the cmb in eleven bands from multipole moment ( the quadrupole ) to from all published band - power data . we investigate the robustness of our power spectrum estimate to changes in these approximations as well as to selective editing of the data .
|
reallocation of resources to achieve mutually better outcomes is a central concern in multi - agent settings .a desirable way to achieve ` better ' outcomes is to obtain a pareto improvement in which each agent is at least as happy and at least one agent is strictly happier .pareto improvements are desirable for two fundamental reasons : they result in strictly more welfare for any reasonable notion of welfare ( such as utilitarian or leximin ) .secondly , they satisfy the minimal requirement of individual rationality in the sense that no agent is worse off after the trade . if a series of pareto improvements results in a pareto optimal outcome , that is even better because there exists no other outcome which each agent weakly prefers and at least one agent strictly prefers .we consider the setting in which agents are initially endowed with objects and they also have additive preferences for the objects . in the absence of endowments , achieving a pareto optimal assignment is easy : simply assign every object to the agent who values it the most . on the other hand , in the presence of endowments , finding a pareto optimal assignment that respects individual rationality is more challenging . the problem is closely related to the problem of testing pareto optimality of the initial assignment . a certificate of pareto dominance gives an assignment that respects individual rationality and is a pareto improvement . in fact , if testing pareto optimality is np - hard , then finding an individually rational and pareto optimal assignment is np - hard as well . in view of this , we focus on the problem of testing pareto optimality . in all cases where we are able to test it efficiently , we also present algorithms to compute individually rational and pareto optimal assignments .[ [ contributions ] ] contributions + + + + + + + + + + + + + we first relate the problem of computing an individually rational and pareto optimal assignment to the more basic problem of testing pareto optimality of a given assignment .we show for unbounded number of agents , testing pareto optimality is strongly conp - complete even if the assignment assigns at most two objects per agent .we then identify some natural tractable cases .in particular , we present a pseudo - polynomial - time algorithm for the problem when the number of agents is constant .we characterize pareto optimality under lexicographic utilities ( i.e. , lexicographic preferences ) and we show that pareto optimality can be tested in linear time . for dichotomous preferences in which utilities can take values or , we present a characterization of pareto optimal assignments which also yields a polynomial - time algorithm to test pareto optimality .in the ordinal setting , we consider two versions of pareto optimality : _ possible pareto optimality _ and _ necessary pareto optimality_. for both properties , we present characterizations that lead to polynomial - time algorithms for testing the property for a given assignment . [ [ related - work ] ] related work + + + + + + + + + + + + the setting in which agents express additive cardinal utilities and a welfare maximizing or fair assignment is computed is a very well - studied problem in computer science . 
although computing a utilitarian welfare maximizing assignment is easy , the problem of maximizing egalitarian welfare is np - hard .algorithmic aspects of pareto optimality have received attention in discrete allocation of indivisible goods , randomized allocation of indivisible goods , two - sided matching , and coalition formation under ordinal preferences .since we are interested in pareto improvements , our paper is also related to housing markets with endowments and ordinal preferences .recently , examined restricted pareto optimality under ordinal preferences . study the complexity of deciding whether there exists a pareto optimal and envy - free assignment when agents have additive utilities .they also showed that testing pareto optimality under additive utilities is conp - complete .we show that this result holds even if each agent has two objects . proved pareto optimality of an assignment under lexicographic utilities can be tested in polynomial time . in this paper , we present a simple characterization of pareto optimality under lexicographic utilities that leads to a linear - time algorithm to test pareto optimality . consider necessary pareto optimality as pareto optimality for all completions of the responsive set extension , and present some computational results when necessary pareto optimality is considered _ in conjunction _ with other fairness properties .reallocating resources to improve fairness has also been studied before .we consider the setting in which we have a set of agents , a set of objects , and the preference profile specifies for each agent her complete , transitive and reflexive preferences over .agents may be indifferent among objects .let and denote the symmetric and anti - symmetric part of , respectively .we denote by the equivalence classes of an agent . those classes partition into sets of objects such that agent is indifferent between two objects belonging to the same class , and she strictly prefers an object of to an object of whenever .each agent may additionally express a cardinal utility function consistent with : and we will assume that each object is positively valued , i.e , for all and .the set of all utility functions consistent with is denoted by .we will denote by the set of all utility profiles such that for each .when we consider agents valuations according to their cardinal utilities , then we will assume additivity , that is for each and .an assignment is a partition of into subsets , where is the bundle assigned to agent .we denote by the set of all possible assignments .an assignment is said to be _ individually rational _ for an initial endowment if holds for any agent .an assignment is said to be _ pareto dominated _ by another if ( i ) for any agent , holds , ( ii ) for at least one agent , holds .an assignment is _ pareto optimal _ iff it is not pareto dominated by another assignment .finally , whenever cardinal utilities are considered , the _ social welfare _ of an assignment is defined as .[ example - basic ] let , , and the utilities of the agents be represented as follows . 
since , we can say that .an example of an assignment is in which , , and .in this section we assume that each agent expresses a cardinal utility function over , where for all and .we will consider pareto optimality and individual rationality with respect to additive utilities .the following lemma shows that the computation of an individually rational and pareto - improving assignment is at least as hard as the problem of deciding whether a given assignment is pareto optimal : [ lemma : compute - to - test ] if there exists a polynomial - time algorithm to compute a pareto optimal and individually rational assignment , then there exists a polynomial - time algorithm to test pareto optimality .we assume that there is a polynomial - time algorithm to compute an individually rational and pareto optimal assignment .consider an assignment for which pareto optimality needs to be tested .we can use to compute an assignment which is individually rational for the initial endowment and pareto optimal . by individual rationality for all .if for all , then is pareto optimal simply because is pareto optimal .however if there exists such that , it means that is _ not _ pareto optimal .a pareto optimal assignment can be computed trivially by giving each object to the agent who values it the most . proved that a problem concerning coalitional manipulation in sequential allocation is np - complete ( proposition 6 ) .the result can be reinterpreted as follows .[ th : lang ] testing pareto optimality of a given assignment is weakly conp - complete for and identical preferences . computing an individually rational and pareto optimal assignment is weakly np - hard for .one may additionally require the _ balanced _ property , i.e. , each agent gets as many objects as she initially owned .both the theorem above and the corollary above can be extended to that case easily .if there are an unbounded number of agents , then testing pareto optimality of a given assignment is strongly conp - complete .next , we show that the problem remains strongly conp - complete even if each agent receives exactly 2 objects .[ th : testpo - conpc ] testing pareto optimality of a given assignment is strongly conp - complete for an unbounded number of agents even if each agent receives exactly 2 objects .we relegate the proof of theorem [ th : testpo - conpc ] to the appendix .note that theorem [ th : testpo - conpc ] is the best possible np - hardness result that we can obtain according to the number of objects received by each agent because if initially each agent has exactly one object in assignment , then our problem can be solved in linear - time .we now identify conditions under which the problem of computing individually rational and pareto optimal assignments is polynomial - time solvable .[ lemma : dp ] if there is a constant number of agents , then the set of all vectors of utilities that correspond to a feasible assignment can be computed in pseudo - polynomial time .consider the following algorithm ( by we denote with occurrences of ) .[ algo1v1 ] ; let be the maximal social welfare that is achievable ; then , at any step of the algorithm , the number of vectors in can not exceed .hence the algorithm runs in .now , , and since is constant , the algorithms runs in pseudopolynomial time .we can prove by induction on that a vector of utilities can be achieved by assigning objects to the agents if and only if belongs to after objects have been considered .this is obviously true at the start of the algorithm , when no object at 
all has been considered . now , suppose the induction assumption is true for . if belongs to after iteration , then belongs to after iteration iff obtained from by adding to the utility of some agent , that is , if can be achieved by assigning objects .if there is a constant number of agents , then there exists a pseudo - polynomial - time algorithm to compute a pareto optimal and individually rational assignment .we apply the algorithm of lemma [ lemma : dp ] , but in addition we keep track , for each , of a partial assignment that supports it : every time we add to , the corresponding partial assignment is obtained from the partial assignment corresponding to , and then mapping to . if several partial assignments correspond to the same utility vector , we keep an arbitrary one . at the end, we obtain the list of all feasible utility vectors , together with , for each of them , one corresponding assignment . for each of them ,check whether there is at least one in that pareto dominates it , which takes at most , and we recall that is polynomially large .the assignments that correspond to the remaining vectors are pareto optimal .we say that utilities are _ lexicographic _ if for each agent , .by , we will mean . in order to test the pareto optimality of an assignment , we construct a directed graph .the set of vertices contains one vertex per object belonging to .furthermore , for any vertex of associated to an object , the set of edges contains one edge for any object belonging to such that , where is the agent who receives the good in .for example , figure [ figlex ] illustrates such graph for the assignment provided by example [ example - lex ] . in figure[ figlex ] , dotted edges represent indifferences ( when ) and plain edges represent strict preferences ( when ) .it follows from that pareto optimality of an assignment under lexicographic utilities can be tested in polynomial time .we provide a simple characterization of a pareto optimal assignment under lexicographic utilities .the characterization we present also provides an interesting connection with the possible pareto optimality that we consider in the next section .[ th : lexi - opt ] an assignment is not pareto optimal wrt lexicographic utilities iff there exists a cycle in which contains at least one edge corresponding to a strict preference . assume that there exists a cycle which contains at least one edge corresponding to a strict preference .then , the exchange of objects along the cycle by agents owning the objects corresponds to a pareto improvement .assume now that is not pareto optimal and let be an assignment which pareto dominates .for at least one agent , .so there exist at least one object in .let be the owner of in .since preferences are lexicographic , in compensation of the loss of , agent must receive an object in which is at least as good as according to her own preferences .let be the owner of in and so on .since is finite , there must exist and such that the sequence forms a cycle , i.e. , . if ] , to agent .it is obvious that this assignment is at least as good as for all the agents .so pareto dominates . 
by following the same reasoning as above, we can state that there exists a sequence of objects such that and for any ] such that then we consider the assignment derived from by reassigning any object , with ] such that for the cycle founded in .indeed otherwise after a finite number of steps we should have for all , which leads to a contradiction with pareto dominates .so there exists a cycle in with at least one edge corresponding to a strict preference .it is clear that the graph can be constructed in linear time for any assignment .furthermore , the search of a cycle containing at least one strict preference edge in can be computed in linear time by applying a graph traversal algorithm for any strict preference edge in . therefore testing if a given assignment is pareto optimal can be done in linear time when utilities are lexicographic .[ example - lex ] let , , and the following ordinal information about preferences corresponding to the lexicographic utilities in example [ example - basic ] ( as a consequence of theorem [ th : lexi - opt ] , ordinal preferences are enough information to check pareto optimality ) . let be the initial assignment .the construction of theorem [ th : lexi - opt ] gives us that it is pareto dominated by , hence it is not pareto optimal . for assignment in example [ example - lex].,height=75 ] in this sectionwe assume the agents use at most two utility values for the objects .we say that the collection of utility functions is _ bivalued _ if there exist two numbers such that for every agent and every object , .( the result would still hold if each agent has a different pair of values , provided that for all . )this means that for every agent , the set of objects is partitioned into two subsets and ( with possibly ) .given an assignment , let , and .we provide a first requirement for an assignment to pareto dominate another one : [ lemm2 ] if an assignment is pareto dominated by an assignment then . for contradictionwe assume that . in that case .so , which contradicts the assumption that pareto dominates .[ lemm1 ] if an assignment is not pareto optimal then there exists an assignment such that ( i ) and ( ii ) and .assume that is not pareto optimal .then there exists an assignment which pareto dominates .we claim that we can reassign the objects in in order to obtain an assignment such that ( i ) and ( ii ) hold . for any agent initialize to . in order to obtain ( i ) , while there exists such that , we choose and we assign to agent in .note that may belong to another agent in but nevertheless the total number of object assigned in never decreases .furthermore , after at most steps , condition ( i ) will hold because an object can be reassigned at most once .finally , lemma [ lemm2 ] implies that and .therefore .let and .note first that because otherwise would be pareto optimal .if then condition ( ii ) holds . 
otherwise , if and such that then assign to in and condition ( ii ) holdsotherwise if and such that then reassign object to in and condition ( ii ) holds ( note that ( i ) remains true ) .finally we show that other cases never occur .indeed otherwise we would have and and .this would mean that .therefore we would have .but pareto dominates implies .therefore which implies .we can now bound the social welfare of by which is a contradiction with pareto dominates .based on the lemma , we obtain the following characterization of pareto optimality in the bivalued case .[ prop2util ] an assignment is pareto dominated iff there exists an assignment such that ( i ) and ( ii ) and . one implication has already been proved in lemma [ lemm1 ] . to prove the second implication we assume first that there exists such that ( i ) and ( ii ) holds .let be as described as in ( ii ) .for any , let such that .let such that .let .note that by definition because we partition into subsets such that and . finally , let be the assignment such that .it is clear that and .so is pareto dominated by . under bivalued utilities , there exists a polynomial - time algorithm for checking pareto optimality and finding a pareto improvement , if any . according to theorem [ prop2util ] ,a pareto improvement can be computed by focusing on the assignment of top objects for the agents .we describe an algorithm based on maximum flow problems to obtain such assignment . for any ,let be a directed graph which models the search of a pareto improvement for agent as a flow problem .the set of vertices contains one vertex per agent and per object , plus a source and a sink . to ease the notation, we do not discriminate between the vertices and the agents or objects that they are representing , therefore , we note .the set of edges and their capacities are constructed as follow : * for any and such that there is an edge with capacity 1 . * for any there is an edge with capacity 1 .* for any there is an edge with capacity , and there is an edge with capacity .it is easy to show that there exists a flow of value iff there exists an assignment such that any agent receives at least top objects and agent receives top objects . so by theorem [ prop2util ] , there exists a pareto improvement of iff there exists such that and there exists a flow of value in . therefore finding a pareto improvement can be performed in polynomial time by solving at most maximum - flow problems . in each paretoimprovement the number of top objects increases by at least one so there can be at most pareto improvements . note that we can also find a pareto optimal pareto improvement in polynomial time as well : in each pareto improvement the number of top objects increases by at least one so there can be at most pareto improvements .[ example - two ] let , , , , , and . is depicted in figure [ fig : flownetwork ] .the flow of value 5 ( boldface ) gives the assignment , which pareto - dominates .in this section , we consider the setting in which the agents have additive cardinal utilities but only their ordinal preferences over the objects is known by the central authority .this could be because the elicitation protocol did not ask the agents to communicate their utilities , or simply because they do nt know them precisely . 
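before continuing with the ordinal setting , the flow construction described in the previous section can be made concrete with a short sketch . the graph encoding , the function names and the toy data below are illustrative assumptions rather than the exact construction of the paper , and the routine only returns the reallocation of top objects ; the remaining objects are then distributed as in the proof of the characterization .

```python
import networkx as nx

def find_top_object_improvement(agents, objects, top, assignment):
    """Search for a reallocation in which every agent keeps at least as
    many of her top objects and one designated agent gains one more,
    following the max-flow construction for bivalued utilities.
    top[j]        -- set of agent j's top objects
    assignment[j] -- set of objects currently held by agent j
    Returns {agent: set of top objects} witnessing an improvement, or None."""
    t = {j: len(assignment[j] & top[j]) for j in agents}   # current counts
    for i in agents:                                       # try to raise agent i
        G = nx.DiGraph()
        for o in objects:
            G.add_edge('source', ('obj', o), capacity=1)
            for j in agents:
                if o in top[j]:
                    G.add_edge(('obj', o), ('agt', j), capacity=1)
        for j in agents:
            cap = t[j] + 1 if j == i else t[j]
            G.add_edge(('agt', j), 'sink', capacity=cap)
        value, flow = nx.maximum_flow(G, 'source', 'sink')
        if value == sum(t.values()) + 1:                   # all demands met
            witness = {j: set() for j in agents}
            for o in objects:
                for j in agents:
                    if flow.get(('obj', o), {}).get(('agt', j), 0) == 1:
                        witness[j].add(o)
            return witness
    return None

# toy instance (hypothetical data): agent 1 holds none of her top objects
agents = ['1', '2']
objects = ['a', 'b', 'c', 'd']
top = {'1': {'a', 'b'}, '2': {'b', 'c'}}
assignment = {'1': {'c', 'd'}, '2': {'a', 'b'}}
print(find_top_object_improvement(agents, objects, top, assignment))
```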
in this case, one can still reason whether a given assignment is pareto optimal with respect to some or all cardinal utilities consistent with the ordinal preferences .an assignment is _ possibly pareto optimal _ with respect to if there exists such that is pareto optimal for .an assignment is _ necessarily pareto optimal _ with respect to if for any the assignment is pareto optimal for .we first note that necessary pareto optimality implies possible pareto optimality .secondly , at least one necessarily pareto optimal assignment exists in which all the objects are given to one agent .we focus on the problems of testing possible and necessary pareto optimality . in order to characterize possible pareto optimality, we first define _ stochastic dominance ( sd ) _ which extends ordinal preferences over objects to preferences over sets of objects ( and even over fractional allocations in which agents can get fractions of items ) .we say that an allocation _ stochastically dominates _ an allocation , denoted by , iff for all . in the case of fractional allocations, denotes the units of items give to for items in .the sd relation is equivalent to the _ responsive set extension _ , which also extends preferences over objects to preferences over sets of objects .formally , for agent , her preferences over are extended to her preferences over as follows : iff there exists an injection from to such that for each , .since is a partial order , we say a preference is a _ completion _ of if it is a complete and transitive relation over sets of objects that is consistent with .we say that an assignment is _ sd - efficient _ if it is pareto optimal with respect to the sd relation of the agents , and _ rs - efficient _ if it is pareto optimal with respect to the rs set extension relation of the agents . under ordinal preferences ,an agent prefers one allocation over another with respect to responsive set extension iff she prefers it with respect to stochastic dominance .thus , a ( discrete ) assignment is rs - efficient iff it is sd - efficient .we say that strictly rs - dominates if pareto dominates with respect to rs .[ th : posspo - equi ] an assignment is possibly pareto optimal iff it is sd - efficient iff it is rs - efficient iff there exists no cycle in which contains at least one edge corresponding to a strict preference . 
by the ordinal welfare theorem , a fractional assignment is possibly pareto optimal iff it is sd - efficient ( among the set of fractional assignments ) .furthermore , a discrete assignment that is sd - efficient among all discrete assignments is also sd - efficient among all fractional assignments because sd - efficiency of depends on the non - existence of a cycle with a strict edge in the underlying graph .hence , we obtain the equivalences .since the characterization in theorem [ th : lexi - opt ] also applies to rs - efficiency and possible pareto optimality , hence possible pareto optimality can be tested in linear time .the argument in the proof above also showed that possible pareto optimality is equivalent to pareto optimality under lexicographic preferences .we point out that a possibly pareto optimal assignment may not be a necessarily pareto optimal assignment .[ example - npo ] consider two agents with identical preferences .every assignment is possibly pareto optimal ; however the assignment in which agent gets and 2 gets is not necessarily pareto optimal since it is not pareto optimal for the following utilities .next we present two characterizations of necessary pareto optimality .the first highlights that necessary pareto optimality is identical to the necessary pareto optimality considered by .[ th : car - ext ] an assignment is necessarily pareto optimal iff it is pareto optimal under all completions of the responsive set extension .if an assignment is not pareto optimal under certain additive preferences , it is by definition not pareto optimal under this particular completion of responsive preferences .assume that an assignment is not pareto optimal under some completion of the responsive set extension .then there exists another assignment in which for all or and , and for some , or and . for both the cases ,if the allocations are incomparable with respect to responsive set extension , then there exists an object such that . in that case , consider a utility function in which for all and .for , .for characterizing necessarily pareto optimal assignments , we define a _one - for - two pareto improvement swap _ as an exchange between two agents and involving objects , and such that .[ thm : char_nec_po ] an assignment is necessarily pareto optimal iff 1 . it is possibly pareto optimal and 2 . it does not admit a one - for - two pareto improvement swap . we first show that if an assignment does not satisfy the two conditions , then it is not necessarily pareto optimal .possible pareto optimality is a requirement for the assignment to be necessarily pareto optimal . to see that the second condition is also necessary , we have to show that if admits a one - for - two pareto improvement swap then is not necessarily pareto optimalthis is because the swap could indeed be a pareto improvement for these two agents with the following utilities : and .these utilities are compatible with the ordinal preferences of these agents , because of the assumption ( and irrespective to the ordinal preferences of ) .conversely , to show that conditions ( i ) and ( ii ) are sufficient for the assignment to be necessarily pareto optimal , suppose for a contradiction that ( 1 ) is not necessarily pareto optimal and ( 2 ) does not admit a one - for - two pareto improvement swap .we will then show that there is an assignment that strictly rs - dominates , implying that can not be possibly pareto optimal . 
from ( 1 ) andtheorem [ th : car - ext ] , we have ( 3 ) there is another assignment and a collection of additive utility functions such that pareto dominates with respect to . without loss of generalitywe may assume that each agent receives a nonempty bundle in .regarding * the structure * of , first we observe that the lack of one - for - two pareto improvement swaps implies that every agent is assigned to some ( or none ) of her top objects and possibly to one additional object that she ranks lower .formally , let denote a set of s top objects she is assigned to in , i.e. , . then , where is either a single object or the empty set .we show that must hold for every agent .suppose not , then there is an agent for which . by the definition of it is straightforward that if then , and if then , a contradiction . furthermore , for every agent , if then for any object we have . otherwise ,if there was an agent with such that , then would imply .now we construct a so - called * pareto improvement sequence * with respect to and , which consists of a sequence of agents with possible repetitions and a set of distinct objects such that * , , and ; * , , and ; * * , , and . and with strict preference for at least one agent .the presence of the above pareto improvement sequence would imply the existence of an assignment that rs - dominates , obtained by letting the agents exchange their objects along the sequence , i.e. , with .this would contradict our assumption that is possibly pareto optimal .we first define three types of agents , and a * one - to - one mapping * over some of the objects they are indifferent between in and . in the set we put all the agents with either no or with . each agent in this set must be indifferent between all objects in ( i.e. , these object are in a single tie in s preference list ) by the following reasons . implies . by the definition of it follows that any object in is weakly preferred to any object in by .however , from ( 3 ) we have , which implies that , which can only happen if is indifferent between any two objects in .let be any one - to - one mapping from to .next , let contain every agent who has object such that there is an object with . in this case must be indifferent between all objects in .indeed , implies . by the definition of any object in is weakly preferred to any object in by . on the other hand , and implies , leading to the conclusion that must be indifferent between all objects in .therefore can map to in and to .thirdly , let contain every agent with object such that for every , .note that there is at least one agent in , the one who gets strictly better off in , as otherwise , if there was an object such that , then would imply . finally , we shall note that if is empty then , so either is indifferent between and , in which case is in with , or strictly prefers to and then belongs to .to summarize , so far we have that for any and we associate an object such that .furthermore , for any and we have that .we build a pareto improvement sequence as a part of a sequence involving agents with corresponding objects starting from any with . for every ,let be the agent who receives in .if then let , and if then let .we terminate the sequence when an object is first repeated .this repetition must occur at some agent in , since for any agent the objects in are in a one - to - one correspondence with those in with .let the first repeated object belong to , say , for indices .we show that the sequence is a pareto improvement sequence . 
to see this ,let us first consider an agent .whenever appears in the sequence as she receives object and in return she gives away , where is indifferent between and .now , let that appears as .she receives object and in return she gives away , where by the definition of .since appears in this sequence only once , it is obvious that .finally , regarding , receives and she gives away , where .so we constructed a pareto improvement sequence , and therefore is not possibly pareto optimal , a contradiction .in example [ example - npo ] , is not necessarily pareto optimal because it admits a one - for - two pareto improvement swap : , , and .it also shows that although an assignment may not be necessarily pareto optimal there may not be any assignment that pareto dominates it for _ all _ utilities consistent with the ordinal preferences .the characterization above also gives us a polynomial - time algorithm to test necessary pareto optimality .we have studied , from a computational point of view , pareto optimality in resource allocation under additive utilities and ordinal preferences .many of our positive algorithmic results come with characterizations of pareto optimality that improve our understanding of the concept and may be of independent interest .future work includes identifying other important subdomains in which pareto optimal and individually rational reallocation can be done in a computationally efficient manner .haris aziz is funded by the australian government through the department of communications and the australian research council through the ict centre of excellence program .pter bir acknowledges support from the hungarian academy of sciences under its momentum programme ( ld-004/2010 ) , the hungarian scientific research fund , otka , grant no .k108673 , and the jnos bolyai research scholarship of the hungarian academy of sciences .jrme lang , julien lesca and jrme monnot acknowledge support from the anr project cocorico - codec .part of the work was conducted when pter bir visited paris dauphine and when julien lesca visited corvinus university of budapest sponsored by cost action ic1205 on computational social choice .35 [ 1]#1 [ 1]`#1 ` urlstyle [ 1]doi : # 1 d. j. abraham , k. cechlrov , d. manlove , and k. mehlhorn .pareto optimality in house allocation problems . in _ proceedings of the 16th international symposium on algorithms and computation ( isaac ) _ ,volume 3341 , pages 11631175 , 2005 .a. asadpour and a. saberi .an approximation algorithm for max - min fair allocation of indivisible goods ._ siam journal on computing _ , 390 ( 7):0 29702989 , 2010 .s. athanassoglou .efficiency under a combination of ordinal and cardinal information on preferences . _ journal of mathematical economics _ , 470 ( 2):0 180185 , 2011 .h. aziz and b. de keijzer .housing markets with indifferences : a tale of two mechanisms . in _ proceedings of the 26th aaai conference on artificial intelligence ( aaai ) _ , pages 12491255 , 2012 .h. aziz , f. brandt , and p. harrenstein .pareto optimality in coalition formation ._ games and economic behavior _ , 82:0 562581 , 2013 .h. aziz , f. brandl , and f. brandt .universal dominance and welfare for plausible utility functions ._ journal of mathematical economics _, 2015 .h. aziz , s. gaspers , s. mackenzie , and t. walsh .fair assignment of indivisible objects under ordinal preferences ._ artificial intelligence _ , 227:0 7192 , 2015 .h. aziz , s. mackenzie , l. xia , and c. ye .ex post efficiency of random assignments . 
in _ proceedings of the 14th international conference on autonomous agents and multi - agent systems ( aamas )_ , pages 16391640 , 2015 .s. barber , w. bossert , and p. k. pattanaik ._ ranking sets of objects_. springer , 2004 .i. bezkov and v. dani .allocating indivisible goods ._ sigecom exchanges _ , 50 ( 3):0 1118 , 2005 . s. bouveret and j. lang . efficiency and envy - freeness in fair division of indivisible goods : logical representation and complexity. _ journal of artificial intelligence research _ , 320 ( 1):0 525564 , 2008 .s. bouveret and j. lang . manipulating picking sequences . in _ proceedings of the 21st european conference on artificial intelligence ( ecai ) _ , pages 141146 , 2014 .s. bouveret and m. lematre . characterizing conflicts in fair division of indivisible goods using a scale of criteria . in _ proceedings of the 13th international conference on autonomous agents and multi - agent systems ( aamas )_ , pages 13211328 , 2014 .s. bouveret , u. endriss , and j. lang .fair division under ordinal preferences : computing envy - free allocations of indivisible goods . in _ proceedings of the 19th european conference on artificial intelligence ( ecai ) _ , pages 387392 , 2010 .s. j. brams , p. h. edelman , and p. c. fishburn . ._ theory and decision _ , 550 ( 2):0 147180 , 09 2003 .k. cechlrov , p. eirinakis , t. fleiner , d. magos , d. f. manlove , i. mourtos , e. ocekov , and b. rastegari .pareto optimal matchings in many - to - many markets with ties . in _ proceedings of the 8th international symposium on algorithmic game theory ( sagt ) _ , pages 2739 , 2015 .a. damamme , a. beynier , y. chevaleyre , and n. maudet .the power of swap deals in distributed resource allocation . in _ proceedings of the 14th international conference on autonomous agents and multi - agent systems ( aamas )_ , pages 625633 , 2015 .b. de keijzer , s. bouveret , t. klos , and y. zhang . on the complexity of efficiency and envy - freeness in fair division of indivisible goods with additive preferences . in _ proceedings of the 1st international conference on algorithmic decision theory _ ,pages 98110 , 2009 .s. demko and t. p. hill .equitable distribution of indivisible obejcts . _ mathematical social sciences _, 16:0 145158 , 1988 .u. endriss .reduction of economic inequality in combinatorial domains . in _ proceedings of the 12th international conference on autonomous agents and multi - agent systems ( aamas )_ , pages 175182 , 2013 .a. erdil and h. ergin .two - sided matching with indifferences .june 2015 .e. fujita , j. lesca , a. sonoda , t. todo , and m. yokoo .a complexity approach for core - selecting exchange with multiple indivisible goods under lexicographic preferences . in _ proceedings of the 29th aaai conference on artificial intelligence ( aaai ) _ , pages 907913 , 2015 .d. golovin .max - min fair allocation of indivisible goods .technical report cmu - cs-05 - 144 , carnegie mellon university , 2005 .katta and j. sethuraman .a solution to the random assignment problem on the full preference domain ._ journal of economic theory _ ,1310 ( 1):0 231250 , 2006 .h. konishi , t. quint , and j. wako . on the shapley - scarf economy : the case of multiple types of indivisible goods. _ journal of mathematical economics _ , 350 ( 1):0 115 , 2001 .j. lesca and p. perny .solvable models for multiagent fair allocation problems . in _ proceedings of the 19th european conference on artificial intelligence ( ecai ) _ , pages 393398 , 2010 .r. j. lipton , e. markakis , e. mossel , and a. 
saberi . on approximately fair allocations of indivisible goods . in _ proceedings of the 5th acm conference on electronic commerce ( acm - ec ) _ ,pages 125131 , 2004 .d. manlove ._ algorithmics of matching under preferences_. world scientific publishing company , 2013 .a. mclennan .ordinal efficiency and the polyhedral separating hyperplane theorem . _ journal of economic theory _ , 1050 ( 2):0 435449 , 2002. h. moulin ._ fair division and collective welfare_. the mit press , 2003 .t. t. nguyen , m. roos , and j. rothe .a survey of approximability and inapproximability results for social welfare optimization in multiagent resource allocation . _annals of mathematics and artificial intelligence _ , pages 126 , 2013 .a. d. procaccia and j. wang .fair enough : guaranteeing approximate maximin shares . in _ proceedings of the 15th acm conference on economics and computation ( acm - ec ) _ , pages 675692 , 2014 .a. sonoda , t. todo , h. sun , and m. yokoo .two case studies for trading multiple indivisible goods with indifferences . in _ proceedings of the 28th aaai conference on artificial intelligence ( aaai ) _ , pages 791797 , 2014 .t. todo , h. sun , and m. yokoo .strategyproof exchange with multiple private endowments . in _ proceedings of the 28th aaai conference on artificial intelligence ( aaai ) _ ,pages 805811 , 2014 .w. yu , h. hoogeveen , and j. k. lenstra . minimizing makespan in a two - machine flow shop with delays and unit - time operations is np - hard ._ journal of scheduling _, 7:0 333348 , 2004 .haris aziz + data61 and unsw + sydney , australia + pter bir + hungarian academy of sciences + budapest , hungary + jrme lang + lamsade , universit paris - dauphine + paris , france + julien lesca + lamsade , universit paris - dauphine + paris , france + jrme monnot + lamsade , universit paris - dauphine + paris , france +below we provide the proof of theorem [ th : testpo - conpc ] .the reduction is done from 2-numerical matching with target sums ( 2nmts in short ) .the inputs of 2nmts is a sequence of positive integers such that and for , and .we want to decide if there are two permutations and of the integers such that for .2nmts is known to be strongly np - complete .the reduction from an instance of 2nmts is as follows .there are agents where , and and objects where , , .let be a positive value strictly lower than .the following table summarizes the non - zero utilities provided by the different objects , where agt#1 is the agent which receives the object in the initial assignment and is her utility for it , and where agt(s)#2 lists the other agents with non - zero utility for the object and corresponds to their utility for it : {0pt}{13pt}\\ \hline h_i^{cr}&c_i&3k&r_i&1-\varepsilon\rule[-3pt]{0pt}{13pt}\\ \hline f_i^l&\ell_i&1&c_j ~\hbox{with}~ a_j\geq i+1&i\rule[-3pt]{0pt}{13pt}\\ \hline f_i^r&r_i&1&c_j ~\hbox{with}~ a_j\geq i+1&3k+i\rule[-3pt]{0pt}{13pt}\\ \hline g_i^r&r_i&3&r_{i+1 } ~\hbox{if}~ i < k&3+\varepsilon\rule[-3pt]{0pt}{13pt}\\ & & & d ~\hbox{if}~ i = k&3+\varepsilon\\ \hline g_i^l&\ell_i&3&\ell_{i-1 } ~\hbox{if}~ i>1&3-\varepsilon\rule[-3pt]{0pt}{13pt}\\ & & & r_1 ~\hbox{if}~ i=1&3+\varepsilon\\ \hline g^c&d&3&\ell_k&3-\varepsilon\rule[-3pt]{0pt}{13pt}\\ \hline o&d&1&&\\ \end{array}\ ] ] the initial assignment provides the following utilities to the agents : , and for , and . 
clearly , this instance is constructed within polynomial time and each agent has two items in the initial assignment .we claim that there is a pareto improvement of the initial assignment iff is a yes - instance of 2nmts .assume that there exist and such that for , i.e. , is a yes - instance of 2nmts .note that this implies for any that because and .then consider the following assignment : * ( resp . ) is assigned to with ( resp . to ) with utility 4 .* ( resp . ) is assigned to with ( resp . to ) with utility 4 . * is assigned to . using ( [ equationproof ] ) , the utility of agent is .* is assigned to with utility .assume now that is a no - instance of 2nmts .by contradiction , assume that there exists a pareto improvement of the initial assignment .note first that any agent should receive in at least two objects .indeed there is no object which provides a utility greater than to any agent of , and any of those agents receives a utility 4 in the initial assignment .furthermore , any good provides a utility at most to an agent , which is strictly lower than her utility in the initial assignment because ( otherwise would get utility 0 from ) .since the number of objects is twice the number of agents , we can conclude that assigns exactly 2 objects to every agent .let us focus first on the objects of .those objects are the only ones which can provide a utility at least to the agents of .all other objects provide a utility at most to the agents in .so , to achieve a utility at least for all those agents in , each of them should receive exactly one good from ( with non - zero utility for it ) because .figure [ figcycleproof ] illustrates the initial assignment for the agents of . in this figure ,a dotted arrow from an object of means that this object can be reassigned to the pointed agent with a non zero utility .figure [ figcycleproof ] illustrates the fact that the goods of could be allocated in only two different manners in to be a pareto improvement of the initial endowment : either every good of is assigned to the same agent as in the initial assignment , or every good of is assigned to the agent pointed by the corresponding arrow in figure [ figcycleproof ] .first , we consider the case where all goods of are assigned in exactly as in the initial assignment . to achieve a utility at least 4, every agent should receive the object to complete her bundle of two objects .this implies that those objects can not be assigned to agent , with , in order to ensure that they get a utility at least .therefore every agent should receive the object with utility .furthermore no agent can receive an object to complete her bundle of two objects because this object would provide her a utility at most .so , every agent should receive the object . from this , we conclude that should be exactly the same assignment as the initial assignment , which contradicts pareto - dominates this initial assignment . 
from the previous paragraphs ,we know that any good of should be assigned in to the agent pointed by the corresponding dotted arrow in figure [ figcycleproof ] .to achieve a utility at least 4 , any agent should receive the good to complete her bundle of two objects .if an agent did not receive at least one good such that , then the maximal utility achievable by would be , which would be strictly lower than her utility in the initial assignment .so , every agent should receive exactly one good such that .therefore no good can be assigned to agent .so , to achieve a utility at least 4 , any agent should receive the good to complete her bundle of two objects .then the good should be assigned to agent to complete her bundle of two goods .finally it remains to assign to every agent a good such that .now let us focus on the pair of goods assigned in to agent with .note that those two objects belong to .we know that the total amount of utility provided by the goods of to the agents of should be exactly equal to .furthermore any agent should receive a share of at least of this total amount of utility . since ,any agent should receive two objects and such that .let and be the two permutations of such that for any , the objects and are assigned in to agent .those two permutations are such that for any , .this leads to a contradiction with is a no - instance .
|
reallocating resources to get mutually beneficial outcomes is a fundamental problem in various multi - agent settings . in the first part of the paper we focus on the setting in which agents express additive cardinal utilities over objects . we present computational hardness results as well as polynomial - time algorithms for testing pareto optimality under different restrictions such as two utility values or lexicographic utilities . in the second part of the paper we assume that agents express only their ( ordinal ) preferences over single objects , and that their preferences are additively separable . in this setting , we present characterizations and polynomial - time algorithms for possible and necessary pareto optimality .
|
the evolutionary game theory has been evolving and expanding progressively in the last years due to the emergent experimental facts and the deeper understanding of models developed [ for a survey see the books by , , and reviews by and by ] . in the last decades huge efforts are focused on the emergence of cooperative behavior because of its importance in many human and biological systems . in the first multi - agent evolutionary systemsthe repeated interaction is described by the payoff matrix of traditional game theory and the evolution is governed by a dynamical rule resembling the darwinian selection .the systematic investigations have explored the relevance of the games itself ( as interactions including the set of strategies ) , the connectivity structure , and also the dynamical rule .recently , the co - evolutionary games have extended the original frontiers of evolutionary games by introducing additional ( personal ) features and complex dynamical rules allowing the simultaneous time - dependence in each ingredient of the mathematical model .the possible personal character of players can be enhanced further in a way to postulate players who consider not just their own payoffs but also the neighbors income . to elaborate this possibility we will study the social dilemmas with players located on a square lattice and collect income from games played with all their nearest neighbors .now , it is assumed that the myopic players wish to maximize their own utility function when they adopt another possible strategy . throughout this utility functionthe players combine the self - interest with the other - regarding preference in a tunable way .besides it , the applied strategy adoption rule involves some noise ( characterizing fluctuations in payoffs , mistakes and/or personal freedom in the decision ) that helps the system evolve towards the final stationary state via a spatial ordering process .the present work was motivated by our previous study considering the consequence of pairwise collective strategy updates in a similar model .it turned out that the frequency of cooperators is increased significantly in the case of prisoner s dilemma ( pd ) when two randomly chosen players favors a new strategy pair if it increases the sum of their individual payoff .the latter strategy update can be interpreted as a spatial extension of cooperative games where players can form coalitions to enhance the group income .some aspects of the other - regarding preference is modelled very recently by who studied a spatial evolutionary pd game with synchronized stochastic imitation . on the other hand , the experimental investigations of the human and animal behavior have also indicated the presence of different types of mutual helps , like charity , inequality aversion , and juvenile - adult interactions .the above - mentioned relevant improvement in the level of cooperation has inspired us to quantify the effect of the group size and the number of players choosing new strategies simultaneously . 
from a series of numerical investigationswe could draw a general picture that can be well exemplified with the present simpler model .more precisely , the most relevant improvement is achieved for those cases where each myopic player has taken into consideration all of her neighbor s payoff together with her own payoff with equal weight when selecting a new strategy .now we will study a more general model where the utility function of each player is combined from her own payoff and her co - players payoff with weight factors and . in this notation a selfish myopic player who wish to maximize her own personal income irrespective of others .the fraternal players with favor to optimize the income ( redistributed and ) shared equally between each pair when choosing another strategy .the present model allows us to investigate the effect of other - regarding preference in spatial models for different levels of altruism ( ) . for the most altruistic case ( ) the players wish to maximize the co - players income andthe system behavior also exhibits a state resembling the `` tragedy of the commons '' that can be interpreted as the `` lovers dilemma '' .it is emphasized that the resultant formalism ( payoff matrix ) of the other - regarding preference was already investigated by as a model to capture the kin- and group - selection mechanisms .the origin of the basic idea goes back to the work of and who studied animal behaviors between relatives .the present work can be considered as a continuation of the mentioned investigations .now our attention will be focused on the consequences of structured population for a myopic strategy update .using the terminology of social dilemmas the above mentioned model with a stochastic myopic strategy update is defined and contrasted with other versions in the following section .the results of monte carlo ( mc ) simulations are summarized in section [ mc ] for a finite noise level .section [ stabanal ] is addressed to the spatial stability analysis in the zero noise limit and the main results are discussed in the final section .the possible solutions of the two - person , two - strategy evolutionary games depend on the details including the spatial structure , the range of interactions , the payoffs , the dynamical rule(s ) , the payoffs , and the measure of noise . before specifying the presently studied modelaccurately we briefly survey the main features of these solutions .we consider a simple model with players located on the sites of a square lattice ( consisting of sites with periodic boundary conditions ) .each player can follow one of the two strategies called unconditional cooperation ( ) or unconditional defection ( ) within the context of social dilemmas . if these strategies are denoted by two - dimensional unit vectors , as then the payoff of player against her neighbor at site can be expressed by the following matrix product : where denotes the transpose of the state vector .the payoff matrix is given as where the reward of mutual cooperation is chosen to be unity ( ) and the mutual defection yields zero income ( ) for both players without any loss of generality .the cooperator receives ( sucker s payoff ) against a defector who gets , the temptation to choose defection .this terminology was originally introduced for the description of prisoner s dilemma ( pd ) ( where ) and later extended for weaker social dilemmas , too . 
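to make the bookkeeping explicit , the sketch below evaluates the income of a single player on the periodic square lattice as the sum of the matrix - product payoffs collected from her four nearest neighbors . the encoding of the two strategies as the unit vectors ( 1,0 ) and ( 0,1 ) , the lattice size and the sample payoff values are illustrative assumptions .

```python
import numpy as np

def payoff_matrix(T, S):
    """Payoff matrix with R = 1 and P = 0; rows index the focal player."""
    return np.array([[1.0, S],
                     [T, 0.0]])

def site_payoff(strategies, x, y, A):
    """Income of the player at site (x, y): the sum of s_x^T A s_y over her
    four nearest neighbours with periodic boundaries (0 = C, 1 = D)."""
    L = strategies.shape[0]
    unit = {0: np.array([1.0, 0.0]),      # cooperator state vector
            1: np.array([0.0, 1.0])}      # defector state vector
    me = unit[strategies[x, y]]
    total = 0.0
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        other = unit[strategies[(x + dx) % L, (y + dy) % L]]
        total += me @ A @ other
    return total

rng = np.random.default_rng(0)
lattice = rng.integers(0, 2, size=(8, 8))   # random mixture of C and D
A = payoff_matrix(T=1.3, S=-0.2)            # a prisoner's dilemma point
print(site_payoff(lattice, 3, 4, A))
```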
for this notationthe payoff plane can be divided into four segments ( see figs .[ fig : socdil ] ) where the possible nash equilibria are denoted by pairs of open , closed , or half - filled circles referring to pure cooperation , pure defection , and mixed strategies , respectively .for example , within the range of harmony ( h ) game the nash equilibrium ( from which no player has incentives to deviate unilaterally ) is the ( pareto optimal ) mutual cooperation . on the contrary, for the prisoner s dilemma ( pd ) the only nash equilibrium is the mutual defection yielding the second worst payoff .figure [ fig : socdil]a illustrates that both the and strategy pairs are nash equilibria within the region of stag hunt ( sh ) game where and . for the hawk - dove ( hd ) game besides the and strategy pairs there is an additional ( so - called mixed ) nash equilibrium where the players can choose either cooperation or defection with a probability dependent on the payoff parameters .many relevant features of these types of evolutionary games have already been clarified .for example , if the frequency of cooperators and defectors are controlled by a replicator dynamics ( favoring the strategy of higher payoff ) within a well - mixed population then the system evolves towards a final stationary state related to the possible nash equilibria as illustrated in fig .[ fig : socdil]b .namely , cooperators ( defectors ) die out within the region of pd ( h game ) while they can coexist for the case of hd game . the radial segmentation of the region of sh game in fig .[ fig : socdil]b indicates that here the system develops into one of the homogeneous phase and the final result depends on the composition of the initial state .the spatial arrangement of players ( with a short range interaction ) , however , affects significantly the evolutionary process depending on the payoffs . for the present lattice system one can evaluate the total sum of individual payoffs for the ordered arrangement of and strategies .figure [ fig : socdil]c shows that the maximum total payoff is achieved for homogeneous cooperation ( if . in the opposite case ( ) the total ( or average ) payoffis maximized if the cooperators and defectors are arranged in a chessboard - like manner as indicated by the pattern . the comparison of the figs .[ fig : socdil]a and [ fig : socdil]c illustrates the relevant differences between the suggestions of traditional game theory when assuming two selfish players and the optimum total payoff with respect to the whole society with players located on the sites of a square lattice . notice , that the chessboard - like arrangement of cooperators and defectors can provide optimal total payoff within a region involving a suitable part of the pd , the hd , and the h games .the curiosity of these systems is more pronounced if we compare it with the consequences of different evolutionary rules in the lattice models . finally , in fig .[ fig : socdil]d we summarize only the results of a spatial evolutionary game obtained when the myopic players can choose another strategy if this change increases their own payoff assuming quenched neighborhood in the zero noise limit . 
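the comparison behind fig . [ fig : socdil]c can be reproduced directly : the short sketch below ( illustrative lattice size and payoff values , with r=1 and p=0 as above ) evaluates the average payoff per player for homogeneous cooperation and for the chessboard - like arrangement .

```python
import numpy as np

def average_payoff(pattern, T, S):
    """Average income per player on a periodic square lattice for a frozen
    C/D pattern (0 = C, 1 = D), using the payoff matrix with R = 1, P = 0."""
    A = np.array([[1.0, S], [T, 0.0]])
    L = pattern.shape[0]
    total = 0.0
    for x in range(L):
        for y in range(L):
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                total += A[pattern[x, y], pattern[(x + dx) % L, (y + dy) % L]]
    return total / L ** 2

L = 8
all_C = np.zeros((L, L), dtype=int)
chessboard = np.indices((L, L)).sum(axis=0) % 2        # alternating pattern

for T, S in [(1.2, 0.5), (1.5, 0.9)]:                  # two illustrative points
    print(T, S, average_payoff(all_C, T, S), average_payoff(chessboard, T, S))
# with R = 1 and P = 0 these averages are 4 and 2(T+S) respectively, so the
# chessboard arrangement gives the larger total payoff exactly when T + S > 2
```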
in the present workthe latter dynamical rule is extended by allowing players to consider not only their own but also their neighbors payoffs .the system is started from a random initial strategy distribution where both and strategies are present with the same frequency .the evolution of the strategy distribution is controlled by repeating the random sequential strategy updates in myopic manner .accordingly , in each elementary step we choose a player at random who can modify her strategy from to ( _ e.g. _ , or _vice versa _ ) with a probability : } \ ; , \label{eq : dyn}\ ] ] where characterizes the average amplitude of noise ( that can appear for fluctuating payoffs ) disturbing the players rational decision and the utility function combines the payoffs of the player with the payoffs of her co - players within the same games .namely , \,,\ ] ] where the summation runs over all nearest neighbors pairs . in this notation characterizes the strength of altruism of the player . for simplicity, we now assume that all players have have the same value , thus the whole population is described by the same attitude of selfishness .for the players are selfish and the resultant behavior has already been explored in a previous work . if then each player tries to maximize a payoff obtained by sharing equally the common payoffs between the interacting players . in the extreme case ( ) players focus exclusively on maximizing the other s income .to give real - life example , the latter behavior mimics the attitude of lovers or the behavior of relatives in biological systems as discussed by .notice that the present model can be mapped into a spatial evolutionary game with selfish ( ) players for myopic strategy update if we introduce an effective payoff matrix , the effective payoff matrix becomes symmetric for and the corresponding model is equivalent to a kinetic ising model where the evolution of spin configuration is controlled by the glauber dynamics in the presence of a unit external magnetic field . in the latter case the system evolves towards a stationary state wherethe probability of a configuration can be described by the boltzmann statistics and the laws of thermodynamics are valid .finally we emphasize that the simultaneous exchanges and leave the system ( ) unchanged .this is the reason why the mc analysis can be restricted to the cases where .the mc simulations are performed when varying the payoffs and at a few representative values of for and . in most of the casesthe system is started from a random initial state and after a suitable thermalization time the stationary state is characterized by the fraction of cooperators ( and ) averaged over a sampling time in the sublattices and corresponding to the white and black boxes on the chessboard .in fact there exist two equivalent sublattice ordered arrangements of cooperators and defectors : ( 1 ) and ; ( 2 ) and if . during the transient timeboth types of ordered arrangements are present in a poly - domain structure and the typical linear size of domains growth as .such a situation can occur for example within the region of hd game as demonstrated in fig .[ fig : domgrow ] . finally one of the ordered structure prevails the whole spatial system . evidently , the requested thermalization time ( to achieve the final mono - domain structure ) increases with system size as . 
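the random sequential myopic updates described above can be sketched as follows . the logistic acceptance probability used here is an assumed glauber - type form consistent with the noise amplitude k and with the kinetic ising mapping mentioned in the text ( the explicit update probability is not reproduced here ) , and the lattice size , payoff values and run length are illustrative only .

```python
import numpy as np

def utility(strategies, x, y, s_focal, T, S, Q):
    """Other-regarding utility of the player at (x, y) if she plays s_focal
    (0 = C, 1 = D): in each of her four games her own payoff is weighted by
    (1 - Q) and the co-player's payoff by Q (R = 1, P = 0)."""
    A = np.array([[1.0, S], [T, 0.0]])
    L = strategies.shape[0]
    u = 0.0
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        s_nb = strategies[(x + dx) % L, (y + dy) % L]
        u += (1.0 - Q) * A[s_focal, s_nb] + Q * A[s_nb, s_focal]
    return u

def myopic_sweep(strategies, T, S, Q, K, rng):
    """One Monte Carlo sweep: L*L random sequential myopic update attempts,
    each accepted with a logistic probability controlled by the utility
    difference and the noise K."""
    L = strategies.shape[0]
    for _ in range(L * L):
        x, y = rng.integers(0, L, size=2)
        old, new = strategies[x, y], 1 - strategies[x, y]
        dU = utility(strategies, x, y, old, T, S, Q) \
           - utility(strategies, x, y, new, T, S, Q)
        if rng.random() < 1.0 / (1.0 + np.exp(dU / K)):
            strategies[x, y] = new

rng = np.random.default_rng(1)
lattice = rng.integers(0, 2, size=(20, 20))   # random initial strategies
for _ in range(50):                           # short thermalization only
    myopic_sweep(lattice, T=1.4, S=-0.3, Q=0.5, K=0.25, rng=rng)
print("cooperator fraction:", 1.0 - lattice.mean())
```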
it is emphasized that the domain growing process is blocked for those dynamical rules forbidding the adoption of irrational choices ( when ) .the latter cases yield frozen poly - domain patterns where as described by .the present evolutionary rule allows the irrational choices with a probability decreasing very fast when .consequently , the expected value of is also increased drastically if .the resultant technical difficulties can be avoided if the system is started from a biased initial state where one of the sublattice is occupied by only cooperators ( or defectors ) while other sites are filled randomly with or players . for selfish players ( ) the results of mc simulationsare summarized in fig .[ fig : q00 ] in the low noise limit where the readers can distinguish three types of ordered structure .namely , the above mentioned sublattice ordering occurs within the territory of hd game .the `` homogeneous '' and phases are separated by a first order transition located along the line when varying the payoff parameters .more precisely , in the limit and . on the contrary , the system evolves into a state called `` tragedy of the commons '' ( if ) when and . in the case of finite noise ,the sharp ordered phases disappear and point defect can emerge resulting in intermediate values for cooperator frequency . notice that the transitions from the homogeneous phases to the symmetry breaking sublattice ordered structure follow similar scenario .namely , the stationary frequency of cooperators evolves toward a state ( where ) when approaching the critical point from the homogeneous regions while vanishing algebraically from the opposite direction .the width of the transition regime in both side of the critical point is proportional to .the sublattice ordering as a continuous transition belongs to the ising universality class .this means that the vanishing order parameter ( ) follows a power law behavior when approaching the critical point and simultaneously the correlation length , the relaxation time , and the magnitude of fluctuations diverge algebraically .the latter effects imply an increase in the uncertainty of the numerical data . to avoid this problem we used a significantly larger size ( typically ) and longer thermalization and sampling time ( mcs ) in the close vicinity of the transition point . using this methodwe could reduce the statistical error comparable to the line thickness .all the above mentioned three phases and also the main characteristics of the phase transitions are present for as demonstrated in fig .[ fig : q033 ] . the most striking difference between figs .[ fig : q00 ] and [ fig : q033 ] is the shift of the phase boundaries . as a consequence for the territory of the `` tragedy of the commons '' ( in the plane )is reduced .similar behavior was observed in a model where the evolution is controlled by pairwise collective strategy update .fundamentally different behavior is found for the fraternal players ( ) as illustrated in fig .[ fig : q05 ] .in this case the results depend only on and the system does not fall into the state of the `` tragedy of the commons '' . it is emphasized that in the zero noise limit the system evolves into the state providing the maximum total payoff ( compare figs . 
[fig : socdil]c and [ fig : q05 ] ) .the above numerical investigations were repeated for several noise levels , too .these numerical data have justified that in the zero noise limit the phase diagrams coincide with those we derived from stability analysis of the spatial patterns .to have a deeper understanding about the ordering process , we first study the stability of the sublattice ordered arrangement of strategies against a single strategy flip in the zero noise limit .the two possibilities are demonstrated in fig .[ fig : bulkinst ] . in the first case a defector reverses its strategyif the cooperation yields higher utility , that is , if < 4 \label{eq : afdtoc1}\ ] ] or after some algebraic simplification due to the absence of the second neighbor interactions all the defectors ( within the chessboard like structure ) are enforced to cooperate in the low noise limit if the condition ( [ eq : afdtoc ] ) is satisfied .consequently , in this case the sublattice ordered arrangement of cooperation and defection transforms into homogeneous cooperation .one can easily check that the appearance of a single defector in the state of homogeneous cooperation is favored if which is the opposite of the condition ( [ eq : afdtoc ] ) .the random sequential repetition of this type of point defect yields a poly - domain structure resembling to those plotted in fig .[ fig : domgrow ] . as mentioned before , this poly - domain structure is not stable for because the fluctuations changes the sizes of ordered domains and their vanishing is not balanced by the appearance of new domains. finally the system evolves into one of the sublattice ordered structure . from the above analysisone can conclude that the equation determines the position of the phase boundary separating the sublattice ordered phase from the homogeneous cooperation in the plane .this mathematical expression reflects clearly that the straight line phase boundary rotates anti - clockwise around the point ( ) from the vertical direction to the horizontal one when increasing the value of from 0 to 1 .the above described analysis can be repeated to study the stability of the sublattice ordered structure against a single strategy change from to as plotted on the right - hand side of fig . [fig : bulkinst ] .it is found that the sublattice ordered structure evolves into the homogeneous phase if and the resulting equation gives a boundary line separating the sublattice ordered phase from the homogeneous defection state in the payoff plane .this phase boundary is also a straight line rotating clockwise around the point from the horizontal ( ) to the vertical ( ) direction . the phase boundaries ( [ eq : afcpb ] ) and ( [ eq : afdpb ] ) divide the plane into four segments excepting the case when the given straight lines are parallel .for these segments are equivalent to those types of games indicated in figs .[ fig : socdil ] .the above stability analysis allows the existence of both the homogeneous and phases within the region of sh game .the simulations indicate the presence of both phases in a poly - domain structure during a transient period .the final result of the competition between these two ordered structure can be deduced by determining the average velocity of the boundary separating the regions of homogeneous cooperation and defection . for the present dynamical rulethe most frequent elementary changes are the shifts of a step - like interface as illustrated in fig .[ fig : invasion ]. 
evidently other elementary processes are also observable with a vanishing probability when .the latter elementary processes ( e.g. the creation of a new step ) can affect only the average velocity of the interface . using the above observation as a working hypotheseswe can derive a condition for the direction of invasion that is justified by the mc simulations .namely , along the horizontal interface the step moves right in the zero noise limit if .this yields invasion if > 2[1+(1-q)s+qt ] \ , .\label{eq : dinv}\ ] ] in the opposite case invasion is preferred and the system evolves into the state where all the players cooperate .thus , the position of the phase boundary separating the homogeneous and phases can be given by a straight line in the plane .here it is worth mentioning that the phase boundary determined by ( [ eq : cdpb ] ) coincides with those suggestion derived from the criterion of risk dominance favoring the selection of those strategy that provides higher utility if the co - player chooses her strategy at random .in other words , the present myopic strategy update ( on the square lattice ) can be considered as a realization of the criterion of risk dominance in the crucial local constellations because the players and are surrounded by two cooperators and two defectors .the results of the above stability analysis are summarized in figs .[ fig : multi ] .it is emphasized that the three phase boundaries [ given by eqs .( [ eq : afcpb ] ) , ( [ eq : afdpb ] ) , and ( [ eq : cdpb ] ) ] meet at a so - called tricritical point ( if ) and divide the plane into three parts in agreement with the expectations deduced from the mc results in the low noise limit .[ fig : multi ] illustrates graphically what happens if the value of is increased gradually .the territory of the homogeneous cooperation and the sublattice ordered structures expand in the plane at the expense of the homogeneous defection if . evidently this process is accompanied with a relevant increase in the total payoff for the payoff parameters involved .this process is saturated for when the system achieves the optimum total payoff for any payoff matrix . for further increase of the processis reversed within some territory of h and sh games where the system can evolve into the `` tragedy of the commons '' as demonstrated in fig .[ fig : multi ] for when the system ( effective payoff ) can be mapped to the case of selfish player by exchanging the payoffs .consequently , the overstatement of the other regarding preference may also results in a social dilemma .we have introduced a spatial evolutionary game with a myopic strategy update rule to study what happens if the players characters , regarding the target utility , are tuned continuously .the two extreme characters are the egoist players ( maximizing their own income irrespective of others ) and the completely altruistic players or lovers who try to maximize the others income irrespective of their own payoff .this feature is quantified by introducing a utility function composed from the player s and the co - player s incomes with suitable weight factors .it turned out that all the relevant results can be explained by considering an effective payoff matrix in agreement with previous investigations . despite its simplicitythis model indicated clearly the importance of the fraternal behavior .namely , the highest total income is achieved by the society whose members share their income fraternally . 
any deviation from the fraternal behavior can result in the emergence of the `` tragedy of the commons '' within a suitable region of payoff parameters even for altruistic players characterized by the other - regarding preference . finally , to exemplify the lover s dilemma we end this work by citing the opening sentences of the paper by : `` there is a famous story written by ohenry about a poor young couple in love at christmas time ( ' ' the gift of the magi `` ) .neither has any money with which to buy the other a present , although each knows what the other wants .each has only one prized possession : he , his father s gold pocket watch ; she , her beautiful long hair . he has long been coveting a gold watch fob , while she has long admired a pair of tortoise shell hair combs in a nearby shop .the conclusion of the story is the exchange of gifts , along with a description of their means of getting the money for their purchases .he gives her the combs ( having pawned his watch to raise the money ) .she gives him the watch fob ( having cut and sold her hair ) . ''the mentioned example is discussed exhaustively in the text book by .
|
spatial evolutionary games are studied with myopic players whose payoff interest , as a personal character , is tuned from selfishness to other - regarding preference via fraternity . the players are located on a square lattice and collect income from symmetric two - person two - strategy ( called cooperation and defection ) games with their nearest neighbors . during the elementary steps of evolution a randomly chosen player modifies her strategy in order to maximize stochastically her utility function composed from her own and the co - players income with weight factors and . these models are studied within a wide range of payoff parameters using monte carlo simulations for noisy strategy updates and by spatial stability analysis in the low noise limit . for fraternal players ( ) the system evolves into ordered arrangements of strategies in the low noise limit in a way providing optimum payoff for the whole society . dominance of defectors , representing the tragedy of the commons , is found within the regions of prisoner s dilemma and stag hunt game for selfish players ( ) . due to the symmetry in the effective utility function the system exhibits similar behavior even for that can be interpreted as the `` lovers dilemma '' . evolutionary games , social dilemmas , selfishness , fraternity
|
social media has become an important tool for public engagement .businesses are interested in engaging with potential buyers on platforms such as facebook or their company news pages ; bloggers are publishers are interested in increasing their follower and reader base by writing captivating articles that prompt high levels of user feedback ( in the form of likes , comments , etc . ) .such groups will inevitably need to understand the types of content that is likely to elicit the most user engagement as well as any underlying patterns in the data such as temporal trends .specifically , we focus on predicting the number of feedbacks ( i.e. , comments ) that a blog post is expected to receive . given a set of blog documents that appeared in the past , for which the number and time stamp received, the task is to predict how many feedbacks recently published blog - entries will receive in the next hours . a first challenge in answering this question is how to effectively pre - process the data .despite offering a rich source of information , such data sets are usually noisy , high - dimensional , with many correlated features . in this paper , we focus on two unsupervised learning approaches for pre - processing : the traditional method of principal component analysis ( pca ) , and a more recently - developed method from deep - learning , the sparse autoencoder .these pre - processing methods are generally used to reduce dimensionality , eliminate correlations among variables ( since there may be a number of irrelevant features in the data set ) , decrease the computation time , and extract good features for subsequent analyses .pca linearly transforms the original inputs into new uncorrelated features .the sparse autoencoder is trained to reconstruct its own inputs through a non - recurrent neural network , and it creates as output new features from non - linear combinations of the old ones .we compare the effects of these two methods on two prediction models of linear regression and regression trees in the feedback prediction task . the rest of the paper proceeds as follows .section includes a short review of related literature .section describes the data set used .section discusses the unsupervised feature learning methods of pca and sparse autoencoder .section gives comparative results from different models for the unprocessed data , and the pre - processed data using pca and sparse autoencoder .section concludes and opens future research directions .section acknowledges help .al . ] uses the same data set as we do and compares a variety of models to predict the number of future feedbacks for a blog .they consider two performance metrics : area under curve explained ( auc ) , and the number of blog pages that were predicted to have the largest number of feedbacks out of the top ten blog pages that had the highest number of feedbacks in reality .buza ] , but approach the problem differently .specifically , we use unsupervised feature learning techniques to pre - process the data before running prediction models .this data set is made available from the uci machine learning repository , and comprises a total of crawled blog pages .the prediction task is to predict the number of comments for a blog post in the upcoming hours .the processed data has a total of features ( without the target variable , i.e. , number of feedbacks). ] . 
in both tests ,the gap formula has the same form - log(w_k),\ ] ] where is the logarithm of some objective function corresponding to the first principal components and ] and calculate their standard deviation .we obtain such values , and call them ] . in this paper, we implement a -layer autoencoder ( figure [ autoencoder_diagram ] ) in order to learn a compressed representation ( encoding ) of the features .each ` neuron ' ( circle ) represents a computational unit that takes as input , , ... ( and a " intercept term , called a bias unit ) , and outputs where is called the transfer ( activation ) function .we choose to be the hyperbolic tangent ( tanh function ) .the tanh function was chosen instead of the sigmoid function since its output range , [ -1,1 ] , more closely approximates the range of our predictor variable than the sigmoid function ( range is [ 0,1 ] ) .the tanh activation function is given below : the leftmost layer of the network is called the input layer , and the rightmost layer the output layer .the middle layer of nodes is called the hidden layer since its values are not observed in the training set .-layer autoencoder architecture .] the autoencoder tries to learn a function .in other words , it is trying to learn an approximation to the identity function , so as to output that is similar to .the sparse autoencoder is the autoencoder with the sparsity constraint added to the objective function . in other words ,the objective function of the sparse autoencoder is given by the reconstruction error with regularization : where is the number of training samples , is the number of layers ( in our case ) , and is the number of units in layers . the first term , is an average sum - of - squares error term .the second term is a regularization ( ) term that tends to decrease the magnitude of the weights , and helps prevent overfitting .the weight decay parameter controls the relative importance of the terms .the sparsity parameter controls how sparse the autoencoder is .this neural network is then trained using a back - propagation algorithm , where the objective function is minimized using batch gradient descent .although the identity function seems like a trivial function to be trying to learn , by placing constraints on the network , such as the weight decay , the sparsity parameter , and the number of hidden units , it is possible to discover interesting structure about the data .when we use a few hidden units , the network is forced to learn a compressed representation of the input .if there is some structure in the data , for example , if some of the input features are correlated , the algorithm will be able to discover some of those correlations , so that the ( sparse ) autoencoder often ends up learning a low - dimensional representation similar to pca .in order to determine the optimal values of the hyper - parameters and the number of hidden units , we split the data into training , validation and test sets and perform a grid search over the parameter space of the number of units ( ] ) while using the default value for of .the reason for not cross - validating over and for using small sets of possible values for and the number of hidden units is the high computational cost . 
for each value of , we use -fold cross - validation on the training data to select the optimal number of hidden units .we then calculate the root mean squared error ( rmse ) on the validation data and choose the value for the weight decay corresponding to the smallest rmse .this way , we obtain the optimal weight decay and the optimal number of units in the hidden layer .the optimal weight decay was found to be .for this , we plot the rmse over the number of units in the hidden layer in figure [ rmse_val ] and obtain as the optimal number of units . with the determined optimal values of the weight decay , the sparsity parameter , and the number of hidden units, we can also determine the estimated weight matrices and biases .we then fit the training and test data through these weights and biases to obtain the processed data for subsequent analysis .we use two models to predict the number of feedbacks : linear regression ( a linear model ) and regression tree ( a non - linear model ) .we compare the test rmse achieved using the output from pca and sparse autoencoder for each model . as a baseline, we use the centered and scaled data as input to the models . for pca , we use the projected data onto the components , and for the sparse autoencoder , we use the optimal number of units in the hidden layer , as found by cross - validation .results are summarized in table .table : rmse on test set of different models and methods . [ cols="<,^,^",options="header " , ] for linear regression, pca achieves an improvement compared to the baseline model while the sparse autoencoder achieves a improvement . for the regression tree model, pca achieves a improvement compared to the baseline model .the sparse autoencoder , however , performs worse than the baseline model ( an increase in rmse from to ) .unsupervised feature learning in the pre - processing step hence generally improves the prediction accuracy . taking into account the small range of the outcome variable after scaling and centering , the improvements are certainly significant .for the linear regression model the sparse autoencoder outperforms pca .this is likely because the sparse autoencoder solves many of the drawbacks of pca : pca only allows linear combinations of the features , restricting the output to orthogonal vectors in feature space that minimize the reconstruction error ; pca also assumes points are multivariate gaussian , which is most likely not true in many applications , including ours .the sparse autoencoder is able to learn much more complex , non - linear representations of the data and thus achieves much better accuracy .an interesting pattern can be observed in the interactions between the linearity and non - linearity of the models and the feature learning methods .the non - linear feature selection method ( sparse autoencoder ) achieves significant improvement in rmse for the linear model ( linear regression ) , while the linear feature selection method ( pca ) performs best for the non - linear model ( regression tree ) . 
combining the non - linear regression tree model with the non - linear sparse autoencoder, however , leads to worse results than the baseline .this is likely because regression trees have a top - down construction , splitting at every step on the variable that best divides the data .the sparse autoencoder returns non - linear combinations of features , making it difficult for regression trees to anticipate .in addition , each tree samples a subset of variables to split on while sparse autoencoders are known to be sensitive to parameter selection , and hence are not optimized to predict on subsets of the predictor variables , leading to unstable results and poorer performance .we show empirically that using unsupervised feature learning to pre - process the data can improve the feedback prediction accuracy significantly .these results should be of interest to businesses and publishers in pre - screening or editing their social - media posts prior to publicizing so as to estimate the level of engagement they would be expected to achieve . for instance , an automatic editor may flag posts with low predicted user engagement and draw attention to the writer that revisions may be needed .two directions of future work are immediately obvious .first , we can extend the work by trying other unsupervised feature learning methods such as ica and kernel pca in order to better understand how these methods ( linear versus non linear ) interact with the model type and to what extent our observations can be generalized .second , we would be interested in investigating whether results can be further improved by using different transfer functions and additional hidden layers in the sparse autoencoder ( i.e. , stacked sparse autoencoder ) in order to better capture the time - series aspects of the data set. we may also want to compare the feature learning methods on other models such as svm , boosting , random forests , etc . + * acknowledgements * : we thank robert tibshirani for helpful comments .
|
in this paper , we investigate the effectiveness of unsupervised feature learning techniques in predicting user engagement on social media . specifically , we compare two methods to predict the number of feedbacks ( i.e. , comments ) that a blog post is likely to receive . we compare principal component analysis ( pca ) and sparse autoencoder to a baseline method where the data are only centered and scaled , on each of two models : linear regression and regression tree . we find that unsupervised learning techniques significantly improve the prediction accuracy on both models . for the linear regression model , sparse autoencoder achieves the best result , with an improvement in the root mean squared error ( rmse ) on the test set of over the baseline method . for the regression tree model , pca achieves the best result , with an improvement in rmse of over the baseline .
|
particle laden flows and their numerical simulation are of interest in a wide range of engineering applications as well as in fundamental research .a frequently used approach for the simulation of dynamic granular materials is the discrete element method ( dem ) , cf . . the linear and angular momentum balance of the particlesis solved to obtain translational and rotational velocity . the hydrodynamic interaction between particlesis often neglected or fluid forces are accounted for by simple empirical correlations and the particle interaction modelled using macroscopic collision models .the accurate numerical modelling of the collision process is crucial for the quality of the simulation in a vast regime of parameters .several numerical models for the collision process between particles and for the collision of particles with walls have been developed in the framework of the dem .these models can be divided into two groups : hard - sphere models and soft - sphere models .the hard sphere approach quasi - instantaneous .the post - collisional velocities are calculated from momentum conservation between the states before and after surface contact .the reader is referred to for details .soft - sphere models usually require an excessively small time step if physically realistic material parameters are matched . in the soft - sphere approach the motion of the particlesis calculated by numerically integrating the equations of motion of the particles accounting for the contact forces acting on them .typical for all soft - sphere models is that very small time steps must be used to ensure that for reasons of stability and accuracy the step size in time is substantially smaller than the duration of contact .the soft - sphere contact forces are usually based on linear and non - linear spring damper models , reviews of which may be found in a commonly used model for time - resolved particle interactions in numerical simulations is a hertzian contact force in combination with a linear damping . however , in contrast to linear spring models , no closed solution exists for this equation .one is hence forced to integrate numerically with a very small time step .this issue even leads some researchers to prefer linear spring models which are much easier to evaluate .the discussion whether the linear or the non - linear approach is to be preferred seems unsettled in the community so far , and the present paper does not aim to compare these or to advocate one or the other . instead, the mathematical properties of the equation of damped hertzian contact are discussed and an efficient engineering approximation is proposed so as to reduce the cost of this model .this can enhance the efficiency of dems employing the physically more realistic non - linear approach .due to the time - step reduction required by many soft - sphere models , dem practitioners modify the material parameters of the collision model to this problem and allow the use of larger time steps .the idea is to make the collision process softer hence longer in time , at the price of increased numerical overlap of particles during collisions . except for the linear models ,the adjustment of parameters in the soft - sphere model is usually performed by trail and error . this is time consuming and prone to a sub - optimal choice of values . a current trend in the modelling of particulate flowsis that dems are enhanced by representations of the viscous effects of the continuous phase around the particles , cf . . 
in this framework ,hard - sphere models are inapplicable as they can not properly account for the coupling to the surrounding viscous fluid , thus introducing substantial numerical errors . here ,soft - sphere models are required , with the drawbacks discussed above . as a remedy , a systematic strategy was recently proposed to determine the parameters for a softened contact model .it is based on imposing the duration of the contact between the particles during collisions according to some external constraint , such as a pre - selected time step .the coefficients in the model are then determined so as to maintain the exact restitution coefficient .this guarantees maximal physical realism under given constraints imposed by computational resources .the approach , termed adaptive collision time model ( actm ) , was implemented and tested with particles in viscous fluids for single collisions as well as for multiple simultaneous collisions . beyond that, the approach is very interesting for pure dem without viscous fluid , as it provides an automated systematic approach to regularizing the collision process .another substantial advantage of this approach is that the original physical values of the coefficients can be introduced as bounds so that the original model is obtained again in a regular limit when the collision time is sufficiently reduced .this provides optimal commodity for the user . in a simulation with many particles ,each collision takes place with different velocities of the collision partners .hence , when imposing duration of contact an restitution coefficient , one is forced to select the model coefficients for stiffness and damping for each collision individually .if no closed solution is available , this requires an iterative procedure as indeed used so far . in the present paper, this is now improved by providing a direct solution to this problem , based on a systematically controlled approximation .the increased efficiency is demonstrated by suitable test cases and comparison to the original method .the paper is structured as follows .first , an exact formal solution of the equation of motion for a normal linarly damped hertzian collision is derived using nonlinear transformations and a parametric series expansion .then , a rigorous calculation of the collision time and restitution coefficient is carried out . 
, compact analytical approximations are developed formul allow the direct computation of the physically relevant parameters collision time and restitution coefficient from the intrinsic material parameters .afterwards , the inverse problem is addressed .the artificial lengthening of the collision time while preserving the restitution coefficient requires the computation of the appropriate stiffness and damping .finally , numerical tests demonstrate the accuracy and efficiency of the proposed algorithm , including test runs in typical engineering settings .particle deformation during contact is represented here by the overlap of the undeformed particle with the collision partner .the equation of motion governing the surface penetration during the contact phase with the mass of the particle and .the overdot represents differentiation with respect to time .the second term is the nonlinear restoring force originally derived by .the first term on the right hand side of corresponds to the damping , which is assumed to be .the initial conditions at the beginning of the collision are .equation is more conveniently expressed in dimensionless form by defining new variables with , which is tantamount to fixing the characteristic unit of velocity for the system as and choosing the ( at present arbitrary ) unit of time .the first and second derivatives of can be expressed as and following , the collision time is obtained as the strictly positive root of , i.e. the point of time when the surface penetration of the particle - wall system returns to zero .the restitution coefficient is then conveniently defined as the latter expression makes clear that the only way the restitution coefficient may depend on material and other input parameters is as a function of the parameter introduced above .physically , one may interpret this as follows : all linearly damped normal hertzian particle - wall collisions are , up to a characteristic parameter , similar .this characteristic parameter can be the restitution coefficient , which is easy to determine experimentally but can not be calculated trivially from known material properties and initial conditions .the other option , introduced here , is , which can not be measured directly but is readily calculated . at the heart of the present workis the convenient analytical conversion between the two .the case corresponds to the undamped case considered by , which is simply a conservative system , i.e. , and is integrated readily .furthermore , gave the collision time as {\frac{25m_{\rm_p}^2}{16k^2u_{\rm in}}}\:,\ ] ] with denoting the gamma function .yields in dimensionless form .the purpose of this section is to calculate rigorously from first principles , i.e. starting from the equation of motion and using mathematically justifiable techniques , the physical parameters pertaining to the characterization of collisions in the chosen model .such results are not only useful for the purpose of theoretical investigation , but are also instructive from the perspective of the analytical technique . since are , to the best of our knowledge , not available in the existing literature , in the following . due to the fractional power of in and the initial condition ,a naive power series solution for in is not possible .furthermore , it is not desirable at all to have a series representation in , since it is not a variable that is naturally small .on the other hand , especially in practical applications , very low values of are unlikely to be of interest . 
by consequence, is a parameter that may be assumed small for practical purposes . indeed , numerical examples show that if a lower bound of on , may be safely assumed .thus , in the following , the solution of the equation of motion is expressed using series in .it will be found that the relevant mathematical operations are facilitated when the problem is transformed to phase space , the dimensionless velocity can be interpreted as a function of the penetration , i.e. . using the relation the equation of motion expressed in these new variables reads in passing , observe that if the damping term were absent , this would reduce to an exact differential , with the solution given by the level curve of a potential function which readily interpret as the conserved total energy of the system .energy method was by when the undamped case .the analysis here , however , needs to be more involved . where ` in ' refers to the part of the collision during which the particle is compressed and the motion is directed into the wall , while ` out ' describes the outward motion of the particle ( figure [ fig : zt_ph ] ) .finally , , whence . inserting these new variables into the initial value problems for assumed small , : differentiating times with respect to and evaluating at successively yields the equations determining for .for , this gives the higher order corrections are given by since the -st derivative of at can only involve , equations and yield well - defined recursions for .furthermore , it is readily seen by mathematical induction , that have a convergent taylor series expansion around , and that the convergence radius is . from conservation of energyhowever , it can be deduced that , i.e. can be represented by a power series on the whole domain of interest .a physical interpretation of these terms is discussed in [ app : physint ] .note also that derivatives of appear explicitly in the expressions for .this does not imply that needs to be known _ a priori_. rather , it will be seen later that is uniquely determined by certain conditions that need to be fulfilled in order for the trajectory to be continuous in time and space . to find a connection to the time domain, one may exploit the fact that to find with the maximum surface penetration .this solves the problem in the classical sense . in order to obtain an explicit relation of the form , one would have to take the functional inverse and square the resultant expression .however , since it does not contribute to the calculation of the relevant physical properties and , that part of the problem , though utterly non - trivial , will not be addressed . with the above solution of the equation of motion at hand, it is now possible to calculate the significant physical quantities .although the two primary parameters of interest are the restitution coefficient and collision time , the calculation of the maximum penetration is required as an intermediate step . from the definition of , it follows that , the more tractable condition with the maximum penetration .this equation uniquely determines the smallest positive real root of .since the solution trajectory is required to be continuous , the velocity on both the inward and outward trajectory must go to zero at the same value ielding the condition this uniquely determines .finally , the collision time can be computed directly as obviously , each of the above quantities depend on . 
for , the classical case of the undamped hertzian restoring forceis obtained , for which the solutions are for small , therefore , it is natural to develop the solutions in terms of a power series in around , whereby the respective governing equations are successively differentiated at using implicit differentiation , with the chain rule used to evaluate the taylor coefficients . [[ maximum - surface - penetration ] ] maximum surface penetration + + + + + + + + + + + + + + + + + + + + + + + + + + + to zeroth order , as established above , condition yields the result {5/4 } + \mathcal{o}(\lambda) ] with .periodic boundary conditions were applied in the - and -direction .the gravitational acceleration is and the density of the particles is , with the particle diameter .the mobile particles are placed randomly in the subdomain \times [ 0.3 , 1.2 ] \times [ 0.1 , 1.4]$ ] velocity initialized with zero .once particles are released from their initial position ( figure [ fig:100_on_hex_sketch]a ) , they are accelerated by gravity towards the fixed layer and then collide with the bed or with other mobile particles .at the same time they are subjected to a dissipation of kinetic energy .two different cases are considered here . in case 1 , the coefficient of restitution is and in case 2 the is .the simulations were run with for 5000 steps corresponding to a non - dimensional simulation time of .this situation is depicted in figure [ fig:100_on_hex_sketch]b for case 1 .the particles are coloured according to the absolute value of their velocity from red ( ) to blue ( ) showing that not all the particles come to rest at the end of the simulation if the damping is weak .the collision process is elucidated by computing the various components of the total energy of the particles .the potential energy , the kinetic energy and the energy stored by deformation of the springs in the collision model are defined as and respectively .the total energy in the computational domain then is the fractions of energy are displayed in figure [ fig:100_on_hex_sketch_energy]a and figure [ fig:100_on_hex_sketch_energy]b for case 1 and 2 , respectively . as already mentioned above , in case 1 the particles not come to rest at the end of the simulation , reflected by their kinetic energy not being zero at the end of the simulation .in contrast to this , the kinetic energy of the particles is zero for case 2 .obviously , no significant differences of the various fractions of the energy are observed if the iterative or if the approximate method is used . to further elucidate the efficiency of the two numerical procedures , the cpu times of the and the scheme are in table [ tab : times_actm ] .here , is the overall cpu time required for the whole simulation , the time spen in the particle routines , is the part of required for the determination of stiffness and damping , the overall number of collisions and , finally , is the average time required per collision . 
obviously , the numerical effort is substantially reduced for all values of if the new approximate method is used .the results presented in this section confirm the accuracy , robustness and efficiency of the proposed method .in this paper , normal particle - wall and particle - particle collisions were studied the elastic interaction a repulsive hertzian contact force and an additional damping force linear in the velocity .first , the equation of motion was converted to its dimensionless form .this reduces the number of parameters in the equation to a single constant depending on the material parameters and the impact velocity . in particular , the physically relevant and practically interesting properties , the collision time ( dimensionless : ) and restitution coefficient , hence ,only depend on this particular combination of physical input parameters .this is conceptually and technically analogous to the damping ratio , or its inverse , the quality factor , which are familiar from the simple harmonic oscillator , the latter more so from circuits in electrical engineering but appears to be new in the present setting of damped hertzian collisions . in contrast , earlier works typically use a three - parameter family of input variables . while the use of to label cases is helpful in practice , the proposed nondimensionalisation and the subsequent analytical investigation establishes in a straightforward manner the one - to - one correspondence between the important experimental parameter and the parameter , which is easy to obtain from material parameters .furthermore , it demonstrates that different parameters with the same have solutions that are identical up to a linear scaling of space and time .next , an exact formal solution of the governing equation using nonlinear transformations and a series in the parameter was proposed , a rigorous calculation of and from the equation of motion . owing to the technical difficulties presented by the governing equation ,the analysis is necessarily involved , the methods and technique employed may be instructive , the results may be of use in theoretical considerations .the approach is also quite flexible , and a similar analysis would be applicable to more general settings , such as the extension to fully nonlinear models , i.e. nonlinear spring force in combination with a nonlinear dissipative force some studies . this may be the subject of future investigation .subsequently , compact and convenient formul for and .inverse formul were derived on that basis , enabling a very efficient and accurate calculation of the required stiffness and damping from the given input parameters , i.e. the desired collision time and restitution coefficient .these were then applied to engineering context .the scheme in the actm replaced by the direct approximate solution developed here .numerical tests on binary and multiple particle collisions confirm the accuracy and efficiency of the proposed method .the computations in section [ sec : num - test ] were performed at zih , tu dresden .the authors thank prof .dr . 
ralph chill and prof .jrgen voigt for helpful discussions on an early status of this work .the first author wishes to express his gratitude to the martin - andersen - nex - gymnasium dresden for providing the appropriate framework within its school curriculum to carry out parts of the research .t may be instructive to the physical meaning of the coefficients in the series expansion in powers of of , equations .first , it is readily seen that as obtained in .the fact that the two expressions are equal indicates reversibility .indeed , for , the classical oscillation problem with hertzian restoring force is obtained , which is a conservative system . moreover , upon inspection of the functional dependence , it is evident that the zeroth - order term expresses conservation of mechanical energy itself .he kinetic energy per unit mass and in dimensionless form is given by depending on which part of the trajectory is considered .thus , the auxiliary variables are proportional to the kinetic energy . the potential energy associated with the hertz contact force ( dimensionless and per unit mass ) is for both the inward and outward motion of the particle , the conservation of energy is valid to zeroth - order in . ow consider the more involved first - order corrections . both integrals can be evaluated analytically to give with denoting the hypergeometric function ( cf .* chapter 15 , pp .555 ) . a similar expression results for .the integral representations are actually more conducive for physical interpretation .indeed , both expressions can be interpreted as work done by the dissipative force . in case of , there is an added constant term , because the initial condition depends on , unlike the much more straightforward case .for this reason , the + , or inward , branch of the trajectory in the following . since , on the other hand , from , it follows that to first order in , the work done by the damping force is thus , are essentially first - order corrections in the energy balance due to the work done by the dissipative term in the equations of motion .hoomans , b. p. b. , kuipers , j. a. m. , briels , w. j. , van swaaij , w. p. m. , 1996 .discrete particle simulation of bubble and slug formation in a two - dimensional gas - fluidised bed : a hard - sphere approach .51 , 99108 .kempe , t. , vowinckel , b. , frhlich , j. , 2014 . on the relevance of collision modeling for interface - resolving simulations of sediment transport in open channel flow .j. multiphase flow 58 , 214235 .kruggel - emden , h. , wirtz , s. , scherer , v. , 2008 .selection of optimal models for the discrete element method : the single particle perspective . in : asme 2008 pressure vessels and piping conference .american society of mechanical engineers , pp .123135 .kruggel - emden , h. , wirtz , s. , scherer , v. , 2009 .applicable contact force models for the discrete element method : the single particle perspective .journal of pressure vessel technology 131 ( 2 ) , 024001 .luding , s. , clment , e. , blumen , a. , rajchenbach , j. , duran , j. 
, nov 1994 .anomalous energy dissipation in molecular - dynamics simulations of grains : the `` detachment '' effect .e 50 ( 5 ) , 41134122 .( 7,7.7 ) ( 0.5,0.2 ) ) with being the non - dimensional maximum surface penetration .a ) physical space , b ) phase space , title="fig:",scaledwidth=57.0% ] ( 10.0,0.2 ) ) with being the non - dimensional maximum surface penetration .a ) physical space , b ) phase space , title="fig:",scaledwidth=40.0% ] ( 5.0,-0.0 ) _ a _ ) ( 13,-0.0 ) _ b _ ) ( 7,7 ) ( 0,0 ) ( case 1 ) .a ) initial configuration at , b ) situation at the end of the simulation .the particles are coloured by the absolute value of their velocity from red ( ) to blue ( ).,title="fig:",scaledwidth=45.0% ] ( 8.5,0 ) ( case 1 ) .a ) initial configuration at , b ) situation at the end of the simulation .the particles are coloured by the absolute value of their velocity from red ( ) to blue ( ).,title="fig:",scaledwidth=45.0% ] ( 0.5,0.2 ) a ) ( 8.9,0.2 ) b ) .summary of test runs binary particle - particle collisions .the results achieved by the method presented in section [ subsec : subroutine ] ( heading : present ) are compared with those employing the proposed by .the situation simulated is depicted in figure [ fig : sketch_collision ] with initial condition , particle density and radius .the pre - set collision time was and the target restitution coefficient was varied as documented in the first column .the cases with are fictitious , in the sense that they are unlikely to be of practical interest as of now , and are included to show the integrity of the presented method even in such extremes .[ cols=">,>,^ , < , < , < , < , < , < , < , < , < , < , < " , ]
|
in this paper the normal collision of spherical particles is investigated . the particle interaction is modelled in a macroscopic way using the hertzian contact force with additional linear damping . the goal of the work is to develop an efficient approximate solution of sufficient accuracy for this problem which can be used in soft - sphere collision models for discrete element methods and for particle transport in viscous fluids . first , by the choice of appropriate units , the number of governing parameters of the collision process is reduced to one , a dimensionless parameter that characterizes all such collisions up to dynamic similitude . next , a rigorous calculation of the collision time and restitution coefficient from the governing equations , in the form of a series expansion in parameter . such a calculation based on first principles is particularly interesting from a theoretical perspective . since the governing equations present some technical difficulties , the methods employed are also of interest from the point of view of analytical technique . using further approximations , compact expressions for the restitution coefficient and the collision time are then provided . these are used to implement an approximate algebraic rule for computing the desired stiffness and damping in the framework of the adaptive collision model ( kempe & frhlich , journal of fluid mechanics , 709 : 445 - 489 , 2012 ) . numerical tests with binary as well as multiple particle collisions are reported to illustrate the accuracy of the proposed method and its superiority in terms of numerical efficiency . particle - laden flow , , collision modelling , hertzian contact
|
the certification of quantum devices is an important strand in current research in quantum information .research in this direction is not only of relevance to quantum information but also the foundations of quantum theory : what are the truly quantum phenomena ?for example , if presented with devices as black boxes that are claimed to contain systems associated with particular quantum states and measurements , we can certify these claims by demonstrating quantum non - locality , i.e. by violating a particular bell inequality .the obvious aspect of quantum non - locality that is useful for quantum information is that it can certify quantum entanglement .while this is relevant for the certification of the presence of quantum entanglement , if we wish to certify a particular state and measurement we need more information .more specifically , given a particular violation of a bell inequality , can we infer the state and measurements ?the amount of information necessary to certify a particular state once entanglement is certified has been discussed in ref .let us consider the specific example of the clauser - horne - shimony - holt ( chsh ) inequality .it can be shown that ( up to local operations that will be specified later ) the only state that can maximally violate the chsh inequality is the maximally entangled two - qubit state .furthermore , if we are close to the maximal violation , then we are also close to this maximally entangled state ( for appropriate notions of closeness ) . results in this direction are referred to as _ robust self - testing _ ( rst ) such that a near - maximal violation of a bell inequality robustly self - tests a state .we can also robustly self - test measurements performed on a state therefore equipping us with certification techniques for both states and measurements .to be more concrete , rst is possible if the correlations we observe in a bell test are -close to some ideal correlations such as those maximally violating a bell inequality then we can infer that the state used in the bell test is -close to our ideal state .the notion of closeness will be expounded upon later but for correlations we often talk about the difference between the maximal bell inequality violation and the violation obtained in the experiment , and for quantum states , we refer to the trace distance .this quadratic difference in the distance measures can not be improved upon if we only have access to the correlations . in this direction ,a bounty of results have emerged .there are now analytical methods for robustly self - testing greenberger - horne - zeilinger ( ghz ) states , graph states , partially entangled two - qubit states and the so - called w state .in addition to this , numerical robust self - testing methods were developed that allow for using arbitrary bell inequalities . also , it is worth noting that by simply and directly considering the correlations produced in the experiment , numerical methods developed in refs . can also be tailored to these considerations .it is now well - established that the violation of a bell inequality is not the only method for detecting entanglement in general .it is the appropriate method if one only has access to measurement statistics , i.e. the devices are treated like black boxes .clearly , if we have direct access to the quantum state ( e.g. 
the devices are trusted ) , we can do full state tomography to see if it is an entangled state .there does exist a third option , if a provider claims to produce a bipartite entangled state and sends one half of the state to a client who wants to use the state .we can assume that the client trusts all of the apparatus in their laboratory and can thus do state tomography on their share of the system .this set - up corresponds to the notion of _ epr - steering _ in the study of entanglement , where epr represents einstein - podolsky - rosen in tribute to their 1935 original paper .a natural question is whether one can perform robust self - testing in such a scenario ?this is obviously true since we can use the violation of a bell inequality between the client and provider .a better question is whether it is vastly more advantageous to consider self - testing in this scenario ? in this work , we address this question . before describing the work in this paper ,we would like to motivate this scenario from the point - of - view of quantum information .in particular , studying these epr - steering scenarios may be useful when considering _ blind quantum computing _ where a client has restricted quantum operations and wishes to securely delegate a computation to a server " that has a full - power quantum computer . by securely , we mean that the server does not learn the input to the computation nor the particular computation itself . in this framework , the client trusts all of his quantum resources but distrusts the server .epr - steering has also been utilised for _ one - sided device - independent quantum key distribution _ where the one - sided " indicates that one of the parties does not trust their device but the other does .there have even been experimental demonstrations of cryptographic schemes in this direction .also in this one - sided device - independent approach , the detection loophole is less detrimental to performing cryptographic tasks as compared with full device - independence so it is more amenable to current optical experiments .since one party ( the client ) now trusts all systems in their laboratory , they can perform quantum state tomography ; after all , they know the hilbert space dimension of their quantum systems and can choose to make measurements that characterise states of that particular dimension .this novel aspect of epr - steering as compared to standard non - locality introduces a novel object of study , called the assemblage : the reduced states on a client s share of some larger states conditioned on measurements made on the provider s side .an element of an assemblage is then a sub - normalized quantum state and we can now also phrase robust self - testing in terms of these objects , which we call _ robust assemblage - based one - sided self - testing _ ( ast ) with one - sided " to indicate there is one untrusted party .in essence , we show that ast can be achieved and the experimental state is at least -close to an ideal state if the observed elements of an assemblage are -close to the ideal elements ( where distance in both cases is the trace distance ) .this is in addition to considering the correlations between the client and provider obtained from performing a measurement on the elements of an assemblage , which we call _ robust correlation - based one - sided self - testing _ ( cst ) the notions of robustness are the same as for rst .conventional rst based on bell inequality violation implies cst so in the latter scenario we will never do any worse 
than in the former .furthermore , cst implies ast so the latter truly captures the novel capabilities in the formalism . in this work , for particular situationswe show both analytically and numerically that one can do better in the framework of cst and ast as compared to current methods in rst .this is to be expected since by trusting one side , we should have access to more information about our initial state . on the other hand ,we show that the degree of the improvement is not as dramatic as we would like .in particular , if the assemblage is , in some sense , -close to the ideal assemblage , we can only establish -closeness of our operations to the ideal case .this quadratic difference is also shown to be a general limitation and not just a limitation of our specific methods . in this way , from the point - of - view of self - testing , epr - steering behaves much like quantum non - locality .we indicate where ast and cst could also prove advantageous over rst and this is in the case of establishing the structure of sub - systems within multi - partite quantum states .that is , in certain rst proofs a lot of work and resources goes into establishing that untrusted devices have quantum systems that are essentially independent from one another .in addition to considering the self - testing of a bipartite quantum state , we show that one can get further improvements by establishing a tensor product structure between sub - systems .this could be where the essential novelties of ast and cst lie .aside from work in the remit of self - testing there is other work in the direction of entanglement verification between many parties .for example , pappa _ et al _ show how to verify ghz states among parties if some of them can be trusted while others not .their verification proofs boil down to establishing the probability with which the quantum state passes a particular test given the state s distance from the ideal case .this can be seen as going in the other direction compared to cst , where we ask how close a state is to ideal if we pass a test ( demonstrating some ideal correlations ) with a particular probability .our work thus nicely complements some of the existing methods in this direction .another line of research that is related to our own is to characterise ( non - local ) quantum correlations given assumptions made about the dimension of the hilbert space for one of the parties .this assumption of limiting the dimension is a relaxation of the assumption that devices in one of the parties laboratories are trusted .these works are relevant for _ semi - device - independent quantum cryptography _ and _ device - independent dimension witnesses _ in sec .[ sec1 ] we outline the general framework , introduce cst and ast and introduce the methods which will be relevant .given our framework , in sec .[ sec2 ] we demonstrate how to self - test the maximally entangled two - qubit state and give analytical and numerical results demonstrating an improvement over conventional rst . in sec .[ sec3 ] we briefly discuss the self - testing of multi - partite states and give numerical results showing how the ghz state can be self - tested .we also discuss how one could exploit tensor product structure on the trusted side to aid self - testing .we conclude with some general discussion in sec .in this section we introduce the framework in which our results will be cast . for brevity we will restrict ourselves to the case of two parties each with access to some devices . 
in sec .[ sec3 ] we will extend the framework to more - than - two parties . in our setting( see fig .[ fig : fig1 ] ) , one of the parties is the client and the other is the provider and the two of them share both quantum and classical communication channels and all devices are assumed to be quantum mechanical .therefore we can associate the parties with the finite - dimensional hilbert spaces and for the client and provider respectively .the quantum communication channel is used to send a quantum system from the provider to the client and the client will then perform tomography on this part of the state . after the provider has communicated a quantum system , there will be some joint quantum system and the client can now ask the provider ( using the classical communication channel ) to perform measurements on their share of the system ; the outcome is then communicated to the client . and generate an outcome labelled by all the while treating the provider s measurement device and the source as a black box .the dotted lines denote classical channels , while full lines represents a quantum channel.[fig : fig1],scaledwidth=33.0% ] in this work we assume that the provider gives the client arbitrarily many copies of the subsystem such that they can do perfect tomography on their quantum system. we will not consider complications introduced by only having access to finitely many systems .this is a standard assumption in many works on self - testing and we will comment on relaxing this assumption in sec .[ sec4 ] . after the provider sends a quantum system to the client they share a quantum state , a density matrix acting on the hilbert space .crucially , in our work , the dimension of the hilbert space is known but the space can have an unrestricted dimension since we do not , in general , trust the provider .therefore , without loss of generality , the density matrix is associated with a pure state since we can always dilate the space to find an appropriate purification . after establishing the shared state , the client asks the provider to perform a measurement from a choice of possible measurements .these measurements are labelled by a symbol if there are possible choices of measurement . for each measurement, there are possible outcomes labelled by the symbol .the client then communicates a value of to the provider and then receives a value of from the provider .again , since the dimension of is unrestricted , we assume that the measurement made by the provider has outcomes that are associated with projectors such that and .conditioned on each measurement outcome given the choice , the client performs state tomography on their part of the state which can be described in terms of the operators where is the identity operator acting on and is the partial trace over the provider s system . an _assemblage _ is then the set with elements satisfying , the reduced state of the client s system .one can extract the probability of the provider s measurement outcome for the choice by taking . instead of studying the assemblage directly, we may simplify matters by considering the _ correlations _ between the client and provider where both parties make measurements and look at the conditional probabilities where is the client s choice of measurement and the outcome for that choice .if the measurement made by the client is described in terms of the generalised measurement elements such that then these correlations can be readily obtained from elements of the assemblage as . 
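these objects are simple to generate for any finite-dimensional example. the following is a minimal numpy sketch of the bookkeeping (the choice of state, the measurement bases and all helper names are illustrative assumptions rather than anything fixed by the text): it computes the assemblage elements $\sigma_{a|x}$, their traces $p(a|x)$, and the correlations $p(a,b|x,y)$ obtained when the client measures one of their own povm elements.

```python
import numpy as np

def ptrace_provider(op, dim_c, dim_p):
    """Partial trace over the provider's factor of an operator on H_C (x) H_P."""
    return np.einsum('ipjp->ij', op.reshape(dim_c, dim_p, dim_c, dim_p))

def assemblage(rho, provider_meas, dim_c, dim_p):
    """sigma_{a|x} = Tr_P[(1_C (x) M_{a|x}) rho] for every pair (a, x)."""
    idc = np.eye(dim_c)
    return {ax: ptrace_provider(np.kron(idc, M) @ rho, dim_c, dim_p)
            for ax, M in provider_meas.items()}

# illustrative physical state: cos(t)|00> + sin(t)|11>  (client (x) provider)
t = np.pi / 6
psi = np.cos(t) * np.kron([1, 0], [1, 0]) + np.sin(t) * np.kron([0, 1], [0, 1])
rho = np.outer(psi, psi.conj()).astype(complex)

ket0, ket1 = np.array([1, 0]), np.array([0, 1])
ketp, ketm = (ket0 + ket1) / np.sqrt(2), (ket0 - ket1) / np.sqrt(2)
proj = lambda v: np.outer(v, v.conj()).astype(complex)

# provider measurement choices: x = 0 is the Z basis, x = 1 is the X basis
provider_meas = {(0, 0): proj(ket0), (1, 0): proj(ket1),
                 (0, 1): proj(ketp), (1, 1): proj(ketm)}

sigmas = assemblage(rho, provider_meas, 2, 2)
for (a, x), s in sigmas.items():
    print(f"p(a={a}|x={x}) =", np.trace(s).real)

# correlations with a trusted client POVM, here a Z-basis measurement F_{b|y=0}
client_meas = {0: proj(ket0), 1: proj(ket1)}
p_ab = {(a, b, x): np.trace(client_meas[b] @ sigmas[(a, x)]).real
        for (a, x) in sigmas for b in client_meas}
print("p(a=0,b=0|x=0,y=0) =", p_ab[(0, 0, 0)])
```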
in self - testing, the provider claims that they are manufacturing a particular state and performing particular ( projective ) measurements on .we call this combination of state and measurements the _ reference experiment _ to distinguish it from the physical experiment where and are the state and measurements respectively . since we do not have direct access to the hilbert space of the provider it is possible that they are manufacturing something different that has no observable effect on experimental outcomes . for example, they could prepare the state and retain the system in state but never perform any operation on it .this will not affect the assemblage so we must allow for operations on the provider s system in that leave assemblages unaffected .following the discussion by mckague and mosca , some of these changes include : 1 . unitary change of basis in 2 .adding ancillae to physical systems ( in tensor product ) upon which measurements do not act , i.e. 3 . altering the measurements outside the support of the state 4 . embedding the state and measurements into a hilbert space where has a different dimension to . allowing for these possible transformations we need an appropriate notion of equivalence between the physical experiment and the reference experiment .we say that the physical experiment associated with the state and measurements are equivalent to the reference experiment associated with the state and measurements if there exists an isometry such that for all , and .a consequence of this notion of equivalence is that if a physical experiment is equivalent to the reference experiment then the former can be constructed from the latter by the operations described above . in the other direction , if the provider does indeed construct the reference experiment and then performs one of the transformations listed above then an isometry can always be constructed to establish equivalence between the physical and reference experiments .an important issue in self - testing based on probabilities is that experimental probabilities are invariant upon taking the complex conjugate of both the state and measurements .thus , the best one can hope for in this kind of self - testing is to certify the presence of a probabilistic mixture of the reference experiment and its complex conjugate . due to this deficiency and the fact that complex conjugation is not a physical operation , only purely real reference experiments can be properly self - tested . in the introduction we gave an overview of the known results in self - testing andindeed all the states and measurements which allow for self - testing have a purely real representation ( -, ) . in ref . the authors deal more rigorously with the problem and even show that for some cryptographic purposes self - testing of the reference experiment involving complex measurements does not undermine security .we note in appendix [ app1 ] that for our work we may not need to restrict to purely real reference experiments : an assemblage is not typically invariant under taking the complex conjugate of both the state and measurements . 
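the last point is easy to see in a two-qubit example: if the provider's measurement has complex entries, conjugating both the state and the measurement generally changes the assemblage even though it never changes the outcome probabilities. a small numerical illustration follows (our own example, using a $\sigma_{y}$-basis measurement; the helper names are ours):

```python
import numpy as np

ket0, ket1 = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
proj = lambda v: np.outer(v, v.conj())
ptr = lambda op: np.einsum('ipjp->ij', op.reshape(2, 2, 2, 2))   # trace out the provider

ebit = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
rho = np.outer(ebit, ebit.conj())

# the provider measures the Y basis; the +1 eigenstate of sigma_y is (|0> + i|1>)/sqrt(2)
P_plus_y = proj((ket0 + 1j * ket1) / np.sqrt(2))

sigma = ptr(np.kron(np.eye(2), P_plus_y) @ rho)
# the same experiment with state and measurement both complex-conjugated
sigma_conj = ptr(np.kron(np.eye(2), P_plus_y.conj()) @ rho.conj())

print("assemblage element changes under conjugation:", not np.allclose(sigma, sigma_conj))
print("but the outcome probability does not:",
      np.isclose(np.trace(sigma).real, np.trace(sigma_conj).real))
```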
for simplicity we will study experiments with states and measurements that have real coefficients but note that an advantage of basing self - testing on epr - steering eliminates the restriction to only real coefficients .however , for an arbitrary physical experiment there may exist operations not included in the list above that leave the assemblage and reduced state unchanged .the essence of self - testing based on an assemblage and reduced state is to establish that the only operations a provider can perform that leave it unchanged are those described above . given our formalism , the self - testing of quantum states is rendered extremely easy due to the purification principle : every density matrix on some system can result as the marginal state of some bipartite pure state on the joint system such that , and this pure state is uniquely defined up to an isometry on system .therefore , in our formalism , we can observe that given a reduced state we can describe the state upto an isometry on provider s system . in particular , due to the schmidt decomposition of the reduced state ( such that and for all ) we have a purification of the form : where ( ) is some set of orthogonal states in ( ) .the local isometry then maps the set ( ) to another set of orthogonal states ( ) . as a consequence of our formalism, we can establish that and are equivalent solely by checking to see if the reduced state is equal to the reduced state . another obvious consequence for entanglement verification between the client and provideris that they share some entanglement if and only if is mixed .this is purely a consequence of the assumption that they share a pure state .indeed , it is cryptographically well - motivated to say that the provider produces a pure state since this gives the provider _ maximal information _ about the devices that are used in a protocol .even though self - testing of states is rendered easy by our assumptions , the self - testing of measurements does not follow from only looking at the reduced state .in other words , knowing the global pure from the reduced state , does not immediately imply that the provider is making the required measurements on a useful part of that pure state .it should be emphasized that in any one - sided device - independent quantum information protocol , measurements will be made on a state in any task to extract classical information from the systems , both trusted and untrusted .the self - testing of measurements made by an untrusted agent is , as explicitly stated in eq ., crucial .we give a simple example to illustrate this point .this is an example of a physical system that a provider can prepare and a measurement they can perform . establishing that the client and provider share a state that is equivalent to a reference state is not immediately useful .consider the situation where the provider prepares the state where the subscripts and label two qubits that the provider retains and sends the qubit with the subscript to the client .the two qubits labelled by and can be jointly measured or individually measured . 
in this examplethe provider s measurement solely consists of measuring qubit and ignoring qubit such that measurement projectors are of the form .therefore , the reduced state of the client is which indicates that the client and provider share a maximally entangled state .however , every element of the assemblage is , and thus unaffected by any measurement performed by the provider .therefore we can not say anything about the provider s measurements and , furthermore , the entanglement is not being utilised by the provider and will thus not be useful for any quantum information task .this example just highlights that in our scenario it only makes sense to establish equivalence between a physical experiment and reference experiment taking into account _ both the state and measurements_. the example motivates the need to study the assemblage generated in our scenario and not just the reduced state .also , as will be shown later , this allows us to construct explicit isometries demonstrating equivalence between a physical and reference experiment instead of just knowing that such an isometry exists . in colloquial terms , being able to explicitly construct an isometry allows one to be able to locate " their desired state within the physical state .so far we have assumed perfect equivalence between the reference and physical experiment as described by eqs . .[ secrob ] we extend our discussion to the case where equivalence can be established approximately which is known as robust self - testing . instead of using the reduced state of the client and assemblage, we may wish to study self - testing given the correlations resulting from measurements on the assemblage and we discuss this in sec .[ seccor ] . in this sectionwe formally introduce _ robust assemblage - based one - sided self - testing _ ( ast ) and indicate its advantages and limitations . before thiswe need to recall some mathematical notation in order to discuss robustness " .we need an appropriate distance measure between operators acting on a hilbert space . to facilitate thiswe will use the schatten -norm for being a linear operator acting on .this norm is directly related to , the _ trace distance _ between quantum states since for , .equivalently , where is the eigenvalue of the operator .another property of the trace distance is that when and are pure then .the motivation for introducing a distance measure is clear when we consider imperfect experiments .that is , if our physical experiment deviates from the predictions of our reference experiment by a small amount can we be sure that our physical experiment is ( up to a local isometry on ) close ( in the trace distance ) to our reference experiment ?now we can utilise the trace distance to describe closeness between the physical state and reference state . 
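a short numerical sketch of the decoy-qubit example above is given below. since the exact decoy state is not pinned down here, we take it to be a $\sigma_{y}$ eigenstate, for which both a z- and an x-basis measurement on the decoy give uniformly random outcomes; with that (illustrative) choice every assemblage element is the maximally mixed operator $\mathbb{1}/4$, exactly as described, even though the reduced client state is half of an ebit.

```python
import numpy as np

def ptrace_provider(op, dim_c, dim_p):
    return np.einsum('ipjp->ij', op.reshape(dim_c, dim_p, dim_c, dim_p))

def trace_dist(r, s):
    """Trace distance D(r, s) = (1/2)||r - s||_1 for Hermitian r, s."""
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(r - s)))

ket0, ket1 = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
proj = lambda v: np.outer(v, v.conj())
ebit = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)   # client (x) P1

# decoy qubit P2: a Y eigenstate, so Z and X outcomes on it are both uniform
decoy = (ket0 + 1j * ket1) / np.sqrt(2)
psi = np.kron(ebit, decoy)                  # ordering: client, P1, P2
rho = np.outer(psi, psi.conj())

ketp, ketm = (ket0 + ket1) / np.sqrt(2), (ket0 - ket1) / np.sqrt(2)
meas_on_p2 = {(0, 0): proj(ket0), (1, 0): proj(ket1),      # x = 0: Z on the decoy
              (0, 1): proj(ketp), (1, 1): proj(ketm)}      # x = 1: X on the decoy

for (a, x), Pi in meas_on_p2.items():
    M = np.kron(np.eye(2), Pi)              # acts on P1 (x) P2 and ignores P1
    sigma = ptrace_provider(np.kron(np.eye(2), M) @ rho, 2, 4)
    print(f"sigma_{a}|{x} == I/4 :", np.allclose(sigma, np.eye(2) / 4))

rho_c = ptrace_provider(rho, 2, 4)
print("D(rho_C, I/2) =", trace_dist(rho_c, np.eye(2) / 2))   # 0: looks like half an ebit
```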
to whit , if where and allowing for isometries on the provider s side , then the minimal distance between physical and reference states will be the minimal value of for .clearly , since the trace distance does not increase when tracing out the provider s sub - system .this lower bound on the distance in eq .[ distance ] does not tell us that there is an isometry achieving this bound .we wish to be able to state that there exists an isometry for which the distance in eq .furthermore it would be preferable to be able to construct this isometry .this is , in essence , robust self - testing .we now formalise this intuition in the following definition : given a reference experiment consisting of the state with reduced state and measurements such that the assemblage has elements , , .also given a physical experiment with the state , reduced state and measurements such that the assemblage has elements , , .if , for some real , and , , , then **-robust assemblage - based one - sided self - testing ( -ast ) is possible * * if the assemblage implies that there exists an isometry such that for , , and . in this definition , in order to simplify matters , we have bounded both the distance between physical and reference states both with and without measurements by the function . it will often be the case that the trace distance between states ( without measurements ) will be smaller than the distance between measured states , but we are considering the _ worst case _ analysis . in further study , it could be of interest to give a finer distinction between these distance measures in the definition .note also that , in this definition , we only ask for the existence of an isometry .later , in sec . [ sec2 ] , we will construct an isometry for robust self - testing which will be more useful for various protocols .also , for this definition to be useful , a desirable function would be where is upper - bounded by a small positive integer .if , as mentioned earlier this establishes a lower bound on the distance between physical and reference experiments , and so the ideal case would be -ast .we now give a simple example to show that , in general , this ideal case is not obtainable .the client has a three - dimensional hilbert space .the reference experiment consists of the state with measurements and where is a two - dimensional hilbert space .the assemblage for this reference experiment has the following elements : the physical experiment consists of the state where and the subscript denotes a second qubit that the provider has in their possession .the measurements in the physical experiment are for .the state has the reduced state thus implying that .the assemblage for this physical experiment then has the elements : from the above assemblages we observe that , , .here we have just defined a new closeness parameter for the convenience of our definitions . given these physical and reference experiments ,we now wish to calculate a lower bound on for all possible isometries in the definition above ; this will give a lower - bound on the function for -ast . to do this ,we introduce the notation for the ancillae that the provider can introduce and as the unitary that they can perform jointly on the ancillae and their share of the physical state .this then gives us : where where is the identity on the client s system . 
thus maximizing this quantity for all isometries , we obtain the maximal value and the lower bound .this example excludes the possibility of having -ast given that the client s hilbert space is three - dimensional .we will later return to this reference experiment in sec .[ sec2a ] with the modification that the client s hilbert space is two - dimensional .as outlined earlier , epr - steering can be studied from the point - of - view of the probabilities obtained from measurements performed on elements of an assemblage , i.e. known measurements made by the trusted party .this point - of - view is native to bell non - locality and is suitable for making further parallels between non - locality and epr - steering . in this regardone can construct epr - steering inequalities ( the epr - steering analogues of bell inequalities ) which can be written as a linear combination of the measurement probabilities .the two figures - of - merit , assemblages and measurement correlations , lead to a certain duality in the theory of epr steering .the approach that one will use depends on the underlying scenario . in the casewhen correlations are obtained by performing a tomographically complete set of measurements ( on the trusted system ) the two approaches become completely equivalent . however , in some cases probabilities obtained by performing a tomographically incomplete set of measurements , or even just the amount of violation of some steering inequality can provide all necessary information .another possibility is that a trusted party can perform only two measurements and nothing more , i.e. has no resources to perform complete tomography . in this sectionwe consider the definition and utility of defining robust self - testing with respect to these probabilities for an appropriate notion of robustness .this approach to self - testing is not immediately equivalent to the notion of ast defined previously ( even if tomographically complete measurements are made ) for reasons that will be become clear .recall the probabilities for being elements of general measurement associated with the outcome for measurement choice such that .naturally , we can also obtain the probabilities .in addition to the physical probabilities " , we have the reference probabilities " which refer to the probabilities resulting from making the same measurements on a reference assemblage as described above . performing robust self - testing given these probabilitieswill be the focus of this section .a useful definition of the schatten -norm is where is the operator norm . 
since $f_{b|y}$ is a positive operator with operator norm upper bounded by $1$, if $d(\tilde{\rho}_{c},\rho_{c})\leq\epsilon$ and $\vert\tilde{\sigma}_{a|x}-\sigma_{a|x}\vert_{1}\leq\epsilon$ for all elements of an assemblage we can conclude that $$\vert p(a,b|x,y)-\tilde{p}(a,b|x,y)\vert=\vert\textrm{tr}\left[f_{b|y}\left(\sigma_{a|x}-\tilde{\sigma}_{a|x}\right)\right]\vert\leq\vert\tilde{\sigma}_{a|x}-\sigma_{a|x}\vert_{1}\leq\epsilon,$$ $$\vert p(b|y)-\tilde{p}(b|y)\vert=\vert\textrm{tr}\left[f_{b|y}\left(\rho_{c}-\tilde{\rho}_{c}\right)\right]\vert\leq 2d(\rho_{c},\tilde{\rho}_{c})\leq 2\epsilon$$ for all $a$, $b$, $x$, $y$. this then establishes that knowledge of the assemblage, and establishing its closeness to the assemblage associated with a reference experiment, implies closeness in the probabilities obtained from both experiments. clearly, the converse is not necessarily true and closeness in probabilities does not always imply closeness of reduced states and assemblages. assemblages can be calculated from the statistics obtained from performing tomographically complete measurements, and then the distance (in schatten $1$-norm) between this assemblage and some ideal assemblage can be calculated. however, even for tomographically complete measurements, we only have that $\vert\textrm{tr}[f_{b|y}(\sigma_{a|x}-\tilde{\sigma}_{a|x})]\vert\leq\vert\tilde{\sigma}_{a|x}-\sigma_{a|x}\vert_{1}$; a bound of $\epsilon$ on the left-hand side for the measured $f_{b|y}$ does not imply $\vert\tilde{\sigma}_{a|x}-\sigma_{a|x}\vert_{1}\leq\epsilon$. this goes to show that the ast approach is distinct from solely looking at the difference between probabilities. inspired by the literature in standard self-testing (see, e.g., refs. ), it should still be possible to attain robust self-testing based on probabilities for measurements on assemblages, and with this in mind we give the following definition: given a reference experiment consisting of the state ${|\psi\rangle}$ with reduced state $\rho_{c}$ and measurements such that the assemblage has elements $\sigma_{a|x}$ for all $a$, $x$. also given a physical experiment with the state ${|\tilde{\psi}\rangle}$, reduced state $\tilde{\rho}_{c}$ and measurements such that the assemblage has elements $\tilde{\sigma}_{a|x}$ for all $a$, $x$. additionally given a set of general measurements $\{f_{b|y}\}$ that act on $\mathcal{h}_{c}$ such that $f_{b|y}\geq 0$ and $\sum_{b}f_{b|y}=\mathbb{1}$ for all $y$. if, for some real $\epsilon\geq 0$, $\vert p(a,b|x,y)-\tilde{p}(a,b|x,y)\vert\leq\epsilon$ and $\vert p(b|y)-\tilde{p}(b|y)\vert\leq\epsilon$ for all $a$, $b$, $x$, $y$, then **$f(\epsilon)$-robust correlation-based one-sided self-testing ($f(\epsilon)$-cst)** is possible if the probabilities imply that there exists an isometry $\phi$ such that the conditions of eq. [eq:defast] hold with $f(\epsilon)$ on the right-hand side, for all $a$ and $x$. instead of directly bounding the distance between reference and physical probabilities, we can indirectly bound this distance by utilising an epr-steering inequality. in the literature on standard self-testing, probability distributions that near-maximally violate a bell inequality robustly self-test the state and measurements that produce the maximal violation. as a first requirement, there needs to be a unique probability distribution that achieves this maximal violation, and we now have many examples of bell inequalities where this happens. the same applies to epr-steering inequalities: there needs to be a unique assemblage that produces the maximal violation of an epr-steering inequality. furthermore this unique assemblage needs to imply a unique reference experiment (up to a local isometry). for epr-steering inequalities of the form $\sum_{a,b,x,y}c_{a,b,x,y}\,p(a,b|x,y)\leq\beta_{\textrm{lhs}}$ for real numbers $c_{a,b,x,y}$, any assemblage that violates this inequality is necessarily _steerable_. if all quantum assemblages satisfy $\sum_{a,b,x,y}c_{a,b,x,y}\,p(a,b|x,y)\leq\beta_{q}$ for some positive real number $\beta_{q}$ then $\beta_{q}$ is the maximal violation of the epr-steering inequality. if we consider probabilities that satisfy $\sum_{a,b,x,y}c_{a,b,x,y}\,p(a,b|x,y)\geq\beta_{q}-\epsilon$ then they are at most $\epsilon$-far (in the value of the functional) from the reference experiment that produces the maximal violation $\beta_{q}$. we will make use of this approach to cst in sec. [sec2b]. we now briefly return to the issue of complex conjugation.
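before doing so, we note that the first of the two bounds above is nothing more than the duality between the schatten $1$-norm and the operator norm, and it can be sanity-checked numerically on random instances (the sampling routine below is an illustrative choice, not anything from the text):

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_herm_psd(dim, scale=1.0):
    """A random positive operator (illustrative sampling only)."""
    G = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    return scale * (G @ G.conj().T)

for _ in range(5):
    # a random POVM element F with 0 <= F <= 1 (rescaled by its largest eigenvalue)
    F = rand_herm_psd(2)
    F = F / (np.linalg.eigvalsh(F)[-1] + 1e-12)
    # two nearby sub-normalised assemblage elements
    sigma = rand_herm_psd(2, 0.5)
    sigma_t = 0.95 * sigma + 0.05 * rand_herm_psd(2, 0.5)
    lhs = abs(np.trace(F @ (sigma - sigma_t)))
    rhs = np.sum(np.abs(np.linalg.eigvalsh(sigma - sigma_t)))   # Schatten 1-norm
    print(f"|tr[F (sigma - sigma~)]| = {lhs:.4f} <= ||sigma~ - sigma||_1 = {rhs:.4f}")
```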
as mentioned above and discussed in appendix [ app1 ] ,the ast approach is advantageous to the standard self - testing approach in that we can rule out the state and measurements in the reference experiment both being the complex conjugate of our ideal reference experiment .one issue with cst is that since we are reconsidering probabilities for a fixed set of measurements made by the client , if the measurements are invariant under complex conjugation then the provider can prepare a state and make measurements that are both the complex conjugate of the ideal case without altering the statistics .this can be remedied by the client choosing measurements that have complex entries as longas it does not drastically affect the ability to achieve -cst .in this section , we look at the self - testing of the maximally entangled two - qubit state ( or , _ ebit _ ) .this is a totemic state in the self - testing literature ( e.g. ) and that it is possible to do rst for this state is now well - established : it is achieved by looking at probability distributions that near - maximally violate the chsh inequality .that is , since the maximal violation of the chsh inequality is , say , then probability distributions that give a violation of result from quantum states that are -close to the ebit ( up to local isometries ) . in current analytical approachesthe constant in front of the term can be shown to be quite large . however , there are numerical approaches that substantially improve upon this constant by several orders of magnitude .we turn to ast and cst to see if we can improve the current approaches that appear for rst . in particular , in sec .[ sec2a ] we look at analytical methods for ast and show that , for the ebit , -ast is possible where the constant in front of the term is reasonable . in sec .[ sec2b ] we turn to numerical methods for cst where the study of probabilities instead of assemblages is currently more amenable .we show that -cst is possible and also that our numerical methods do better than existing numerical methods for rst .thirdly , in sec .[ sec2c ] we then show that -ast is essentially the best that one can hope for by explicitly giving a physical state and measurements where in the definition of -ast will be at least .in other words , -ast is impossible .we first set - out the reference experiment that we will be studying for the rest of this section .it consists of the experiment described in sec .[ secrob ] but now with the client s hilbert space being two - dimensional . recall that the state is and the measurements are and where we have dropped the subscripts for reasons of clarity .the assemblage for this reference experiment has the following elements : we will henceforth call this reference experiment the _epr experiment_. we can now state a result about ast for this experiment . [ thm1 ] for the epr experiment , -robust assemblage - based one - sided self - testing is possible for . 
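for concreteness, the reference assemblage of the epr experiment can be generated directly from the ebit with z- and x-basis measurements on the provider's side; the resulting closed-form elements are $\sigma_{a|0}={|a\rangle}{\langle a|}/2$, $\sigma_{0|1}={|+\rangle}{\langle +|}/2$ and $\sigma_{1|1}={|-\rangle}{\langle -|}/2$, which the short sketch below (helper names are ours) reproduces:

```python
import numpy as np

ket0, ket1 = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
ketp, ketm = (ket0 + ket1) / np.sqrt(2), (ket0 - ket1) / np.sqrt(2)
proj = lambda v: np.outer(v, v.conj())

ebit = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)   # client (x) provider
rho = np.outer(ebit, ebit.conj())

# provider's reference measurements: x = 0 is the Z basis, x = 1 is the X basis
P = {(0, 0): proj(ket0), (1, 0): proj(ket1), (0, 1): proj(ketp), (1, 1): proj(ketm)}

ptr = lambda op: np.einsum('ipjp->ij', op.reshape(2, 2, 2, 2))    # trace out provider
sig = {ax: ptr(np.kron(np.eye(2), M) @ rho) for ax, M in P.items()}

expected = {(0, 0): proj(ket0) / 2, (1, 0): proj(ket1) / 2,
            (0, 1): proj(ketp) / 2, (1, 1): proj(ketm) / 2}
for ax in P:
    print(ax, "matches the closed form:", np.allclose(sig[ax], expected[ax]))
print("reduced client state is I/2:",
      np.allclose(sum(sig[(a, 0)] for a in (0, 1)), np.eye(2) / 2))
```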
before proving this theorem we will present two useful observations that will be used in the proof .the first observation is a lemma about the norm that we are using while the second is specific to the self - testing of the epr experiment .we require the notation .[ goodlem ] for any two vectors , where and , if , then for another vector such that , and this fact essentially follows from the definition of .that is , and since the rank of is then the which concludes our proof ( along with the fact that ) .the next observation follows from the conditions outlined in the definition of -ast and is as follows : [ niceobs ] if and then the proof follows from a series of basic observations : the first inequality results from the fact that and and that and .the second inequality follows from the observation that .we are now in a position to prove thm .[ thm1 ] .recall that we are promised that for all , where . the aim is now to find an explicit isometry that gives a non - trivial upper bound for the following expression : for , and as defined before .we first focus on the cases where and and use this to argue the more general result .the isometry that we use is the so - called swap isometry that has been used multiple times in the self - testing literature . in this isometry( see fig .[ fig : fig2 ] ) an ancilla qubit is introduced in the state where denotes the ancilla register on the provider s side in addition to the provider s hilbert space . after introducing the ancillaa unitary operator is applied to both the provider s part of the physical state and the ancilla , i.e. where , and and , and . after applying this isometry to the physical state obtain the state , scaledwidth=40.0% ] the desired result of this isometry to establish an ebit in the hilbert space in addition to the measurements acting on the hilbert space .therefore we wish to give an upper bound to at this point we can now apply a combination of lem .[ goodlem ] and lem .[ niceobs ] to bound this norm .firstly , we observe that by virtue of lem .[ niceobs ] we have that where , for the sake of brevity , we do not write identities , e.g. . we can apply these observations in conjunction with lem .[ goodlem ] ( and noticing that ) to eq .[ tracedist ] to obtain since and , for the pauli- matrix , we obtain the following result that we then obtain we will now apply the same reasoning to but we need the fact that which follows from the condition on the reduced state and . using these observations and lem .[ niceobs ] we arrive at where to obtain the last inequality we chose to be the pure state that is proportional to , i.e. where thus . we have shown that . now we consider the case of self - testing where measurements are made . that is , establishing an upper bound on the expressions of the form in eq .[ condition ] where and and after applying the swap isometry described above , the projector acting on the physical state gets mapped to in the case that , utilising the fact that , for eq .[ condition ] we obtain : by using the same reasoning as above we obtain the bounds and for the and cases respectively . 
for the case that , more work is required in bounding eq .[ condition ] .however , again by repeatedly applying the observation in lem .[ niceobs ] , as shown in appendix [ app1c ] we obtain the bound of thus concluding the proof .central to the proof of this theorem was lem .[ niceobs ] , but it is worth noting that the minimal requirements for proving this lemma were bounds on the probabilities and not necessarily bounds on the elements of the assemblage. we utilised the fact that bounds on the probabilities are obtained from the elements of the assemblage , but if one only bounds the probabilities then our result still follows .we then obtain the following corollary .[ corr1 ] for the epr experiment , -robust correlation - based one - sided self - testing is possible for .furthermore , one can also obtain this result using an epr - steering inequality as we outline in appendix [ app1d ] with some minor alterations to the function .the fact that the function in thm .[ thm1 ] and cor .[ corr1 ] are the same suggests at the sub - optimality of our analysis , since ast could utilise more information than cst .it is now worth commenting on the function and contrasting it with results in the standard self - testing literature .in particular , we want to contrast this result with other analytical approaches .this is quite difficult since the measure of closeness to the ideal case is measured in terms of closeness to maximal violation of a bell inequality and not in terms of elements of an assemblage or individual probabilities .here we give an indicative comparison between the approach presented here and the current literature .firstly , mckague , yang and scarani developed a means of robust self - testing where if the observed violation of the chsh inequality is -close to the maximal violation then the state is -close to the ebit .this is a less favourable polynomial than our result which demonstrates -closeness . on the other hand, the work of reichardt , unger and vazirani does demonstrate -closeness in the state again if -close to the maximal violation of the chsh inequality .however , the constant factor in front of the term has been calculated in ref . to be of the order and our result is several orders of magnitude better even considering the analysis in appendix [ app1d ] for a fairer comparison . in various other works more general families of self - testing protocolsalso demonstrate -closeness of the physical state to the ebit when the violation is -far from tsirelson s bound .we must emphasize that our analysis could definitely be tightened at several stages to lower the constants in but epr - steering already yields an improvement over analytical methods in standard self - testing .as demonstrated by the general framework in refs . and , numerical methods can be employed to obtain better bounds for self - testing . for reasons that will become clear we will shift focus from ast to cst instead and , in particular , cst based on violation of an epr - steering inequality .also , we will not be considering cst in full generality and only seek to establish a bound on the trace distance between the physical and reference states ( up to isometries ) .this will facilitate a direct general comparison with previous works .we begin by constructing the same swap isometry as used in the proof of thm .[ thm1 ] . 
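before doing so, a quick numerical sanity check of the swap circuit itself may be useful (this uses our own circuit convention, copying the $\bar{z}$-type outcome onto an ancilla and then applying the $\bar{x}$-type observable controlled on the ancilla; it is an illustration rather than part of the argument): applied to the ideal epr experiment it extracts an exact ebit between client and ancilla, and applied to a slightly misaligned experiment the extracted fidelity degrades smoothly together with the assemblage error $\epsilon$.

```python
import numpy as np

ket0, ket1 = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
ketp = (ket0 + ket1) / np.sqrt(2)
proj = lambda v: np.outer(v, v.conj())
Id2 = np.eye(2, dtype=complex)
ebit = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)   # client (x) provider

def swap_reduced(psi_cp, P00, P01):
    """Apply the swap circuit (ancilla in |0>; copy the Zbar outcome onto the ancilla,
    then apply Xbar = 2 P_{0|1} - 1 controlled on the ancilla) and return the
    reduced state on client (x) ancilla."""
    Xbar = 2 * P01 - Id2
    A00, A10 = np.outer(ket0, ket0.conj()), np.outer(ket1, ket0.conj())
    W = np.kron(np.kron(Id2, P00), A00) + np.kron(np.kron(Id2, Xbar @ (Id2 - P00)), A10)
    phi = W @ np.kron(psi_cp, ket0)               # ordering: client, provider, ancilla
    rho6 = np.outer(phi, phi.conj()).reshape(2, 2, 2, 2, 2, 2)
    return np.einsum('ipajpb->iajb', rho6).reshape(4, 4)

def assemblage_eps(psi_cp, P00, P01):
    """max_{a,x} || sigma~_{a|x} - sigma_{a|x} ||_1 with respect to the ideal epr assemblage."""
    ptr = lambda op: np.einsum('ipjp->ij', op.reshape(2, 2, 2, 2))
    rho = np.outer(psi_cp, psi_cp.conj())
    ketm = (ket0 - ket1) / np.sqrt(2)
    ideal = {(0, 0): proj(ket0) / 2, (1, 0): proj(ket1) / 2,
             (0, 1): proj(ketp) / 2, (1, 1): proj(ketm) / 2}
    phys = {(0, 0): P00, (1, 0): Id2 - P00, (0, 1): P01, (1, 1): Id2 - P01}
    return max(np.sum(np.abs(np.linalg.eigvalsh(
        ptr(np.kron(Id2, phys[ax]) @ rho) - ideal[ax]))) for ax in ideal)

for delta in (0.0, 0.05):             # delta = 0 is the ideal epr experiment
    tilt = np.pi / 4 + delta          # slightly misaligned X-type measurement on the provider
    P00, P01 = proj(ket0), proj(np.cos(tilt) * ket0 + np.sin(tilt) * ket1)
    rho_ca = swap_reduced(ebit, P00, P01)
    fid = np.real(ebit.conj() @ rho_ca @ ebit)    # singlet fidelity on client (x) ancilla
    print(f"delta={delta}: eps = {assemblage_eps(ebit, P00, P01):.4f}, fidelity = {fid:.6f}")
```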
as before, it is applied to the physical state and again we wish to upper bound the norm in eq. [tracedist]. since this is the trace distance between two pure states, we have that $d=\sqrt{1-f'}$, where $f'$ is the squared overlap between them. inspired by the work in refs. and , instead of bounding the quantity $f'$ directly, we wish to bound another quantity which is the _singlet fidelity_. for ${|\phi^{+}\rangle}=({|00\rangle}+{|11\rangle})/\sqrt{2}$ on the client and ancilla systems, this quantity is defined as $$f={\langle\phi^{+}|}\,\textrm{tr}_{p}\left[\phi\left({|\tilde{\psi}\rangle}{\langle\tilde{\psi}|}\right)\right]{|\phi^{+}\rangle}=\frac{1}{2}\left({\langle 0_{c}|}\sigma_{0|0}{|0_{c}\rangle}+2{\langle 0_{c}|}(\sigma_{0|1,0|0}-\sigma_{0|0,0|1,0|0}){|1_{c}\rangle}+2{\langle 1_{c}|}(\sigma_{0|0,0|1}-\sigma_{0|0,0|1,0|0}){|0_{c}\rangle}+{\langle 1_{c}|}(\rho_{c}-\sigma_{0|0}){|1_{c}\rangle}\right)$$ such that $\sigma_{0|1,0|0}=\textrm{tr}_{p}[(\mathbb{1}\otimes\tilde{p}_{0|1}\tilde{p}_{0|0})\tilde{\rho}]$ and $\sigma_{0|0,0|1,0|0}=\textrm{tr}_{p}[(\mathbb{1}\otimes\tilde{p}_{0|0}\tilde{p}_{0|1}\tilde{p}_{0|0})\tilde{\rho}]$ (and analogously for $\sigma_{0|0,0|1}$); here $\phi$ denotes the swap isometry and the trace is over the provider's system. the above two quantities are related through $d\leq\sqrt{1-f}$, as shown in ref. . the goal is now to give a lower bound to $f$ given constraints on the assemblage. in fact, to facilitate comparison with previous work, we will use the violation of the chsh inequality to impose these constraints. every bell inequality gives an epr steering inequality when assuming the form of the measurements on the trusted side. if on the client's side we assume the measurements that give the maximal violation of the chsh inequality for the assemblage generated in the epr experiment, the chsh expression, denoted by $\beta$, can be written as $$\beta=\sqrt{2}\,\textrm{tr}\left[\sigma_{z}\left(2\sigma_{0|0}-\rho_{c}\right)\right]+\sqrt{2}\,\textrm{tr}\left[\sigma_{x}\left(2\sigma_{0|1}-\rho_{c}\right)\right]\leq 2\sqrt{2},$$ where the last bound is tsirelson's bound. the measurements that the client makes are measurements of the observables in the set $\{(\sigma_{z}+\sigma_{x})/\sqrt{2},(\sigma_{z}-\sigma_{x})/\sqrt{2}\}$. we then have the constraint that $\beta\geq 2\sqrt{2}-\epsilon$ for a near-maximal violation. we now want a numerical method of minimising the singlet fidelity (so as to give a lower bound) such that $\beta\geq 2\sqrt{2}-\epsilon$. this method is given by the following semi-definite program (sdp): minimise $f$ subject to $\beta\geq 2\sqrt{2}-\epsilon$ and $x\succeq 0$, where $0$ is a matrix of all zeroes of the same size as $x$, and where $x$ is the matrix whose rows and columns are labelled by pairs $(k,i)$ of an operator sequence and a client basis label, with entries given by the matrix elements ${\langle j_{c}|}\textrm{tr}_{p}[(\mathbb{1}\otimes s_{k}^{\dagger}s_{l})\tilde{\rho}]{|i_{c}\rangle}$ for sequences $s_{k},s_{l}\in\{\mathbb{1},\tilde{p}_{0|0},\tilde{p}_{0|1},\tilde{p}_{0|1}\tilde{p}_{0|0},\ldots\}$ of the provider's projectors; both $f$ and $\beta$ are linear in these entries. we only constrain $x$ in the optimization to be positive semi-definite, and not that each sub-matrix of $x$ corresponding to something like an element of an assemblage is a valid quantum object. it actually turns out that all assemblages that satisfy no-signalling can be realised in quantum theory . discussion of this point is beyond the scope of this paper as all we wish to do is give a lower bound on the value of $f$; therefore just imposing $x\succeq 0$ gives such a bound. before giving an indication of the results of the above sdp, we still need to show that $x\succeq 0$ holds in any quantum realisation, so that the physical point is feasible. we do this by showing that $x$ is a gramian matrix, and all gramian matrices are positive semi-definite. first observe that entries of $x$ are of the form ${\langle j_{c}|}\textrm{tr}_{p}[(\mathbb{1}\otimes s_{k}^{\dagger}s_{l})\tilde{\rho}]{|i_{c}\rangle}$ for $\tilde{\rho}={|\tilde{\psi}\rangle}{\langle\tilde{\psi}|}$. by cyclicity of the partial trace (over the provider's system) we can also write such an entry as ${\langle\tilde{\psi}|}\left({|i_{c}\rangle}{\langle j_{c}|}\otimes s_{k}^{\dagger}s_{l}\right){|\tilde{\psi}\rangle}$ for all $i$, $j$, $k$, $l$. we now note that, writing ${|v_{m,c}\rangle}:=s_{m}\left({\langle c_{c}|}\otimes\mathbb{1}\right){|\tilde{\psi}\rangle}$ where $\{{|c_{c}\rangle}\}$ is an orthonormal basis in $\mathcal{h}_{c}$, each such entry is the inner product $\langle v_{k,i}|v_{l,j}\rangle$ of two of these vectors. since the elements of $x$ are all inner products of a vector associated with a row, $(k,i)$, and a vector associated with a column, $(l,j)$, we have $x=v^{\dagger}v$ where $v$ has column vectors given by the ${|v_{m,c}\rangle}$. therefore, $x$ is gramian. this then makes the above optimization problem a completely valid problem for lower bounding $f$. we further note that the matrix $x$ represents the epr-steering analogue of the moment matrix in the navascués-pironio-acín (npa) hierarchy which is useful for approximating the set of quantum correlations. (one can go to higher levels of such a hierarchy, with elements corresponding to assemblage elements with longer sequences of measurements on the provider's side; however, due to the work in ref. , having the client's system be two-dimensional already essentially puts us in the first level of the hierarchy without the need to go higher.) in fig. [fig:fig3] we plot the lower bound on $f$ achieved through this method and then compare it to the value obtained through the method of bancal _et al_ in ref. .
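a schematic implementation of an sdp of this type is sketched below in cvxpy (assumed available together with an sdp-capable solver such as scs). the block convention, the choice of operator sequences $\{\mathbb{1},\tilde{p}_{0|0},\tilde{p}_{0|1},\tilde{p}_{0|1}\tilde{p}_{0|0}\}$ and the form of the chsh-type steering functional are our own choices, made to be consistent with the epr experiment; the number this particular relaxation level returns is a valid lower bound but need not coincide with the curve reported in fig. [fig:fig3].

```python
import numpy as np
import cvxpy as cp

Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

# Moment ("Gram") matrix for provider operator sequences S = {1, A0, A1, A1*A0},
# with A0 = P_{0|0} and A1 = P_{0|1}.  Block (i, j) holds the inner products
# <S_i (<c| (x) 1)|psi~>, S_j (<c'| (x) 1)|psi~)>, i.e. the transpose (in the client
# basis) of tr_P[(1 (x) S_i^dag S_j) rho~]; with this labelling the 8x8 matrix is
# Gramian and hence PSD.  A real symmetric variable suffices here: the real part of
# any feasible complex point is feasible with the same objective value.
Y = cp.Variable((8, 8), symmetric=True)
blk = lambda i, j: Y[2*i:2*i+2, 2*j:2*j+2]

rho_c   = blk(0, 0)       # reduced client state
sig_00  = blk(0, 1).T     # sigma_{0|0}
sig_01  = blk(0, 2).T     # sigma_{0|1}
sig_10  = blk(0, 3).T     # sigma_{0|1,0|0} = tr_P[(1 (x) A1 A0) rho~]
sig_01b = blk(1, 2).T     # sigma_{0|0,0|1} = tr_P[(1 (x) A0 A1) rho~]
sig_010 = blk(1, 3).T     # sigma_{0|0,0|1,0|0} = tr_P[(1 (x) A0 A1 A0) rho~]

eps = 1e-3                # distance below the maximal violation 2*sqrt(2)

constraints = [
    Y >> 0,
    cp.trace(rho_c) == 1,
    # projector algebra (A0^2 = A0, A1^2 = A1) identifies several blocks
    blk(1, 1) == blk(0, 1),
    blk(2, 2) == blk(0, 2),
    blk(2, 3) == blk(0, 3),
    blk(3, 3) == blk(1, 3),
    blk(1, 2) == blk(0, 3).T,     # (A1 A0)^dag = A0 A1
]

# CHSH-type steering functional (one convention consistent with the epr experiment)
beta = np.sqrt(2) * (cp.trace(Z @ (2 * sig_00 - rho_c))
                     + cp.trace(X @ (2 * sig_01 - rho_c)))
constraints.append(beta >= 2 * np.sqrt(2) - eps)

# singlet fidelity of the swapped state, linear in the entries above
F = 0.5 * (sig_00[0, 0] + (rho_c - sig_00)[1, 1]
           + 2 * (sig_10 - sig_010)[0, 1]
           + 2 * (sig_01b - sig_010)[1, 0])

prob = cp.Problem(cp.Minimize(F), constraints)
prob.solve()
print("lower bound on the singlet fidelity:", prob.value)
```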
in both casesthe violation of the chsh inequality is lower - bounded by , and we clearly see that the lower - bound is more favourable for our optimization through epr - steering as compared to full device - independence . for the case of epr - steeringwe observed that the plot can be lower - bounded by the function whereas the plot for device - independence is lower - bounded by .respectively , these functions give an upper bound on of and .the difference between these two approaches is not as dramatic as the difference in the analytical approaches .however , these results just highlight that the analytical approaches are quite sub - optimal for both epr - steering and device - independent self - testing .is the distance from the maximal violation of the chsh inequality.[fig : fig3],scaledwidth=45.0% ] both the analytical and numerical approaches have utilised the same swap isometry . while constructing this isometry demonstrates in a clear and simple manner that self - testing is possible , it is natural to ask if there may be more useful isometries that give a different error scaling for our particular scenario ? in particular , can we do better than the in the function for -ast ? as we have already shown in sec .[ sec1 ] , in general this is not possible but the example demonstrating this is somewhat contrived .that is , we are trying to self - test a two - qubit state but assume that the hilbert space of the client is three - dimensional .we wish to ask if -ast is possible in the particular example of the epr experiment ? in this section we will show that this is not possible and the best we can hope for is -ast which we have already established is possible . as a side note , in appendix [ app1e ]we show that the trace distance between the physical and reference states in the epr experiment can be for some isometries .we emphasize that this trace distance between physical and reference states ( condition given in the first line of eq .[ eq : defast ] ) only amounts to part of the criteria for ast . the other part of the criteria ( the second line of eq .[ eq : defast ] ) rules out many isometries that might give the optimal trace distance between physical and reference states only . with this in mindwe want to bound the expression in eq .[ condition ] for all possible isometries given -closeness between the elements of the physical and reference assemblages .in particular , we give an example of a physical experiment where -closeness for the assemblages is satisfied but for all isometries , the smallest value of eq .[ condition ] is .the physical state is where and denote two qubits that the provider has in their possession , thus .the physical measurements are , , and .these physical measurements on the state produce the following assemblage elements : we see then that and for all , .we now show that for all possible isometries . by considering all possible isometries we have for and beinga unitary applied jointly to the provider s qubits and the ancillae .this then allows us to observe that we see that which achieves the maximal value of .therefore for all possible isometries .this example demonstrates that -ast is impossible for the epr experiment and our analytical results are essentially optimal ( up to constants ) .so far all the work presented thus far has been presented within a bipartite format both in terms of the client - provider scenario but also the reference state s hilbert space being the tensor product of two hilbert spaces . 
due to their utility in various tasks ,the self - testing of multi - partite quantum states is also desirable . within the device - independent self - testing literature therehave already been many developments along this line of research ( see , e.g. refs . ) . in this sectionwe give a brief indication of how to generalise our set - up to the consideration of such states . in sec .[ sec3a ] we will discuss the self - testing of tri - partite states and give initial numerical results demonstrating the richness of this scenario .we will briefly sketch in sec .[ sec3b ] how epr - steering could prove useful in establishing a tensor product structure within the provider s hilbert space .already for three parties , how to modify the client - provider set - up opens up new and interesting possibilities .for example , the simplest modification is to have the new , third party be a trusted part of the client s laboratory ; the total hilbert space of the client is now the tensor product of the two hilbert spaces associated with these two parties . the next possible modification , as shown in fig .[ fig : fig4 ] , is to have a second untrusted party that after receiving their share of the physical state does not communicate with the initial provider : they only communicate with the client .this restriction establishes a tensor product structure between the two untrusted parties which is useful .-trusted setting in the text .there are two non - communicating providers and we assume without loss of generality that one of them generates a quantum state and sends one part to the client and another to the other provider .the client may communicate with each provider individually and ask them to perform measurements.[fig : fig4],scaledwidth=33.0% ] to illustrate the interesting differences between the bipartite and tri - partite cases , we look at the example of self - testing the greenberger - horne - zeilinger ( ghz ) state where and with subscripts denoting the number of the qubit . in the scenario with two trusted parties ( that together form the client ) , a qubit is sent from the provider to each of these parties ( say , qubits and are sent ) ; we will call this scenario the -trusted setting__. in the other scenario with two non - communicating untrusted providers , a qubit ( say , qubit ) is sent to the client ; we will call this scenario the -trusted setting__. these different scenarios correspond to different types of multipartite epr - steering introduced in ref . .we now describe the reference experiments for both settings for the state . in the case of the -trusted setting , as in the epr experiment , the provider claims to make measurements for as well as and .the assemblage for the two trusted parties has elements for the -trusted setting , in addition to the provider claiming to making the above measurements , the second untrusted party , or second provider claims also to make the same measurements , which we denote by for , .the assemblage will be where each element is .the assemblage for the one trusted party will have elements but for the sake of brevity we will not write out the elements .we then wish to self - test this reference experiment when the elements of the physical assemblage are close to the elements of the ideal , reference experiment . instead of doing this, we will mimic the numerical approach in sec .[ sec2b ] by considering the ghz - mermin inequality adapted to the -trusted and -trusted scenarios . 
utilising the notation of and for the pauli- and pauli- matrices respectively , for the -trusted and -trusted settings ,the inequalities respectively are : the maximal quantum violation of these inequalities is .we now aim to carry out self - testing if the physical experiment achieves a violation of .for the untrusted parties , we implement the swap isometry to each of their systems as outlined in sec .[ sec2a ] . for the -trusted setting ,the physical state gets mapped to . in the -trustedsetting , the physical state gets mapped to where is the physical measurement made by the second untrusted party , and denotes the ancilla qubit introduced for one party and for the other party .our figure of merit for closeness between the physical and reference states is the _ ghz fidelity _ which for the -trusted and -trusted settings is and respectively where where in both cases we trace out the provider s ( providers ) hilbert space(s ) .now we minimize while and minimize such that .these problems again can be lower - bounded by an sdp and in fig .[ fig : fig5 ] we give numerical values obtained with these minimization problems .this case is numerically more expensive than the simple self - testing of the epr experiment and for tackling it we used the sdp procedures described in ref .we also compare our results to those obtained in the device - independent setting where all three parties are not trusted but the violation of the ghz - mermin inequality is .we see that the ghz fidelity increases when we trust more parties .interestingly , we can see that the curve for -trusted scenario is obviously closer to the curve of -trusted scenario than to the device - independent one .this may hint that multi - partite epr - steering behaves quite differently to quantum non - locality .however , to draw this conclusion from self - testing one would have to pursue more rigorous research , since we have only obtained numerical lower bounds on the ghz fidelity using only one specific isometry .-trusted setting is closer to the -trusted setting than device - independence . in future workwe will aim to understand if there is fundamental reason for this.[fig : fig5],scaledwidth=45.0% ] the previous section hints at what might be the most useful aspect of self - testing through epr - steering : establishing a tensor product structure in the provider s hilbert space . 
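as a small numerical aside to the ghz discussion above, the sketch below builds the ghz state, evaluates one common form of the ghz-mermin operator ($xxx-xyy-yxy-yyx$, whose expectation value on the ghz state is $4$), and computes sample assemblage elements for the $1$-trusted and $2$-trusted settings (the measurement labels and helper names are our own conventions):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
k3 = lambda a, b, c: np.kron(np.kron(a, b), c)
proj = lambda v: np.outer(v, v.conj())

ket0, ket1 = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
ghz = (k3(ket0, ket0, ket0) + k3(ket1, ket1, ket1)) / np.sqrt(2)
rho = np.outer(ghz, ghz.conj())

# one common form of the GHZ-Mermin operator; its expectation on the GHZ state is 4
M = k3(X, X, X) - k3(X, Y, Y) - k3(Y, X, Y) - k3(Y, Y, X)
print("<GHZ|M|GHZ> =", np.real(ghz.conj() @ M @ ghz))

# 1-trusted setting: qubits 1,2 sit with the two providers, qubit 3 with the client;
# an assemblage element is sigma_{a,b|x,y} = tr_{12}[(Pi_{a|x} (x) Pi_{b|y} (x) 1) rho]
ketp = (ket0 + ket1) / np.sqrt(2)
kety = (ket0 + 1j * ket1) / np.sqrt(2)
Pi = {('X', 0): proj(ketp), ('X', 1): I2 - proj(ketp),
      ('Y', 0): proj(kety), ('Y', 1): I2 - proj(kety)}

def sigma_1t(a, x, b, y):
    op = k3(Pi[(x, a)], Pi[(y, b)], I2) @ rho
    return np.einsum('pqipqj->ij', op.reshape(2, 2, 2, 2, 2, 2))

s = sigma_1t(0, 'X', 0, 'X')
print("p(0,0|X,X) =", np.trace(s).real, "; element is |+><+|/2:", np.allclose(s, proj(ketp) / 2))

# 2-trusted setting: only qubit 1 is untrusted; the element for outcome 0 of its X measurement
op = (k3(proj(ketp), I2, I2) @ rho).reshape(2, 2, 2, 2, 2, 2)
sig_2t = np.einsum('pijpkl->ijkl', op).reshape(4, 4)
phi_plus = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
print("2-trusted element is |Phi+><Phi+|/2:", np.allclose(sig_2t, proj(phi_plus) / 2))
```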
in the work of reichardt , unger and vazirani ,a method is presented for self - testing many copies of the ebit between two untrusted parties .this testing is achieved through measurements made in sequence .recent work has established the same feat but now with measurements being made at the same time , thus giving a more general result .the difficulty in establishing that the two untrusted parties have multiple copies of the ebit is to establish that ( up to isometries ) the hilbert spaces of the parties decompose as a tensor product of several -dimensional hilbert spaces : in each sub - space there is one - half of an ebit .we now remark that epr - steering offers a useful simplification in achieving the same task of identifying a tensor product structure .note that in the trusted laboratory a tensor product structure is known : the client knows they have , say , two qubits .if the assemblage for each qubit is close to the ideal case of being one half of an ebit , then we may use lem .[ niceobs ] to transfer " the physical operations on the untrusted side to one of the qubits on the trusted side .we also note that this observation forms part of the basis of the work presented in ref . , in the context of verification of quantum computation . to be more exact, we now have the client s hilbert space being constructed from a tensor product of two - dimensional hilbert spaces , i.e. where .we now have a modified form of the epr experiment with the reference state being for each .that is , in the reference experiment , the provider s hilbert space has a tensor product structure . for each hilbert space , there is a projective measurement with projectors acting on that space where , and these projectors are the qubit projectors in the epr experiment .therefore , the total reference projector is of the form which act on the hilbert space . in this case , the measurement choices and outcomes are bit - strings and respectively .we call this reference experiment the _ n - pair epr experiment _ and we are now in a position to generalise lem . [ niceobs ] .[ nniceobs ] for the n - pair epr experiment , if for all , and where and then the proof of this lemma is almost identical to the proof of lem .[ niceobs ] and so we will leave it out from our discussion .a nice relaxation of the conditions of the above lemma is to insist that each observed element of an assemblage is -close to and still recover a similar result .this requires a little bit more work since we have not been specific in how we model the provider s measurements .for example , we have not stipulated whether the probability distribution satisfies the no - signalling principle .furthermore , even if these probabilities satisfy this principle , it does not immediately enforce a constraint on the behaviour of the measurements . for the sake of brevity we will not address this issue in this work .it remains to point out that lem .[ nniceobs ] can be used to develop a result for self - testing ( cf ref .in our work we have explored the possibilities of self - testing quantum states and measurements based on bipartite ( and multi - partite ) epr - steering .we have shown that the framework allows for a broad range of tools for performing self - testing .one can use state tomography on part of the state and use this information to get more useful analytical methods . 
or ,indeed , one only needs to use the probabilities of outcomes for certain fixed ( and known ) measurements .furthermore , self - testing can be based solely on the near - maximal violation of an epr - steering inequality .we compared these approaches to the standard device - independent approach and demonstrated that epr - steering simplifies proofs and gives more useful bounds for robustness .we hope that this could be used in future experiments where states produced are quite far from ideal but potentially useful for quantum information tasks. however , we note that epr - steering - based self - testing only really improves the constants in the error terms ( for robustness ) and not the polynomial of the error , i.e. we can only demonstrate -ast for the epr experiment .this highlights that from the point - of - view of self - testing , epr - steering resembles quantum non - locality and not entanglement verification in which all parties are trusted . in future work, we wish to explore the self - testing of other quantum states . for example, we can show that similar techniques as outlined in this work can be used to self - test partially entangled two - qubit states .we would like to give a general framework in which many examples of states and measurements can be self - tested .this would be something akin to the work of yang _et al _ that utilizes the npa hierarchy of sdps .recent work by kogias _et al _ could prove useful in this aim .in addition to this , our work has hinted at the interesting possibilities for studying self - testing based on epr - steering in the multipartite case . in future workwe will investigate adapting our techniques to general multipartite states .for example , the general multipartite ghz state can be self - testing by adapting the family of bell inequalities found in refs . .also , it would be interesting to try to establish some new insights in the fundamental relations between non - locality and epr - steering using self - testing .it is possible that self - testing could be a useful tool for exploring their similarities and differences , especially given interesting new developments for multi - partite epr steering .one may question our use of the schatten -norm as a measure of distance between elements of a reference and physical assemblage .for example , the schatten -norm is a lower bound on the -norm so could be a more useful measure of closeness. it may be worthwhile to explore this possibility but we note that the argument for the impossibility of -ast for the epr experiment in sec .[ sec2c ] still applies even if we replace all the distance measures with the -norm .finally , it would be interesting to consider relaxing the assumption of systems being independent and identically distributed ( i.i.d ) and tomography being performed in the asymptotic limit .this would take into account the provider having devices with memory as well as only being given a finite number of systems . in the case of cst , we may use statistical methods to bound the probability that the provider can deviate from their claims and trick us in accepting their claims . 
for the case of ast , tools from non - i.i.d .quantum information theory might be required which makes the future study of ast interesting from the point - of - view of quantum information ._ acknowledgements _ - the authors acknowledge useful discussions with antonio acn , paul skrzypczyk , daniel cavalcanti and peter wittek .mjh also thanks nathan walk for discussions and petros wallden , andru gheorghiu and elham kashefi for discussing their recent independent work in ref . about self - testing based on epr - steering as applied to the verification of quantum computation .mjh acknowledges support from the epsrc ( through the nqit quantum hub ) and the fqxi large grants _thermodynamic vs information theoretic entropies in probabilistic theories _ and _ quantum bayesian networks : the physics of nonlocal events_. is asknowledges funding from the erc cog project qitbox , the mineco project foqus , the generalitat de catalunya ( sgr875 ) and the ministry of science of montenegro ( physics of nanostructures , contract no 01 - 682 ) . 10 j. s. bell , _ physics _ * 1 * , 195 ( 1964 ) . c. carmeli , t heinosaari , a karlsson , j schultz , and a toigo , _ phys .lett . _ * 116 * , 230403 , ( 2016 ) .j. f. clauser , m. a. horne , a. shimony , and r. a. holt , _ phys .* 23 * , 880 ( 1969 ) .b. tsirelson , _ hadronic journal supplement _ * 8 * , 329 - 345 ( 1993 ) ; s. popescu , and d. rohrlich , _ physics letters a _ * 169 * , 411 ( 1992 ) .m. mckague , t. h. yang , and v. scarani , _ j. phys .a : math . theor . _ * 45 * , 455304 ( 2012 ) .b. reichardt , f. unger , and u. vazirani , _ nature _ , * 496 * , 456 - 460 ( 2013 ) .k. f. pl , t. vrtesi , and m. navascus , _ phys .rev . a _ * 90 * , 042340 ( 2014 ) .m. mckague , _ theory of quantum computation , communication , and cryptography _ , 104 - 120 ( 2014 ) . c. bamps and s. pironio , _ phys .a _ * 90 * , 052111 ( 2015 ) .x. wu , y. cai , t. h. yang , h. n. le , j. d. bancal , and v. scarani , _ phys .a _ , * 90 * , 042339 ( 2014 ) .j. d. bancal , m. navascus , v. scarani , t. vertesi , and t. h. yang , _ phys .rev . a _ * 91 * , 022115 ( 2015 ) . o.nieto - silleras , s. pironio , and j. silman , _ new j. phys . _* 16 * , 013035 ( 2014 ) .t. h. yang , t. vrtesi , j. d. bancal , v. scarani , and m. navascus , _ phys .* 113 * , 040401 ( 2014 ) .e. schrdinger , _ proc ._ * 31 * , 555 ( 1935 ). h. m. wiseman , s. j. jones , and a. c. doherty , _ phys .lett . _ * 98 * , 140402 ( 2007 ) .a. einstein , b. podolsky , and n. rosen , _ phys ._ * 47 * , 777 ( 1935 ) .a. broadbent , j. fitzsimons , and e. kashefi , _ proc .of the 50th annual ieee symposium on foundations of computer science ( focs 2009 ) _ , 517 - 526 ( 2009 ) . c. branciard , e. g. cavalcanti , s. p. walborn , v. scarani , and h. m. wiseman , _ phys . rev . a _ * 85 * , 010301 ( 2012 ). n. walk _et al _ , _ optica _ * 3 * ( 6 ) , 634 - 642 , ( 2016 ) .t. gehring , v. hndchen , j. duhme , f. furrer , t. franz , c. pacher , r. f. werner , and r. schnabel , _ nature communications _ * 6 * , 8795 ( 2015 ) .d. h. smith _et al _ , _ nature communications _ * 3 * , 625 ( 2012 ) ; a. j. bennet _et al _ , _ phys .x _ * 2 * , 031003 ( 2012 ) ; b. wittmann _et al _ , _ new j. phys . _* 14 * , 053030 ( 2012 ) .s. armstrong , m. wang , r. y. teh , q. gong , q. he , j. janousek , h .- a .bachor , m. d. reid , and p. k. lam , _ nature physics _ * 11 * , 167 - 172 ( 2015 ) .m. f. pusey , _ phys . rev .a _ * 88 * , 032313 ( 2013 ) .a. pappa , a. chailloux , s. wehner , e. diamanti , and i. 
kerenidis , _ phys .lett . _ * 108 * , 260502 ( 2012 ) .m. navascus , g. de la torre , and t. vrtesi , _ phys .x _ * 4 * , 011011 ( 2014 ) .m. pawlowski , and n. brunner , _ phys .a _ * 84 * , 010302(r ) ( 2011 ) r. gallego , n. brunner , c. hadley , and a. acin , _ phys .lett . _ * 105 * , 230501 ( 2010 ) m. mckague and m. mosca , _ theory of quantum computation , communication , and cryptography _ , 113 - 130 , springer ( 2011 ) . m. a. nielsen and i. l. chuang , _ quantum computation and quantum information _ , cambridge university press ( 2000 ). e. g. cavalcanti , s. j. jones , h. m. wiseman , and m. d. reid , _ phys .a _ * 80 * , 032112 ( 2009 ) . c. a. miller and y. shi , _ theory of quantum computation , communication , and cryptography _ , 254 - 262 , springer ( 2013 ) .i. upi , r. augusiak , a. salavrakos , and a. acn , _ new j. phys . _ * 18 * , 035013 ( 2016 ) .n. gisin , _ helvetica physica acta _ * 62 * , 363 ( 1989 ) .l. p. hughston , r. jozsa , and w. k. wootters , _ phys . lett . a _ * 183 * , 14 ( 1993 ) .m. navascus , s. pironio , and a. acn , _ phys .lett . _ * 98 * , 010401 ( 2007 ) .d. cavalcanti , p. skrzypczyk , g. h. aguilar , r. v. nery , p. h. souto ribeiro , and s. p. walborn , _ nat .* 6 * , 7941 ( 2015 ) .n. d. mermin , _ phys .* 65 * , 1838 ( 1990 ) .wittek , _ acm transactions on mathematical software _ * 41*(3 ) , 21 , ( 2015 ) .m. mckague , _ new j. phys _ * 18 * , 045013 ( 2016 ) .a. gheorghiu , p. wallden , and e. kashefi , _arxiv:1512.07401 _ [ quant - ph ] ( 2015 ) .i. kogias , p. skrzypczyk , d. cavalcanti , a. acn , and g. adesso , _ phys ._ * 115 * 210401 ( 2015 ) .m. ardehali , _ phys .a _ * 46 * , 5375 ( 1992 ) . a. v. belinskii and d. n. klyshko , _ sov ._ * 36 * , 653 ( 1993 ) .m. j. hoban , e. t. campbell , k. loukopoulos , and d. e. browne , _ new j. phys . _* 13 * 023014 ( 2011 ) .a. b. sainz , n. brunner , d. cavalcanti , p. skrzypczyk , and t. vrtesi , _ phys .lett . _ * 115 * , 190403 ( 2015 ) .in this section we give an example of an assemblage that is altered upon taking the complex conjugation of the state and measurements on the provider s side . the state is and we consider the element of the assemblage generated by the projector .the element of the assemblage is then for .we immediately see that upon taking the complex conjugate of the state and projector , the respective element of the assemblage becomes .therefore if the client measures the element of the assemblage in the basis , they can differentiate between the two cases of the physical state being and its complex conjugate .we now aim to put a bound on where then we aim to prove the bound in eq .[ final ] by expanding out eq .[ newcondition ] where and .we focus on the case where and since the other case is essentially yields essentially the same bound for eq .[ newcondition ] . we , therefore , wish to find an upper bound for through repeated uses of lem .[ niceobs ] we obtain the first inequality is obtained in conjunction with the fact that and . in the proof of thm .[ thm1 ] it was shown that which then gives us the function in thm .in this section we use an epr - steering inequality to give us a result for cst .in particular , we prove a version of lem .[ niceobs ] . 
given this , all the steps in thm .[ thm1 ] apply .the epr - steering inequality we use is the following this can be written in the simplified form of where , with and being the pauli- and pauli- matrices respectively .it can be readily verified that the epr experiment violates this inequality and achieves a value of for the left - hand - side ; this is the maximal attainable value .given near - maximal violation we wish to prove a version of lem .[ niceobs ] .[ niceobsnew ] if for , then from the near - maximal violation of the epr - steering inequality we have that and .we will address the case where as all other cases follow the same proof strategy .we first note that we can write as . utilising this, we make a series of simple observations : note that we have phrased the lemma in terms of the variable and not as in the main text of the paper .we can relate the two since if the conditions of -cst are met then all probabilities differ from the ideal by , which then implies that , say , since each probability incurs an error of .putting this value of , we see that our analysis in the above lemma incurs a less favourable constant than in lem .[ niceobs ] .however , given the above lemma we may use exactly the same strategy in thm .[ thm1 ] to obtain a possibility result on self - testing based on the above epr - steering inequality now in terms of . for the epr experiment , -robust correlation - based one - sided self - testing based on the epr - steering inequality satisfying where the proof essentially follows that of thm .[ thm1 ] except now we use lem .[ niceobsnew ] every time lem .[ niceobs ] is used .one difference is now that for and for the pauli- matrix we have and likewise for and , the pauli- matrix .the other difference is in the final stage where we chose to be the pure state that is proportional to , i.e. where .we must bound the error associated with making this choice .we use the following observation that which in turn implies that observing that so if we choose we have that where the equality in the second line results from invariance of the absolute value under complex conjugation .therefore we have which then implies that and thus .this then completes our proof .for the epr experiment , let us consider the trace distance for all possible isometries and not just the swap isometry .an isometry will take the physical state to by introducing ancillae and applying a unitary to the physical state and ancillae . as discussed in sec .[ sec1 ] , the trace distance is then for .we write in terms of its schmidt decomposition for as some real number such that and . since is a state of a qubitit may be written as .given this , we obtain where and .we now maximize for all isometries so as to obtain a lower bound on .the value of will be maximized when and is in the linear span of .therefore , and will be the maximum of which then implies that .we now wish to put bounds on which can be easily attained since and . if we assume that then we have that and thus where in the last equation we take the taylor series expansion of and represents polynomials of degree and higher . in conclusion , given -closeness of the reduced states , there is an isometry such that .this then demonstrates that our swap isometry is not optimal for demonstrating such closeness between physical and reference states .however , the optimal isometry will be dependent on the basis and thus more complicated than the swap isometry .
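as a closing illustration , the ideal assemblage discussed in this paper is easy to reproduce numerically . the following is a small numpy sketch ( not the analysis code of this work ) that builds the assemblage elements for the maximally entangled two - qubit state with pauli - z and pauli - x measurements on the untrusted side , and evaluates a two - setting correlation functional of the kind appearing in the epr - steering inequality of the previous appendix ; the specific functional , its sqrt(2) bound and the value 2 quoted in the comments are standard textbook results assumed for illustration rather than the exact expression used above .

```python
import numpy as np

# pauli matrices and the ideal state |phi+> = (|00> + |11>)/sqrt(2)
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
phi = np.zeros(4)
phi[0] = phi[3] = 1.0 / np.sqrt(2.0)
rho = np.outer(phi, phi)

def assemblage_element(proj_a, rho):
    """sigma_{a|x} = tr_a[(pi_{a|x} (x) 1) rho], a 2x2 operator on the trusted qubit"""
    op = np.kron(proj_a, I2) @ rho
    return op.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)  # partial trace over the untrusted qubit

def projectors(observable):
    """rank-1 projectors onto the -1/+1 eigenspaces of a qubit observable"""
    vals, vecs = np.linalg.eigh(observable)
    return {int(round(v)): np.outer(vecs[:, i], vecs[:, i].conj()) for i, v in enumerate(vals)}

# untrusted measurements a_1 = z, a_2 = x paired with trusted observables b_1 = z, b_2 = x
value = 0.0
for A, B in [(Z, Z), (X, X)]:
    for a, proj in projectors(A).items():
        value += a * np.trace(assemblage_element(proj, rho) @ B).real

print(value)  # 2.0 for the ideal epr pair, above the sqrt(2) bound of the two-setting steering inequality
```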
|
the verification of quantum devices is an important aspect of quantum information , especially with the emergence of more advanced experimental implementations of quantum computation and secure communication . within this , the theory of device - independent robust self - testing via bell tests has reached a level of maturity now that many quantum states and measurements can be verified without direct access to the quantum systems : interaction with the devices is solely classical . however , the requirements for this robust level of verification are daunting and require high levels of experimental accuracy . in this paper we discuss the possibility of self - testing where we only have direct access to one part of the quantum device . this motivates the study of self - testing via epr - steering , an intermediate form of entanglement verification between full state tomography and bell tests . quantum non - locality implies epr - steering so results in the former can apply in the latter , but we ask what advantages may be gleaned from the latter over the former given that one can do partial state tomography ? we show that in the case of self - testing a maximally entangled two - qubit state , or ebit , epr - steering allows for simpler analysis and better error tolerance than in the case of full device - independence . on the other hand , this improvement is only a constant improvement and ( up to constants ) is the best one can hope for . finally , we indicate that the main advantage in self - testing based on epr - steering could be in the case of self - testing multi - partite quantum states and measurements . for example , it may be easier to establish a tensor product structure for a particular party s hilbert space even if we do not have access to their part of the global quantum system .
|
the design of magnetic devices used to store and process information crucially relies on a detailed understanding of how the magnetization dynamics are influenced not only by external magnetic fields but also by dissipation and thermal fluctuations . in the simplest scenario ,the time evolution of the magnetization is governed by the stochastic generalization of the landau - lifshitz - gilbert ( llg ) equation introduced by brown to study the relaxation of ferromagnetic nanoparticles . in recent years , much attention has been directed to the theoretical and experimental understanding of how the magnetization can be manipulated with spin polarized currents via the spin torque effect originally discussed by slonczewski and berger , an effect that can be described by a simple generalization of this equation .explicit analytical solutions to the stochastic llg equation are available in very few cases ; in more general circumstances information has to be obtained by direct numerical simulation of the stochastic equation , the study of the associated fokker - planck equation ( see ref . for a recent review ) , or via functional methods .the stochastic llg equation is a stochastic equation with _ multiplicative _ noise .it is a well - known fact that in these cases , a careful analysis of the stochastic integration prescriptions is needed to preserve the physical properties of the model . in the stochastic llg case one should force the modulus of the magnetization to stay constant during evolution , and different schemes ( ito , stratonovich or the generic ` alpha ' prescription ) require the addition of different drift terms to preserve this property ( for a recent discussion see ref . ) .all these issues are by now well - understood and they are also easy to implement in the continuous time treatment of the problem .nevertheless , this problem has not been analyzed in as much detail in the numerical formulation of the equation .indeed , most of the works focusing on the numerical analysis of the stochastic equation use cartesian coordinates . although there is nothing fundamentally wrong with this coordinate system , most algorithms based on it do not preserve , in an automatic way , the norm of the magnetization during time evolution .these algorithms require the explicit magnetization normalization after every time step , a trick that is often hidden behind other technical difficulties .this problem can be avoided only if the specific midpoint prescription ( stratonovich ) is used .given that the modulus of the magnetization should be constant by construction , a more convenient way to describe the time evolution should be to use the spherical coordinate system . despite its naturalness ,no detailed analysis of this case exists in the literature .the aim of this work is to present a numerical algorithm to solve the llg equation in the spherical coordinate system and to discuss in detail how different discretization prescriptions are related , an issue which is not trivial due to the multiplicative character of the thermal noise . in order to make precise statements, we focus on the study of the low - temperature dynamics of an ellipsoidal cobalt nanoparticle , a system that has been previously studied in great detail by other groups .our goal is to introduce the numerical method in the simplest possible setting and the uniaxial symmetric potential involved in this problem seems to us a very good choice .the paper is organized as follows .
in sec .[ sec : problem ] we present the problem .we first recall the stochastic llg equation in cartesian and spherical coordinates . in both caseswe discuss the drift term needed to ensure the conservation of the magnetization modulus as well as the approach to boltzmann equilibrium .we then describe the concrete problem that we solve numerically . in sec .[ sec : numerical ] we present the numerical analysis .we first introduce the algorithm and then discuss the results . section [ sec : conclusions ] is devoted to the conclusions .the stochastic landau - lifshitz - gilbert ( sllg ) equation in the landau formulation of dissipation reads where . is the product of , the gyromagnetic ratio relating the magnetization to the angular momentum , and , the vacuum permeability constant .the gyromagnetic factor is given by and in our convention with bohr s magneton and lande s -factor .the symbol denotes a vector product .for the first term in the right - hand - side describes the magnetization precession around the local effective magnetic field .the term proportional to is responsible for dissipation .thermal effects are introduced à la brown via the random field which is assumed to be gaussian distributed with average and correlations for all .the parameter is , for the moment , free and is determined below . is the dissipation coefficient and in most relevant physical applications , . an equivalent way of introducing dissipation was proposed by gilbert but we have chosen to work with the landau formalism in this work .this equation conserves the modulus of and takes to boltzmann equilibrium _ only if _ the stratonovich , mid - point prescription , stochastic calculus is used .otherwise , for other stochastic discretization prescriptions , none of these physically expected properties are ensured .the addition of a carefully chosen drift term is needed to recover the validity of these properties when other stochastic calculi are used .the generic modified sllg equation where the time - derivative has been replaced by the -covariant derivative ensures the conservation of the magnetization modulus and convergence to boltzmann equilibrium for any value of .the reason for the need of an extra term in the covariant derivative is that the chain - rule for time - derivatives of functions of the stochastic variable is not the usual one when generic stochastic calculus is used .it involves an additional term ( for a detailed explanation see ref . ) .in addition , having modified the stochastic equation in this way , one easily proves that the associated fokker - planck equation is independent of and takes the magnetization to its equilibrium boltzmann distribution at temperature provided the parameter is given by where is the volume of the sample that behaves as a single macrospin , the boltzmann constant , and the saturation magnetization .the parameter is constrained to vary in $ ] .the most popular conventions are the ito one that corresponds to and the stratonovich calculus which is defined by .note that this is not in contradiction with the claim by garcía - palacios that ito calculus does not take _ his _ llg equation to boltzmann equilibrium as he keeps , as the starting point , the same form for the ito and stratonovich calculations and , consequently , he obtains the boltzmann result only for stratonovich rules .
as the modulus of the magnetization is conserved , this problem admits a more natural representation in spherical coordinates .the vector defines the usual local basis ( ) with and the sllg equation in this system of coordinates becomes \begin{aligned} {\rm d}_t \theta & = \frac{\gamma_0}{1+\eta^2\gamma_0^2} \left[ \eta\gamma_0 ( h_{{\rm eff},\theta} + h_\theta ) + ( h_{{\rm eff},\phi} + h_\phi ) \right] \; , \label{eq : spherical - landau1} \\ \sin\theta \, {\rm d}_t \phi & = \frac{\gamma_0}{1+\eta^2\gamma_0^2} \left[ \eta\gamma_0 ( h_{{\rm eff},\phi} + h_\phi ) - ( h_{{\rm eff},\theta} + h_\theta ) \right] \; , \label{eq : spherical - landau2} \end{aligned} where the and components of the stochastic field are defined as and similarly for .we introduce an adimensional time , , and the adimensional damping constant , and we normalize the field and the magnetization by defining , , , , to write the equations as \begin{aligned} {\rm d}_\tau \theta & = \frac{1}{1+\eta_0^2} \left[ \eta_0 ( h_{{\rm eff},\theta} + h_\theta ) + ( h_{{\rm eff},\phi} + h_\phi ) \right] \; , \label{eq : spherical - landau - adim} \\ \sin\theta \, {\rm d}_\tau \phi & = \frac{1}{1+\eta_0^2} \left[ \eta_0 ( h_{{\rm eff},\phi} + h_\phi ) - ( h_{{\rm eff},\theta} + h_\theta ) \right] \; . \label{eq : spherical - landau - adim2} \end{aligned} the random field statistics is now modified to and .we focus here on the dynamics of a uniformly magnetized ellipsoid with energy per unit volume is the external magnetic field and are the anisotropy parameters .this case has been analyzed in detail in ref . and is used as a benchmark with which to compare our results .we normalize the energy density by , and write the effective magnetic field is .once normalised by , it reads we study the dynamics of a cobalt nanoparticle of prolate spherical form with radii [ nm ] ( in the easy - axis direction ) and [ nm ] ( in the and directions , respectively ) , yielding a volume [ m .there is no external applied field , the saturation magnetization is [ a / m ] , the uniaxial anisotropy constant in the direction is [ j / m , and the temperature is [ k ] . in the following we work with the adimensional damping constant and the physical value for it is . for this nanoparticle one has ( where are the demagnetization factors ) , and since .the constant takes the value [ m/(as ) ] .we recall that in ref .the time - step used in the numerical integration is [ ps ] , that is equivalent to . for future reference, we mention here that in the absence of an external field , , the energy density can be written in terms of the component of the magnetization as . note also that in this system the anisotropy - energy barrier is , and therefore the ratio , that indicates that the dynamics take place in the low - temperature regime .in this section we give some details on the way in which we implemented the numerical code that integrates the equations , and we next present our results .first , we stress an important fact explained in ref . :the random fields and are not gaussian white noises but acquire , due to the prefactors that depend on the angles , a more complex distribution function .therefore , we do not draw these random numbers but the original cartesian components of the random field which are uncorrelated gaussian white noises .we then recover the field and by using eqs .( [ eq : def - htheta ] ) and ( [ eq : def - hphi ] ) and the time - discretization of the product explained below .most methods used to integrate the sllg equation rely on explicit schemes .such are the cases of the euler and heun methods . while the former converges to the ito solution , the latter leads to the stratonovich limit .
to preserve the module of , in these algorithmsit is necessary to normalize the magnetization in each step , a nonlinear modification of the original sllg dynamics .implicit schemes , on the other hand , are very stable and , for example , the mid - point method ( stratonovich stochastic calculus ) provides a simple way to automatically preserve the module under discretization . in what follows ,we describe our numerical - implicit scheme which keeps the module length constant and , unlike previous approaches , is valid for any discretization prescription .next , we define the -prescription angular variables according to with . in the following we use the short - hand notation , , and so on .the discretized dynamic equations now read and with \nonumber \\ & & + \\frac{1}{1+\eta_0 ^ 2 } \left [ \delta w_\phi + \eta_0\ \delta w_\theta \right ] \label{eq : first } \ ; , \\ f_\phi & \equiv & - \ ( \phi_{\tau+\delta \tau}-\phi_\tau ) \nonumber \\ & & + \\frac{\delta \tau}{1+\eta_0 ^ 2 } \left[\frac { \eta_0 \h_{{\rm eff},\phi}^\alpha - h_{{\rm eff},\theta}^\alpha } { \sin \theta^\alpha_\tau}\right ] \nonumber \\ & & + \\frac{1}{1+\eta_0 ^ 2 } \left[\frac { \eta_0 \\delta w_\phi - \delta w_\theta } { \sin \theta^\alpha_\tau}\right ] \; , \label{eq : second}\end{aligned}\ ] ] where , the effective fields at the -point are and , and and .as we said above , we first draw the cartesian components of the fields ( ) as where the are gaussian random numbers with mean zero and variance one , and we then calculate and using eqs .( [ eq : def - htheta ] ) and ( [ eq : def - hphi ] ) .the numerical integration of the discretised dynamics consists in finding the roots of the coupled system of equations and with the left - hand sides given in eqs .( [ eq : first ] ) and ( [ eq : second ] ) .we used a newton - raphson routine and we imposed that the quantity be smaller than . to avoid singular behavior when the magnetization gets too close to the axis , or , we apply in these cases a rotation of the coordinate system around the axis .all the results we present below , averages and distributions , have been computed using independent runs .s showing rapid fluctuations around zero for ( a ) and ( b ) and telegraphic noise with sudden transitions between the up and the down magnetization configurations for ( ) .the initial condition is , , , , and that is equivalent to . in this and all other figuresthe working temperature is k.,width=264 ] we start by using the stratonovich discretization scheme , , to numerically integrate the stochastic equation using the parameters listed in sec . [sec : model ] which are the same as the ones used in ref . .we simply stress here that these are typical parameters ( in particular , note the small value of the damping coefficient ) .although we solved the problem in spherical coordinates , we illustrate our results in cartesian coordinates [ using eqs .( [ sphtrans ] ) to transform back to these coordinates ] to allow for easier comparison with the existing literature .[ [ trajectories . 
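to make the structure of the scheme concrete , the following is a minimal python sketch of a single implicit update of the pair of angles : the cartesian noise increments are drawn first , projected onto the local basis evaluated at the alpha - point , and the two residuals playing the role of eqs . ( [ eq : first ] ) and ( [ eq : second ] ) are driven to zero with a newton - type root finder . the toy uniaxial effective field , the parameter values and the noise normalization are illustrative assumptions and not the production code or the cobalt - nanoparticle parameters used in this paper .

```python
import numpy as np
from scipy.optimize import fsolve

def step(theta, phi, dtau, eta0=0.1, D=0.01, alpha=0.5, chi=1.0, rng=np.random.default_rng()):
    """one implicit alpha-point update of (theta, phi); toy field and parameters, not the production code"""
    dW = np.sqrt(2.0 * D * dtau) * rng.standard_normal(3)       # cartesian noise increments

    def residual(x):
        th1, ph1 = x
        tha = theta + alpha * (th1 - theta)                      # alpha-point angles
        pha = phi + alpha * (ph1 - phi)
        e_th = np.array([np.cos(tha) * np.cos(pha), np.cos(tha) * np.sin(pha), -np.sin(tha)])
        e_ph = np.array([-np.sin(pha), np.cos(pha), 0.0])
        m = np.array([np.sin(tha) * np.cos(pha), np.sin(tha) * np.sin(pha), np.cos(tha)])
        h_eff = chi * m[2] * np.array([0.0, 0.0, 1.0])           # toy uniaxial effective field
        h_th, h_ph = h_eff @ e_th, h_eff @ e_ph
        w_th, w_ph = dW @ e_th, dW @ e_ph                        # noise projected on the local basis
        f_th = -(th1 - theta) + (dtau * (eta0 * h_th + h_ph) + (eta0 * w_th + w_ph)) / (1.0 + eta0**2)
        f_ph = -(ph1 - phi) + (dtau * (eta0 * h_ph - h_th) + (eta0 * w_ph - w_th)) / ((1.0 + eta0**2) * np.sin(tha))
        return [f_th, f_ph]

    return fsolve(residual, x0=[theta, phi])                     # newton-type root finding

theta, phi = np.pi / 4.0, 0.0
for _ in range(10):
    theta, phi = step(theta, phi, dtau=1.0e-3)
```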
] ] trajectories .+ + + + + + + + + + + + + figure [ fig : uno ] displays the three cartesian components of the magnetization , , , and , as a function of time for a single run starting from an initial condition that is perfectly polarized along the axis , .the data show that while the and components fluctuate around zero , the component has telegraphic noise , due to the very fast magnetization reversal from the ` up ' to the ` down ' position and vice versa .indeed , the working temperature we are using is rather low , but sufficient to drive such transitions . [ [ equilibrium - criteria . ] ] equilibrium criteria .+ + + + + + + + + + + + + + + + + + + + + in fig .[ fig : dos ] we show the relaxation of the thermal average of the component , , evolving from the totally polarized initial condition , , during a maximum adimensional time . in the insetone can see temporal fluctuations around zero in the averages of the other two components , and .the error bars in these and other plots are estimated as one standard deviation from the data average , and when these are smaller than the data points we do not include them in the plots . the data in fig .[ fig : dos ] demonstrate that for times shorter than the system is still out of equilibrium while for longer times this average is very close to the equilibrium expectation , .component as a function of .insets : dependence of the other two components , and . , , and .,width=264 ] , on a linear - log scale , obtained as explained in the text for the three values of given in the key compared to the exact equilibrium law ( solid line ) .inset : the parameter defined in eq . ( [ eq : s - def ] ) as a function of .the upper ( dotter ) curve was computed using the exact pdf , while the lower ( solid ) one , which gets closer to zero , was computed using a finite number of bins to approximate the exact .( b ) , on a double - linear scale , for the same runs . , , and .,title="fig:",width=264 ] , on a linear - log scale , obtained as explained in the text for the three values of given in the key compared to the exact equilibrium law ( solid line ) .inset : the parameter defined in eq .( [ eq : s - def ] ) as a function of .the upper ( dotter ) curve was computed using the exact pdf , while the lower ( solid ) one , which gets closer to zero , was computed using a finite number of bins to approximate the exact .( b ) , on a double - linear scale , for the same runs . , , and .,title="fig:",width=264 ] , on a linear - log scale , for three values of given in the key , compared to the exact equilibrium law ( solid line ) .inset : averages of and as a function of .the parameters are the same as in fig .[ fig : dos ] but the initial condition is ., width=264 ] a more stringent test of equilibration is given by the analysis of the probability distribution function ( pdf ) of the three cartesian components , , and . in fig .[ fig : tres ] ( a ) we present the numerical pdf s of , , and we compare the numerical data to the theoretical distribution function in equilibrium .we computed the former by sampling over the second half of the temporal window , that is , by constructing the histogram with data collected over , and then averaging the histograms over independent runs . 
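this comparison is straightforward to reproduce . the sketch below bins sampled values of the easy - axis component into a histogram and compares it to a boltzmann weight of the uniaxial form exp(sigma m_z^2) , quantifying the agreement with a simple kullback - leibler - type number ; both the weight and this number are assumed stand - ins for the exact equilibrium pdf and the parameter defined in the next paragraph .

```python
import numpy as np

def equilibrium_pdf(mz, sigma=7.0):
    """assumed uniaxial boltzmann weight p(m_z) ~ exp(sigma * m_z**2), sigma playing the role of k1*v/(kb*t)"""
    w = np.exp(sigma * mz**2)
    return w / np.trapz(w, mz)

def closeness(mz_samples, sigma=7.0, bins=51):
    """kl-divergence-like number comparing the binned pdf of m_z with the assumed equilibrium law"""
    hist, edges = np.histogram(mz_samples, bins=bins, range=(-1.0, 1.0), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    peq = equilibrium_pdf(centers, sigma)
    good = hist > 0
    return np.sum(hist[good] * np.log(hist[good] / peq[good])) * (edges[1] - edges[0])

# samples drawn directly from the equilibrium law (rejection sampling) give a value close to zero
rng = np.random.default_rng(1)
mz = rng.uniform(-1.0, 1.0, 200000)
keep = rng.uniform(size=mz.size) < np.exp(7.0 * (mz**2 - 1.0))
print(closeness(mz[keep]))
```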
for the equilibrium note that the equilibrium probability density of the spherical angles is where , which implies } \ dm_z \ ; .\label{eq : equil - pdf - mz}\end{aligned}\ ] ] here is the partition function and , for the parameters used in the simulation , and .it is quite clear from fig .[ fig : tres ] that the numerical curves for the two shortest are still far from the equilibrium one , having excessive weight on positive values of .the last curve , obtained for the longest running time , is , on the contrary , indistinguishable from the equilibrium one in this presentation .a more quantitative comparison between numerical and analytic pdf s is given in the inset in fig .[ fig : tres ] ( a ) , where the probability distribution ` h - function ' with given in eq .( [ eq : equil - pdf - mz ] ) , is plotted as a function of the inverse time .the two sets of data in the inset correspond to computed with the continuous analytic form ( [ eq : equil - pdf - mz ] ) , data falling above , and with a discretized version of it , where the same number of bins as in the numerical simulation is used ( specifically , 51 ) , and data falling below and getting very close to zero for the longest used .the latter is the correct way of comparing analytic and numerical data and yields , indeed , a better agreement with what was expected . finally , in fig .[ fig : tres ]( b ) we show the pdf of for the same three used in fig . [fig : tres ] ( a ) , and we observe a faster convergence to an equilibrium distribution with a form that is very close to a gaussian . for symmetry reasons the behavior of is the same . figure [ fig : cuatro ] shows the pdf s for the same set of parameters but starting from the initial condition .the approach to equilibrium is faster in this case : all curves fall on top of the theoretical one .insets show the time dependence of and which still fluctuate around zero with larger temporal fluctuations for the latter than the former .we reckon here that the fluctuations of and are quite different .the oscillations of around zero are due to the telegraphic noise of this component and to the fact that the average is done over a finite number of runs .the amplitudes of these oscillations tend to zero with an increasing number of averages .we conclude this analysis by stating that the dynamics in the spherical coordinate system for the stratonovich discretization scheme behave correctly , with the advantage of keeping the norm of the magnetization fixed by definition . although it was shown in ref . that in the limit every discretization of the stochastic equation leads ( at equilibrium ) to the boltzmann distribution , the numerical integration of the equationsis done at finite and then both , the time - dependent and the equilibrium averaged observables may depend on . with this in mind , we investigated which discretization scheme is more efficient in terms of computational effort .the aim of this section is to study the dependence of the numerical results for different values of and to determine for which one can get closer to the continuous - time limit ( ) for larger values of . as a function of for , and , , and .inset : vs for and the same with the same symbol code as in the figure .the dashed black line is a reference and corresponds to and .the initial condition is ., width=264 ] figure [ fig : cinco ] shows the temporal dependence of the for stratonovich calculus , i.e. 
for , for a small window of time and from the initial condition .a very fast decay followed by a slow relaxation is observed .the phenomena can be well fitted by a sum of two exponential functions , one ( of small amplitude ) describing the rapid intra - well processes and the another one the dominant slow over - barrier thermo - activation .concretely , we used and we found that the best description of data is given by that is and , consistently with statements in ref . .in addition , we compared our result for to the one arising from eq .( 3.6 ) in ref . for our parametersand we found which is less than our numerical estimate , a very reasonable agreement , in our opinion .most importantly , we reckon that the time - dependent results do not depend strongly on for ( see fig . [fig : cinco ] , where data for , , and prove this claim ) and we can assert that the master curve is as close as we can get , for the numerical accuracy we are interested in , to the one for , that is , to the correct relaxation . instead , for other discretization prescriptions , the dependence on is stronger .for example , for ( ito calculus ) the curves for and are still notably different from each other ( see the insert to fig . [fig : cinco ] ) , and they have not yet converged to the physical time - dependent average .even smaller values of are needed to get close to the asymptotically correct relaxation , shown by the dotted black line .we do not show the pdf s here , but consistently , they are far away from the equilibrium one for these values of . for , , and , and , compared to the exact equilibrium law ( solid line ) .the initial condition is .inset : distributions for the same runs and using the same symbol code as in the rest of the figure , compared to the limit ( equilibrium ) function shown in fig .[ fig : tres ] ( b ) for and the longest .,width=264 ] in fig .[ fig : seis ] we use the initial condition to see whether the efficiency of the ito calculus improves in this case .although the values of are very close to the expected vanishing value both distributions , ( main panel ) and ( inset ) , are still far from equilibrium .we conclude that also for this set of initial conditions smaller are needed to reach the continuous - time limit .we have investigated other values of and in all cases we have found that convergence is slower than for the case . we conclude that the stratonovich calculus is ` more efficient ' than all other -prescriptions in the sense that one can safely use larger values of ( and therefore reach longer times ) in the simulation .this does not mean that other discretization schemes yield incorrect results . for must use smaller values of the time - step to obtain the physical behavior . as a function of for two values of the time increment , ( open symbols ) and ( filled symbols ) . and several values of the damping coefficient ( shown in different colors ) as defined in the legend .( b ) ito calculus , , and .curves correspond to and different ( from top to bottom ) .the solid black line displays for , , and .inset : the parameter defined in eq .( [ eq : s - def - dos ] ) for these curves , taking as a reference the curve for . ,title="fig:",width=264 ] as a function of for two values of the time increment , ( open symbols ) and ( filled symbols ) . 
and several values of the damping coefficient ( shown in different colors ) as defined in the legend .( b ) ito calculus , , and .curves correspond to and different ( from top to bottom ) .the solid black line displays for , , and .inset : the parameter defined in eq .( [ eq : s - def - dos ] ) for these curves , taking as a reference the curve for . , title="fig:",width=264 ] previously , we studied the magnetization relaxation for a physically small damping coefficient , . under these conditions relaxationis very slow and it is difficult to reach convergence for generic values of . to overcome this problem , we increased slightly the damping up to .as a consequence , and because we are still in the low damping regime , relaxation to equilibrium is expected to be faster [ in fig . [fig : siete ] ( a ) we show below that this statement is correct ] .note that more subtle issues can arise in the non axially symmetric case if initial conditions are not properly chosen .then , in this subsection we check whether the dependence found for the calculus improves under these new dissipation conditions . in fig .[ fig : siete ] ( a ) we test the dependence of for and five values of ranging from to and increasing by a factor of two .filled and open data points of the same color correspond to and , respectively .the agreement between the two data sets is very good for all .indeed the agreement is so good that the data are superimposed and it is hard to distinguish the different cases .the curves also show that the dynamics are faster for increasing .figure [ fig : siete ] ( b ) displays the decay of as a function of time for and a rather large value of the damping coefficient , , for different time increments , . the curves tend to approach the reference one shown by the solid black line and corresponding to for decreasing values of .a quantitative measure of the convergence rate is given by another parameter , defined as \ ; ,\label{eq : s - def - dos}\ ] ] and shown in the inset for . here, is the average of for and , while is the curve corresponding to other values of and .for all -schemes tends to zero for .note that for this parameter is very close to zero for all the values shown in fig .[ fig : siete ] ( b ) , confirmation of the fact that this prescription yields very good results for relatively large values of and it is therefore ` more efficient ' computationally .in this paper we have introduced a numerical algorithm that solves the the sllg dynamic equation in the spherical coordinate system with no need for artificial normalization of the magnetization .we checked that the algorithm yields the correct evolution of a simple and well - documented problem , the dynamics of an ellipsoidal magnetic nanoparticle .we applied the algorithm in the generic ` alpha'-discretization prescription .we showed explicitly how the finite dynamics depend on , despite the fact that the final equilibrium distribution is -independent .we showed that at least for the case reported here , the stratonovich mid - point prescription is the ` more efficient one ' in the sense that the dependence of the dynamics on the finite value of is less pronounced so , larger values of can be used to explore the long time dynamics .we think it would be worthwhile to explore , both analytically and numerically , if this is a generic result of the sllg dynamics . 
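a convergence check of this kind is simple to script . in the sketch below the mean squared difference between a relaxation curve and the reference curve obtained with the smallest time step is used as a stand - in for the parameter of eq . ( [ eq : s - def - dos ] ) , whose exact definition is given in the text ; the synthetic curves and their time - step dependence are made up purely for illustration .

```python
import numpy as np

def convergence_measure(mz_ref, mz_test):
    """mean squared difference between two <m_z>(tau) curves; an assumed stand-in for eq. (s-def-dos)"""
    return np.mean((np.asarray(mz_test) - np.asarray(mz_ref)) ** 2)

# synthetic relaxation curves whose bias shrinks as the time step is reduced
tau = np.linspace(0.0, 100.0, 400)
reference = np.exp(-tau / 30.0)                           # curve obtained with the smallest time step
for dt in (4.0, 2.0, 1.0, 0.5):
    curve = np.exp(-tau / (30.0 * (1.0 + 0.05 * dt)))     # made-up dt-dependent relaxation time
    print(dt, convergence_measure(reference, curve))      # decreases towards zero as dt -> 0
```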
_ a priori _ , it is not clear what will be the optimal prescription to deal with other problems such as a system under a non - zero longitudinal external magnetic field or for a more general non - axially symmetric potential .finally , we mention that it is well known that for a particle on a line with multiplicative noise , the addition of an inertial term acts as a regularization scheme that after the zero mass limit `` selects '' the stratonovich prescription ( see ref . ) . for the case of magnetization dynamicsnon markovian generalizations of the llg equation have been considered in refs . .it could be interesting to analyze in this case , how the markovian limit relates to any specific stochastic prescription .we plan to report on this issue in the near future .we thank c. aron , d. barci and z. gonzlez - arenas for very helpful discussions on this topic .f.r . acknowledges financial support from conicet ( grand no . pip 114 - 201001 - 00172 ) and universidad nacional de san luis , argentina ( grand no .proipro 31712 ) and thanks the lpthe for hospitality during the preparation of this work. l.f.c . and g.s.l .acknowledge financial support from foncyt , argentina ( grand no . pict-2008 - 0516 ) .
|
we introduce a numerical method to integrate the stochastic landau - lifshitz - gilbert equation in spherical coordinates for generic discretization schemes . this method conserves the magnetization modulus and ensures the approach to equilibrium under the expected conditions . we test the algorithm on a benchmark problem : the dynamics of a uniformly magnetized ellipsoid . we investigate the influence of various parameters , and in particular , we analyze the efficiency of the numerical integration , in terms of the number of steps needed to reach a chosen long - time with a given accuracy .
|
the rapid increase in the number of astronomical data sets and even faster increase of overall data volume demands a new paradigm for the scientific exploitation of optical and near - infrared imaging surveys .historical surveys have been digitized ( poss and its southern counterpart ) or are in the process of being digitized . in recent years surveyshave been performed which cover hundreds or thousands of square degrees up to the whole sky ( sdss , 2mass , cfhtls , etc . ) .many more are in progress or coming up with increasing spatial resolution , depth , and survey areas ( omegacam on vst , vircam on vista , pan - starrs , lsst , etc . ) .the data rate of existing surveys is rapidly approaching _ terabytes _ per night , leading to survey volumes well into the _ petabyte _ regime and the new surveys will add many tens of petabytes to this .hundreds of terabytes of data will start entering the system when eso s omegacam camera starts operations in chile in late-2011 .several large surveys plan to use the information system to manage their data : the 1500 deg survey , the vesuvio survey of nearby superclusters , the omegawhite white dwarf binary survey and the omegatrans search for transiting variables .quality control is typically one of the largest challenges in the chain from raw data of the `` sensor networks '' to scientific papers .it requires an environment in which all non - manual qualification is automated and the scientist can graphically inspect where needed by easily going back and forth through the data ( the pixels ) and metadata ( everything else ) of the whole processing chain for large numbers of data products .the full quality control mechanisms are treated in complete detail in the quality control paper .the really novel aspect of this new paradigm is the long - term preservation of the raw data and the ability of re - calibrating it to the requirements of new science cases .the data of the majority of these surveys is fully public : any astronomer is entitled to a copy of the data . therefore the same survey data is used for not only science cases within the original plan , but many new science cases the original designers of the survey were not planning to do themselves or did not foresee . to be able to do thissuccessfully requires that everyone is provided access to detailed information on the existing calibration procedures and resulting quality of the data at every stage of the processing , that is , have access to the data and the metadata , including process configuration at every step in the chain from raw data to final data products . in this paperwe describe the reduction of data in the information system , generally referred to as the ( hereafter ` awe ` ) . the processing of data from both the wfi and omegacam instruments has been used to qualify the pipeline , the results of which have been or will be included in separate publications , for example .the remainder of this section briefly describes some key concepts of ` awe ` covered in detail elsewhere : previously in and more currently in .sections [ sec : calib ] and [ sec : image ] describe how an instrument is calibrated and how science data is processed .finally , sect .[ sec : summa ] presents the summary .context is the primary tool of project managers in ` awe ` .each _ process target _( i.e. 
, the result of some processing step , see sect .[ sec : objec ] ) in ` awe ` is created at a specific privilege level .privilege levels are analogous to the permission levels of a unix / linux file system ( e.g. , privilege levels 1 , 2 , 3 map loosely to permission levels user , group , other ) . to allow access to their desired set of objects , users can set their privilege level and their project .this concept of _ context _ is completely about visibility of the objects in ` awe ` and nothing else .proprietary data is protected from access by all but authorized users and undesirable data can be hidden for any purpose ( e.g. , to use project - specific calibrations instead of general ones ) .all processing is done within this framework , allowing complete control over what is processed and how , and how it is _ published _ between project groups and to the world .visibility for processing targets is not only governed by the privilege level , but also by validity .three properties dictate validity : 1 . `is_valid ` manual validity flag 2 .` quality_flags ` automatic validity flag 3 ._ timestamps _ validity ranges in time ( for calibrations only ) determining what needs to be processed and how is indicated by setting any or all of the above flags .for instance , obviously poor quality data can be flagged by setting its ` is_valid ` flag to 0 , preventing it from ever being processed automatically .the calibrations used are determined by their _ timestamps _ ( which calibrations are valid for the given data ? ) and the quality of processed data by the automatic setting of its ` quality_flag ` ( is the given data good enough ? ) .good quality data can then be flagged for promotion ( ` is_valid ` ) and eventually promoted in privilege by its creator ( published from level 1 to 2 ) so it can be seen by the project manager who will decide if it is worthy to be promoted once again ( published from level 2 to 3 or higher ) to be seen by the greater community .` awe ` uses its federated database to link all data products to their progenitors ( dependencies ) , creating a full data lineage of the entire processing chain .this allows creation of complete data provenance for any data item in the system at any time .raw data is linked to the final data product via database links within the _ data object _ , allowing all information about any piece of data to be accessed instantly .see for a detailed description of the ` awe ` s data lineage implementation .this data linking uses the power of object - oriented programming to create this framework in a natural and transparent way .illustrating their inheritance relationship to each other .the classes without color do not appear in the previous figure , but are nonetheless part of the hierarchy and are shown for clarity .every target inherits from ` dbobject ` ( a database object ) , but only those with associated bulk data ( typically a file stored on a dataserver ) inherit from ` dataobject ` ., width=355 ] ` awe ` uses the advantages of object - oriented programming ( oop ) to process data in the simplest and most powerful ways .in essence , it turns the aforementioned _ data objects _ into oop _objects _ , called _ process targets _ ( or ` processtarget`s ) , that are instances of classes with attributes and methods that can be inherited ( see fig . 
[ fig : targe ] and [ fig : class ] for an overview of an object model ) .each of these ` processtarget ` instances knows of all of its local and linked metadata , and knows how to process itself .each persistent attribute of an object is linked to metadata or to another object that itself contains links to its own metadata .the code for ` awe ` is written in python , a programming language highly suitable for oop .consequently , _ classes _ are associated with the various conventional calibration images , data images , and other derived data products .for example , in ` awe ` , bias exposures become _ instances _ of the ` rawbiasframe ` class , and twilight ( sky ) flats become instances of the ` rawtwilightflatframe`class . these instances of classes are the `` objects '' of oop . for the remainder of this document, the class names of objects , their properties , and methods will be in ` teletype ` font for more clear identification .the most unique aspect of ` awe ` is its ability to process data based on the final desired result to an arbitrary depth .in other words , the data is _ pulled _ from the system by the user .the desired result is the _ target _ to be processed , and the framework used is called _target processing_. target processing uses methods similar to those found in the unix / linux ` make ` utility .when a target is requested , its dependencies are checked to see if they are _ up - to - date_. if there is a newer dependency or if the requested target does not exist , the target is ( re)made . this process is recursive and is an example of _backward chaining_. at the base of ` awe ` target processing is the concept of _ backward chaining_. contrary to the typical case of forward chaining ( e.g. , ` objectn ` is processed into ` objectn+1 ` is processed into ` objectn+2 ` , etc . ) , ` awe ` database links allow the dependency chain to be examined from the intended target ( even if it does not yet exist ) all the way back to the raw data .the above scenario would then look like : if ` targetm ` is up - to - date , check if ` targetm-1 ` is up - to - date ; if ` targetm-1 ` is up - to - date , check if ` targetm-2 ` is up - to - date ; etc . ,processing as necessary until ` targetm ` ( and all targets it depends on ) exists and is up - to - date .this is the ` awe ` implementation of backward chaining that is used in target processing ( see fig .[ fig : targe ] for an example with astronomical data ) . as mentioned earlier , conventional astronomical calibration images / products as well as science products are collectively referred to as _ process targets _ and inherit from the ` processtarget ` class .each ` processtarget ` has an associated _ processing parameters _ object , an instance of a class named after the respective process target class ( e.g. , ` sometarget.sometargetparameters ` ) which stores configurable parameters that guide the processing or reprocessing of that target .those ` processtarget`s that use external programs in their derivation may have additional objects associated with them which contain the configuration of the external program that was used .these processing parameters are stored in an object linked to the ` processtarget ` for comparison by the system and to allow all persons involved in survey operations to discover which settings resulted in the best data reduction .
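the make - style logic described above can be illustrated with a toy python sketch ; these are hypothetical classes , not the real ` processtarget ` implementation . each target checks its dependencies , remakes the stale ones first , and only then remakes itself .

```python
class Target:
    """toy stand-in for a process target with make-style dependency checking"""
    def __init__(self, name, dependencies=(), timestamp=0):
        self.name, self.dependencies, self.timestamp = name, list(dependencies), timestamp

    def up_to_date(self):
        return self.timestamp > 0 and all(self.timestamp >= d.timestamp and d.up_to_date()
                                          for d in self.dependencies)

    def make(self, clock=[0]):
        for d in self.dependencies:        # backward chaining: build stale dependencies first
            if not d.up_to_date():
                d.make(clock)
        if not self.up_to_date():
            clock[0] += 1
            self.timestamp = clock[0]      # 'process' the target
            print("made", self.name)

raw = Target("RawScienceFrame", timestamp=1)
bias = Target("BiasFrame", timestamp=2)
reduced = Target("ReducedScienceFrame", [raw, bias])
coadd = Target("CoaddedRegriddedFrame", [reduced])
coadd.make()   # makes ReducedScienceFrame first, then CoaddedRegriddedFrame
```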
` awe ` combines all of the above concepts into a coherent archiving and processing system .all the information about a particular instrument and its calibration and processing history is stored in the federated database within the object - oriented data model with full linking of the data lineage .the values of the process parameters of all objects in the dependency chain and all the results of the integrated ( and manual ) quality controls of the _ target _ of interest ( regardless of visibility or existence ) are used to determine if that _target _ can or should be ( re)built and how .this data pulling is the heart of ` awe ` and is called _ target processing _ ( see fig . [fig : tproc ] and http://process.astro-wise.org/ ) . as mentioned earlier , ` awe ` does not provide as the ultimate end of the processing chain a static data release .the system allows for survey data to be reprocessed for any reason and for any purpose . if a newer , better calibration is made , or if a different purpose requires a different processing technique , the data can be easily reprocessed .this is only possible when the raw survey data is retained in its original form . in ` awe` raw data is always preserved .target processing does not use static information to determine what gets processed how . as seen in all the previous sections , all the survey data ,its dependency linkages and processing parameters are all reviewed to allow any target to be ( re)processed on - demand as needed .all these dependencies create a built - in workflow , automatically processing only those targets that need it .this on - the - fly ( re)processing is the hallmark of the ` awe ` information system .the philosophy of ` awe ` is to share improved insight in calibrations . in ` awe ` , calibration scientists can , over time , have many versions of calibration results at their disposal . from thisthey determine ( subtle ) long term trends in instrument , telescope and atmospheric behaviour and can collaborate to improve the calibration procedures for that instrument in ` awe ` accordingly .the _ complete observational system _ ( generally termed `` the instrument '' for simplicity ) eventually becomes calibrated over its full operational period as opposed to a series of individual nights calibrated from data in a limited time window .[ fig : calib ] shows the schematic view of the pixel calibrations pipeline .this gives an overview of the flow of the pixel calibrations to be described in the coming sections .it is continued in the photometric pipeline schematic in fig .[ fig : photo ] . .the recipes , also called ` tasks ` , used to produce various ` processtarget`s are indicated in each box ( with their data product in parentheses ) and described in the various sections .the arrows connecting them indicate the direction of processing .note that the sections with the hatched boxes are optional branches in this pipeline , and the arrow at the end leads to the beginning of the photometric pipeline schematic in fig . 
[ fig : photo ] .also note , in order to simplify this diagram , the ` gainlinearity ` , ` darkcurrent ` and ` nightskyflatframe`objects have been omitted . the recipes , also called ` tasks ` , used to produce various ` processtarget`s are indicated in each box ( with their data product in parentheses ) and described in the various sections .the arrows connecting them indicate the direction of processing .note that the sections with the hatched boxes are optional branches in this pipeline , and the input follows from the pixel calibrations pipeline shown in fig .[ fig : calib ] . in the ` awe ` , calibration objects have a set validity range in time or per frame object that depends upon the calibration object ( the defaults are specified per calibration object in table [ tab : valid ] below ) . the default validity time range ( ` timestamp_start ` to ` timestamp_end ` ) can be altered on the command - line using context methods ( see sect . [sec : conte ] ) , or via the calts web - service ( see fig . [fig : calts ] ) . table [ tab : valid ] : default validities of calibration ` processtarget`s . all time spans are centered on local midnight of the day the source observations were taken unless otherwise indicated . the most basic outcome of the image pipeline is the ` reducedscienceframe ` .conventional de - trending steps are performed when making this frame : 1 .overscan correction and trimming 2 .subtraction of the ` biasframe ` 3 .division by the ` masterflatframe ` 4 .scaling and subtraction of a ` fringeframe ` if indicated 5 .multiplication by an ` illuminationcorrectionframe ` if indicated 6 .creation of the individual weight image 7 .computation of the image statistics please note that : * the overscan correction can be a null correction ( i.e. , no modification of the pixel values ) * the illumination correction step ( i.e. , application of a photometric flat field ) has had a sextractor - created background removed and then reapplied after the multiplication , and the correction only occurs when requested and if a suitable ` illuminationcorrectionframe ` exists in addition to the effects of hot and cold pixels , individual images may be contaminated by saturated pixels , cosmic ray events , and satellite tracks . for purposes of subsequent analysis and image combination , affected pixels unique to each image need to be assigned a weight of zero in that image s weight map .since the variance is inversely proportional to the gain , which is proportional to the flatfield , the weight is given by : where is the weight of a given pixel , is the gain of a given pixel ( taken from the flat field ) , and the rest of the members are binary maps where good pixels have a value of 1 and bad pixels have a value of 0 .these maps are , respectively , a ` hotpixelmap ` , a ` coldpixelmap ` , a ` saturatedpixelmap ` , a ` cosmicmap ` , and a ` satellitemap ` , the last three being calculated directly from the ` reducedscienceframe ` after detrending .saturated pixels are pixels whose counts exceed a certain threshold .in addition , saturation of a pixel may lead to _dead _ neighbouring pixels , whose counts lie below a lower threshold .
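the weight construction written out above can be sketched in numpy as follows , assuming a simple product of the gain ( flat - field ) image with the binary good - pixel masks ; the exact normalization used by the pipeline may differ .

```python
import numpy as np

def weight_map(flatfield, hot, cold, saturated, cosmic, satellite):
    """per-pixel weight: gain (proportional to the flat field) times the binary good-pixel masks"""
    good = hot * cold * saturated * cosmic * satellite   # 1 = good pixel, 0 = masked
    return flatfield * good

shape = (100, 100)
flat = np.ones(shape)                                    # stand-in for the master flat field
masks = [np.ones(shape, dtype=int) for _ in range(5)]    # hot, cold, saturated, cosmic, satellite
masks[4][40:42, :] = 0                                   # e.g. a satellite track crossing the frame
weight = weight_map(flat, *masks)
print((weight == 0).sum())                               # 200 pixels receive zero weight
```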
these upper and lower thresholds are defined and stored in the object .two programs may be used to detect cosmic ray events : 1 . * sextractor * can be run with a special filter that is only sensitive to cosmic - ray - like signal .this requires a ` retina ' filter , which is a neural network that uses the relative signal in neighboring pixels to decide if a pixel is a cosmic .a retina filter , called cosmic.ret is provided .run sextractor with ` filter_name = cosmic.ret ` , to run sextractor in cosmic ray detection mode .this results in a so - called segmentation map , recording the pixels affected by cosmic ray events .this segmentation can be used to assign a weight of zero to these pixels .2 . * cosmicfits * is designed as a stand - alone program to detect cosmic ray events . in the ` awe ` , the sextractor method is the preferred cosmic ray event detection method .linear features can be detected using a _ hough transform _algorithm , which is used to find satellite tracks .see for more information about the hough transform .a point defines a curve in hough space , where : corresponding to lines with slopes , passing at a distance from the origin .this means that different points lying on a straight line in image space , will correspond to a single point ( ) in hough space .the algorithm then creates a hough image from an input image , by adding a hough curve for each input pixel which lies above a given threshold . this hough image ( effectively a histogram of pixels corresponding to possible lines )is clipped , and transformed back into a pixelmap , masking lines with too many contributing pixels .the parameters from the astrometric solution are used during the regridding process and their creation has already been discussed in sect .[ sec : astro ] .the parameters from the photometric solution are used during the coaddition process and their creation has already been discussed in sect .[ sec : photo ] .regridding and co - adding are done using the swarp program . before images are co - added , they are resampled to a predefined pixel grid ( see sect .[ sec : skygr ] ) .by co - adding onto a simple coordinate system , characterized by the projection ( tangential , conic - equal - area ) , reference coordinates , reference pixel , and pixel scale , the distortions recorded by the astrometric solution are removed from the images .
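returning to the satellite - track detection , the accumulation step of the hough transform can be sketched in numpy . the parametrization r = x cos(theta) + y sin(theta) , the binning and the synthetic track below are illustrative choices , not the pipeline 's actual implementation or thresholds .

```python
import numpy as np

def hough_accumulator(mask, n_theta=180, n_r=400):
    """accumulate r = x*cos(theta) + y*sin(theta) for every flagged pixel of a binary image"""
    ys, xs = np.nonzero(mask)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    r_max = np.hypot(*mask.shape)
    acc = np.zeros((n_theta, n_r))
    for x, y in zip(xs, ys):
        r = x * np.cos(thetas) + y * np.sin(thetas)
        idx = ((r + r_max) / (2.0 * r_max) * (n_r - 1)).astype(int)
        acc[np.arange(n_theta), idx] += 1
    return acc, thetas

# a synthetic diagonal track: its pixels pile up in a single (theta, r) bin of the accumulator
img = np.zeros((200, 200), dtype=bool)
ij = np.arange(200)
img[ij, ij] = True
acc, thetas = hough_accumulator(img)
print(np.unravel_index(acc.argmax(), acc.shape), acc.max())   # one bin collects ~200 votes
```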
to set up this pixel grid , a set of projection centers is defined , at 1 degree separation and pixel scale of 0.2 arcsec .a ` reducedscienceframe ` resampled to this grid is called a ` regriddedframe ` .the background of the image can be calculated and subtracted at this time , if desired .after the ` regriddedframe`s are made , it is only a matter of applying the photometry of each frame and stacking the result .this process creates a ` coaddedregriddedframe ` .one point of great importance in considering the coadded data is its pixel units .the units are fluxes relative to the flux corresponding to magnitude=0 .in other words , the magnitude corresponding to a pixel value is : the value of a pixel in the ` coaddedregriddedframe ` is computed from all overlapping pixels _i _ in the input ` regriddedframe`s according to this formula : where is the pixel value in the ` regriddedframe ` , is calculated from the zeropoint , and where is the value of the pixel in the input weight image .a ` weightframe ` is created as well .the value of the pixel in the weight frame for the coadd is : in ` awe ` , source information from processed frames can be stored in the database in the form of ` sourcelist`s .these are simply a transcription of sextractor - derived catalog values ( position , ellipticity , brightness , etc . ) into the database .normally , the catalog was derived from a processed frame existing in the system , but this is not a requirement .arbitrary sextractor catalogs meeting a minimum content criterion can be ingested as well .this is how large survey results and reference catalogs are brought into the system .these ` sourcelist`s can be used for a variety of purposes such as astrometric and photometric correction , but are normally an end product of the image pipeline storing key quantities about the sources in question for further analysis .multiple ` sourcelist`s can be combined into an ` associatelist ` , and later into another ` sourcelist ` via the ` combinedlist ` machinery .multiple ` sourcelist`s can be spatially combined ( via ra and dec values ) and stored in the database via the ` associatelist ` class .the association is done in the following way : 1 .the area of overlap of the two ` sourcelist`s is calculated .if there is no overlap no associating will be done .2 . the sources in one ` sourcelist ` are paired with sources in the other if they are within a certain association radius .default radius is 5 .the pairs get a unique associate id ( aid ) and are stored in the ` associatelist ` .a filter is used to select only the closest pairs .finally the sources which are not paired with sources in the other list and are inside the overlapping area of the two ` sourcelist ` are stored in the ` associatelist ` as singles .they too get a unique aid . very important is the type of association being done .one of three types : chain , master or matched , will be done .
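the pairing step of this association can be sketched as follows . the toy version below works in a flat - sky approximation and keeps only the closest counterpart within the radius , whereas the real ` associatelist ` machinery also records the unpaired singles and assigns the aids described above .

```python
import numpy as np

def associate(ra1, dec1, ra2, dec2, radius_arcsec=5.0):
    """pair each source of list 1 with its closest counterpart in list 2 within the association radius"""
    pairs = []
    for i, (r1, d1) in enumerate(zip(ra1, dec1)):
        dra = (np.asarray(ra2) - r1) * np.cos(np.radians(d1))   # flat-sky approximation
        ddec = np.asarray(dec2) - d1
        sep = np.hypot(dra, ddec) * 3600.0                      # degrees -> arcsec
        j = int(np.argmin(sep))
        if sep[j] <= radius_arcsec:
            pairs.append((i, j, float(sep[j])))
    return pairs

print(associate([10.0, 10.001], [20.0, 20.0], [10.0, 10.1], [20.0001, 20.0]))
```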
in a _ chain_ association , all subsequent ` sourcelist`s are matched to the previous ` sourcelist ` to find pairs , in a _association , they are always matched with the first ` sourcelist ` , and in a _ matched _ association , all ` sourcelist`s are matched with all other ` sourcelist`s .the development and implementation of the optical pipeline has been described .this pipeline uses the : an information system designed to integrate hardware , software and human resources , data processing , and quality control in a coherent system that provides an unparalleled environment for processing astronomical data at any level , be it an individual user or a large survey team spread over many institutes and/or countries .the is built around an object - oriented programming ( oop ) model using where each data product is represented by the instantiation of a particular type of object .the processability and quality of these data objects ( ` processtarget`s ) is moderated by built - in attributes and methods that know , for each individual type of object or oop class , how to process or qualify itself .all progenitor and derived data products are transparently linked via the database , providing an uninterrupted path between completely raw and fully processed data .this data lineage and provenance allows for a type of processing whereby the pipeline used for a given set of data is created _ on - the - fly _ for that particular set of data , where the unix ` make ` metaphor is employed to chain backward though the data , processing only what needs to be processed ( target processing ) .this allows unparalleled efficiency and data transparency for reprocessing the data when necessary , as the raw data is always available when newer techniques become available .calibration of data follows the usual routes , but has been optimized for processing of omegacam calibration data meant for detrending survey data . in this process, data is processed and reprocessed as more and more knowledge of the instrument system ( from the optics through detector chain ) is acquired .this effectively calibrates the instrument , leaving the data simply to be processed without the need of users find or qualify their own calibrations .various attributes of calibration objects ( validity , quality , valid time ranges ) transparently determine which calibrations are best to be used for any data .processing parameters are set and can be reset as desired .these parameters are retained as part of the calibration object and guarantee that a given object can be reprocessed to obtain the same result or be _ tweaked _ to improve the result .the processing of science data is governed by the same validity , quality , valid time range , and processing parameter mechanism that is used for calibration data .the calibration pipeline starts with a ` readnoise ` object created from ` rawbiasframe`s that is used to determine a clipping limit for ` biasframe ` creation .a ` gainlinearity`object can be processed from a special set of ` rawdomeflatframe`s taken for the purpose . 
from this result , both the gain ( in ) and the detector linearity can be determined .a master ` biasframe ` is created from a set of ` rawbiasframe`s to remove 2-dimensional additive structure in detectors .the ` darkcurrent ` is measured for quality control of the detectors , but is not applied to the pixels .bad pixels in a given detector can be found from the ` biasframe ` and a flat field image .these are termed ` hotpixelmap ` and ` coldpixelmap ` , respectively .flat field creation in can be very simple or very complex . on the simple side , a single set of ` rawdomeflatframe`s or ` rawtwilightflatframe`scan be combined with outlier rejection and normalized to the median . on the complex side ,high spatial frequencies can be taken from the ` domeflatframe`and the low spatial frequencies from the ` twilightflatframe ` .in addition , a ` nightskyflatframe ` can be added to improve this result . for an additional refinement to the flat field correction for redder filters ,a ` fringeframe ` can be created .astrometric calibration starts with extraction of sources from individual ` reducedscienceframe`s .the source positions are matched to those in an astrometric reference catalog ( e.g. , usno - a2.0 ) and all the positional differences minimized with the ldac programs .this _ local _ solution can then be further refined by adding overlap information from a dither to form a _astrometric solution .astrometric solutions are always stored for each ` reducedscienceframe ` individually .photometric calibration also starts with source extraction ( as a ` photsrccatalog ` ) and positional association .then , the magnitudes of the associated sources are compared to those in a photometric reference catalog ( e.g. , landolt ) and the mean of the kappa - sigma - clipped values results in a zeropoint for a given detector for the night in question .the extinction can be derived from multiple such measurements , the results of both being stored in a ` photometricparameters`object . 
as an optional refinement to the photometric zeropoint, a photometric super flat can be constructed by fitting magnitude differences as a function of radius across the whole detector block .the result of this is stored in an ` illuminationcorrectionframe ` object .the image pipeline takes all the calibrations from ` biasframe ` through ` masterflatframe ` to transform a ` rawscienceframe ` into a ` reducedscienceframe ` .this includes trimming the image after applying the overscan correction , subtracting the ` biasframe ` , dividing by the ` masterflatframe ` , and applying the ` fringeframe ` and ` illuminationcorrectionframe ` if necessary .the ` weightframe ` is constructed by taking the ` hotpixelmap ` and ` coldpixelmap ` and combining them with a ` saturatedpixelmap ` , a ` satellitemap ` , a ` cosmicmap ` , and optionally a ` illuminationcorrectionframe ` .these are all applied to the ` masterflatframe ` to create the final ` weightframe ` .next , the ` astrometricparameters ` is applied to the ` reducedscienceframe ` in creating the ` regriddedframe ` , and the ` photometricparameters ` is applied to multiple ` regriddedframe`s to form a ` coaddedregriddedframe ` .lastly , the sources from one ` coaddedregriddedframe ` can be extracted into a ` sourcelist`and associated with other ` sourcelist`s to form an ` associatelist`object .this last is the final output of the image pipeline and can combine information from multiple filters on the same part of the sky into one data product .using ` awe ` , the survey team has begun processing each week s worth of data taken at the vst ( more than half a terabyte ) in a single night .the part of the data that requires it ( bad quality or validity ) is reprocessed nightly as necessary to gain the required insight into the different aspects of the calibration process : detrending calibrations , astrometric calibrations , and photometric calibrations .the is a unique multi - purpose pipeline for astronomical surveys .all required tools ( ingestion , processing , quality control , and publishing ) are integrated in an intuitive and transparent way .it has already been used to process archive wfi.2 m , megacam ( cfhtls ) , and vircam data in pseudo - survey mode in preparation for its main task : processing , vesuvio , omegawhite , and omegatrans survey data from the newly commissioned omegacam .tables [ tab : grid1 ] & [ tab : grid2 ] describe a grid on the sky for projection and co - addition purposes in a condensed format .it contains 95 strips as function of decreasing declination ( ) . for each strip the size in degrees and the number of fields per stripthe last column contains the overlap between fields in % . 
by mirroring the grid along the equator one obtains a grid for the northern hemisphere .the combination of the grids for both hemispheres is a grid for the entire sky .lcccr * strip*&* [ *&*size [ *&*fields / strip*&*overlap [ % ] * + 1 & 0.00&360.00&378&5.0 2 & 0.96&359.95&378&5.0 3 & 1.91&359.80&378&5.1 4 & 2.87&359.55&378&5.1 5 & 3.83&359.20&377&5.0 6 & 4.79&358.74&376&4.8 7 & 5.74&358.19&375&4.7 8 & 6.70&357.54&374&4.6 9 & 7.66&356.79&373&4.510 & 8.62&355.94&372&4.511 & 9.57&354.99&371&4.512&10.53&353.94&370&4.513&11.49&352.79&369&4.614&12.45&351.54&368&4.715&13.40&350.19&367&4.816&14.36&348.75&366&4.917&15.32&347.21&365&5.118&16.28&345.57&363&5.019&17.23&343.84&361&5.020&18.19&342.01&359&5.021&19.15&340.08&357&5.022&20.11&338.06&355&5.023&21.06&335.95&353&5.124&22.02&333.74&350&4.925&22.98&331.43&347&4.726&23.94&329.04&344&4.527&24.89&326.55&341&4.428&25.85&323.97&338&4.329&26.81&321.31&335&4.330&27.77&318.55&332&4.231&28.72&315.70&329&4.232&29.68&312.77&326&4.233&30.64&309.74&323&4.334&31.60&306.64&320&4.435&32.55&303.44&317&4.536&33.51&300.16&314&4.637&34.47&296.80&311&4.838&35.43&293.35&308&5.039&36.38&289.83&304&4.940&37.34&286.22&300&4.841&38.30&282.53&296&4.842&39.26&278.76&292&4.743&40.21&274.91&288&4.844&41.17&270.99&284&4.845&42.13&266.99&280&4.946&43.09&262.92&276&5.047&44.04&258.78&272&5.148&45.00&254.56&267&4.949&45.96&250.27&262&4.750&46.91&245.91&257&4.5 lcccr * strip*&* [ *&*size [ *&*fields / strip*&*overlap [ % ] * + 51&47.87&241.48&252 & 4.452&48.83&236.99&247 & 4.253&49.79&232.43&242 & 4.154&50.74&227.80&237 & 4.055&51.70&223.11&232 & 4.056&52.66&218.36&227 & 4.057&53.62&213.54&222 & 4.058&54.57&208.67&217 & 4.059&55.53&203.74&212 & 4.160&56.49&198.75&207 & 4.161&57.45&193.71&202 & 4.362&58.40&188.61&197 & 4.463&59.36&183.46&192 & 4.764&60.32&178.26&187 & 4.965&61.28&173.01&182 & 5.266&62.23&167.71&176 & 4.967&63.19&162.36&170 & 4.768&64.15&156.97&164 & 4.569&65.11&151.54&158 & 4.370&66.06&146.06&152 & 4.171&67.02&140.54&146 & 3.972&67.98&134.98&140 & 3.773&68.94&129.39&134 & 3.674&69.89&123.76&128 & 3.475&70.85&118.09&122 & 3.376&71.81&112.39&116 & 3.277&72.77&106.66&110 & 3.178&73.72&100.90&104 & 3.179&74.68 & 95.11 & 98 & 3.080&75.64 & 89.30 & 92 & 3.081&76.60 & 83.46 & 86 & 3.082&77.55 & 77.59 & 80 & 3.183&78.51 & 71.71 & 74 & 3.284&79.47 & 65.80 & 68 & 3.385&80.43 & 59.88 & 62 & 3.586&81.38 & 53.94 & 56 & 3.887&82.34 & 47.98 & 50 & 4.288&83.30 & 42.01 & 44 & 4.789&84.26 & 36.03 & 38 & 5.590&85.21 & 30.04 & 32 & 6.591&86.17 & 24.05 & 26 & 8.192&87.13 & 18.04 & 19 & 5.393&88.09 & 12.03 & 13 & 8.194&89.04 & 6.02 & 7&16.495&89.90 & 0.63 & 1 & - astro - wise is an on - going project which started from a fp5 rtd programme funded by the ec action `` enhancing access to research infrastructures '' .this work is supported by fp7 specific programme `` capacities - optimising the use and development of research infrastructures '' .special thanks to francisco valdes for his constructive comments .mwebaze , j. , boxhoorn , d. & valentijn , e. : astro - wise : tracing and using lineage for scientific data processing .nbis , 2009 international conference on network - based information systems , p.475 ( 2009 ) valentijn , e.a . ,mcfarland , j.p . ,snigula , j. , begeman , k.g . ,boxhoorn , d.r . ,rengelink , r. , helmich , e. , heraudeau , p. , kleijn , g.v . ,vermeij , r. , vriend , w .- j . , tempelaar , m.j . ,deul , e. , kuijken , k. , capaccioli , m. , silvotti , r. , bender , r. , neeser , m. , saglia , r. , bertin , e. , mellier , y. 
: astro - wise : chaining to the universe .asp conference series , vol .376 , p.491 ( 2007 )
|
we have designed and implemented a novel way to process wide - field astronomical data within a distributed environment of hardware resources and humanpower . the system is characterized by integration of archiving , calibration , and post - calibration analysis of data from raw , through intermediate , to final data products . it is a true integration thanks to complete linking of data lineage from the final catalogs back to the raw data . this paper describes the pipeline processing of optical wide - field astronomical data from the wfi and omegacam instruments using the information system ( the or simply ` awe ` ) . this information system is an environment of hardware resources and humanpower distributed over europe . ` awe ` is characterized by integration of archiving , data calibration , post - calibration analysis , and archiving of raw , intermediate , and final data products . the true integration enables a complete data processing cycle from the raw data up to the publication of science - ready catalogs . the advantages of this system for very large datasets are in the areas of : survey operations management , quality control , calibration analyses , and massive processing .
|
with over 400 extra - solar planets detected today , most via the indirect detection techniques such as the radial velocity approach , the direct imaging of exoplanets is receiving increasing attention .direct detection of photons from exoplanets will allow us eventually to achieve the most critical scientific goals in the astrophysics such as searching for another earth . for jupiter - like exoplanets , a moderate contrast on the order of 10 is required for the direct imaging ( marley et al .2007 , marois et al . 2008 ) , which can be done on a ground - based telescope . in recent years, many coronagraphs have been proposed which can theoretically reach a high contrast on the order of with inner working angle ( iwa ) of few ( kasdin et al ., 2003 , vanderbei et al .2004 , guyon et al .2006 , ren & zhu 2007 , enya et al .2008 ) . however , most previous coronagraphs were optimized for dedicated off - axis telescopes that have no central obstructions and spider support structures , which is not suitable for today s large ground - based telescopes. the existence of central obstruction and spider structure will introduce further diffraction , which makes the design of a high - contrast coronagraph difficult .recently , soummer et al . have discussed a coronagraph that uses transmission apodized pupil ( soummer et al .2009 ) , in which they use an analytic function called generalized prolate spheroidal function to design the apodized pupil .although the transmission apodized pupil realized by metallic coating is one of the promising techniques , it is intrinsically chromatic and may induce wavefront phase errors by a metal layer of variable thickness . to overcome these problems ,a technique called microdot was proposed recently ( martinez et al .2009a , martinez et al .the microdot technique is more complex in the design and manufacturing .we will show here that by carefully choosing coating material and using optimization algorithm our transmission filter that is based on the metallic coating technique can also deliver a similar or slightly better performance , but at a much low cost .in this work , we report our recent development for the design and laboratory test regarding a transmission - filter coronagraph .our design is based on a discrete optimization algorithm , in which only finite number of transmission steps / pixels is used .such a discrete optimization approach is especially suitable for telescopes with specific central obstructions and spider structures .we future include the phase error in the optimization , which makes it more realistic for the real situation .wideband imaging is also discussed . in section 2, we describe our discrete optimization algorithm .the laboratory test of the coronagraph is discussed in section 3 .conclusions are presented in section 4 .the general idea of using transmission filters with finite number of transmission steps for high - contrast imaging was discussed by ren & zhu ( 2007 ) . 
here, we discuss the algorithm that uses numerical approach to find the optimization solution for a specific situation with telescopes that have different obstructions and spider structures .our coronagraph uses finite number of transmission steps where the transmission is identical in each step .the transmission filter is located on a conjugated pupil image plane , where the light is collimated .star and exoplanet images are formed on the focal plane of the coronagraph .our transmission filter is realized by metallic coating material deposited on a glass substrate , in which the transmission is controlled by the adjustment of the thickness of the coating material .since the transmission of the filter is variable as a function of the radius , the optical path is not identical in each step of the filter , which will introduce a phase error .assume the filter is circularly symmetrical around the center , and it is convenient to use a polar coordinate system .the point spread function ( psf ) of the starlight on the focal plane ( radial coordinate ) is related with the transmission filter ( radial coordinate ) by a the fourier transform function as , |^{2},\ ] ] where represents the operation of the 2-dimensional fourier transform . is the so called pupil function , which is determined by the transmission filter . and are the radii on the focal and filter planes , respectively . if the intensity is uniform on the pupil , the pupil function is simply the electric field of the transmission filter . is the possible phase error that may be introduced by the thickness variation of the filter coating material . for a metallic coating material ,its refractive is a complex number and can be expressed as , where is the refractive index indicating the phase velocity and is called the extinction coefficient , which indicates the amount of absorption when the electromagnetic wave propagates through the metallic material .the transmission of the metallic film is decreased with the thickness as ( born and wolf 1999 ) where , is the wavelength in the vacuum . is called the absorption coefficient .the pupil function is related with the transmission as . by adjusting the thickness, one can change the transmission .the variation of the thickness will , however , induce a phase error , which will greatly degrade the performance if such a phase error is not considered in the design of a transmission filter . the phase error induced by the thickness differenceis calculated as .it is clear that for a fixed and transmission , a large will result in a small optical path difference and phase error .in addition , for the same , the phase error will decrease at a longer wavelength .the contrast is defined as the ratio of intensity on a specific location to the peak intensity on the psf center , and is given as in the discovery area that is defined by the inner working angular distance and the outer working angular distance ( owa ) , assume the target contrast is a constant ( such as ) , the algorithm that is based on the discrete optimization is to minimize the following equation the algorithm is to optimize the contrast on a focal plane discovery area that is defined by the iwa and owa . 
to get a good optimization result ,a trade - off is needed among the target contrast , discovery area and transmission .for example , an over - low contrast may not be able to achieve and which may also result in a low transmission .therefore , the design of the metallic transmission filter is to find the best transmission profile that satisfies equation 4 .the discrete optimization algorithm has the advantage to be able to find an optimized solution for telescopes with specific obstructions and spider structures , and the step number used for the discrete optimization can exactly match the actual pixel number that are determined by the manufacturing spatial resolution .for example , in our filter design we use 50 steps , since the filter was made by reynard corporation who can make the filter with a spatial resolution of 50 pixels along a 15-mm radius of the clear aperture . in general , increasing the step number can increase the owa , which was discussed on our previous work ( ren & zhu 2007 ) .our discrete optimization includes the phase error that is induced by the thickness variation of the coating material .the phase error is not an independent parameter .it is associated with the thickness of the coating material and can be solved directly from equation ( 2 ) . a large and a small will result in a small phase error . by carefully choosing the coating material , good performance with a contrast up to , which is enough for the direct imaging of young jupiter - like exoplanets with a ground - based telescope , can be achieved . as a demonstration of the filter - transmission coronagraph, we present a design example .we choose inconel as the metallic coating material that is widely used for the neutral metallic density filter . according to the data sheet provided by the reynard corporation , it has a complex refractive index at 600 nm wavelength , while the refractive index is at 2000 nm .complex refractive indices at other wavelengths can be interpolated from the discrete data provided by the company . since the variation of the complex refractive index ,both phase error and transmission will slightly change at other wavelengths .the transmission filter is optimized at the design / optimized wavelength that is the central wavelength for a wideband imaging .figure 1 shows the contrast at the 1.65 m ( band ) optimized wavelength , which is designed with our discrete optimization algorithm .the contrast at the 1.825 m non - optimized wavelength that is the end wavelength of the band is also calculated , which includes the variation of the phase and transmission because of the wavelength shift . 
for the non - optimized longer wavelength ,the contrast is slightly improved at smaller angular distances while it is slightly degraded at larger angular distances .the contrasts at both wavelengths are better than at an angular distance equal or larger than , and the contrast difference between the optimized and non - optimized wavelengths is less than in general .it is clear that the inconel can be used for the high - contrast imaging over a good wavelength range .to demonstrate the performance of the transmission - filter coronagraph , one filter is designed .the metallic material of inconel is deposited on a bk7 substrate and the transmission is controlled by the adjustment of thickness of the inconel .the metallic coating filter , which consists of 50 steps along the aperture radius , was manufactured by reynard corporation .the filter has a clear diameter of 30 mm with a central circular opaque region of 3.6-mm diameter , which corresponds to a linear obstruction of 12 .the width of the spider is 0.45 mm , which takes 1.5 diameter of the clear aperture .the transmission error of the coating in manufacturing is less than 5 . for test purpose and measurement convenience ,the filter is designed at the 632.8 nm helium - neon laser test wavelength .the overall throughput of the filter is 31 .the filter and spider structure were manufactured individually , as shown in figure 2 .the coronagraph optics consists of two transmission lenses .one is served as collimator while the other is used as camera lens that form a focal plane image of the test light source where a starlight xpress ccd detector array is used to measure the psf .a spatial pinhole is used to create a perfect point light source .the transmission filter is located immediately after the collimator lens .we found that the multi - reflection from these lens curvature surfaces , the ccd detector glass window as well as the optical imperfect such as the dust particles introduces some scattered lights .nevertheless , the test shows that the coronagraph is able to deliver a contrast up to at an iwa of , which is consistent with our theoretical estimation .the psf images with different exposure times are shown in figure 3 . 
in order to see the details of the low intensity areas on the psf plane , the center and right panels in figure 3are overexposed .the strong bright vertical patterns in these two panels are due to the ccd image bloom .figure 4 shows the associated contrast along the psf diagonal direction , in which the test contrast is shown in solid line , while the theoretical psf profile is shown in dotted line .compared with the theoretical profile , the test psf has a slight deviation , which is introduced by the filter transmission error as well as possible residual wave - front error of the test lenses .however , such a deviation is well controlled and is at an acceptable level .the precision of the transmission is always a concern for a transmission filter , which determines the performance of the coronagraph .the image of the illuminated transmission filter is recorded on the ccd detector array by using a replay optics that creates a filter image onto the ccd focal plane .the measured intensity distribution is compared with the design values .figure5 ( left ) shows the image of the intensity distribution of the test filter , in which small bulbs resulted from defect glass surfaces can be seen clearly .the comparison of the transmission section plot of the test filter and the design profile is shown figure 6 .it is clear that the test transmission curve agrees well with the design profile , except at the areas around the 2 intensity peaks , which introduces some deviation between the test and the theoretical psfs as shown in figure 4 .we have demonstrated how to design a transmission - filter coronagraph for wideband high - contrast imaging .our design is based on the discrete optimization algorithm which includes the possible phase error that is induced by the thickness variation of the metallic coating material .the discrete optimization approach uses finite number of steps / pixels which is suitable for specific telescopes that have different sizes for the central obstruction and spider structure . since phase erroris also included in the discrete optimization , good agreements between the test and theoretical estimation are achieved .the coronagraph laboratory test has achieved a contrast of at an angular distance of or larger , without any wave - front correction by using a deformable mirror .the design and test results indicate that our transmission - filter coronagraph can be used immediately for the direct imaging of hot jupiter - like exoplanets with a ground - based telescope that is equipped with an adaptive optics system that can effectively correct the atmospheric turbulence .born , m. , & wolf , e. 1999 , principle of optics , seventh edition , cambridge university press , 738 enya , k. , abe , l. , tanaka , s. , nakagawa , t. , haze , k. , sato , t. , & wakayama , t. 2008 , , 480 , 899 guyon , o. , pluzhnik , e. a. , kuchner , m. j. , collins , b. , & ridgway , s. t. 2006 , , 167 , 81 kasdin , n.j . , vanderbei , b.j . ,spergel , d.n . , & littman , m.g .2003 , , 582 , 1147 marley m. s. , fortney , j.j . , hubickyj o. , bodenheimer , p. , & lissauer , j. j. 2007 , , 655 , 541 marois c. , macintosh , b. , barman , t. , zuckerman , b. , song , i. , patience , j. , lafrenire , d. , & doyon , r. 2008 , science , 322 , 1348 martinez , p. , dorrer , c. , carpentier , e. a. , kasper , m. , boccaletti , a. , dohlen , k. , & yaitskova , n. , 2009a , , 495 , 363 martinez , p. , dorrer , c. , kasper , m. , boccaletti , a. , & dohlen , k. , 2009b , , 500 , 1281 ren , d. q. , & zhu , y. t. 
2007 , , 119,1063 soummer , r. , pueyo , l. , ferrari , a. , aime , c. , sivaramakrishnan , a. , & yaitskova , n. 2009 , , 695 , 695 vanderbei , r. j. , kasdin , n. j. , & spergel , d. n. 2004 , , 615 , 555
|
we propose a transmission - filter coronagraph for direct imaging of jupiter - like exoplanets with ground - based telescopes . the coronagraph is based on a transmission filter that consists of finite number of transmission steps . a discrete optimization algorithm is proposed for the design of the transmission filter that is optimized for ground - based telescopes with central obstructions and spider structures . we discussed the algorithm that is applied for our coronagraph design . to demonstrate the performance of the coronagraph , a filter was manufactured and laboratory tests were conducted . the test results show that the coronagraph can achieve a high contrast of at an inner working angle of , which indicates that our coronagraph can be immediately used for the direct imaging of jupiter - like exoplanets with ground - based telescopes .
|
mitochondria are organelles in eukaryotic cells that provide most of the chemical energy , atp , from oxidative metabolism and more recently have been shown to play a key role in apoptosis or programmed cell death .mitochondria have their own dna and are thought to have evolved from a prokaryotic organism that became engulfed and lived inside the ancient eukaryotic cell .mitochondria have an outer membrane that surrounds a complex inner membrane structure that in turn encloses the matrix space of the organelle .the inner membrane and the matrix contain a rich collection of enzymes that are crucial to breaking down a number of metabolites such as fatty acids and pyruvate forming acetyl - coa .acetyl - coa is oxidized via the citric acid cycle to produce reduced nucleotides , nadh and fadh2 , that provide reducing potential for the mitochondrial electron transport chain that converts this energy into an electrochemical proton gradient across the inner mitochondrial membrane that the atp synthase uses to synthesize atp , the principal source of energy for cell function .mitochondria also interact with the cell in the process of apoptosis .when mitochondria receive certain signals they undergo a structural transformation that leads to the release of cytochrome c , which in turn causes the cell to destroy itself in a controlled process .our research examines the physical structure of mitochondria in hopes of better understanding a number of the key functions performed by this organelle .electron tomography has provided high resolution three - dimensional structures of orthodox " ( healthy ) mitochondria _ in vivo _ .( 100,100)(0,0 ) ( 0,0 ) these structures , shown in figure 1 , exhibit several features necessary for proper function .the inner mitochondrial membrane consists of a lipid bilayer that has two components .( 1 ) an inner boundary membrane ( ibm ) that lies closely apposed to the outer membrane and ( 2 ) a crista membrane that projects into the matrix forming cristae , which are either tubular in shape or lamellar .tubules are connected to the ibm by crista junctions ( see fig .1 ) , shaped like the bell of a trumpet .the observed mitochondria have a large matrix volume that pushes the inner boundary membrane against the outer membrane and collapses the cristae into flat lamellar compartments . in our work, we explore the possibility that at least portions of the mitochondrial membrane make up a thermodynamically stable structure that minimizes free energy . rather than try to deduce the morphology from first principles , we take the observed morphology as given and make inferences regarding the physico - chemical environment in which this morphology could exist .we begin by noting that the observed morphology shows a definite scale . that the crista junctions have a fairly constant radius of about 10 nm has been noted in several places .in fact , their diameter roughly matches the spacing between the lamellar regions of the cristae and that of the tubules linking these regions to the junctions .it is certainly possible that some skeletal components maintain the spacing everywhere and thereby account for the scale .we consider the hypothesis that such skeletal elements exist only in the lamellae whose surfaces house the machinery of atp production which probably requires ( and gives ) some mechanical stability at a spacing that roughly matches the distance that the membrane bound proteins extend into the intermembrane space . 
in that casethe shape of the tubular regions is determined by elastic energy minimization rather than skeletal elements .suppose that a cylindrical tubular bilayer of fixed length is constrained in such a way that it can only increase or decrease its radius by exchanging area ( molecules ) with a flat membrane as a reservoir .it is not surprising , all other things being equal , that its radius would grow indefinitely in order to mitigate the energetic cost of bending required by the formation of the cylindrical tubule .if , however , there were a positive osmotic pressure difference across the membrane favoring the exterior of the tube , i.e. the mitochondrial matrix , then osmotic work would be required to grow the radius of the tube .the result is a tradeoff of two energetic components ( bending and pressure work ) , giving an equilibrium tube radius whose magnitude depends on the pressure difference .although such a pressure difference has not been measured , the matrix volume has been shown to respond to changes in osmolarity of the surrounding media , and the crista junction diameters respond to changes in matrix volume the imm has been shown to contain several types of phospholipids .in addition , 50% of membrane surface is occupied by proteins , while proteins make up approximately 75% of the inner membrane mass . for the sake of simplicity ,our model includes no proteins and only the two most common lipid types : phosphatidyl ethanolamine ( pe ) and phosphatidyl choline ( pc ) .these occur naturally in the imm at fractions of 27.7% and 44.5% , respectively .moreover , in our model , we consider only the dioleic acid esters of the lipds , dope and dopc , each of which is heterogeneous with respect to its fatty acid composition .although most authors neglect membrane composition altogether , some have attributed the variations in membrane curvature to the existence of domains of differently shaped molecules .while for lipids of limited viscibility these can be seen , we do not expect this to be the case for dope and dopc which are chemically very similar and thus should form nearly ideal solutions in which the entropic incentive to mix is far outweighed by possible energetic advantages of segregation .however , a different lipid composition on in the two monolayers of the tubular membrane is to be expected .hence , we assess the extent to which the geometry of the lipids contributes to the shape of the membrane .the contest here is between the entropic contribution to the free energy and the bending energy savings obtained by distributing the molecules according to shape .we formulate the free energy of a tubule plus surroundings as a function of its radius and composition .optimality with respect to variation of the radius gives a predicted osmotic pressure difference across the membrane .optimality with respect to composition predicts the extent to which shape - based redistribution takes place among the molecules .the two most extreme curvature environments are represented by the inner and outer monolayers of the tube .the composition of the principal lipid is calculated to vary by about 7% between these two regions for the observed tubular size .this result reveals a dominant role played by the entropic contribution to the free energy at normal physiological temperatures .although our approach does not come close to explaining all aspects of inner membrane morphology , it is well grounded in experimental observations and enables us to leverage observed morphologies into 
predictions regarding additional aspects of the physico - chemical environment in which membrane morphology is observed .in this section we formulate the free energy of a tubule and its surroundings as a function of its radius and composition . the flat portions act as a reservoir which constrains the chemical potential of the lipid molecules in the tubules , which , by our assumption , must be in equilibrium with this reservoir . since the reservoir is a bilayer comprising a surface of mean curvature zero , the lipid compositions on the two sides ( inner and outer ) are the same , at least as far as bending forces are concerned .short of postulating a preference of some lipids for the chemical environment on the two sides of the membrane , we may assume that the compositions on the two sides are the same and act as a reservoir for lipid molecules in the tubular regions .hence , we consider molecules of dope and molecules of dopc distributed among the inner and outer layers of a cylindrical bilayer of unit length and a flat bilayer reservoir .let and denote respectively the total bending energy and the total entropy of the membrane molecules , and the temperature .then , up to constants in the radius and the compositions of the membrane , the sum of the free energies of all the systems that participate in the energetics of altering the radius or the tubule and its composition can be written as in this equation we have dropped the term for the membrane and the and contributions of the surrounding cytosol .thus the portion is the free energy of the membrane while the term is the free energy of the matrix and intermembrane region , which depends on the volume v inside the cylindrical tubule and the osmotic pressure difference between the matrix and intermembrane space , with the higher pressure in the matrix .we will use the following notational conventions .subscripts and will continue to denote the molecular species dope and dopc .superscripts , , and will refer , respectively , to the inner monolayer , the outer monolayer of the tubule , and either monolayer of the flat bilayer reservoir . will continue to indicate the number of molecules , and lower case letters and will indicate energy and entropy per molecule .more precisely , will denote a partial molecular entropy .for example , is the partial molecular bending energy of dope residing in the outer monolayer of the cylindrical tubule .consider first the total bending energy .the conventional approach is to employ helfrich s theory and to take the free energy density per unit membrane area as where and are the ambient and sponteneous curvatures of the membrane and is the bending modulus .however , to allow us to study the lipid redistribution between the monolayers of the tubular membrane and the reservoir , we employ a molecular level model .our model takes the bending energy of the bilayer to be additive over the individual lipids in each of the monolayers .following israelachvilli , we take the bending energy of one lipid molecule in the cylindrical monolayer to be where is the compressibility modulus , and is the characteristic interfacial area of the lipid at the ambient curvature .the compressibility modulus is related to the bending modulus . for small deformations depends linearly on and the square of the membrane thickness .the relation between and will be given shortly .for a monolayer containing one type of lipid , is defined as the area of the membrane divided by the total number of lipids . 
is the characteristic interfacial area for a monolayer at the spontaneous curvature . since the spontaneous curvature and hence depend on the type of lipid , the bending energy differs as wellthis causes a redistribution over the two leaflets , since they both have different ambient curvatures .spontaneous curvatures have been determined experimentally and are understood as the curvature of choice " for a particular lipid type constrained to a cylindrical monolayer with minimum bending energy .the total bending energy these terms can be rearranged to give where , all four are defined relative to the value of each quantity in the reservoir . for example we have also defined the quantity which does not vary as the molecules are redistributed among the three compartments .we introduce and to represent the fraction of dope on the inner and the outer monolayers of the tubule , respectively . with this, can be rewritten : a perfectly analogous formula holds for : it remains to formulate the s , the s , and the s .the s depend not only on the fractions and , but also on the fraction of dope in the flat reservoir .assuming that the membrane contains only dope and dopc and that their ratio is that of the ratio of pe and pc in mitochondrial inner membranes , we have taken .we define and analogous terms to be .the partial molecular entropies are decomposed into a pure part and a mixing part .since the pure part is the same in all three compartments , the s depend only on the mixing , _i.e. _ , we write . the total entropy of ( ideal ) mixing of the two species on the inner monolayer of the tubule is given by : or , more explicitly , as a function of the molecule numbers : .\ ] ] the partial molecular entropy is obtained as similarly , , and , finally , the other s are obtained similarly : these are now substituted into ( [ 7 ] ) .( 100,100)(0,0 ) ( 0,0 ) the interfacial surface of each compartment ( ) has its own ( cylindrical ) curvature .they can be expressed in terms of the radius , , of the midsurface of the cylindrical tubule and the width , , of the hydrocarbon tails due to one monolayer ( see figure 2 ) .it follows that : for the immediate discussion we suppress the superscripts .let be the volume of the hydrocarbon tails of a lipid molecule .each molecule residing in a cylindrical monolayer with interfacial curvature , , has a characteristic interfacial area , , which is defined as the area of the cylindrical tubule divided by the total number of molecules .these quantities are related by the packing factor " equation : in our model the hydrocarbon tails of both lipids are identical and hence , so one fits all molecules in a monolayer .this means that the total number of molecules in a monolayer can be written recall that we take the ( fixed ) length of the cylindrical tubule to be 1 , for convenience .also recall is negative ; hence , the absolute value . combining ( [ 18 ] ) and( [ 19 ] ) , we have using ( [ 15 ] ) and ( [ 16 ] ) to adapt ( [ 20 ] ) to the two cylindrical monolayers , we have we now calculate the following bending energies : combining ( [ 24 ] ) and ( [ 18 ] ) , we obtain distributing the appropriate subscripts and superscripts to the quantities , , , and , we obtain the 6 quantities ( [ 23 ] ) .this completes the formulation of the free energy .values for all constants have been obtained from the literature .the thickness of the layer of hydrocarbon tails , , is assumed to be constant in the model at hand and equals 1.6 nm . 
experimentally it has been found that , when dope and dopc form monolayers with a cylindrical shape , their spontaneous curvatures , , are the inverse of their intrisic radii of curvature respectively nm and nm .the area per headgroup for dope equals 0.163 nm . using ( [ 18 ] ) for dope oneobtains nm .since the volume of the hydrocarbon tails of the two lipid species is the same , one can use the same formula ( [ 18 ] ) to obtain that for dopc equals 0.208 nm .the compressibility moduli for dope and dopc are and , respectively .( 100,100)(0,0 ) ( 0,0 ) for values of between 0.4 mbar and 4 bar the free energy as a function of the radius , and the compositions , and has been calculated using ( [ 1 ] ) .figure 3 gives the free energy as a function of for a pressure difference ( 0.2 bar ) and compositions of monolayers of the tubular membrane given by and .these values for and yield the lowest free energy ; changing the compositions of the monolayers results in a similar graph to that shown in figure 3 , except that the value of at which a minimum occurs is higher .figure 3 shows the total free energy as well as the individual contributions of entropy and bending energy of the membrane , and the free energy of the surroundings . the scale is arbitrary and set so that the free energy vanishes at infinite r , zero , and . the pressure difference ( bar )is adjusted such that the free energy curve has a minimum at nm , which is close to the experimentally observed value .the pressure difference correponds to a concentration difference of mm . setting our scale in figure 3 so as to make the free energy vanish at infinite r , zero , and amounts to setting the quantities in ( [ 6 ] ) and in ( [ 7 ] ) to zero .their values are independent of , , and .setting them to zero will not change the location of the minimum of free energy .it follows that all three terms in ( [ 1 ] ) scale linearly with the length of the tubule .therefore our results are valid for arbitrary length .( 100,100)(0,0 ) ( 0,0 ) [ htb ] .values of the minimum free energy as a function of the pressure difference , along with the optimum values for , , and .[ cols= " < , > , > , > , > , > " , ] at each value of , the free energy was minimized and the results are tabulated in table 1 which lists values of , , and , that minimize the free energy , for various values of the pressure difference .figure 4 shows how varies as a function of along this locus of minimum free energy .interestingly , the radius of 10 nm is reached in the elbow " of the curve . increasing the pressure by one order of magnitude decreases the radius by half .however , decreasing the pressure by one order , increases the radius by a factor of five or so .as expected , the value of the free energy decreases with increasing radius .it will be zero at infinite radius and zero pressure difference . at these values , . at a finite pressure ,more dope than average is found in the inner layer and more dopc in the outer layer . at the highest pressure difference (4 bar )the absolute value of the compositions of the layers in the tubular membrane differ by 17 % .( 100,100)(0,0 ) ( 0,0 ) figure 5 shows the variation of the free energy as a function of lipid distribution . 
a sharp increase in free energycan be observed when the composition deviates from its optimum at and .a weakness of the current approach is that the helfrich energy ( [ 25 ] ) is only valid for small deviations from the spontaneous curvature .curvatures of the inner and outer monolayers of the tubules differ by up to 100 percent from the spontaneous curvatures .it follows that ( [ 24 ] ) is only approximately valid .currently , we are performing monte carlo simulations in hopes of obtaining the higher - order corrections to this equation . using these in the calculations will improve the results as will adding the effects of other membrane components on the spontaneous curvature and on the elastic moduli .in the present paper we have considered a two - lipid model of the inner mitochondrial membrane and examined the changes in free energy for the tubular parts caused by variations in shape and composition .the analysis led to two predictions : ( 1 ) the observed radius of 10 nm implies that there is a 0.2 atmosphere osmotic pressure difference across the inner membrane with the higher pressure in the matrix and ( 2 ) lipids redistribute themselves to give different compositions on the two sides of the tubular membrane , since the resulting decrease in bending energy is smaller than the entropic penalty . using a two lipid model, we found that for crista tubules of the observed size the absolute lipid compositions on the two sides of the membrane differ by about 7 percent .although the possibility that composition drives shape changes has been discussed before , most approaches in the literature neglect a composition dependence . without such a dependence , the second term in equation ( [ 1 ] )is absent and the minimum of the free energy results from a competition between the bending term and the term due to pressure difference . instead of using expression ( [ 28 ] ) ,a helfrich term ( [ 25 ] ) is then used to model the bending energy . as can be seen in figure 3 , in our modelthe entropic term is almost constant , since the composition of the membrane is more or less uniform .hence , as first approximation , it should be possible to express the bending energy as a helfrich term. the bending energy can then be obtained from a measurement of the bending modulus of the inner membrane made on swollen mitoplasts . the measured value for will yield a value for the bending energy that accounts for all membrane components including cardiolipin and high concentrations of a variety of integral membrane proteins .our model predicts that changes in the radii of tubules and junctions correspond to variations in pressure difference .this might be tested experimentally by manipulating the osmotic pressure in preparations of purified mitochondria and observing changes in the radii of tubular components .it has been suggested that the junctions act as a barrier to the diffusion of cytochrome c. indeed scorrano _ et al . _ have observed that during apoptosis ( programmed cell death ) the inner boundary membrane remodels , and the radii of the tubules increase . in certain types of mitochondria, they can increase in these _ in vitro _experiments to 20 nm .as seen in table 1 , this corresponds , according to our model , to a large change in the osmotic pressure difference . 
on the other hand ,purified mitochondria that have been induced to undergo a permeability transition in buffer of low osmolarity experience an increased that causes the matrix to swell .the crista junctions in these mitochondria are slightly smaller with radii of 8.5 nm .although the model at hand succesfully describes some of the features of the observed morphology , it fails to explain some crucial issues .for instance , as can be seen in figure 3 , the minimum value of the free energy for tubules is positive and hence these structures are unstable. the tubules will tend to shrink and vanish in the flat membrane regions .additional mechanisms must be at work that prevent them from doing so .one possibility is that such a mechanism is provided by proteins and skeletal elements .however , we can envision an alternative mechanism .since the inner membrane is confined by an outer one , its area can only grow by buckling or by creating protrusions .it is very likely that the confinement causes tensile stresses .currently we are investigating the possibility that , thermodynamically , the combined effects of osmotic pressure differences and tensile stresses , account for the observed coexistence of cylindrical tubes of finite radius and flat lamellar structures .in addition , since the membrane is fluid , tubules will continuously arise , grow , shrink , and eventually vanish back into the flat portions of the membrane .it is quite likely that not just the structural organization of mitochondria , but also temporal variations of this structure , are of importance to understanding mitochondrial functionality .this research is supported by a grant to ap and arb from the donors of the petroleum research fund , administered by the american chemical society and a blasker science and technology grant from the san diego foundation to tgf .we thank bjarne andersen and christian renken for helpful conversations .frey , t. g. and manella , c. a. 2000 , 319 - 324 .
|
the inner mitochondrial membrane has been shown to have a novel structure that contains tubular components whose radii are on the order of 10 nm as well as comparatively flat regions . the structural organization of mitochondria is important to understanding their functionality . we present a model that can account , thermodynamically , for the observed size of the tubules . the model contains two lipid constituents with different shapes . they are allowed to distribute in such a way that the composition differs on the two sides of the tubular membrane . our calculations make two predictions : ( 1 ) there is a pressure difference of 0.2 atmospheres across the inner membrane as a necessary consequence of the experimentally observed tubule radius of 10 nm . and ( 2 ) migration of differently shaped lipids causes concentration variations between the two sides of the tubular membrane on the order of 7 percent .
|
in a recent paper , maslov and zhang addressed the following problem : we are given agents , each one represented by an -dimensional real vector ; suppose we know of the scalar products with .in this situation , can we predict the value of an unknown scalar product ?this question is relevant for instance to the problem of extracting information from the vast amount of data generated by a commercial website .the may represent in that context the interests of a person , and the mutual appreciation of persons and ; the problem is then to predict the mutual appreciation of two persons that do not know each other .maslov and zhang called the network of interactions and overlaps a `` knowledge network '' .one of their main results is the following : there exists a critical density of known overlaps above which almost all the a priori unknown overlaps are completely determined by the known ones .this transition is a realization of the so - called rigidity percolation .however , their treatment leaves several important issues aside , and assumes that we have at our disposal much more information that we typically do .for instance , the size of the vectors describing each agent is a priori unknown ; the problem of estimating from the data was addressed in .more drastically , the data on the overlaps is necessarily noisy : if and model the interests of persons and , their mutual appreciation is certainly not completely determined by the overlap of their interests , although it is probably biased by it . in this more realistic case of noisy information ,the questions are : does the `` phase transition '' noted by maslov and zhang survive ? and how to retrieve the information contained in the noisy knowledge network ?we address these issues in the following by studying a simple model of this situation .the outline of the paper is as follows : we present in section [ sec : model ] the details of the model we are going to study , and the mapping onto a disordered statistical mechanics problem , which happens to be the one studied in and more recently in .this mapping opens the door to the use of many analytical and numerical methods . in section[ sec : cavity ] , we give the solution of this problem at the replica symmetric level , using the cavity method .we then check these analytical results against numerical simulations in section [ sec : num ] , and real data from the united states senate in section [ sec : us_senate ] .we present now the noisy version of maslov and zhang s `` knowledge network '' which we are going to study ; for simplicity , the variable describing each agent is discrete , and one dimensional .we consider agents ; each one is characterized by an opinion , with ; the may take different values , and are a priori unknown .the may be for instance political opinions , as in the example of section [ sec : us_senate ] .we suppose we have some information on the , given by a an analog of the `` overlaps '' of : for a certain number of pairs we know a number associated to it , constructed as follows .if , then with probability , and with probability ; if , then with probability , and with probability .we take . 
is then a measure of the noise in the information ; in the limit , the network does not convey any information on the .the basic questions we ask are : how well can we reconstruct the actual opinions knowing the ?do we have an effective algorithm to do so ?we are interested in the probability of any set of opinions , given the representing our knowledge ; from bayes formula , we can write : the factor is the prior probability on the ; we suppose from now on that it is flat , so that this term is independent of the .it would be possible however to consider another prior probability .the factor is difficult to compute , as the are correlated in an intricated way ; however , it is in any case independent of the , so it acts as a normalization factor for the distribution ( [ eq : proba ] ) . finally , the is easy to compute , since once the are given , the are independent .let us consider two agents and with opinions and ; then from simple algebra one checks that since the are independent once the are given , eq .( [ eq : proba ] ) may be rewritten as where the index means that the product runs over the pairs that are connected by a known .taking the logarithm , we have : =-log \left [ p \left ( \{s_i\}|\{j_{ij}\ } \right ) \right ] = \mbox{cste}-b\sum_{<i , j > } j_{ij}(2\delta_{s_i , s_j}-1)~ , \label{eq : ham}\ ] ] with eq .( [ eq : ham ] ) can be seen as the hamiltonian of a disordered potts model , which opens the door to the use of many analytical and numerical tools to study it . from now on ,we will concentrate for simplicity on the ising case , where each agent may have only two opinions , or . in this ising case ,the hamiltonian reads : =-log \left [ p \left ( \{s_i\}|\{j_{ij}\ } \right ) \right ] = \mbox{cste}-b\sum_{<i , j > } j_{ij}s_is_j~ , \label{eq : ham_is}\ ] ] the sets with maximum probability are the minimizers of eq .( [ eq : ham ] ) ; the minimizer is not necessarily unique .the question , how well can we reconstruct the real opinions knowing the is then rephrased as : given a minimizer of eq .( [ eq : ham ] ) , how far is it from the real opinions ?we answer this question in the next section .we note that this rephrasing of the problem bears some resemblance with the community detection , or clustering problem as stated in ; in this work however , the probabilistic analysis yields a potts - like model without disorder .hamiltonian ( [ eq : ham_is ] ) is not as well - suited for analytical treatment as it seems to be .it is a disordered ising model , but the probability distribution of the couplings is not known , and actually very complicated : the relevant information we want to extract is precisely hidden in the correlations between the .the following gauge transformation , somewhat miraculously , yields a tractable problem .we define and ( the are the true opinions of the agents ) ; then = -b\sum_{<i , j > } \tilde{j}_{ij}\tilde{s}_i\tilde{s}_j~. \label{eq : ham2}\ ] ] the distribution of the does not depend any more on the : with probability and with probability : all correlations in the couplings have disappeared .furthermore , given a set , it is easy to know how far the corresponding set is from the original : it is enough to compute the number of equal to .thus we are left with the study of hamiltonian ( [ eq : ham2 ] ) , which is that of a ferromagnetically biased ising spin glass .we would like to compute the magnetization of the ground state of such a hamiltonian . 
from now on ,we remove the on the s and the s .let us note that the ground state does not depend on , so we may take for simplicity ( as long as , that is ) . all explicit dependence on then removed , which is very convenient for practical purposes , as is a priori unknown : the knowledge of the is sufficient to determine the minimizers of ( [ eq : ham2 ] ) .we need however to keep as a parameter in the theoretical analysis , and will turn later to the issue of estimating it .it turns out that the ferromagnetically biased ising spin glass given by hamiltonian ( [ eq : ham2 ] ) has been studied recently by castellani et al . in for fixed connectivity graphs . in the present context , it is more natural to consider random graphs of erds - rnyi type , with a poissonian distribution of connectivity .however , such a change from fixed to poissonian connectivity usually does not induce any qualitative change in the phase diagram .castellani et al .use the cavity method to compute , among other quantities , the one we are interested in : the ground state magnetization as a function of the parameter .let us summarize briefly their main results : at low , the ground state is replica symmetric and magnetized , the ground state magnetization approaching when goes to ; at some critical , the replica symmetry is broken , but the ground state is still magnetized ; finally , for , the ground state looses its magnetization . when the connectivity of the graph increases , this picture is unchanged , but the value of and increase .we give now the replica symmetric solution of ( [ eq : ham2 ] ) , for an erds - rnyi random graph , with a poissonian connectivity distribution , of degree . calculations closely follow those of for fixed connectivity .the cavity messages sent by the sites along the links take only the values , and . at the replica symmetric level ,the system is then described by a single probability distribution : we write a recursion relation for the probability distribution as follows : where sgn is the sign function , taken to be zero when the argument is zero ; means `` expectation '' over the coupling .[ eq : recursion1 ] straightforwardly translates into three fixed point equations for and ( see fig .[ fig : recursion ] for an explanation of and ) : \nonumber \\q_-&=&1-q_+-q_0~. \label{eq : recursion}\end{aligned}\ ] ] once and are known , the ground state magnetization is given by the expression : ~.\ ] ] to compute the ground state energy , one computes the energy shifts due to the addition of a site , and due to the addition of a link .one gets after straightforward calculations : the ground state energy is then given by the qualitative picture emerging from this replica symmetric analysis is the following : for each mean connectivity , there is a critical value such that for , it is possible to extract information from the knowledge network .the error rate in the limit is directly related to the ground state magnetization : for , it is not possible any more to extract meaningful information from the data in the limit : the error rate tends to .we compare these replica symmetric analytic results to numerical simulations in the next section .we can make however some a priori remarks on the validity of the calculation .first , we expect the calculations to be exact at small enough ; we then expect a replica symmetry breaking transition at some . 
for ,the replica symmetric results are not reliable any more .we expect that the phase transition described above towards a non magnetized ground state is shifted to some .however , the qualitative result of a transition between one phase which contains some information and another one which does not should still hold true .another word of caution is in order : the authors of note strong finite size effects for a fixed connectivity network ; this is likely to be the case also for a poissonian network , and it may smear out somewhat the transition for finite . as already noted above , eq .( [ eq : ham2 ] ) only depends on through the parameter , so an a priori knowledge of is not necessary to carry out the minimization .this is an interesting practical advantage .however , the amount of errors contained in the minimizer strongly depends on , as explained above .so it would be useful to have some information about the value of , to get an estimate of the amount of errors contained in the ground state .it is indeed in some cases possible to estimate from the only available data , the s .suppose we are given a network .it is possible to compute for this network , the ground state energy as a function of , by randomly choosing the s with probability ; this can be done analytically in some cases with the cavity method , or numerically .then one computes the ground state of the network with the real s from the data ; comparing with the , one gets an estimate of , provided the curve is not flat .we now compare the analytical prediction of the previous section to data generated randomly : we randomly assign a value or to spins ; we randomly draw a network connecting these spins , and randomly assign a value or to each link connecting spins and , following the rule : we then numerically minimize the corresponding hamiltonian . for this purpose , we may use simulated annealing .it is simple to program , but not very fast , and does not perform well in the replica symmetry broken phase .however , the structure of the problem may suggest to use another class of algorithm , intensively studied in different contexts recently ( see for instance for a pedagogical introduction in the context of error correcting codes ) : belief propagation ( bp ) .bp is not expected to perform better than simulated annealing in the replica symmetry broken phase , and it may sometimes fail to converge . however , it performs overall very well , and is much faster than simulated annealing , which allows to reach higher : this is crucial to deal with large data sets . on fig .[ fig : energy_magn ] , one sees that the agreement between simulations using bp and replica symmetric calculations is very good for low . for larger , there are important discrepancies , that may have two origins .first , one expects a replica symmmetry breaking , as in ; this means that the replica symmetric calculation is not exact any more , and that bp is not expected to perform well .second , as already noticed in , finite size effects are strong . 
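Two of the numerical ingredients discussed above can be sketched in a few lines. First, the replica-symmetric prediction can be estimated by population dynamics. The sketch below assumes the standard zero-temperature cavity update u_new = sgn(J × sum of incoming messages), with sgn(0) = 0 and gauged couplings J = +1 with probability 1-p, and it reads the ground-state magnetization off the distribution of total local fields; the fixed-point equations of the text are thus sampled rather than iterated analytically, and defaults are my own.

```python
import numpy as np

def rs_population_dynamics(c, p, pop_size=20_000, sweeps=100, seed=0):
    """Population-dynamics estimate of the replica-symmetric cavity fixed
    point for the gauged +/-J model on a Poissonian graph of mean degree c.
    Returns (q_plus, q_zero, q_minus, m_ground_state)."""
    rng = np.random.default_rng(seed)
    pop = np.ones(pop_size, dtype=int)   # start near the magnetized solution

    def cavity_field():
        k = int(rng.poisson(c))
        return int(rng.choice(pop, size=k).sum()) if k else 0

    for _ in range(sweeps):
        for idx in range(pop_size):
            J = 1 if rng.random() > p else -1
            pop[idx] = int(np.sign(J * cavity_field()))
    q_plus = float(np.mean(pop == 1))
    q_zero = float(np.mean(pop == 0))
    fields = [cavity_field() for _ in range(10_000)]
    m = (sum(f > 0 for f in fields) - sum(f < 0 for f in fields)) / len(fields)
    return q_plus, q_zero, 1.0 - q_plus - q_zero, m
```

Second, a schematic belief-propagation routine for the pairwise Ising model with energy -B Σ J_ij s_i s_j, in the usual cavity-field parametrization; the implementation actually used in the paper may differ in its update schedule, temperature handling and stopping rule.

```python
import math, random
from collections import defaultdict

def bp_ising(J, n, beta=1.0, iters=200, damping=0.5, tol=1e-8, seed=0):
    """Sum-product belief propagation for the pairwise Ising model with
    energy -beta * sum J_ij s_i s_j and no external fields.  Messages are
    cavity fields u[(i, j)] sent from i to j; the returned per-site
    magnetizations m_i = tanh(sum_k u[(k, i)]) give the inferred opinions
    through their signs."""
    rng = random.Random(seed)
    nbrs = defaultdict(list)
    coup = {}
    for (i, j), Jij in J.items():
        nbrs[i].append(j)
        nbrs[j].append(i)
        coup[(i, j)] = coup[(j, i)] = Jij
    u = {e: rng.uniform(-0.1, 0.1) for e in coup}  # small random initialization
    for _ in range(iters):
        diff = 0.0
        for (i, j) in u:
            h = sum(u[(k, i)] for k in nbrs[i] if k != j)
            new = math.atanh(math.tanh(beta * coup[(i, j)]) * math.tanh(h))
            diff = max(diff, abs(new - u[(i, j)]))
            u[(i, j)] = damping * u[(i, j)] + (1.0 - damping) * new
        if diff < tol:
            break
    return [math.tanh(sum(u[(k, i)] for k in nbrs[i])) for i in range(n)]
```

The guessed opinions are the signs of the returned magnetizations, to be compared with the hidden ones up to a global flip; sites with magnetization close to zero are the ones most likely to be wrongly guessed, as noted in the text.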
however , the numerical results seem compatible with the main analytical finding : the presence of a transition between a low phase which contains information , and a high one that does not .we also note that the error rate obtained with bp is always smaller than the theoretical one estimated from the replica symmetric analysis .finally , it is interesting to compare quantitatively these results with those of for regular graphs : both theory and numerics predict a significantly higher threshold between the informative and non informative phases for a poissonian network , for a given mean connectivity .bp does have another big advantage over simulated annealing : its outcome is a magnetization for each site ; so we also have an indication on which sites are most likely to be wrongly guessed ( those with magnetization close to zero ) . as a final remark , it could be possible to improve performance in the replica symmetry broken phase by using a survey propagation algorithm .the analytical results of section [ sec : cavity ] are strengthened by the numerical simulations of section [ sec : num ] ; however , unlike the numerical data , any real data set does not follow exactly the probabilistic model underlying our study .it is thus important to assess how robust are the results with respect to some uncertainty in the model . in this section, we will analyze data from the united states senate votes , and show that the strategy of minimizing hamitonian ( [ eq : ham2 ] ) does allow to retrieve some information from the data ; the amount of information retrieved is in reasonable quantitative agreement with the predictions of section [ sec : cavity ] .we consider here as agents the 100 us senators serving in 2001 .the party of each senator plays the role of the unknown opinion ; say if senator is a democrat , and if senator is a republican .on the us senate website ( http://www.senate.gov/ ) , the voting positions of all senators are available for the so - called `` roll call votes '' .we expect that senators from the same party tend to cast the same vote , and senators from different parties tend to vote differently , although it is of course not an absolute rule .we construct an instance of the `` knowledge network problem '' as follows : * we pick up a random network with given parameter , and the senators as nodes . * for each edge of the network , linking two senators with labels and , we pick up randomly one roll call vote in early 2001 and consider the voting positions of the two senators and .if they casted the same vote , we set ; if they casted a different vote , we set . varying the random network and the random pick of the roll call votes for each link , we can generate many different instances of the `` knowledge network '' for each . 
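The construction of the Senate instances just described can be written down directly. The sketch below assumes a hypothetical votes table (one list of vote codes per senator, e.g. 'Yea'/'Nay', as scraped from the Senate web site); the actual data format, and the handling of absences or abstentions, are not specified here.

```python
import random

def senate_instance(votes, c, seed=0):
    """Build one knowledge-network instance from roll-call votes.

    votes : list of per-senator vote lists (hypothetical format);
            votes[i][r] is senator i's position on roll call r.
    c     : mean connectivity of the random network over the senators.
    For each random edge (i, j) one roll call r is drawn at random and
    J_ij = +1 if the two senators cast the same vote, -1 otherwise."""
    rng = random.Random(seed)
    n = len(votes)
    n_calls = len(votes[0])
    J = {}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < c / n:
                r = rng.randrange(n_calls)
                J[(i, j)] = 1 if votes[i][r] == votes[j][r] else -1
    return J
```

Such an instance can then be passed to the belief-propagation routine sketched earlier, and the senate split into two groups according to the signs of the resulting magnetizations.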
as senators from the same ( resp .different ) party tend to cast the same ( resp .different ) vote , they tend to be linked by edges with positive ( resp .negative ) s .the fact that senators do not always vote like the majority of their colleagues from the same party plays the role of a noise .we crudely model this situation as in section [ sec : model ] , assuming that with probability , and with probability , being unknown , smaller than .we now want to retrieve some information about the s ( ie the party of each senator ) , using the method described in this paper .based only on the set of the , we run the bp algorithm for each instance of the `` knowledge network '' , without using any a priori knowledge on the parameter ; we then split the senate in republicans and democrats , according to the bp results .we can check how many errors we have , and compare with the theory of section [ sec : cavity ] .note that we can choose the connectivity of the random network .we have no control however on the parameter .the results are presented in fig .[ fig : ussenate ] , and compared to the replica symmetric analytical calculations .they seem to be consistent with the main qualitative analytical result : the existence of a threshold separating a phase containing almost no information ( low ) and a phase which contains some ( high ) .we also see on fig .[ fig : ussenate ] , that there is a strong sample to sample variability ; for small error rates however ( large values of the mean connectivity ) , the agreement is rather good ; for smaller , the agreement is poor .there are two explanations for that , besides the fact that the votes are not random : replica symmetry is probably broken , and , which is more important for such small systems ( ) , finite size effects create large bias .we note however that the practical error rate is usually smaller than the analytical one .we have extended the `` knowledge network '' formalism of to the more realistic case of noisy data .we have shown that there is a phase transition between an information - rich phase , and a phase that essentially contains no information . in the former situation, the information may be efficiently retrieved through a belief propagation algorithm .there are several possible extensions to this work .the most direct ones are the study of non - binary opinions ( potts - like models ) , or multidimensionnal opinions . with the applications to commercial websites in mind presented in , it would also be interesting to consider bipartite networks .for all these cases , it seems that the disordered statistical mechanics point of view used in this paper may be fruitful , by suggesting the use of some powerful analytical as well as numerical techniques .99 s. maslov and y .- c .zhang 2001 `` extracting hidden information from knowledge networks '' , _ phys .lett . _ * 87 * , 248701 . f. bagnoli , a. berrones and f. franci 2004 `` de gustibus disputandum ( forecasting opinions by knowledge networks ) '' , _ phys . a _ * 332 * , 509 - 518 . c. kwon and d. thouless 1988 `` ising spin glass at zero temperature on the bethe lattice '' , _ phys .rev b _ * 37 * , 7649 - 7654 .t. castellani , f. krzakala and f. ricci - tersenghi 2005 `` spin glass models with ferromagnetically biased couplings on the bethe lattice : analytic solutions and numerical simulations '' _ europhys .j. b _ * 47 * , 1434 . m. mzard , g. parisi 2003 `` the cavity method at zero temperature '' , _ j. stat. phys . _ * 111 * , 1 .m. b. 
hastings 2006 `` community detection as an inference problem '' _ physe _ * 74 * , 035102 .m. mzard and a. montanari 2006 `` reconstruction on trees and spin glass transition '' , _ j. stat ._ * 124 * , 1317 - 1350 . m. mzard and r. zecchina 2002 `` random k - satisfiability problem : from an analytic solution to an efficient algorithm '' _ physe _ * 66 * , 056126 .a. montanari 2005 `` two lectures on iterative coding and statistical mechanics '' , delivered in les houches ; cond - mat/0512296 .
|
we address the problem of retrieving information from a noisy version of the `` knowledge networks '' introduced by maslov and zhang . we map this problem onto a disordered statistical mechanics model , which opens the door to many analytical and numerical approaches . we give the replica symmetric solution , compare with numerical simulations , and finally discuss an application to real data from the united states senate . _ keywords _ : communication , supply and information networks ; random graphs , networks ; message - passing algorithms .
|
the analysis of network data is an open statistical problem , with many potential applications in the social sciences [ ] and in biology [ ] . in such applications ,the models tend to pose both computational and statistical challenges , in that neither their fitting method nor their large sample properties are well understood .however , some results are becoming known for a model known as the stochastic blockmodel , which assumes that the network connections are explainable by a latent discrete class variable associated with each node . for this model , consistency has been shown for profile likelihood maximization [ ] , a spectral - clustering based method [ ] , and other methods as well [ ] , under varying assumptions on the sparsity of the network and the number of classes .these results suggest that the model has reasonable statistical properties , and empirical experiments suggest that efficient approximate methods may suffice to find the parameter estimates .however , formally there is no satisfactory inference theory for the behavior of classical procedures such as maximum likelihood under the model , nor for any procedure which is computationally not potentially np under worst - case analysis . in this note, we establish both consistency and asymptotic normality of maximum likelihood estimation , and also of a variational approximation method , considering sparse models and restricted sub - models . to some extent, we are following a pioneering paper of celisse et al .[ ] , in which the dense model was considered , and consistency was established , but only for a subset of the parameters .we consider a class of latent variable models considered by various authors [ ] , which we describe as follows .let be latent random variables corresponding to vertices , taking values in \equiv \{1,\ldots , k\} ] , and let be a symmetric matrix in ^{k\times k} ] , which satisfies and is given by ^n } f(z , a).\ ] ] it is data from gm which we assume we observe .we will allow to be parameterized by taking values in some restricted space , so that parametric submodels of the blockmodel may be considered .we will consider parameterizations of the form , in which where is a nonnegative scalar ; is a euclidean parameter ranging over an open set ; is a symmetric matrix in ; and the map is assumed to be smooth .let .the interpretation of these parameters is that ] denote a permutation of ] , let , and for , let .it then holds for any permutation that and hence showing that when is latent , the stochastic blockmodel is nonidentifiable .specifically , is equivalent to for any permutation .let denote this equivalence class , which corresponds to a relabeling of the latent classes . by an estimate under the gm blockmodel, we will mean the equivalence class . by consistency and asymptotic normality of , we will mean that contains an element that converges to the generative , or has error that is asymptotically normal distributed for some rate . 
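For reference, a minimal sampler for the model just defined is given below; it assumes independent Bernoulli edges given the classes and an undirected graph without self-loops, and takes the edge-probability matrix P (e.g. the sparse parameterization with P proportional to a fixed symmetric matrix) as an explicit input. Names and defaults are my own.

```python
import random

def sample_blockmodel(n, pi, P, seed=0):
    """Sample (Z, A) from a K-class stochastic blockmodel: Z_i i.i.d. with
    class probabilities pi, and, independently for each pair i < j,
    A_ij = A_ji ~ Bernoulli(P[Z[i]][Z[j]]); no self-loops.
    pi : length-K list of class probabilities, P : symmetric K x K matrix
         of edge probabilities."""
    rng = random.Random(seed)
    Z = rng.choices(range(len(pi)), weights=pi, k=n)
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < P[Z[i]][Z[j]]:
                A[i][j] = A[j][i] = 1
    return Z, A
```

Because the class labels are only identified up to permutation, any comparison of an estimate with the generating parameters has to be made over the equivalence class of relabelings, as discussed above.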
in our analysis, we will assume that the generative has no identical rows , as we can not expect to successfully distinguish classes which behave identically .if did contain identical rows , then an additional source of nonidentifiability would exist .also , the generative model would be equivalent to a stochastic blockmodel of smaller order .we do not treat such cases here .we note that for some restricted submodels , identifiability can be restored by imposing a canonical ordering of the latent classes .for example , the submodel may restrict so that depends only on whether or not ; this assumption could reflect homogeneity of the classes , and is explored in .this submodel is identifiable under ordering of , and the latent structure might be more gracefully described as a partition , that is , a variable satisfying iff . as a second example , the latent classes could be ordered by decreasing expected degree .if the submodel restricts the expected degrees to be unique , the submodel is identifiable ; further discussion can be found in .an interesting class of submodels , discussed in , are the `` degree - corrected '' blockmodels with -many classes obtained by considering , for , which take values ; where takes values with probabilities ; and given parameters ] , is a distribution on , and is replaced by a symmetric map ] which is generally intractable .however , note that we have added new parameters . intuitively , we expect the variational estimate to approximate the maximum likelihood estimate when there exists which is close to .we remark that is upper and lower bounded by for any ^n ] .theorem [ thconsistency ] from states that under the conditions of this lemma , this implies that . by markovs inequality , this implies that , which can be rewritten as combining ( [ eqvar3 ] ) and ( [ eqvar4 ] ) establishes ( [ eqvarlemma ] ) .our result for the variational estimates is theorem [ thvar ] .[ thvar ] let denote ] , it holds that so that theorem [ thvar ] implies to upper bound the same quantity , we observe that using lemma [ lepnass ] .thus , the arguments used to bound also imply the parametric bootstrap is also valid for .the algorithm is : estimate by .generate graphs of size according to the blockmodel with parameter , producing .fit by variational likelihood to get .compute the variance covariance matrix of these vectors and use it as an estimate of the truth , or similarly , use the empirical distribution function of the vectors . under the conditions of theorem [ thnormality ], the parametric bootstrap distribution of and converges to the gaussian limits given by lemma [ lecgmnormality ] . without loss of generalitywe take , so that we are asking that when the underlying parameter is , the random law of and converges with probability tending to 1 to the gaussian limits of and as generated under .let have the distribution of the cg mle based on the data that we have generated from . by standard exponential theory such as our lemma [ lecgmnormality ] , we observe that since the convergence is uniform on contiguous neighborhoods of and the mapping is smooth .as theorem [ thvar ] implies local asymptotic normality , a theorem of le cam [ , corollary 12.3.1 ] implies that with probability tending to , where denotes contiguity . as a result ,le cam s first contiguity lemma ( stated below ) in conjunction with theorem [ thvar ] implies that using this result with ( [ eqbootstrap1 ] ) , it follows that establishing the theorem . 
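Two computational pieces appearing above can be sketched explicitly. The first is the mean-field variational lower bound on the blockmodel log-likelihood in its standard form (the paper's exact normalization of the bound may differ by constants); here q is an n × K matrix of approximate class-membership probabilities, the added parameters referred to in the text.

```python
import math

def variational_bound(A, q, pi, P, eps=1e-12):
    """Mean-field lower bound
       J(q, theta) = sum_i sum_k q_ik * log(pi_k / q_ik)
                   + sum_{i<j} sum_{k,l} q_ik q_jl * [ A_ij log P_kl
                                                      + (1 - A_ij) log(1 - P_kl) ],
    the standard form used in variational EM for blockmodels."""
    n, K = len(q), len(pi)
    val = sum(q[i][k] * (math.log(pi[k] + eps) - math.log(q[i][k] + eps))
              for i in range(n) for k in range(K))
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(K):
                for l in range(K):
                    lp = (math.log(P[k][l] + eps) if A[i][j]
                          else math.log(1.0 - P[k][l] + eps))
                    val += q[i][k] * q[j][l] * lp
    return val
```

The second is the parametric bootstrap loop described above. Here `fit` stands for any routine returning estimated class proportions and edge probabilities from an adjacency matrix (for instance a maximizer of the bound above); it is a placeholder rather than a function supplied by the paper, and the sampler is the one sketched earlier for this model.

```python
def parametric_bootstrap(A, fit, R=200, seed=0):
    """Parametric bootstrap for blockmodel estimates: refit R graphs drawn
    from the model at the estimated parameters; the empirical covariance of
    the refitted estimates (after aligning class labels across replicates)
    estimates the sampling variability of the original estimate."""
    pi_hat, P_hat = fit(A)
    reps = []
    for r in range(R):
        _, A_r = sample_blockmodel(len(A), pi_hat, P_hat, seed=seed + r)
        reps.append(fit(A_r))
    return reps
```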
for completeness , we state le cam s first contiguity lemma as found in , lemma 6.4 .let and be sequences of probability measures on measurable spaces .then the following statements are equivalent : .if converges in distribution under to along a subsequence , then .if converges in distribution under to along a subsequence , then .for any statistics : if , then .in this paper , we have studied stochastic block and extended blockmodels , such that the average degree tends to at least at a polylog rate , and the number of blocks is fixed .we have shown : subject to identifiability restrictions , methods of estimation and parameter testing on maximum likelihood have exactly the same behavior as the same methods when the block identities are observed , such that an easily analyzed exponential family model is in force .the approach uses the methods of slightly corrected .unfortunately , computation of the likelihood is as difficult as the np - complete computation of modularities , which also yield parameter estimates that are usable in the same way .we also show that the variational likelihood , introduced in this context by , has the same properties as the ordinary likelihood under these conditions ; hence , the procedures discussed above but applied to the variational likelihood behave in the same way . the variational likelihood can be computed in operations , making this a more attractive method .these results easily imply that classical optimality properties of these procedures , such as achievement of the information bound , hold .a number of major issues still need to be resolved . hereare some : since the log likelihoods studied are highly nonconcave , selection of starting points for optimization seems critical .the most promising approaches from both a theoretical and computational point of view are spectral clustering approaches [ ] .blockmodels play the role of histogram approximations for more complex models of the type considered in , and if observed covariates are added for models such as those of .this implies permitting the number of blocks to increase , which makes perfect classification and classical rates of parameter estimation unlikely .issues of model selection and regularization come to the fore .some work of this type has been done in , but statistical approximation goals are unclear . we have indicated that our results for -parameterized blockmodels also apply to submodels which are sufficiently smoothly parameterizable .it seems likely that our methods can also apply to models where there are covariates associated to vertices or edges .we adopt the convention of and let denote . recall that . let . let . 
for any ^n ] by \bigl(a , a'\bigr ) = \frac{1}{n } \sum _ { i } 1\bigl\{{{\mathbf e}}_i = a , { { \mathbf c}}_i = a'\bigr\}.\ ] ] we observe that for fixed , is constrained to the set .let abbreviate .let .let denote the full data likelihood of the stochastic blockmodel , let denote the likelihood modularity [ ] , defined as .we observe that equals .\end{aligned}\ ] ] for , it is shown in that where the function is given by for and in the -simplex .the result of establishes that the following properties hold for [ also see for a reworked derivation ] : the function is maximized by any .the function is uniformly continuous if and are restricted to any subset bounded away from 0 .let .given , it holds for all that the directional derivatives are continuous in for all in a neighborhood of .we will use an bernstein inequality result , similar to that shown in .[ lepnas ] let . for , and \\[-8pt ] & & \qquad \leq2\pmatrix{n \cr m } k^{m+2 } \exp \biggl(-\frac { n}{m(8c_s+2)}\varepsilon^2 \mu_n \biggr)\nonumber\end{aligned}\ ] ] for . is a sum of independent zero mean random variables bounded by .thus by a bernstein inequality , we may bound and to yield for fixed that a union bound establishes ( [ eqpnaseq1 ] ) .similarly , is a sum of independent zero mean random variables bounded by .thus , we may bound and to yield for fixed that a union bound establishes ( [ eqpnaseq2 ] ) , where we use that for fixed .proof of theorem [ thconsistency ] the proof can be separated into four parts . herewe show , for some , that is suboptimal by at least for all in a set .this will imply that . by ( [ eqpnaseq1 ] ) , uniformly over ; hence , by continuity of there exists such that as a result , given the sets it holds for all that .we may choose to additionally satisfy where we require slowly enough that .we wish to show for a result similar to part 1 . however , as some will be very close to , we must bound the suboptimality of more carefully . by ( [ eqpnaseq2 ] ), it holds that it follows that we may choose such that where the final equality holds because , so that we may choose such that .it follows that here we bound the suboptimality of in similar fashion to part 1 .recall with .let abbreviate .property 3 implies that for all , where denotes that is bounded below ( in probability ) by times a constant factor . as , this implies for all , as converges in probability to , properties 3 and 4 together imply for all , and thus for , and hence also that it can be seen that . as a result , by ( [ eqpart2 ] ) , for all , and hence manipulation yields for all , where the term is uniform over . 
as a result, it follows from ( [ eqlastrevision ] ) that for , where the is uniform over .it follows that } \nonumber\\ & & \qquad\leq \sum_{m=1}^n \sum _ { { { \mathbf e}}:|\bar{{{\mathbf e}}}-{{\mathbf c}}|=m } e^{\mu_n f ( { o({{\mathbf c}})}/{\mu_n},\pi({{\mathbf c } } ) ) } e^{- { \mu_n}\omega_p(m)/{n } } \\ & & \qquad\leq \sum_{m=1}^n e^{\mu_n f ( { o({{\mathbf c}})}/{\mu _ n},\pi ( { { \mathbf c } } ) ) } k^k n^m k^m e^{-{\mu_n}\omega_p(m)/{n } } \nonumber\\ & & \qquad\leq \sum_{m=1}^n e^{\mu_n f ( { o({{\mathbf c}})}/{\mu _ n},\pi ( { { \mathbf c } } ) ) } k^k e^{m ( \log n + \log k - \omega_p(\mu _n / n ) ) } \nonumber\\ & & \qquad = e^{\mu_n f ( { o({{\mathbf c}})}/{\mu_n},\pi({{\mathbf c } } ) ) } o_p(1).\nonumber\end{aligned}\ ] ] combining ( [ eqpart3 ] ) and ( [ eqpart1 ] ) yields that since is unimodal in , it holds that if , then , and hence by lemma [ lecgmnormality2 ] , for any nonidentity permutation .it follows that \\[-8pt ] & = & \max_{\theta ' \in\mathcal{s}_\theta } f(a,{{\mathbf c}};\theta ) \bigl(1+o_p(1 ) \bigr).\nonumber\end{aligned}\ ] ] combining ( [ eqpart4 - 1 ] ) and ( [ eqpart4 - 2 ] ) yields letting abbreviate , and using , where in the last equality we have used the fact that for all since equals the likelihood of the mle under the cgm model , it holds that converges in distribution , and hence .we may therefore substitute to yield which proves the theorem .we would like to thank the reviewers for their help in fixing an earlier version of the paper .
|
variational methods for parameter estimation are an active research area , potentially offering computationally tractable heuristics with theoretical performance bounds . we build on recent work that applies such methods to network data , and establish asymptotic normality rates for parameter estimates of stochastic blockmodel data , by either maximum likelihood or variational estimation . the result also applies to various sub - models of the stochastic blockmodel found in the literature . , ,
|
we consider for simplicity an isotropic random walk embedded in the one - dimensional finite domain with initial position and absorbing boundaries ( so implicitly assuming that surrounding targets are located at and ) .we will choose arbitrarily , so can be interpreted as the initial distance of the searcher to the nearest target .the searcher moves continuously with constant speed and performs consecutive flights whose duration is distributed according to a multiexponential distribution function in the form so yielding a -scale msrw characterized by the persistence times and their corresponding weights , that satisfy the normalization condition .we now define as the probability that the walker starts at time from a single flight characterized by the distribution ( in the following , using a particular distribution is termed as _ being in state _ ) .the vector accounts for the set of initial conditions in all states , with the probability of being in state at . using this notation , the multi - scale ( non - markovian )walk gets reduced to a set of markovian states which satisfy ( according to standard prescriptions of the continuous - time random walk ) the mesoscopic balance equations ( for ) , where we have introduced the compact notation . the corresponding probability that the walker , passing through at time , is performing at that instant a flight in state given by here we have used the relation , valid for exponential distributions , which gives the probability that a single flight in state will last at least a time . due to the markovian embedding used , the general propagator of the random walk in an infinite media can be written in the laplace space ( with the laplace argument ) as with the probability density for state , , given by a sum of exponentials where and are positive constants to be determined from the solution of the system ( [ varphi]-[p ] ) .hence , the solution in the interval of interest with periodic boundary conditions reads finally , the exact mfpt can be computed from eq .( [ periodic ] ) by extending known methods for markovian processes ; in particular , we employ here the renewal method for velocity models . according to this ,we define as the first - passage time probability rate for a walker through any of the boundaries while being in state .the renewal property of markovian processes allows then to write the recurrence relations where is defined as the probability rate with which the walker hits ( not necessarily for the first time ) the boundary at time while being in state .the term has the same meaning but for a walker starting its path at state from the boundary ( so with ) . 
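The mean first-passage time defined above can also be estimated by direct Monte-Carlo simulation of the walk, which is how the analytical results are checked further on. The sketch below assumes the domain is [0, L] with targets (absorbing boundaries) at 0 and L, that the walker starts at x = a, and that the direction of each flight is redrawn as ±1 with equal probability at every renewal; names and defaults are my own.

```python
import random

def msrw_mfpt(a, L, v, times, weights, n_runs=20_000, seed=0):
    """Monte-Carlo estimate of the mean first-passage time of a multi-scale
    random walk on [0, L] with absorbing boundaries at 0 and L.
    The walker starts at x = a and moves at constant speed v; each flight
    lasts an exponential time of mean times[i], the scale i being chosen
    with probability weights[i], and the flight direction is +/-1 with
    equal probability."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        x, t = a, 0.0
        while True:
            scale = rng.choices(times, weights=weights)[0]
            tau = rng.expovariate(1.0 / scale)     # flight duration
            d = rng.choice((-1.0, 1.0))            # flight direction
            x_new = x + d * v * tau
            if x_new <= 0.0 or x_new >= L:         # absorbed during this flight
                target = 0.0 if d < 0 else L
                t += abs(target - x) / v
                break
            x, t = x_new, t + tau
        total += t
    return total / n_runs
```

For instance, a two-scale strategy in the asymmetric regime can be probed with something like `msrw_mfpt(a=0.05, L=1.0, v=1.0, times=(100.0, 0.05), weights=(0.05, 0.95))`, i.e. occasional very long flights combined with frequent short ones, and compared with a quasi-ballistic reference such as `msrw_mfpt(a=0.05, L=1.0, v=1.0, times=(1e6,), weights=(1.0,))`; scanning the short scale and the weight attached to the long scale over a grid is then a crude way of reproducing the optimizations discussed below.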
according to ( [ fpt ] ) the hitting rate gets divided into those trajectoriesfor which this is the first hitting rate ( ) plus those trajectories that hit the boundary for the first time at a previous time in any of the possible states ( second term on the lhs of ( [ fpt ] ) ) .the total first - passage distribution of the msrw will read then ( where the s are to be determined from the system of equations ( [ fpt ] ) ) , and the general expression for the mfpt will be by definition then , to find a closed expression for one just needs to express the hitting rates and in terms of the solutions of the random - walk ( [ varphi]-[periodic ] ) .this is given , in analogy to previous works , by here clearly a different behaviour for the case when the walker starts from the boundaries is introduced by convenience to make explicit that the walker can not get trapped by the boundary immediately at , but hittings are only possible for . in the following we study how the different scales considered in the msrw contribute to the search efficiency as a function of the two prominent spatial scales present in the problem ( i.e. and ) .the msrw scheme described above reduces trivially in this case to a classical correlated random walk ( see , e.g. , ) for which the free propagator ( equation ( [ infinite ] ) ) reads \ ] ] using the derivations in equations ( [ periodic]-[rate ] ) , the mfpt in ( [ mfpt ] ) yields the exact expression obtained by weiss thirty years ago so , assuming that , are fixed by the external or environmental conditions , we observe that the search optimization of the 1-scale random walk turns out to be trivial : faster searches ( i.e. larger values of ) and straighter trajectories ( i.e. ) will monotonically reduce the search time . in particular , note that for one recovers the result , which coincides with the result for a ballistic strategy .it is clear then that in 1-scale random walks the exploration - exploitation tradeoff ( vs ) is always trivially optimized through a ballistic strategy ( in agreement with the results in ) .as we shall see in the following , at least 2 scales are necessary in the random walk to observe such effects .the exact analytical solution for this case can still be found easily , albeit the general expression for the mfpt obtained is cumbersome ; details are provided in the supplementary information ( si ) file .a first survey on this solution ( which was implemented in maple ) allows us to observe that for _ large _ values of the balistic - like strategy ( i.e. , ) is again the one which minimizes .however , for _ small _ values we find now the emergence of an asymmetric regime in which the optimal is attained for one of the two scales ( either or ) being much larger than the time required to cover the domain , with the other scale exhibiting a smaller value .the threshold at which this transition occurs ( so , the value of for which the optimum becomes smaller than ) turns out to be , a value which is confirmed by random - walk simulations too . 
at the sight of these results, we will focus now our interest in providing some limit expressions which can help us to understand how this transition occurs and how the system behaves in the _ asymmetric _ regime .first we note that , in solving the exploration - exploitation tradeoff , the exploration part will be always optimized through flights much longer than the typical time to cover the whole domain , which explains why one of the two scales ( say , ) should be expected to be as large as possible , in particular . regarding the second scale ,the exploitation side of the tradeoff ( corresponding to exploring the surrounding area searching for nearby targets ) should intuitively benefit from choosing a scale of the order of , the time required to reach the nearest target .scales much larger than this would promote exploration instead of exploitation , while scales much smaller would lead to an unnnecessary overlap of the searcher s trajectory around its initial position . since the _ asymmetric_ regime must emerge necessarily from the asymmetric condition we can thus consider that this second scale should satisfy .taking the two limits ( and ) into account , our general solution for the mfpt reduces to \right ) .\label{mfpt2}\ ] ] visual inspection of this expression already shows that values of the mfpt below the ballistic threshold can be now obtained for appropriate combinations of , and . in particular , in the limit when the previous expression gets minimized for the value where we use the asterisk to denote values that are _optimal_. after minimizing also with respect to we find that the global optimum of the mfpt corresponds to now , in the limit we observe that and .altogether , these results provide a clear and simple description of the search dynamics in the _ asymmetric _ regime for 2-scale msrws which confirms our discussion above . the optimum strategy in the _ asymmetric_ regime will combine a very large scale ( for exploration purposes ) with a shorter scale of the order of ( for better exploitation of the nearest target ) .it is particularly interesting that the optimal weight must be rather small , so the searcher just needs occasional ballistic flights while spending the rest of the time searching intensively its surroundings .so , the optimal strategy does not consist just on an appropriate choice of the scales and but also on using them in an adequate proportion .figures [ fig1 ] and [ fig2 ] show the comparison between random - walk simulations ( symbols ) and our method , both for the exact case ( solid lines ) and for the approximations ( [ mfpt2]-[optimal ] ) ( dotted lines ) . note in figure [ fig1 ] that the optimum value of the mfpt clearly improves ( specially for very small ) the value obtained for a ballistic strategy or a lvy walk strategy ( dashed and dashed - dotted horizontal lines , respectively ) , so revealing that an appropriate combination of only two move length scales can be actually more efficient than a scale - free strategy . the range of validity of the approximated results ( [ mfpt2]-[optimal ] ) is also shown in the plots , as well as the scaling derived in eq .[ optimal0 ] ( see figure [ fig2 ] ) . despite finding a set of combinations of and outperforming both lvy and ballistic strategies ,these results show that it is necessary for the searcher to have some information about the domain scales ( i.e. and ) in order to fine - tune search and get effective strategies . 
without thisknowledge lvy or ballistic patterns look as robust strategies , that could be even more effective than searching with badly adjusted movement scales , as suggested by the comparison in figure [ fig1 ] .this fact is also confirmed when observing the dependence of on ( figure [ fig3 ] , see also figure s1 in the si ) in order to assess the range width at which and lead to optimality . in figure [ fig3 ] we provide the results of our exact solution for different values of and different weights ( here the approximated results and simulations are not shown in order to facilitate understanding ) . in accordance to our analytical results above, we observe that values of close to minimize the mfpt .so , there are certain values of for which the mfpt becomes lower than the balistic value ( but we we stress that the most critical parameter for getting below this threshold is clearly ) .actually , for the two upper panels ( which correspond to and ) we observe that any choice of and would result in a better ( or as good as ) performance than a balistic strategy , while the region where the lvy strategy is outperformed is relatively small .we stress finally that we have carried out studies , both analytically and numerically , for the case when the initial position is not fixed but is distributed according to an exponential or a gaussian distribution , so a range of values is allowed ( results not shown here ) . the results for all these cases coincide qualitatively with those reported above , so whenever values are predominant the _ asymmetric _ regime is recovered .this confirms that the emergence of the _ asymmetric _ regime is not an artefact caused by the choice of a fixed initial condition , in agreement with recent numerical studies .provided that the initial time to reach any of the targets is given by the two timescales and , it could be intuitively expected that these are the only ones necessary to reach an optimal strategy .to check this we have solved analytically the 3-scale case ( see si ) and , given that the expression obtained is extremely cumbersome , we have used markov chain monte carlo algorithms in order to determine numerically the global minimum of the mfpt as a function of the parameters and .the results so obtained are conclusive and confirm the idea that indeed only two scale are needed to minimze the mfpt .we find that for large values of the optimal strategy is again ballistic - like ( so only displacements with should be performed in order to minimize ) .instead , for small enough the optimum arises through the combination of only two scales which coincide with those found for the optimal 2-scale case ; this means that two of the three scales involved ( say , and ) will eventually have the same value after minimization .even more surprising is that when the initial condition is governed by two different scales ( by combining two different values of , each with a given probability ) the optimum still corresponds to a 2-scale random - walk ; in this case the optimum value is in between the optimum values that one would find for each of the two values alone .further studies are thus needed to confirm to what extent the combination of only two scales is universally robust and effective enough , independently of the number of prominent spatial scales present in the domain ; this point will be the focus of a forthcoming work .the main result extracted from the theoretical analysis reported here is that msrws with only two movement characteristic scales can represent 
a mathematical optimum ( in terms of mfpt minimization ) for random search strategies .this has been proved by checking that additional scales do not allow to improve the optimum achieved for 2 scales .the optimal solution outperforms both ballistic and lvy strategies but only for specific intervals of the characteristic parameters and which depend on the characteristic scales of the domain ( namely , and ) . in particular , the global optimum turns out to be given by and , which can be intuitively justified in terms of optimizing the tradeoff between exploring for faraway targets and exploiting nearby resources .while the theoretical analysis provided here has been restricted to the one - dimensional case ( for which an exact solution for the mfpt is attainable ) , we think that these arguments are generally valid and so we expect them to hold in higher dimensions , and probably in more complicated situations as for instance in biased searches too . in the context of animal foraging ,the fact that fine - tuned 2-scale random - walks outperform lvy walks represents a convenient extension of the lvy flight paradigm from the completely uninformed scenario to that in which domain scales are ( partially or completely ) available to the organism . in the uninformed case , where the characteristic scales of the search problem are unknown , a scale - free strategy represents a convenient ( albeit sub - optimal ) solution .however , in cognitive systems search optimization programs should be adjustable on the basis of accumulated evidence . as we show here , 2-scale walks would be optimal provided that the searcher has previous available information ( at least some crude guess ) about the values of the scales and .let us stress that we are considering that such prior guess about target distances is limited ( by the searcher cognitive capacity ) or not informative enough ( e.g. landscape noise , insufficient cumulative evidences ) to set up a deterministic search strategy ; so , the random - walk hypothesis is still meaningful . accordingly , as more and more information about the domain scales becomes integrated by the searcher we should observe a tendency towards a reduction ( and an adjustment ) in the number ( and magnitude ) of movement scales used , respectively .this process should go on up to the point where barely one or two scales would persist .furthermore , we note that for the extreme case of perfectly informed ( deterministic ) walkers no characteristic search scales at all would be necessary since in that case the search process is plainly directed towards the target .our results add then some new dilemmas and perspectives on the fundamental problem of what biological scales could be relevant in terms of a program driving animal paths to enhance foraging success . within the uninformed scenario, weierstrassian walks involving a relatively low number of scales in a geometric progression have been proposed as an efficient mechanism to implement lvy - like trajectories .this itself builds on the more general idea of reproducing power - law paths through a hierarchical family of random walks .these weierstrassian walks provide , due to its relative simplicity , a promising approach to bring together the ideas from the lvy flight paradigm and those from msrws , although we stress that many alternative markovian embeddings for power - laws do exist in the literature . 
within this context, the existence of a correlation between the number of movement scales and the informational gain we suggest here may pose new challenges for experimentalists and data miners .for example , provided that we can conveniently interpret animal trajectories in terms of a combination of scales , can we infer something about the informational capacity of the individual from the number of scales observed and from the relation between their values ?how can we differentiate informationally - driven scales from the internally - driven ones ? while we are not yet in position to provide a definite answer to these questions , we expect that the ideas provided in this work can stimulate research in this line and can assist experimentalists towards new experimental designs for a better understanding of the interplay between animal foraging , landscape scales , and information processing. * acknowledgements . *this research has been partially supported by grants no .fis 2012 - 32334 ( vm , dc ) , sgr 2013 - 00923 ( vm , fb , dc ) and the the human frontier science program rgy0084/2011 ( fb ) .epr acknowledges cnpq and facepe .raposo , e.p . ,bartumeus , f. da luz , m.g.e . , ribeiro - neto , p.j . , souza , t.a . ,viswanathan , g.m .2011 how landscape heterogeneity frames optimal diffusivity in searching processes ._ plos comp_ 7 , e1002233 . , , and , and for different initial conditions .the plot shows the exact analytical solution ( solid lines ) , random - walk simulations averaged over realizations ( circles ) and the approximation given by eq .( [ mfpt2 ] ) ( dotted lines ) .the values obtained for ballistic and lvy strategies are also given for comparison ( dashed and dashed - dotted horizontal lines , respectively ) . ]( dotted lines ) in comparison to random - walk simulations ( symbols ) .different values of the weight are shown : ( circles ) , ( triangles ) , ( inverted triangles ) and ( diamonds ) .full symbols denote the values above which the _ symmetric _ regime appears and so there is no optimal persistence .inset : the same results are shown with in the vertical axis .the collapse observed confirms the scaling analytically derived . ] , and with different initial conditions .the plot shows the exact analytical solution for different values ( circles ) , ( triangles ) , ( inverted triangles ) and ( diamonds ) .the values obtained for ballistic and lvy strategies are also given for comparison ( dashed and dotted lines , respectively ) . ]
|
an efficient searcher needs to balance properly the tradeoff between the exploration of new spatial areas and the exploitation of nearby resources , an idea which is at the core of _ scale - free _ lvy search strategies . here we study multi - scale random walks as an approximation to the scale- free case and derive the exact expressions for their mean - first passage times in a one - dimensional finite domain . this allows us to provide a complete analytical description of the dynamics driving the asymmetric regime , in which both nearby and faraway targets are available to the searcher . for this regime , we prove that the combination of only two movement scales can be enough to outperform both balistic and lvy strategies . this two - scale strategy involves an optimal discrimination between the nearby and faraway targets , which is only possible by adjusting the range of values of the two movement scales to the typical distances between encounters . so , this optimization necessarily requires some prior information ( albeit crude ) about targets distances or distributions . furthermore , we found that the incorporation of additional ( three , four , ... ) movement scales and its adjustment to target distances does not improve further the search efficiency . this allows us to claim that optimal random search strategies in the asymmetric regime actually arise through the informed combination of only two walk scales ( related to the exploitative and the explorative scale , respectively ) , expanding on the well - known result that optimal strategies in strictly uninformed scenarios are achieved through lvy paths ( or , equivalently , through a hierarchical combination of multiple scales ) . search theory aims at identifying optimal strategies that help to promote encounters between a searcher and its target . statistical physics approaches often identify searchers as random walkers , capitalizing on the idea of search as movement under uncertainty . the assumption that the searcher lacks any information about target locations leads to the fundamental question of how individual paths should be orchestrated to enhance random encounter rates . based on these assumptions , a proper measure of search efficiency is given by the mean - first passage time ( mfpt ) of the random walker through the target location , a quantity which is also the focus of interest in many other areas of physics and science . random search theory was spurred in the nineties as a result of a series of works suggesting that lvy patterns ( particularly , lvy walks ) can be optimal strategies for uninformed space exploration ( see , e.g. , ) , and it has continually been developed since then . lvy walks can be defined as random paths composed of statistically identical flights whose length probability distribution decays asimptotically as ( with ) , where is frequently known as the _ lvy exponent_. so , lower values of the lvy exponent comparatively imply a higher frequency of long flights . as a remarkable result , viswanathan and colleagues identified two different regimes associated to random search dynamics depending on whether targets are completely or uncompletely depleted after encounter , the so - called _ destructive _ and _ non - destructive _ dynamics , respectively . more recently it has been emphasized that these two dynamical regimes are much general and should be renamed and directly associated to searcher - to - target distances . 
in the _ symmetric _ regime targets are expected to occur at an average , characteristic distance from the searcher . for example , a _ destructive _ search dynamics tend to deploy targets locally and promote targets being faraway ( on average ) from the searcher , at least in low - density and homogeneous landscapes . a _ symmetric _ regime may also correpond to high target density scenarios where ( on average ) most targets can be assumed to be closeby . in the _ asymmetric _ , instead , a wide variety of searcher - to - target distances exist ( i.e. heterogeneous landscapes ) , and both near and faraway targets may coexist in different proportions . a more convenient understanding and interpretation of these regimes can be attained if linked to the general binomia exploitation - exploration . according to it , three different scenarios should be distinguished : ( i ) those situations in which exploration is clearly preferred over exploitation ( so a ballistic strategy , defined as a straight trajectory without changes of direction , is then trivially expected to be optimal ) . this is the case where revisiting areas is worthless and the optimization requires performing displacements as long as possible without changing direction ; a ballistic strategy is thus preferable . this happens if targets are uniformly distributed and can be fully depleted or if all targets are faraway ( on average ) from the searcher . ( ii ) those situations where exploitation prevails over exploration ( so local , spatially bounded or diffusive search is optimal ) . this is the case of a searcher being nearby a set of targets ( patch ) that is never depleted . the random searcher always has the possibility to come back to the patch and the strategy of sticking around it is much preferable because no other targets are available in the landscape . and ( iii ) those situations where a true exploitation - exploration tradeoff emerges because the search necessarily requires the ability to reach both nearby and distant targets . for example if target distribution is patchy and while walking one can have nearby and faraway patches . while the dynamics in the _ symmetric _ regime is thus straightforward to understand in terms of maximization ( scenario i ) or minimization ( scenario ii ) of the area explored , the details driving optimization in the _ asymmetric _ regime ( scenario iii ) , in particular how movement scales determine search efficiency , have remained partially obscure to date . this is so because analytical methods for the determination of the mfpt ( often valid just for markovian processes ) are difficult to extend to lvy or other superdiffusive dispersal mechanisms . in the last years , much effort has been devoted to overcome this limitation . for instance , in the asymptotic behaviour of the first - passage distribution of lvy flights in semi - infinite media was obtained . other authors have derived expressions and scaling properties of mfpts for moving particles described either by fractional brownian motion or fractional diffusion equations . finally , the alternative approach to approximate lvy paths through an upper bound truncation ( so that lvy properties hold just over a specific set of scales ) has been explored too . but despite these advances , analytical arguments able to explain the different optimization dynamics observed in the _ asymmetric _ compared to the _ symmetric _ regime are still lacking . 
lvy or scale - free paths can be conveniently approximated through a combination of multiple scales . this is tantamount to expressing power - law functions as a combination of exponentials or providing a markovian embedding for lvy stochastic processes . composite random walks and , more in general , multi - scale random walks ( msrw ) have also emerged recently as an alternative to the presence of scale - free signatures in animal trajectories . it is not clear yet whether the emergence of multi - scaled movement behaviour in biology responds to exploratory behaviour tuned to uncertainty ( lvy as the limiting case ) , or else to informed behavioural processes linked to landscape through sensors and/or memory . it is thus important to understand how this multi - scaled behaviour should be coupled with other relevant landscape magnitudes like target distributions and searcher - to - target average distances . inspired by these ideas , in the present work we derive an exact analytical method for the determination of the mfpt of msrws as an approximation to the scale - free case . while the method proposed becomes increasingly complicated as more scales are considered , we show that 2-scale random walks can effectively resolve the explotation - exploration tradeoff emergent in the _ asymmetric _ regime by adjusting movement scales to target distances . furthermore , the comparison between the 2-scale and the 3-scale random walk suggests that incorporating a third scale does not produce any advantage . therefore , we conclude that an optimal random search strategy in the _ asymmetric _ regime consists on combining two informed movement scales that should approximately correspond to nearby / faraway target distances . hence , an informed adjustement of movement scales improves search efficiency compared to any non - informed strategy ( where scales are imposed at random ) . in the case of non - informed strategies , however , msrws aproximating the lvy strategy are the best solution to solve exploitation - exploration tradeoffs .
|
the following list shows a complete list of commands from singularity theory that have so far been implemented in singularity : * ` verify ` ; section [ secverify ] , * ` normalform ` ; section [ secnormalform ] , * ` universalunfolding ` ; section [ secuniversalunfolding ] , * ` recognitionproblem ` ; section [ secrecognitionproblem ] , * ` checkuniversal ` ; subsection [ seccheckuniversal ] , * ` transformation ` ; section [ sectransformation ] , * ` transitionset ` ; subsection [ sectransitionset ] , * ` persistentdiagram ` ; subsection [ secpersistentdiagram ] , * ` nonpersistent ` ; subsection [ secnonpersistent ] , * ` intrinsic ` ; section [ secintrinsic ] , * ` algobjects ` , ` rt ` , ` t ` , ` p ` , ` s ` , ` tangentperp ` , ` sperp ` , ` intrinsicgen ` ; section [ secalgobjects ] .+ the following enlists all implemented tools from computational algebraic geometry : * ` multmatrix ` ; section [ secmultmatrix ] , * ` division ` ; section [ secdivision ] , * ` standardbasis ` ; section [ secstandardbasis ] , * ` colonideal ` ; section [ seccolonideal ] , * ` normalset ` ; section [ secnormalset ] .+ in this user guide we will explain the capabilities , options and how each of these commands work .after installing singularity on maple software running on your computer , the above list is accessible by using the command ` with(singularity ) ` .in fact , maple enlists all the commands in the above two tables as its output .the terminology `` singularity theory '' has been used to deal with many different problems in different mathematical disciplines .singularity theory here refers the methodologies in dealing with the qualitative behavior of local zeros where is a state variable and is a distinguished parameter .the cases of multi - dimensional parameters are dealt with through the notions of unfolding .singularity will be soon enhanced to deal with the cases of multi - dimensional state variables . in many real life problems at certain solutions , is _ singular _, i.e. , a singular germ subjected to smooth changes demonstrates surprising changes in the _ qualitative properties _ of the solutions , e.g. , changes in the number of solutions .this phenomenon is called a _ bifurcation_.we define the _ qualitative properties _ as the invariance of an equivalence relation .the equivalence relation used in singularity is _ contact equivalence _ and is defined by where while is locally a diffeomorphism such that and .in this section we describe how to determine the permissible computational ring and the truncation degree .in fact , each smooth germ with a nonzero - infinite taylor series expansion must be truncated at certain degree .further , there are four different options in singularity for computational rings , i.e. , polynomial , fractional , formal power series and smooth germ rings , that they each can be used by commands in singularity for bifurcation analysis of each singular germ .the command ` verify(g ) ` derives the following information about the singular germ for correct and efficient computations : 1 .the permissible computational rings for the germ .the least permissible truncation degree for computations involving the germ .in other words , the computations modulo degrees of higher than ( but not equal to ) does not lead to error .3 . 
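As a simple worked instance of this definition (an illustrative example of my own, not taken from the package documentation, and assuming the usual conventions that the factor S is positive and that the change of coordinates preserves orientation and fixes the origin):

```latex
% g(x,\lambda) = x^3 + \lambda x  and  h(x,\lambda) = 8x^3 + 2\lambda x
% are contact-equivalent: take S \equiv 1, X(x,\lambda) = 2x, \Lambda(\lambda) = \lambda.
\begin{aligned}
h(x,\lambda) &= S(x,\lambda)\, g\bigl(X(x,\lambda),\Lambda(\lambda)\bigr)
             = (2x)^3 + \lambda\,(2x) = 8x^3 + 2\lambda x ,\\
&\text{with } S \equiv 1 > 0,\qquad X_x(0,0) = 2 > 0,\qquad \Lambda'(0) = 1 > 0 .
\end{aligned}
```

so the two germs define the same singularity up to contact equivalence and share the same qualitative local zero set.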
our recommended computational ring .we are also interested in the above information for the following purposes : * a list of germs in two variables for either division , standard basis computations , multiplication matrix , or intrinsic part of an ideal . * a parametric germ with for either transition set computation or persistent bifurcation diagram classification .+ l|x * command * & * description * + ` verify`( ) & derives the permissible computational rings and permissible truncation degree .+ default upper bound for truncation & a permissible truncation degree is computed as long as it is less than or equal to * ` ideal ` ; ` persistent ` ; the command ` verify`( , ` ideal ` , ` vars ` ) deals with an ideal generated by , i.e. , where is a list of germs .it returns the permissible computational ring and a permissible truncation degree when the ideal is of finite codimension .otherwise , it remarks that the ideal is of infinite codimension ." however , the command ` verify`( , ` persistent ` , ` vars ` ) determines the least permissible truncation degree so that the computations associated with either persistent bifurcation diagram classification or transition sets would be correct . * ` fractional ` ; ` formal ` ; ` smoothgerms ` ; ` polynomial ` ; the command uses either the rings of fractional germs , formal power series or ring of smooth germs . *upper bound for truncation degree ; this lets the user to change the default upper bound truncation degree from to . `verify`( ) gives the following rings are allowed as the means of computations : ring of smooth germs ring of formal power series ring of fractional germs the truncation degree must be : 3 ` verify`( , 2 ) gives the following warning message : `` increase the upper bound for the truncation degree ! '' ` verify`( , \verb"ideal " , [ x , \lambda] ] ) gives the least permissible truncation degree to be 2 .a germ is called a normal form for the singular germ when has a minimal set of monomial terms in its taylor expansion among all contact - equivalent germs to .therefore , it is easier to analyze the solution set of while it has the same qualitative behavior as zeros of do .+ l|x * command / the default options * & * description * + ` normalform`( ) & this function derives a normal form for .+ the computational ring & the default ring is the ring of fractional germs . + verify / warning / suggestion & it automatically verifies if fractional germ ring is sufficient for normal form computation of the input germ otherwise , it writes a warning note along with a cognitive suggestion for the user .+ truncation degree & it , by default , detects the maximal degree in which thus , normal forms are computed modulo degrees higher than or equal to + input germ & it , by default , takes the input germ as a polynomial or a smooth germ .it truncates the smooth germs modulo the degree * specifies the degree so that computations are performed modulo degree .when the input degree is too small for the input singular germ , the computation is not reliable .thus , an error warning note is returned to inform the user of the situation along with cognitive suggestions to circumvent the problem . 
* ` fractional ` , ` formal ` , ` smoothgerms ` ; ` polynomial ` ; the command uses either the rings of fractional germs , formal power series or ring of smooth germs .when it is necessary , warning / cognitive suggestions are given accordingly .* ` list ` ; this generates a list of possible normal forms for the germ .different normal forms may only occur due to possible alternative eliminations in intermediate order terms .` normalform`( , 10 , ` smoothgerms ` ) generates x^3- . while ` normalform`( , 10 , ` formal ` ) gives rise to x^4-^2 . using ` normalform`( , 10 , ` polynomial ` ) gives the following suggestion and warning note .warning : the polynomial germ ring is not suitable for normal form computations .suggestion : use the command ` verify ` to find the appropriate computational ring .the following output might be wrong .the germ is an infinite codimensional germ .in fact the above statement is wrong since the high order term ideal contains ^6+^4+^2 . nowthe command ` verify`( gives rise to fractional germ ring ; formal power series ring ; smooth germ ring .thus , we use ` normalform`( , 10 , ` fractional ` ) to obtain x^5+x^3+^2 .generally in dealing with singular problems , extra complications are experienced in the laboratory data than what are predicted by the modeling theoretical analysis . the problem here is due to _ modeling imperfections _ ; natural phenomena can not be perfectly modeled by a mathematical model .in fact one usually neglects the impact of many factors like friction , pressure , and/or temperature , etc . , to get a manageable mathematical model .otherwise one will end up with a mathematical modeling problem with too many or infinite number of parameters .the imperfections around singular points may cause dramatic qualitative changes in the solution set of the model .universal unfolding gives us a natural way to circumvent the problem of imperfections .a parametric germ is called an _ unfolding _ for when an unfolding for is called a _versal unfolding _when for each unfolding of there is a smooth germ so that is contact - equivalent to roughly speaking , a versal unfolding is a parametric germ that contains a contact - equivalent copy of all small perturbations of .a versal unfolding with insignificant parameters is not suitable for the bifurcation analysis .so , we are interested in a versal unfolding that has a minimum possible number of parameters , that is called _universal unfolding_. in other words , universal unfolding has the minimum possible number of parameters so that they accommodate all possible qualitative types that small perturbations of may experience .l|x * command / the default options * & * description * + ` universalunfolding`( ) & this function computes a universal unfolding for .+ the computational ring & by default , singularity uses the ring of fractional germs . + verify / warning / suggestion & this automatically derives the least sufficient degree for truncations and also verifies if fractional germ ring is sufficient for computation .otherwise , it writes a warning note along with guidance on the suitable rings for computations and hints at other possible capabilities of singularity .+ degree & it , by default , detects the maximal degree in which terms of degree higher than or equal to can be ignored .thus , the computations are performed modulo degree + input germ & the default input germ is a polynomial or a smooth germ . 
for an input smooth germ ,the default procedure ` universalunfolding ` truncates the smooth germs modulo i.e. , modulo degrees higher than and equal to * ` normalform ` ; a universal unfolding for normal form of is derived by this option . * ` list ` ; this function provides the list of possible universal unfoldings for . * ; the degree determines the truncation degree so that all computations are performed modulo for low degrees of it may derive wrong results .thus , it gives a warning error and a suggestion for the user when must be a larger number for correct result .* ` fractional ` ; ` formal ` ; ` smoothgerms ` ; ` polynomial ` ; this determines the computational ring .the command ` universalunfolding ` gives a warning note when the user s choice of computational ring is not suitable for computations involving the input germ and writes a suggestion to circumvent the problem . `universalunfolding` gives rise to & x^3-x+ _ 1+_2 & + & x^3-x+ _ 1+_2 x^2 . & ` universalunfolding` leads to & x^3-x+_2x^2+_1 & + & x^3-x+_2+_1 . & now consider ` universalunfolding` gives the following warning error and suggestion : _ warning : the ring of polynomial germs is not suitable for normal form computations of __ suggestion :the permissible computational ring options are _ ` fractional ` , ` smoothgerms ` _ and _ ` formal ` .we describe the command ` recognitionproblem ` on how it answers the recognition problem , that is , what kind of germs have the same normal form or universal unfolding for a given germ ? ** low order terms*. low order terms refer to the monomials in which do not appear in any contact - equivalent copy of . ** high order terms*. the ideal represents the space of negligible terms that are called high order terms .these terms are eliminated in normal form of * * intermediate order terms*. a monomial term are called an intermediate order term when it is neither low order nor high order term .intermediate order terms may or may not be simplified in normal form computation of smooth germs .the answer for the recognition problem for normal form of a germ is a list of zero and nonzero conditions for certain derivatives of a hypothetical germ when these zero and nonzero conditions are satisfied for a given germ the germ and are contact - equivalent .each germ with a minimal list of monomial terms in its taylor expansion constitutes a normal form for consider a parametric germ and a germ then , is usually a universal unfolding for when and certain matrix associated with has a nonzero determinant .thus , the answer of the recognition problem for universal unfolding is actually a matrix whose components are derivatives of a hypothetical parametric germ satisfying .+ l|x * command/ default * & * description * + ` recognitionproblem`( ) & returns a list of zero and nonzero conditions on certain derivatives of a hypothetical germ . 
a given germ is contact - equivalent to when those conditions are satisfied .+ computational ring & the default is fractional germ ring .the lack of warning notes is a confirmation that the fractional germ ring is suitable for computation .+ truncation degree & it automatically computes an optimal truncation degree and performs the remaining computations modulo degrees of higher than ( but not equal to ) .+ verification / warning & a warning note of possible errors is given when the computational ring is not suitable for the germ the truncation degree is also checked and if it is not sufficiently large enough , a warning note is given .warning notes are accompanied with cognitive suggestions to circumvent the problem .* ; this number represents the truncation degree . * computational ring : ` fractional ` , ` formal ` , ` smoothgerms ` ; ` polynomial ` ; the command accordingly uses either the rings of fractional germs , formal power series or smooth germs . * ` universalunfolding ` ; it returns a matrix .the matrix components consists of certain derivatives of a hypothetical parametric germ .then , a parametric germ is a universal unfolding for when and the associated matrix has a nonzero determinant .` recognitionproblem`( , 6 , ` formal ` ) gives rise to `` nonzero condition= '' , ] . ` recognitionproblem`( , universalunfolding , 6 , ` smoothgerms ` ) gives rise to ( ccccc 0 & 0 & 0 & g_x , x , x(0 ) & g_x , x,(0 ) + 0 & g _ , ( 0 ) & 0 & g_x , x,(0 ) & g_x , , ( 0 )+ g__1(0 ) & g _ , _1(0 ) & g_x , _1(0 ) & g_x , x , _1(0 ) & g_x , , _1(0 ) + g__2(0 ) & g _ , _ 2(0 ) & g_x , _2(0 ) & g_x , x , _ 2(0 ) & g_x , , _2(0 ) + g__3(0 ) & g _ , _3(0 ) & g_x , _3(0 ) & g_x , x , _3(0 ) & g_x , , _3(0 ) + ) 0 . + let be a parametric germ where .+ l|x * command * & * description * + ` checkuniversal`( ) & this function checks if a parametric germ is a universal unfolding for ` checkuniversal`( ) gives `` yes ''for each two contact - equivalent germs and there are diffeomorphic germs and smooth germ such that and . + l|x * command / option * & * description * + ` transformation`( ) & this function computes the smooth germs transforming the germ into its normal form modulo degree where terms of degree higher than or equal to are high order terms .+ transformation( ) & this function computes suitable smooth maps for transforming the germ into modulo high order terms .* this number specifies a degree so that computations are performed modulo degrees of higher or equal to .when is less than the degrees of high order terms a warning note is given . `transformation`( ) gives rise to x&=&x++x + ^2 , ( ) = , + s&=&1 - 3x^2 - 3x -^2 - 3x^3 - 9 x^2 - 9x ^2 - 3 ^3 .bifurcation diagram analysis of a parametric system is performed by the notion of _ persistent _ and _ non - persistent _ bifurcation diagrams .bifurcation diagram of is defined by a bifurcation diagram is called _ persistent _ when the bifurcation diagrams subjected to small perturbations in parameter space remain self contact - equivalent . the classification of persistent bifurcation diagrams are performed by the notion of transition sets . in fact , a subset of parameter space is called _ transition set _ when the associated bifurcation diagrams are non - persistent . 
_transition set _ is denoted by and is usually a hypersurface of codimension one for germs of finite codimension .then , one choice from each connected components of the complement of the transition set makes a complete persistent bifurcation diagram classification of a given parametric germ .this provides a comprehensive insight into the persistent zero solutions of a parametric germ .the parameters associated with non - persistent bifurcation diagrams are split into three categories : _ bifurcation _ , _ hysteresis _ , and _double limit point_. these are defined and denoted by the transition set is now given by .suppose that is a singular parametric germ .+ l|x * command / the default options * & * description * + ` transitionset`( ) & this function estimates the transition set in terms of parameters of the default is to eliminate and variables from the equations given by .+ truncation degree & for non - polynomial input germs , by default , it automatically computes a suitable truncation degree and truncates the input germ at degree , i.e. , preserving degrees of less than and equal to .* ] ) gives rise to & : = & \{(_1 , _ 2 , _ 3)|_2 ^ 4+_2 ^ 2_3+_1=0 } , + & : = & \{(_1 , _ 2 , _ 3)|128_2^ 2_3 ^ 3 + 3_3 ^ 4 + 72_1_3 ^ 2 + 432_1 ^ 2=0 } , + & : = & \{(_1 , _ 2 , _3)|-_3 ^ 2 + 4_1=0,_30}. ` transitionset`( ] ) generates figure [ 1 ] .` transitionset`( , [ x , \lambda , \alpha_1 , \alpha_2 , \alpha_3] ] ) generates a list from which the list of inequivalent bifurcation diagrams are chosen in figure [ fig2 ]. extra sources of non - persistent is caused by singular boundary conditions of a parametric scalar map restricted to a bounded domain .let be a closed disk and be two closed intervals .next , consider where and ; see ( * ? ? ?* pages 154 - 158 ) .the new non - persistent sources are defined by _c&:= & \{wf(x,,)=0 ( x , ) ul } , + _ sh&:= & \{wf = f_x=0 ( x , ) ul } , + _ sv&:= & \{wf = f_x=0 ( x , ) ul } , + _ t&:= & \{wf = f_=0 ( x , ) ul } , _ 1&:= & \{wf=0 ( x_0 , , ) ( x_0,)u l , + & & x_0x f = f_x=0 ( x , , ) ( x , ) u l } , + _ 2&:= & \{w(x_1 , ) , ( x_2 , ) ul ,x_1x_2 f = 0 + & & ( x_i , , ) i=1 , 2 } , & : = & \{wf = f_x = f_=0 ( x , , ) + & & ( x , ) ul } , + & : = & \{wf = f_x = f_xx=0 ( x , , ) + & & ( x , ) ul } , + _ d&:= & \{w(x_1 , ) , ( x_2 , ) ul , x_1x_2 f= f_x=0 + & & ( x_i , , ) i=1,2}. in this case , the transition set is given by , here & : = & _ c_sh_sv_t , + & : = & _ d_1_2 . for a finite codimension singular germ , is a hypersurface of codimension one and each two choices from a connected component in the complement of are contact - equivalent .therefore , we can classify the persistent bifurcation diagrams by merely choosing one representative parameter from each components of the complement set of and plotting the associated bifurcation diagrams .the command ` nonpersistent ` is designed for this purpose .+ l|x * command / default option * & * description * + ` nonpersistent`( , , ) & this function computes transition set for where bifurcation diagrams are limited on . here , and are only taken as closed intervals .further , it plots the transition set .+ box of figures & it plots transition set in \times [ -1 , 1] ] . * ` vertical ` ( ` horizontal ` is also similar ) ; this assumes that the boundary conditions is i.e. , there is only singular boundary conditions on vertical boundary lines . 
`nonpersistent`( , [ 1,3] ] {\langle x , \lambda\rangle} ] ; see .now we describe how to compute the multiplication matrix defined by [ mult ] _ u , j : , _ u , j(f+j):= uf+ j , where is an ideal generated by a finite set i.e. , , and is a monomial ; also see ( * ? ? ? * equation 3.4 ) .+ l|x * command / option * & * description * + ` multmatrix`( , ) & this function derives where is a monomial , is defined by equation .+ default computational ring & the fractional germ ring .+ truncation degree & when the input set of germs only includes polynomials , ` multmatrix`( , ) does not need truncation degree .however , for non - polynomial input germs , a truncation degree needs to be included .+ * ; determines the truncation degree .the user is advised to use the command ` verify ` to find an appropriate truncation degree * computational ring : ` fractional ` , ` formal ` , ` smoothgerms ` ; ` polynomial ` ; the command uses either the rings of fractional germs , formal power series or ring of smooth germs .the command ` verify ` is an appropriate tool to find / verify the appropriate computational ring .the command ` multmatrix`( , x , 6 , \verb"formal" ] , 8,\verb"smoothgerms" ] , formal power series , ] ) computes the standard basis of the set of germs \{x^5+x^2(+x)+^2 , x^3 ^ 2+()x , ^6+x^4-x}as colon ideal refers to the ideal defined by i : g_= \{f : f g_i}. using the arguments on ( * ? ? ?* page 22 ) , we have where is a standard basis for the ideal }. ] ) leads to the ideal generated by .for computing universal unfolding of a singular germ , we need to compute a basis for a complement vector space for the tangent space associated with this is equivalent to computing a basis for the quotient space more generally , the command ` normalset`( ) computes a monomial basis for , when is either an ideal or a vector space with finite codimension in the local ring .+ l|x * command * & * description * + ` normalset`( ) & computes a monomial basis for , when is a list of germs generating an ideal .for example the command ` normalset`( ] leads to m^3+^2 . as for a second example` intrinsic`( , [ \lambda x^3 + 2\lambda^2 , x^3 + 2\lambda , x^4+\frac{3}{5}\lambda x^2 , \lambda^2 , x^5]$ ] ) results in m^5+m^3+^2 .singularity theory defines and uses many algebraic objects in the bifurcation analysis of zeros of smooth germs .these include restricted tangent space , tangent space , high order term ideal , smallest intrinsic ideal associated with a singular germ , a basis for complement of the tangent space , and low order terms ; see .these can be computed in singularity using the command ` algobjects ` as well as the individual commands ` rt ` , ` t ` , ` p ` , and ` tangentperp ` .the individual commands ` rt ` , ` t ` , and ` p ` have the same default and non - default options as ` algobjects ` has as follows .+ l|x * command / option * & * description * + ` algobjects`( ) & this function computes , , , , , and intrinsic generators of for given . + ` rt`(g ) & this derives the restricted tangent space associated with a scalar smooth germ . + ` t`(g ) & this command provides a nice representation of the tangent space associated with the singular smooth germ the representation uses intrinsic ideal representation as for the intrinsic part of .+ ` p`(g ) & this computes the high order term ideal associated with the germ .+ ` tangentperp`(g ) & this first computes , i.e. , the tangent space of the germ and then returns a monomial basis for the complement space of . 
+ ` s`(g ) & computes the smallest intrinsic ideal containing the germ .+ ` sperp`(g ) & this derives a set of monomials of low order terms for the germ .+ ` intrinsicgen`(g ) & this derives the intrinsic generators of that determine the nonzero conditions for recognition problem for normal forms .+ computational ring & the default computational ring is the ring of fractional germs .+ default degree & for non - polynomial input germs , it computes the least degree so that truncations at degree is permissible .next , the germ is truncated and all algebraic objects are computed modulo degrees higher than or equal to . * ; this option enforces the computations modulo degree * ` formal ` ; ` smoothgerms ` ; ` polynomial ` ; ` fractional ` ; this determines the computational germ ring .it checks and verifies / gives warning notes of possible errors .now we present three examples of singular germs of high codimension ; see and compare these examples with the examples in ( * ? ? ?* page 4 ) .for example we consider a codimension 10 singularity and use ` algobjects`( ) .it gives & = & m^6+m^3 , + rt & = & m^6+m^3 + x^4 , 3 ^ 2x^3 + 5x^5 , ^2x^3+x^5+^3 , + t&= & m^5+^3+^2x^2+x^4 , x^3+^2 , + /t & = & 1 , , x , ^2 , x^2 , x^3 , x^2 , ^2 x , ^2x^2 , x , + & = & m^5+^3 , + ^ & = & 1 , , x , ^2 , x^2 , x^3 , x^4 , x^2 , ^2 x , ^2 x^2 , x , x^3 , + & = & x^5 , ^3 .a codimension 20 singularity : ` tangentperp`( ) derives the following t&= & m^9+^3+x^7 , x^8 , x^7 , -2 ^ 2 , + & = & 1 , , x , x^2 , x^3 , x^4 , x^5 , x^6 , x , x^2 , x^3 , x^6 , + & & ^2 x , ^2 x^2 , ^2 x^5 , x^3 ^ 2 , x^4 , x^4 ^ 2 , x^5 , x^6 ^ 2 .
|
this is a user guide for the first version of our developed maple library , named singularity . the first version here is designed for the qualitative study of local real zeros of scalar smooth maps . this library will be extended for symbolic bifurcation analysis and control of different singularities including autonomous differential singular systems and local real zeros of multidimensional smooth maps . many tools and techniques from computational algebraic geometry have been used to develop singularity . however , we here skip any reference on how this library is developed . this package is useful for both pedagogical and research purposes . singularity will be updated as our research progresses and will be released for public access once our draft paper is peer - reviewed in a refereed journal . this is a user guide on how to use the first version of our developed maple library ( named singularity ) for local bifurcation analysis of real zeros of scalar smooth maps ; see for the main ideas . we remark that the term _ singularity theory _ has been used in many different mathematics disciplines with essentially different objectives and tools but yet sometimes with similar terminologies ; for examples of these see . for more detailed information , definitions , and related theorems in what we call here _ singularity theory _ , we refer the reader to . * verification and warning note*. ` singularity ` is able to check and verify all of its computations . however , this sometimes adds an extra computational cost . this happens mainly for finding out the correct and suitable truncation degree and computational ring . therefore , it is beneficial to skip the extra computations when it is not necessary . for an instance of benefit , consider that you need to obtain certain results for a large family of problems arising from the same origin . therefore , you might be able to only check a few problems and conclude about the suitable truncation degree and computational ring for the whole family . thereby , the commands of ` singularity ` check and verify the output results unless it requires extra computation . in this case , a warning note of not verified output or possible errors is given ; in these cases , a recommendation is always provided on how to verify or circumvent the problem . lack of warning notes always indicates that the output results have been successfully verified .
|
the pattern classification problem is a problem of assigning a discrete class label to a given data sample represented by its feature vector .it has many applications in various fields , including bioinforamtics , biometrics verification , computer networks , and computer vision .for example , in the face recognition problem , given a face image , the target of pattern classification is to assign it to a person who has been registered in a database .this problem is usually composed of two different components feature extraction and classification .feature extraction refers to the procedure of extracting an effective and discriminant feature vector from a data sample , so that different samples of different classes could be separated easily .this procedure is usually highly domain - specific .for example , for the face recognition problem , the visual feature should be extracted using some image processing technologies , whereas for the problem of predicting zinc - binding sites from protein sequences , the biological features should be extracted using some biological knowledge . in terms of feature extraction of this paper, it is highly inspired by a hierarchical bayesian inference algorithm proposed in [ 24 ] .this new method created in has advanced the ground - truth feature extraction field and has provided a more optimal method for this procedure . on the other hand , different from feature extraction ,classification is a much more general problem .we usually design a class label prediction function as a classifier for this purpose .to learn the parameter of a classifier function , we usually try to minimize the classification error of the training samples in a training set and simultaneously reduce the complexity of the classifier . for example, the most popular classifier is support vector machine ( svm ) , which minimizes the hinge losses to reduce the classification error , and at the same time minimizes the norm of the classifier parameters to reduce the complexity . in this paper, we focus on the classification aspect .mutual information is defined as the information shared between two sets of variables .it has been used as a criterion of feature extraction for pattern classification problems .however , surprisingly , it has never been directly explored in the problem of classifier learning .actually , mutual information has a strong relation to kullback - leibler divergence , and there are many works using kl - divergence for classifiers .moreno et al . proposed a novel kernel function for support vector classification based on kullback - leibler divergence , while liu and shum proposed to learn the most discriminating feature that maximizes the kullback - leibler divergence for the adaboost classifier .however , both these methods do not use the kl - divergence based criterion to learn parameters of linear classifiers . to bridge this gap , in this paper , for the first time , we try to investigate using mutual information as a criterion of classifier learning .we propose to learn a classifier by maximizing the mutual information between the classification response variable and the true class label variable . the classification response variable is a function of classifier parameters and data samples .the insight is that mutual information is defined as the information shared between and . 
from the viewpoint of information theory ,if the two variables are not mutually independent , and one variable is known , it usually reduces the uncertainty about the other one .then mutual information is used to measure how much uncertainty is reduced in this case . to illuminate how the mutual information can be used to measure the classification accuracy , we consider the two extreme cases : * on one hand , if the classification response variable of a data sample is randomly given , and it is independent of its true class label , then knowing does not give any information about and vice versa , and the mutual information between them could be zero , i.e. , . * on the other hand ,if is given so that and are identical , knowing can help determine the value of exactly as well as reduce all the uncertainty about .this is the ideal case of classification , and knowing can reduce all the uncertainty about . in this case , the mutual information is defined as the uncertainty contained in ( or ) alone , which is measured by the entropy of or , denoted by or respectively , where is the entropy of a variable .since f and y are identical , we can have . naturally , we hope that the classification response can predict the true class label as accurately as possible , and knowing can reduce the uncertainty about as much as possible .thus , we propose to maximize the mutual information between and with regard to the parameters of a classifier . to this end, we proposed a mutual information regularization term for the learning of classifier parameters .an objective function is constructed by combining the mutual information regularization term , a classification error term and a classifier complexity term .the classifier parameter is learned by optimizing the objective function with a gradient descend method in an iterative algorithm .the rest parts of this paper are organized as follows : in section [ sec : met ] , we introduce the proposed classifier learning method . the experiment results are presented in section [ sec : exp ] . in section [ sec : con ] the paper is concluded .in this section , we introduce the proposed classifier learning algorithm to maximize the mutual information between the classification response and the true class label .we suppose that we have a training set denoted as , where is the -dimensional feature vector for the -th training sample , and is the number of training samples . the class label set for the training samples is denoted as , where is the class label of the -th sample .to learn a classifier to predict the class label of a given sample with its feature vector , we design a linear function as a classifier , where is the classifier parameter vector , is the classification response of given the classifier parameter , and is the signum function which transfers the classification response to the final binary classification result .we also denote the classification response set of the training samples as where is the classification response of the -th training sample . to learn the optimal classification parameter for the classification problem , we consider the following three problems : to learn the optimal classification parameter , we hope the classification response of a data sample obtained with the learned can predict its true class label as accurately as possible .to measure the prediction error , we use a loss function to compare a classification response against its corresponding true class label . 
given the classifier parameter , the loss function of the -th training sample with its classification response and true class label is denoted as .there are a few different loss functions which could be considered .hinge loss : : is used by the svm classifier , and it is defined as + + where is defined as + squared loss : : is usually used by regression problems , and it is defined as + logistic loss : : is defined as follows , and it is also popular in regression problems , + = \log \left [ 1 + \exp(-y_i { { \textbf{w}}}^\top { { \textbf{x}}}_i)\right ] .\end{aligned}\ ] ] exponential loss : : is anther popular loss function which could be used by both classification and regression problems , which is defined as + obviously , to learn an optimal classifier , the average loss of all the training samples should be minimized with regard to . thus the following optimization problem is obtained by applying a loss functions to all training samples , to reduce the complexity of the classifier to prevent the over - fitting problem , we also regularize the classifier parameter by a norm term as we also propose to learn the classifier by maximizing the mutual information between the classification response variables and the true class label variables .the mutual information between two variables and is defined as where is the marginal entropy of , which is used to measure the uncertainty about , and is the entropy of conditional on , which is used as the measure of uncertainty of when is given . to use the mutual information as a criterion to learn the classifier parameters , we first need to estimate and .estimation of : : we use the training samples to estimate , and according to the definition of entropy , we have + + where is the probability density of .it could be seen that the entropy of is the expectation of .the non - parametric kernel density estimation ( kde ) is used to estimate the probability density function , + + where is the gaussian kernel function and is the bandwidth parameter .estimation of : : we also use the training samples to estimate , and according to its definition , we have + + where is the probability density of class label , is the number of samples with the class label equal to , and + + is the conditional entropy of given the class label .we also use the kde to estimate the conditional probability density function + + substituting it to ( [ equ : h_fy ] ) , we have the estimated , + with the estimated entropy and the conditional entropy , the mutual information between the variable and could be rewritten as the function of parameter by substituting , to learn the classifier parameter , we maximize the mutual information with regard to , * remark * : it should be noted that similar to our method , the algorithm proposed in maximizes kl - divergence between the class pdf , , and the total pdf , , therefore , has relation to method in kullback - leibler boosting . however , different from our method , it uses kl - divergence as a criterion to select the most discriminating features , whereas our method uses mutual information as a criterion to learn the classifier parameter . by combining the optimization problems proposed in ( [ equ : ob2 ] ) , ( [ equ : ob3 ] ) and ( [ equ : ob1 ] ) , the optimization problem for the proposed classifier parameter learning method is obtained as where and are tradeoff parameters . in the objective function , there are three terms . 
the first one is optimizedso that the prediction error is minimized , the second term is used to contral the complexity of the classifier , and the last term is introduced so that the mutual information between the classification response and the true class label can be maximized .direct optimization to ( [ equ : og4 ] ) is difficult . instead of seeking a closed - form solution, we try to optimize it using gradient descent method in an iterative algorithm . in each iteration , we employ the gradient descent method to update . according to the optimization theory , if is defined and differentiable in a neighborhood of a point , then decreases faster if goes from in the direction of the negative gradient of at , .thus the new is obtained by where is the descent step .the key step is to compute the gradient of , which is calculated as where and are the gradient of and respectively .they are given analytically as follows .we give the analytical gradients of different definitions of as follows : hinge loss : : is not a smooth function , but we can first update using previous as in ( [ equ : tau ] ) , and then fix it when we derivate , + squared loss : : is a smooth function , and its gradient is + logistic loss : : is also smooth with its gradient as + exponential loss : : is also smooth , and its gradient can be obtained as + the gradient of is computed as \\ = & - \sum_{i=1}^n \left(\log p({{\textbf{w}}}^\top { { \textbf{x}}}_i)+ 1 \right ) \nabla p({{\textbf{w}}}^\top { { \textbf{x}}}_i ) \\ & + \sum_{c\in \{+1,-1\ } } \frac{n_c}{n } \left ( \sum_{i : y_i = c } \left ( \log p({{\textbf{w}}}^\top { { \textbf{x}}}_i|y = c ) + 1 \right ) \nabla p({{\textbf{w}}}^\top { { \textbf{x}}}_i|y = c ) \right ) , \end{aligned}\ ] ] where the gradients of and are computed as this section , we evaluate the proposed classification method on two real world pattern classification problems . zinc is an important element for many biological processes of an organism , and it is closely related to many different diseases .moreover , it is also critical for proteins to play their functional roles .thus functional annotation of zinc - binding proteins is necessary to biological process control and disease treatment . to this end, predicting zinc - binding sites of proteins shows its importance in bioinformatics problems . in the first experiment, we evaluate the proposed classification method on the problem of predicting zinc - binding sites . for the purpose of experiment , we collected a set of amino acids of four types , which are cys , his , glu and asp ( ched ) .these four types are the most common zinc - binding site types , which take up roughly 96% of the known zinc - binding sites . in the collected data set , there are 1,937 zinc - binding cheds and 11,049 non - zinc - binding cheds , resulting a data set of 13,986 data samples . given a candidate ched , the problem of zinc - binding site prediction is to predict if it is a zinc - binding site or a non - zinc - binding site . in this experiment, we treated a zinc - binding ched as a positive sample , and a non - zinc - binding ched as a negative sample . to extract features from a ched , we computed the position specific substitution matrices ( pssm ) , the relative weight of gapless real matches to pseudocounts ( rw - grmtp ) , shannon entropy , and composition of -spaced amino acid pairs ( cksaap ) , and concatenated them to form a feature vector for each data sample. 
please note that the value of each feature was scaled to the range between -1 and 1 , so that the performance does not depend on the selection of scaling . to conduct the experiment, we used the 10-fold cross validation protocol .the entire data set was split into ten non - overlapping folds , and each fold was used as a test set in turn , while the remaining nine folds were combined and used as a training set .the proposed algorithm was performed to the training set to learn a classifier from the feature vectors of the training samples , and then the learned classifier was used to predict the class labels of the test samples .please note that the tradeoff parameters of the proposed algorithm was tuned within the training set .the averaged value of the hyper - parameters and are 5.8 and 44.8 .the parameter was computed as , where was the median value of distances between pairs of training samples , and the averaged value of was 0.451 .the classification performance was measured by comparing the predicted labels against the true labels .the receiver operating characteristic ( roc ) and recall - precision curves were used as performance metrics .the roc curve was obtained by plotting true positive rates ( tprs ) against false positive rates ( fprs ) , while recall - precision curve was obtained by plotting precision against recall values .tpr , fpr , recall , and precision are defined as where , , and represent the number of true positives , false positives , false negatives and true negatives , respectively .moreover , area under roc curve ( auc ) was used as a single performance measure .a good classifier should achieve a roc curve close to the top left corner of the figure , a recall - precision curve close to the top right corner , and also a high auc value . in this experiment, we compared the proposed mutual information regularized classifier against the original loss functions based classifier without mutual information regularization , so that the improvement achieved by maximum mutual information regularization could be verified .the four different loss functions listed in section [ sec : met ] were considered , and the corresponding classifiers were evaluated here .the roc and recall - precision curves of four loss functions based classification methods are given in fig .[ fig : zincroc ] .the proposed maximum mutual information regularized method is denoted as maxmutinf " after a loss function title in the figure .it turns out that maximum mutual information regularization improves all the four loss functions based classification methods significantly .although various loss functions achieved different performances , all of them could be boosted by reducing the uncertainty about true class labels , which could be measured by the mutual information between class labels and classification responses .therefore , the results show that maximizing mutual information is highly effective in reducing uncertainty of true class labels , and hence it can significantly improve the quality of classification .-regularization , and `` loss - maxmutinf '' stands for combination of classification loss , and maximum mutual information - regularization.,title="fig:",scaledwidth=70.0% ] + moreover , we also plotted aucs of different methods in fig . [fig : zincauc ] .again , we observe that maximum mutual information regularization improves different loss functions based classifiers. 
we also can see that among these four loss functions , hinge loss achieves the highest auc values , while squared loss achieves the lowest .the auc value of classifiers regularized by both hinge loss and mutual information is 0.9635 , while that of squared loss and mutual information is even lower than 0.95 .the performances of logistic and exponential loss functions are similar , and they are between the performances of hinge loss and squared loss .+ since the mutual information is used as a new regularization technique , we are also interested in how the proposed regularization alone works . we therefore compared the following three cases . 1 . * conventional case * which only uses the classification loss regularization .this case is corresponding to setting in ( [ equ : og4 ] ) . in this case, we only used the hinge loss since it has been shown that this loss function obtains better accuracy than other loss functions .* mutual information regularization case * which is corresponding to the problem in ( [ equ : og4 ] ) when the first term is ignored .* hybrid regularization case * which is the proposed framework which combines the classification loss minimization and mutual information regularization .the comparison results are given in fig .[ fig : figzinccompare ] .it can be seen that the conventional case which only uses the hinge loss function achieved better results than the method with only mutual information regularization , and the hybrid regularization achieved the best results .this means mutual information regularization can not obtain good performance by itself and should be used with traditional loss functions .antinuclear autoantibodies ( ana ) test is a technology used to determine whether a human immune system is creating antibodies to fight against infections .ana is usually done by a specific fluorescence pattern of hep-2 cell images .recently , there is a great need for computer based hep-3 cell image classification , since manual classification is time - consuming and not accurate enough . in the second experiment, we will evaluate the performance of the proposed classifier on the problem of classifying hep-2 cell images . in this experiment, we used the database of hep-2 cell images of the icip 2014 competition of cell classification by fluorescent image analysis . in this data set , there are 13,596 cell images , and they belong to six cell classes , which are namely centromere , golgi , homogeneous , nucleolar , nuclearmembrane , and speckled . each cell image is segmented by a mask image showing the boundary of the cell .moreover , the entire data set is composed of two groups of different tensity types , which are intermediate and positive .overall , the intermediate group outnumbers the positive group , with an exception that , for the cases of centromere and speckled , the latter marginally outnumbers the former . the number of images in different classes of two groups are given in fig .[ fig : hep2data ] . to present each image for the classification problem , we extracted shape and texture features and concatenated them to form a visual feature vector .+ experiments were conducted in two groups respectively .we also adopted the 10-fold cross validation for the experiment . 
to handle the problem of multiple class problem, we used the one - against - all strategy .each class was treated as a positive class in turn , while all remaining five classes were combined to form a negative class .a classifier was learned for each class to discriminate it from other classes .a test sample was assigned to a class with the largest classification response .the classification accuracy was used as a classification performance metric .the boxplots of accuracies of the 10-fold cross validation on the two groups of hep-2 cell image data set are given in fig .[ fig : hep2result ] . from this figure, it could be observed that for both two groups of data sets , the proposed regularization method can improve the classification performances significantly , despite of the variety of loss functions .it can also be seen that the performances on the second group ( positive ) is inferior to that of the first group ( intermediate ) .this indicates that it is more difficult to classify cell images when their contrast is low .however , the improvement achieved by mutual information regularization is consistent over these two groups. + + + +can knowing the classification response of a data sample reduce uncertainty about its true class label ? in this paper , we proposed this question and tried to answer it by learning an optimal classifier to reduce such uncertainty .insighted by the fact that the reduced uncertainty can be measured by the mutual information between classification responses and true class labels , we proposed a new classifier learning algorithm , by maximizing the mutual information when learning the classifier . particularly , our algorithm adds a maximum mutual information regularization term .we investigated the classification performances when maximum mutual information was used to regularize the classifier learning based on four different loss functions .the the experimental results show that the proposed regularization can improve the classification performances of all these four loss function based classifiers . in the future , we will study how to apply the proposed algorithm on large scale dataset based on some distributed big data platforms and use it to signal and power integrity applications .this work was supported by grants from king abdullah university of science and technology ( kaust ) , saudi arabia .t. ojala , m. pietikinen , t. menp , multiresolution gray - scale and rotation invariant texture classification with local binary patterns , ieee transactions on pattern analysis and machine intelligence 24 ( 7 ) ( 2002 ) 971987 .q. sun , f. hu , q. hao , mobile target scenario recognition via low - cost pyroelectric sensing system : toward a context - enhanced accurate identification , ieee transactions on systems , man , and cybernetics .systems 44 ( 3 ) ( 2014 ) 375384 .j. wang , x. gao , q. wang , y. li , prodis - contshc : learning protein dissimilarity measures and hierarchical context coherently for protein - protein comparison in protein database retrieval , bmc bioinformatics 13 ( suppl 7 ) ( 2012 ) s2 .p. wang , intelligent pattern recognition and applications to biometrics in an interactive environment , in : grapp 2009 - proceedings of the 4th international conference on computer graphics theory and applications , 2009 , pp .is21is22 .k. roy , p. bhattacharya , c. y. 
suen , towards nonideal iris recognition based on level set method , genetic algorithms and adaptive asymmetrical svms , engineering applications of artificial intelligence 24 ( 3 ) ( 2011 ) 458475 . l. xu , z. zhan , s. xu , k. ye , an evasion and counter - evasion study in malicious websites detection , in : 2014 ieee conference on communications and network security ( cns ) ( ieee cns 2014 ) , san francisco , usa , 2014 .q. cai , y. yin , h. man , dspm : dynamic structure preserving map for action recognition , in : multimedia and expo ( icme ) , 2013 ieee international conference on , 2013 , pp . 16 . http://dx.doi.org/10.1109/icme.2013.6607606 [ ] .y. zhou , l. li , t. zhao , h. zhang , region - based high - level semantics extraction with cedd , in : network infrastructure and digital content , 2010 2nd ieee international conference on , ieee , 2010 , pp .404408 . p.jonathon phillips , h. moon , s. rizvi , p. rauss , the feret evaluation methodology for face - recognition algorithms , ieee transactions on pattern analysis and machine intelligence 22 ( 10 ) ( 2000 ) 10901104 .wang , i. almasri , x. gao , adaptive graph regularized nonnegative matrix factorization via feature selection , in : pattern recognition ( icpr ) , 2012 21st international conference on , ieee , 2012 , pp . 963966 .t. subbulakshmi , a. afroze , multiple learning based classifiers using layered approach and feature selection for attack detection , in : 2013 ieee international conference on emerging trends in computing , communication and nanotechnology , ice - ccn 2013 , 2013 , pp .308314 .z. chen , y. wang , y .- f .zhai , j. song , z. zhang , zincexplorer : an accurate hybrid method to improve the prediction of zinc - binding sites from protein sequences , molecular biosystems 9 ( 9 ) ( 2013 ) 22132222 .p. j. moreno , p. p. ho , n. vasconcelos , a kullback - leibler divergence based kernel for svm classification in multimedia applications , in : advances in neural information processing systems 16 , mit press , 2004 , pp . 13851392 .o. yildiz , e. alpaydin , statistical tests using hinge/-sensitive loss , in : computer and information sciences iii - 27th international symposium on computer and information sciences , iscis 2012 , 2013 , pp .153160 .s. bach , b. huang , b. london , l. getoor , hinge - loss markov random fields : convex inference for structured prediction , in : uncertainty in artificial intelligence - proceedings of the 29th conference , uai 2013 , 2013 , pp .3241 .a. elgammal , r. duraiswami , d. harwood , l. davis , background and foreground modeling using nonparametric kernel density estimation for visual surveillance , proceedings of the ieee 90 ( 7 ) ( 2002 ) 11511162 .s. zhong , d. chen , q. xu , t. chen , optimizing the gaussian kernel function with the formulated kernel target alignment criterion for two - class pattern classification , pattern recognition 46 ( 7 ) ( 2013 ) 20452054 .a. carvalho , p. ado , p. mateus , efficient approximation of the conditional relative entropy with applications to discriminative learning of bayesian network classifiers , entropy 15 ( 7 ) ( 2013 ) 27162735 .a. porta , g. baselli , d. liberati , n. montano , c. cogliati , t. gnecchi - ruscone , a. malliani , s. cerutti , measuring regularity by means of a corrected conditional entropy in sympathetic outflow , biological cybernetics 78 ( 1 ) ( 1998 ) 7178 .a. kumar , d. hati , t. thaker , l. miah , p. cunningham , c. domene , t. bui , a. drake , l. 
mcdermott , strong and weak zinc binding sites in human zinc-2-glycoprotein , febs letters 587 ( 24 ) ( 2013 ) 39493954 .z. liu , y. wang , c. zhou , y. xue , w. zhao , h. liu , computationally characterizing and comprehensive analysis of zinc - binding sites in proteins , biochimica et biophysica acta - proteins and proteomics 1844 ( 1 part b ) ( 2014 ) 171180 .s. menchetti , a. passerini , p. frasconi , c. andreini , a. rosato , improving prediction of zinc binding sites by modeling the linkage between residues close in sequence , in : research in computational molecular biology , springer , 2006 , pp .309320 .p. agrawal , m. vatsa , r. singh , hep-2 cell image classification : a comparative analysis , in : lecture notes in computer science ( including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics ) , vol .8184 lncs , 2013 , pp .195 202 .y. wang , y. su , g. agrawal , supporting a light - weight data management layer over hdf5 , in : cluster , cloud and grid computing ( ccgrid ) , 2013 13th ieee / acm international symposium on , ieee , 2013 , pp . 335342 .y. wang , w. jiang , g. agrawal , scimate : a novel mapreduce - like framework for multiple scientific data formats , in : cluster , cloud and grid computing ( ccgrid ) , 2012 12th ieee / acm international symposium on , ieee , 2012 , pp .443450 .q. sun , f. hu , h. qi , context awareness emergence for distributed binary pyroelectric sensors , in : multisensor fusion and integration for intelligent systems ( mfi ) , 2010 ieee conference on , ieee , 2010 , pp .162167 .h. liu , f. shi , y. wang , n. wong , frequency - domain transient analysis of multitime partial differential equation systems , in : vlsi and system - on - chip ( vlsi - soc ) , 2011 ieee / ifip 19th international conference on , ieee , 2011 , pp .160163 .y. wang , z. zhang , c .- k .koh , g. shi , g. k. pang , n. wong , passivity enforcement for descriptor systems via matrix pencil perturbation , computer - aided design of integrated circuits and systems , ieee transactions on 31 ( 4 ) ( 2012 ) 532545 .lei , y. wang , q. chen , n. wong , on vector fitting methods in signal / power integrity applications , in : proceedings of the international multiconference of engineers and computer scientists 2010 , imecs 2010 , newswood limited ., 2010 , pp .14071412 .y. wang , z. zhang , c .- k .koh , g. k. pang , n. wong , peds : passivity enforcement for descriptor systems via hamiltonian - symplectic matrix pencil perturbation , in : proceedings of the international conference on computer - aided design , ieee press , 2010 , pp .
|
in this paper , a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label . we argue that , with the learned classifier , the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible . the reduced uncertainty is measured by the mutual information between the classification response and the true class label . to this end , when learning a linear classifier , we propose to maximize the mutual information between classification responses and true class labels of training samples , besides minimizing the classification error and reducing the classifier complexity . an objective function is constructed by modeling mutual information with entropy estimation , and it is optimized by a gradient descend method in an iterative algorithm . experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization . pattern classification , maximum mutual information , entropy , gradient descend
|
advances in the capabilities of modern day computing have allowed carrying line - by - line calculations in a systematic manner for the field of spectroscopy modelling .various applications such as species concentrations and temperature measurements in low pressure plasmas , or the calculation of radiative fluxes for atmospheric entry flows , greatly benefit from such techniques .however , the accuracy of such spectral simulations may vary consequently , depending on the methods used for line - by - line calculations , but also on the applied spectroscopic datasets .the different methods used for the calculation of key spectral parameters such as line positions , intensities , and shapes are discussed in this paper .available state of the art models for each of these parameters are presented , and some simplifications to such spectral models are discussed , leading to lower memory requirements and calculation times for the used computing systems .the models presented and discussed in this paper are valid for diatomic rovibronic transitions , but also for linear polyatomic rovibrational transitions ( such as transitions from the co molecule ) .the second part of this work presents some simulations of experimentally measured high resolution spectra issued from low pressure and high enthalpy plasmas .the discrepancies that may derive from the selection of different simulation models and spectral datasets will be highlighted through the comparison of different simulated spectra with the measured spectra .it will be verified that a careful selection of adequate models , linked to a selection of accurate spectroscopic data , may yield a very accurate reproduction of high resolution measured spectra .discrete molecular radiation can be characterized unambiguously through three parameters : line position , intensity , and shape determination of line positions depends on the quantification of the energy levels of the molecule bound states .line intensities depend on the probabilities of transition between the different states , as well as on the population of these states . 
finally , line shapes depend on the local conditions of the gas in which the transition takes place . three approaches are commonly used for the calculation of line positions . the first one allows a broad calculation of any number of vibrational and rotational levels for a given electronic transition of a molecule . calculations are performed using equilibrium constants , which give the vibrational and rotational constants for the given electronic transition over a broad range of vibrational and rotational levels . the second one uses level - by - level spectroscopic constants and allows the calculation of any number of rotational levels for given electronic and vibrational transition levels of a molecule , as the band origin and rotational constants are set for each vibrational level . this approach generally allows a better determination of specific vibrational level energies , but prevents one from simulating levels beyond those for which spectroscopic data are available . the third one , and the most accurate , requires diagonalising the corresponding hamiltonian matrix for each rovibronic state . this approach can be useful when the experimental spectra to be simulated are strongly perturbed , although it leads to larger computational times . line position calculations using equilibrium constants in matrix form : klein - dunham coefficients allow a clear and unambiguous determination of level positions , compared to the traditional spectroscopic developments , which are prone to confusions and errors ( for instance , the parameter , useful for the calculation of the rotational constant , can sometimes be confused with the spin - rotation interaction coefficient ) . the use of klein - dunham expansions , unlike traditional developments , may also prevent situations where neglecting higher - order corrections leads to considerable shifts in the positions of calculated lines compared to the experimental spectrum . setting a matrix of klein - dunham coefficients such as and in computer routines for line position calculations suffices in order to account for the available polynomial expansions , as verified by a broad review of available spectroscopic coefficients by the author .
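as a concrete illustration of the matrix - form approach described above , the following sketch evaluates level energies from a klein - dunham coefficient matrix , assuming the standard expansion e ( v , j ) = sum over ( k , l ) of y_kl ( v + 1/2 )^k [ j ( j + 1 ) ]^l in cm^-1 ; the coefficient values used in the example are purely illustrative and are not taken from any dataset .

```python
# Sketch: rovibrational level energies and line positions from a Dunham matrix Y[k, l] (cm^-1).
import numpy as np

def level_energy(Y, v, J):
    """E(v, J) = sum_{k,l} Y[k, l] * (v + 1/2)**k * (J*(J+1))**l, in cm^-1."""
    e = 0.0
    for k in range(Y.shape[0]):
        for l in range(Y.shape[1]):
            e += Y[k, l] * (v + 0.5) ** k * (J * (J + 1)) ** l
    return e

def line_position(Y_upper, Y_lower, v_u, J_u, v_l, J_l, T_e=0.0):
    """Wavenumber of a rovibronic line: electronic term difference plus the
    difference of upper- and lower-state rovibrational energies."""
    return T_e + level_energy(Y_upper, v_u, J_u) - level_energy(Y_lower, v_l, J_l)

# Hypothetical coefficient matrix: Y[1,0] ~ omega_e, Y[0,1] ~ B_e, Y[2,0] ~ -omega_e x_e, ...
Y = np.array([[0.0, 1.9],
              [2000.0, -0.02],
              [-15.0, 0.0]])
print(level_energy(Y, v=0, J=10))   # level term value in cm^-1
```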
for multiplet transitions , the expression for the level energies differs slightly from eq . [ eq : eevj ] , but the general form of the klein - dunham matrix can be used . expressions for the different multiplet level energies are presented in appendix [ sec : leven ] . when the level positions can no longer be accurately approximated through klein - dunham expansions ( as when vibrational perturbations of the spectra are present ) , spectroscopic constants for each vibrational level must be used . level spectroscopic constants obtained from fits of the rotational lines for each vibrational band are given in this case . using such level constants usually results in more accurate predictions of the level energies . however , calculations are restricted to the vibrational levels where spectroscopic constants are available , unlike klein - dunham expansions , which allow extrapolations of the available spectroscopic data to higher levels . when perturbations are present in the spectra , the polynomial expansions described previously no longer suffice for the accurate simulation of line positions . instead , one has to solve the hamiltonian matrix , taking into account the effects of the perturbing states using the perturbation method . this leads to very precise calculations of the line positions ( typically less than 0.1 ) . however , this method requires the calculation of the eigenvalues of a matrix for each rovibronic state , where is the level multiplicity . an overview of the different methods used for the calculation of line emission and absorption , as well as of the associated difficulties and approximations , will be presented in this section . eq . [ eq : emicoeffdi ] highlights the additional difficulties related to line intensity calculations . these depend not only on the line positions ( through the accounting of the transition energy ) , but also on the transition probabilities and on the number density of the initial state . although the two former quantities only depend on the transition parameters , being calculated according to the laws of quantum mechanics , the latter depends on the state of the studied gas . the radiative properties of a gas can be unambiguously known through the determination of its wavelength - dependent emission and absorption coefficients and . these two quantities are not independent , however , and line absorption coefficients can be determined from the line emission coefficients . the absorption coefficient including spontaneous and induced absorption ( adopting the normalization factor for the hönl - london coefficients and excluding broadening mechanisms ) is given by the relation one can therefore deduce the following relationships between the einstein spontaneous emission coefficient , the einstein spontaneous absorption coefficient , and the einstein induced absorption coefficients : as these relationships hold even in non - local thermodynamic equilibrium ( nlte ) conditions , after simple algebraic manipulation , one can relate the line emission and absorption coefficients ( in wavenumber units ) through the expression for the more restrictive case of a boltzmann distribution of the internal level populations , where one recovers the relation between the emission and absorption coefficients defined by planck 's law ( eq . [ eq : planck ] ) .
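to make the dependence of the line intensities on the level populations concrete , the short sketch below evaluates a spectrally integrated line emission coefficient for the boltzmann case just discussed , using epsilon = n_u a_ul h c sigma_ul / ( 4 pi ) ; the level energies , degeneracies and einstein coefficient in the example are placeholders rather than values from any spectroscopic dataset .

```python
# Sketch: Boltzmann level populations and a line emission coefficient.
import numpy as np

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e10    # speed of light in cm/s, so h*c*sigma (sigma in cm^-1) is an energy in J
KB = 1.380649e-23    # Boltzmann constant, J/K

def boltzmann_populations(energies_cm, degeneracies, n_total, T):
    """Level number densities [m^-3] for level energies given in cm^-1."""
    weights = degeneracies * np.exp(-H * C * energies_cm / (KB * T))
    return n_total * weights / weights.sum()   # weights.sum() is the internal partition function

def emission_coefficient(n_upper, A_ul, sigma_ul_cm):
    """Spectrally integrated line emission coefficient [W m^-3 sr^-1]."""
    return n_upper * A_ul * H * C * sigma_ul_cm / (4.0 * np.pi)

# placeholder level data (three levels), total density and temperature
E = np.array([0.0, 2000.0, 4000.0])   # cm^-1
g = np.array([1.0, 3.0, 5.0])
n = boltzmann_populations(E, g, n_total=1e21, T=5000.0)
print(emission_coefficient(n[1], A_ul=1e6, sigma_ul_cm=2000.0))
```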
determination of the initial quantum level populations : in thermodynamic equilibrium , the level populations follow a boltzmann distribution and can be obtained from the internal partition functions of the species . however , accurate calculations of atomic and molecular partition functions require a set of accurate spectroscopic constants up to high quantum levels ( close to the dissociation limits for molecules and to the ionization limits for atoms ) . namely , the lowering of the ionisation threshold for atomic species has to be considered in the plasma state , when including the contribution of the atomic rydberg states to the overall partition function . also , for molecular partition function calculations , it is necessary to determine the maximum rovibronic levels which can be reached before the dissociation of the molecule occurs ( including superdissociative states ) . out of thermodynamic equilibrium , no straightforward method for calculating the level populations exists , and one has to resort to state - to - state models which explicitly take into account the different possible discrete states of a gas species . the development of accurate state - to - state models has been carried out by different research teams , and the reader should refer to the references for a more detailed description of such models . expressions for the electronic transition moment can be found in the literature , obtained either from spectroscopic measurements or from ab initio calculations . this last method is usually preferred , as quantum methods have nowadays achieved a very good precision . the vibrational wavefunctions are determined by solving the radial schrödinger equation on the potential curves of the upper and lower levels . potential curves can be either calculated using ab initio methods , or reconstructed through the rydberg - klein - rees ( rkr ) method according to experimental spectroscopic data . as modern spectroscopy is able to resolve line positions to less than the , level energies can be known to a greater accuracy than using ab initio methods . however , ab initio methods are able to reproduce the entire potential curve , whereas the rkr method can only yield the region of the potential curve where measured data are available .
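as an illustration of this step , the following sketch obtains vibrational levels and wavefunctions by diagonalising a finite - difference radial schrödinger hamiltonian on a grid , in reduced units ( hbar = reduced mass = 1 ) , using a morse potential as a stand - in for an rkr or ab initio curve ; the potential parameters are illustrative only . note that such a grid solution requires the potential to be defined over the whole integration range , which is precisely the limitation of rkr curves discussed next .

```python
# Sketch: vibrational levels/wavefunctions from a finite-difference radial Schrodinger equation.
import numpy as np

def vibrational_levels(V, r, n_levels=4):
    """H = -0.5 d2/dr2 + V(r) on a uniform grid (reduced units, psi = 0 at the grid ends)."""
    dr = r[1] - r[0]
    n = len(r)
    kinetic = (np.diag(np.full(n, 1.0 / dr**2))
               - 0.5 * np.diag(np.full(n - 1, 1.0 / dr**2), 1)
               - 0.5 * np.diag(np.full(n - 1, 1.0 / dr**2), -1))
    H = kinetic + np.diag(V)
    energies, vectors = np.linalg.eigh(H)
    wavefunctions = vectors[:, :n_levels] / np.sqrt(dr)   # normalise so that integral of psi^2 dr = 1
    return energies[:n_levels], wavefunctions

# Morse potential V(r) = D (1 - exp(-a (r - r_e)))^2 with illustrative parameters
r = np.linspace(0.5, 10.0, 1000)
D, a, r_e = 10.0, 1.0, 2.0
V = D * (1.0 - np.exp(-a * (r - r_e))) ** 2
E, psi = vibrational_levels(V, r)
print(E)   # lowest vibrational term values in the reduced units
```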
to overcome this problem , the central part of the potential curve calculated through the rkr method is extrapolated by a repulsive potential at shorter internuclear distances , and by a hulburt and hirschfelder potential at larger internuclear distances , provided that the state dissociation energy is known . this method can only be applied to electronic states with a single potential whose shape is close to a well , but this is fortunately the case for most of the electronic states of the molecules encountered in gas spectroscopy . a more detailed overview of the calculation of potential curves and vibronic wavefunctions can be found in . an example of a calculation of potential curves ( using the rkr method ) and vibrational wavefunctions ( solving the radial schrödinger equation ) is presented in fig . [ fig : rkr ] . simulation of linear polyatomic rovibrational spectra using the line - by - line approach : calculations for linear polyatomic species present further difficulties compared to diatomic species , as such molecules have several vibrational modes ( bending , symmetric / asymmetric stretch , etc . ) . emission spectra from these molecules result mainly from rovibrational transitions , and a set of vibrational equilibrium constants can still be defined , with a more complex formulation ( an example for the co molecule is presented in ) . however , estimation of these vibrational parameters is rather difficult , the resulting values giving inaccurate results in some cases . in practice , a set of band - origin wavelengths and rotational constants for each vibrational transition is given , allowing the calculation of the line positions . the calculation of the line intensities follows a similar approach to that for diatomic spectra calculations . emission coefficients of rovibrational transitions of polyatomic spectra can be expressed through an expression similar to eq . [ eq : emicoeffdi ] : where the additional term designates the herman - wallis factor , which accounts for vibration - rotation interactions ( see p. 110 in ) . this factor has been omitted from eq . [ eq : emicoeffdi ] , as vibration - rotation interactions can usually be neglected in a rovibronic transition , owing to the usually large energy gap between the transition electronic levels . however , for rovibrational transitions , this interaction has to be accounted for , which explains its inclusion in eq . [ eq : emicoeffdirovib ] . determination of this squared transition moment is rather complex . instead , values of the integrated intensity of a vibrational band are tabulated at a reference temperature ( usually 296 k ) . the value for the vibrational dipole moment ( in atomic units squared ) can then be deduced from the following expression : laux c. o. , gessman r. j. , kruger c. h. , roux f. , michaud f. , and davis s. p. , _ `` rotational temperature measurements in air and nitrogen plasmas using the first negative system of n '' _ , j. quant . transfer , vol . 68 , pp . 473482 ( 2001 ) . zare r. n. , schemeltekopf a. l. , harrop w. j. and albritton d. l. , _ a direct approach for the reduction of diatomic spectra to molecular constants for the construction of rkr potentials _ , journal of molecular spectroscopy , no . 46 , pp . 3766 ( 1973 ) . giordano d. , capitelli m. , and colonna g.
, _ tables of internal partition functions and thermodynamic properties of high - temperature air species from 50 k to 100000 k _ , esa str237 , esa publications office ( 1994 ) . chernyi g. g. , losev s. a. , macheret s. o. , and potapkin b. v. , _ " physical and chemical processes in gas dynamics : cross sections and rate constants , vol . 1 _ , aiaa progress in astronautics and aeronautics , vol . 196 ( 2002 ). sarrete j.p . , gomes a.m ., bacri j. , laux c. o. , and kruger c. h. , _ `` collisional - radiative modelling of quasi - thermal air plasmas with electronic temperatures between 2000 and 13.000 k i. k '' _ , j. quant .transfer , vol .2 , pp . 125141 ( 1995 ) .capitelli m. , armenise i. , and gorse c. , _ `` state - to - state approach in the kinetics of air components under re - entry conditions '' _ , journal of thermophysics and heat transfer , vol .4 , pp . 570578 ( 1997 ) . armenise i. , capitelli m. , kustova e. , and nagnibeda e. , _ `` the influence of nonequilibrium kinetics on the heat transfer and diffusion near reentering body '' _ , journal of thermophysics and heat transfer , vol .2 , pp . 210218 ( 1999 ) . arnold j. o. , whiting e. e. , and lyle , g. c. , _ line by line calculation of spectra from diatomic molecules and atoms assuming a voigt line profile _ , j. quant .transfer , vol .9 , pp . 775798 ( 1969 ) .lino da silva m. , _ simulation des proprits radiatives du plasma entourant un vhicule traversant une atmosphre plantaire vitesse hypersonique : application la plante mars _ , phd thesis ( in french ) , universit .orlans ( 2004 ) .scutaru d. , _ etudes thorique et exprimentale de labsorption infrarouge par co haute temprature .application des modles de rayonnement des gaz _ , ph.d .thesis ( in french ) , laboratoire denergtique molculaire et macroscopique , combustion ( e.m2.c ) , ecole centrale de paris , france ( 1994 ) .taine j. , _ a line - by - line calculation of low - resolution radiative properties of co transparent nonisothermal gases mixtures up to 3000 k _ , j. quant . spectrosc .transfer , vol .4 , pp . 371379 ( 1983 ) . rothman l. s. , hawkins r. l. , wattson r. b. , and gamache r. r. , _ `` energy levels , intensities , and linewidths of atmospheric carbon dioxide bands '' _ , j. quant .transfer , vol .5/6 , pp . 537566 ( 1992 ) .lago v. , lebhot a. , dudeck m. , pellerin s. , renault t. , and echegut p. , _ entry conditions in planetary atmospheres : emission spectroscopy of molecular plasma arcjets _ , journal of thermophysics and heat transfer , vol .2 , pp . 168175 ( 2001 ) .lino da silva m. , lago v. , bedjanian e. , lebhot a. , mazouffre s. , dudeck m. , szymanski z. , peradzynski z. , boubert p. , and chickhaoui a. , _ modelling of the radiative emission of a plasma surrounding an atmospherical probe for mars exploration _ , high temperature material processes , vol .1 , pp . 115125 ( 2003 ) .cerny d. , bacis r. , guelachvili g. , and roux f. , _ extensive analysis of the red system of the cn molecule with a high resolution fourier spectrometer _ , journal of molecular spectroscopy , vol .73 , pp . 154167( 1978 ) .ito h. , ozaki y. , suzuki k. , kondow t. , and kuchitsu k. , _ analysis of the perturbations in the cn( ) main band system _ , journal of molecular spectroscopy , vol .127 , pp . 283303( 1988 ) .knowles p. j. , werner h .- j . ,hay j. , and cartwright d. c. , _ the red and violet systems of the cn radical : accurate multireference configuration interaction calculations of the radiative transition probabilities _ , j. 
chem .12 , pp . 73347343 ( 1988 ) .laux c. o. , and kruger c. h. , _ arrays of radiative transition probabilities for the first and second positive , no beta and gamma , first negative , and schumann - runge band systems _ , j. quant .transfer , vol .924 ( 1992 ) .nicolet m. , cieslik s. , and kennes r. , _ `` aeronomic problems of molecular oxygen photodissociation - v. predissociation in the schumann runge bands of oxygen '' _ , planet .space sci .37 , pp . 427458( 1989 ) .bud a. , _ ber die triplett - bandentermformel fr den allge meinen intermediren fall und anwendung derselben auf die , terme des -molekls _ , z. physik , vol .96 , pp . 219229( 1935 ) .lino da silva m. , passarinho p. , and dudeck m. , _ strong shock - wave interaction with an expanding plasma flow : influence on the cn molecule internal modes _ , 24th int . symposium rarefied gas dynamics , bari , italy , 1116 july 2004 .
|
line - by - line calculations are becoming the standard procedure for carrying out spectral simulations . however , it is important to ensure the accuracy of such spectral simulations through the choice of adapted models for the simulation of key parameters such as line position , intensity , and shape . moreover , it is necessary to rely on accurate spectral data to guarantee the accuracy of the simulated spectra . a discussion of the most accurate models available for such calculations is presented for diatomic and linear polyatomic discrete radiation , and possible reductions in the number of calculated lines are discussed in order to reduce memory and computational overheads . examples of different approaches for the simulation of experimentally determined low - pressure molecular spectra are presented . the accuracy of the different simulation approaches is discussed , and it is verified that a careful choice of the applied computational models and spectroscopic datasets yields precise approximations of the measured spectra .
|
interstellar dust is of primary importance in determining the spectral energy distribution ( sed ) of the radiation escaping from galaxies at wavelengths ranging from the ultraviolet ( uv ) to the submillimetre ( submm ) and radio .dust attenuates and redistributes the light , originating mainly from stars and , if present , from an active galactic nucleus ( agn ) , by either absorbing or scattering photons .the absorbed luminosity is then re - emitted in the infrared regime .the dust may be situated in complex geometries with respect to these sources , affecting the observed structure of the galaxy at each wavelength as well as its integrated sed .modeling the propagation of light within real galaxies is thus a challenging task .nevertheless , it is essential to do such modeling , if physical quantities of interest , such as the distribution and properties of the stellar populations and the interstellar medium ( ism ) as traced by dust and the interstellar radiation fields , are to be derived from multiwavelength images and seds . + taking advantage of the approximate cylindrical symmetry of galaxies , 2d dust radiative transfer ( rt ) models , such as the one presented by popescu et al .( 2011 ) , already contain the main ingredients needed to predict integrated galaxy seds , average profiles , dust emission and attenuation for the case of normal star - forming disc galaxies .however , there are a number of reasons why 3d dust rt codes are desirable . first, spiral galaxies , although well modeled with 2d codes , show the presence of multiple and irregular features such as spiral structures , bars , warps and local clumpiness of the ism .also , galaxies may host a central agn whose polar axis may not be aligned with that of the galaxy . for mergers or post - merger galaxiesthere is clearly no fundamental symmetry of the distribution of stars and dust .finally , solutions for the distribution of stars and ism provided by numerical simulations of forming and evolving galaxies generally require processing with a 3d rt code in order to predict the appearance in different bands .+ the main challenge in realizing 3d solutions of the dust rt problem is the computational expense .the stationary 3d dust rt equation is a non - local non - linear equation : non - local in space ( photons propagate within the entire domain ) , direction ( due to scattering , absorption / re - emission ) and wavelength ( absorption / re - emission ) . even using a relatively coarse resolution in each of the six fundamental variables , namely the three spatial coordinates , the two angles specifying the radiation direction and the wavelength , solving the 3d dust rt problem require an impressive amount of both memory and computational speed , at the limits of the capabilities of current computers .+ possibly the quickest way to calculate an image of a galaxy in direct and scattered light in a particular direction is by using monte carlo ( mc ) methods ( including modern acceleration techniques , see steinacker et al .there is a rich history of applications of mc codes to dust rt problems , starting with the pioneering works of e.g. mattila ( 1970 ) , roark , roark & collins ( 1974 ) , witt & stephens ( 1974 ) and witt ( 1977 ) . in the following decadesthe mc rt technique was further developed by many authors such as e.g. 
witt , thronson & capuano ( 1992 ) , fischer , henning & yorke ( 1994 ) , bianchi , ferrara & giovanardi ( 1996 ) , witt & gordon ( 1996 ) and dullemond & turolla ( 2000 ) .nowadays , this method can be considered as the mainstream approach to 3d dust rt calculations ( see e.g. gordon et al .2001 , ercolano et al .2005 , jonsson 2006 , bianchi 2008 , chakrabarti & whitney 2009 , baes et al .2011 , robitaille 2011 , but also see table 1 of steinacker et al .2013 for a recent list of published 3d dust rt codes ) .the mc approach to dust rt consists of a simulation of the propagation of photons within a discretized spatial domain , based on a probabilistic determination of the location of emission of the photons , their initial propagation direction , the position where an interaction event ( absorption or scattering ) occurs and the new propagation direction after a scattering event .thus , the mc technique mimics closely the actual processes occurring in nature which shape the appearance of galaxies in uv / optical light .however , since it is based on a probabilistic approach to determine the photon propagation directions , an rt mc calculation does not necessarily determine the radiation field energy density ( rfed ) accurately in the entire volume of the calculation .the reason is that regions which have a low probability of being illuminated are reached by only few photons unless the total number of photons in the rt run is substantially increased. nonetheless , in the case of disc galaxies , accurate calculation of radiation field intensities throughout the entire volume is needed , in particular for the calculation of dust emission . indeed ,far - infrared / submm observations of spiral galaxies show that most of the dust emission luminosity is emitted longwards of 100 m ( see e.g. sodroski et al .1997 , odenwald et al .1998 , popescu et al .2002 , popescu & tuffs 2002 , dale et al .2007 , 2012 , bendo et al . 2012 ) through grains situated in the diffuse ism which are generally located at very considerable distances from the stars heating the dust .another method to solve the rt problem in galaxies , alternative to the mainstream mc approach , is by using a ray - tracing algorithm .this method consists in the calculation of the variation of the radiation specific intensity along a finite set of directions , usually referred to as `` rays '' .ray - tracing algorithms can be specifically designed to calculate radiation field intensities throughout the entire volume considered in the rt calculation .also , it should be pointed out that mc codes already make large use of ray - tracing operations ( see steinacker et al .it is thus interesting to pursue in the developing of pure ray - tracing 3d rt codes , which can be sufficiently efficient for the modelling of galaxies with 3d arbitrary geometries , if appropriate acceleration techniques are implemented .similar to mc codes , ray - tracing dust rt codes have had a rich history in astrophysics ( see e.g. 
hummer & rybicki 1971 , rowan - robinson 1980 , efstathiou & rowan - robinson 1990 , siebenmorgen et al .1992 , semionov & vansevicius 2005 , 2006 ) .application to analysis of galaxies started with the 2d code of kylafis & bahcall ( 1987 ) .although originally implemented only for the calculation of optical images ( see also xilouris et al .1997 , 1998,1999 ) , this algorithm was later adapted by popescu et al .( 2000 ) for the calculation of radiation fields and was coupled with a dust emission model ( including stochastic heating of grains ) to predict the full mid - infrared ( mir)/fir / submm sed of spiral galaxies ( see also misiriotis et al .2001 , popescu et al .thus far , extensions of the ray - tracing technique to 3d have been implemented but are specifically designed for solving the rt problem for star forming clouds ( e.g. steinacker et al .2003 , kuiper et al .2010 ) , heated by few dominant discrete sources , rather than for very extended distributions of emission and dust as encountered in galaxies .+ in this paper , we present dart - ray , a new ray - tracing algorithm which is optimized for the solution of the 3d dust rt problem for galaxies with arbitrary geometries and moderate optical depth at optical / uv wavelengths .the main challenge faced by this model is the construction of an efficient algorithm for the placing of rays throughout the volume of the galaxy .in fact , a complete ray - tracing calculation between all the cells , used to discretise a model , is not a viable option , since it is by far too computationally expensive even for relatively coarse spatial resolution .our algorithm circumvents the problem by performing an appropriate pre - calculation , whose goal is to provide a lower limit to the rfed distribution throughout the model . in this way, the ray angular density needed in the actual rt calculation can be dynamically adjusted such that the ray contributions to the local rfed are calculated only within the fraction of the volume where these contributions are not going to be negligible .+ furthermore , the code we developed can be coupled with any dust emission model .applications of the 3d code for calculation of infrared emission from stochastically heated dust grains of various sizes and composition , including heating of polycyclic aromatic hydrocarbons molecules , utilizes the dust emission model from popescu et al .( 2011 ) , and will be given in a future paper .the paper is structured as follows . in 2 we provide some background information and motivation behind our particular ray - tracing solution strategy . in 3 we give a technical description of our code . in 4 we provide some notes on implementation and performance of the code . in 5 we compare solutions provided by our code with those calculated by the 1d code dusty and the 2d rt calculations performed by popescu et al .( 2011 ) . in 6we show the application of the code on a galaxy model including logarithmic spiral arms .a summary closes the paper .a list of definitions for the terms and expressions used throughout the paper can be found in table [ tab_terms ] ..tables of terms and definition .the subscript denotes a dependence from the wavelength of the radiation . 
[ cols="<,<",options="header " , ] and plots represent the exact values obtained by the 3d code .for all the other plots , the plotted values are obtained through interpolation within the 3d grid.,title="fig : " ] and plots represent the exact values obtained by the 3d code .for all the other plots , the plotted values are obtained through interpolation within the 3d grid.,title="fig : " ] , b band . in this model the thin dust disc is not included .also , only the contribution from direct stellar light is considered .same symbols as in fig.[md_tau0_b].,title="fig : " ] , b band . in this modelthe thin dust disc is not included .also , only the contribution from direct stellar light is considered .same symbols as in fig.[md_tau0_b].,title="fig : " ] , b band . in this modelthe thin dust disc is not included .also , only the contributions from direct stellar light and the first order scattered light are included .same symbols as in fig.[md_tau0_b].,title="fig : " ] , b band . in this modelthe thin dust disc is not included .also , only the contributions from direct stellar light and the first order scattered light are included .same symbols as in fig.[md_tau0_b].,title="fig : " ] , b band . in this modelthe thin dust disc is not included .both direct stellar light and all order scattered light contributions are included .same symbols as in fig.[md_tau0_b].,title="fig : " ] , b band . in this modelthe thin dust disc is not included .both direct stellar light and all order scattered light contributions are included .same symbols as in fig.[md_tau0_b].,title="fig : " ] , b band with both the thick and thin disc included .only direct stellar light is included .same symbols as in fig.[md_tau0_b].,title="fig : " ] , b band with both the thick and thin disc included . only direct stellar lightis included .same symbols as in fig.[md_tau0_b].,title="fig : " ] , b band with both the thick and thin disc included .only direct stellar light and first order scattered light are included .same symbols as in fig.[md_tau0_b].,title="fig : " ] , b band with both the thick and thin disc included .only direct stellar light and first order scattered light are included .same symbols as in fig.[md_tau0_b].,title="fig : " ] , b band with both the thick and thin disc included . both direct stellar light and all order scatteredlight contributions are included .same symbols as in fig.[md_tau0_b].,title="fig : " ] , b band with both the thick and thin disc included .both direct stellar light and all order scattered light contributions are included .same symbols as in fig.[md_tau0_b].,title="fig : " ] .,title="fig : " ] .,title="fig : " ] , b band .same symbols as in fig.[md_tau0_b].,title="fig : " ] , b band .same symbols as in fig.[md_tau0_b].,title="fig : " ]for most practical applications to large statistical samples of galaxies ( e.g. driver et al .2007 , 2008 , 2012 , gunawardhana et al . 
2011 , silva et al .2011 , grootes et al .2013 ) , rt modelling of the observed spatially integrated direct and dust - reradiated starlight necessarily ( in the absence of detailed images ) adopts 2d axisymmetric approximation of the distribution of stars and dust .however , the distribution of light at uv and short optical wavelengths from young massive stars is well known in real galaxies to be biased towards a spiral pattern of enhanced dust density , rather than the smooth exponential disc function typically assumed by these models .the question therefore arises whether this effect introduces any systematic bias into 2d model predictions of dust attenuation of integrated starlight and averaged rfed in spiral galaxies . to evaluate this bias ,we have performed a rt calculation for a galaxy model including logarithmic spiral arms .we considered a typical model galaxy from p11 , consisting of a disc with and a thin stellar disc with , and two dust discs with ( see sect .5.2 for a description of the model parameters of p11 ) .we modified this model using the same procedure as that adopted in p11 for the inclusion of circular spiral arms .thus , we considered the same double exponential distribution for the thick stellar and dust disc but we redistributed the thin disc stellar luminosity and dust mass within spiral arms .as shown in schechtman - rook et al .( 2012 ) , the implementation of logarithmic spiral arms can be done by multiplying a logarithmic spiral disc perturbation to the double exponential formula describing the stellar volume emissivity and dust density ( see eqn .[ double_exp ] ) .we adopted the expression for in their formula 10 ( two spiral arms ) : \ ] ] with the fraction of stellar light or dust within the spiral arms , the pitch angle determining how tightly the spirals turn around each other and the exponent which regulates the relative size of the arm and interarm regions .for these parameters , we adopted the values , and .then , we performed an rt calculation for a galaxy model ( including both old and young stellar discs ) in the b - band with face - on central optical depth and disc parameter values as in table [ table_p11_parameters ] .fig.[spiral_model_comparison ] shows the comparison for the output surface brightness images at different inclinations between a pure double exponential model ( upper row ) and for the model including spirals ( lower low ) .the images show the different morphology of the stellar emission for the face - on and low - inclination images. however , the edge - on images are remarkably similar for the two models .we also made a comparison for the total attenuation as a function of galaxy inclination for the two models , which is shown in fig.[plot_att_model ] .the attenuation curves are quite close to each other , within 0.02 dex , showing that the spiral pattern does not affect much the total attenuation of the galaxy for the adopted parameters . finally , we compared the rfed profiles in the galaxy plane .fig.[plot_urad_prof_model ] shows the profiles for the pure double exponential model ( squares ) , a cut along the x - axis of the model including spiral arms ( blue line ) and its azimuthally averaged rfed profile .interestingly enough , although the rfed along the x - axis shows the variation due to the spiral arms , the azimuthally averaged profile is very close to the profile for the model without spiral arms . 
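since the exact form of the two - armed perturbation ( formula 10 of schechtman - rook et al . 2012 ) is not reproduced above , the sketch below uses an illustrative logarithmic - spiral modulation of the thin - disc emissivity and opacity with the same ingredients discussed in the text ( an arm fraction f , a pitch angle , and an exponent controlling the arm width ) ; the functional form and parameter values are assumptions for illustration , normalised so that the azimuthal average reproduces the unperturbed double exponential disc .

```python
# Sketch: a two-armed logarithmic-spiral perturbation multiplying a double exponential thin disc.
# Illustrative functional form only; not the exact expression used in the paper.
import numpy as np
from math import comb

def spiral_perturbation(r, phi, f=0.4, pitch_deg=20.0, N=4, m=2, r0=1.0):
    pitch = np.radians(pitch_deg)
    # arm pattern follows phi = ln(r/r0) / tan(pitch); sin^(2N) sets the arm/interarm contrast
    arm = np.sin(0.5 * m * (phi - np.log(np.maximum(r, 1e-6) / r0) / np.tan(pitch))) ** (2 * N)
    norm = 4.0 ** N / comb(2 * N, N)    # azimuthal average of sin^(2N) is comb(2N, N)/4^N, so this makes it 1
    return (1.0 - f) + f * norm * arm   # fraction f redistributed into the arms, azimuthal mean preserved

def thin_disc_emissivity(r, z, phi, rho0=1.0, h_r=1.0, h_z=0.05):
    exponential = rho0 * np.exp(-r / h_r) * np.exp(-np.abs(z) / h_z)
    return exponential * spiral_perturbation(r, phi)
```

with this normalisation , azimuthally averaging the perturbed emissivity recovers the pure double exponential profile , which is consistent with the behaviour of the rfed profiles described above .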
although p11 have already shown that the spatially integrated dust and pah emission sed of a typical spiral galaxy does not depend on whether the young stellar population and associated dust is distributed in a circular spiral arm or in a disc , here we show for the first time that it is at the level of the radiation fields that heat the dust that the distributions in the spirals start to resemble the disc distributions on average . the results on the global attenuation , images and radiation fields all suggest that double exponential models can be a quite good representation of spiral disc galaxies , and that the spatially integrated seds of spirals can be accounted for by 2d models . although this is in qualitative agreement with previous works ( misiriotis et al . 2000 , semionov et al . 2006 , popescu et al . 2011 ) , a more extensive study is needed to see how the different parameters , e.g. the face - on optical depth , affect the attenuation curve and the radiation fields in models with and without spiral arms . [ figure : surface brightness maps at several inclinations ( from left to right ) for a pure double exponential disc galaxy model ( upper row ) and a model including spiral arms ( lower row ) ; the models are for a central face - on optical depth . see text for details . ] in this paper we present a new ray - tracing dust radiation transfer algorithm which is able to handle arbitrary 3d geometries and is specifically designed to calculate accurate rfed within galaxy models . the main optimization characteristics of this algorithm are the following : 1 ) an adaptive 3d grid ( see 3.1 ) ; 2 ) a ray - tracing algorithm based on the pre - calculation of a lower limit for the rfed ( see 3.2 and 3.3 ) ; 3 ) an iterative procedure for the optimization of the angular density of the rays departing from each emitting cell ( 3.3 ) . furthermore , parallelized versions of the code have been written for its use on shared - memory machines and computer clusters . in order to verify the code accuracy , we performed comparisons with the results provided by other codes . specifically , we used our code to calculate solutions for a spherical dusty shell illuminated by a central point source and for an axisymmetric galaxy model . for the first configuration , we considered as benchmark a set of solutions calculated using the dusty 1d code ( ivezic & elitzur 1997 ) . we showed that the equilibrium dust temperature radial profiles and the outgoing flux spectra derived from our 3d calculations agree within a few percent with the benchmark solutions for models with low radial optical depths ( ) but present larger discrepancies for more optically thick models ( ) . the residual discrepancies , especially for the models with higher optical depths , are most probably due to the lack of dust self - heating in our code and to the lower spatial resolution of the 3d calculations compared to the 1d ones .
since the geometry of the source emission / opacity of star / dust shell does not reproduce that for which our algorithm was developed , namely that of an extended distribution of stellar emission and dust , we also used a second benchmark .thus , we considered the 2d calculations by popescu et al .( 2011 ) for the rfed distribution within their galaxy model .we calculated the contribution to the rfed distribution due to an old stellar disc and a young stellar disc separately and we compared the results for radial and vertical rfed profiles derived for a set of reference radii and vertical distances .we found a general good agreement between the 3d and 2d calculations within a few percent in most of the cases .at least part of the residual discrepancy can be accounted by the relatively low spatial resolution of the grid used in the 3d calculation , which is not sufficient to properly resolve the thin disc component of the galaxy model of popescu et al .we showed an example of a 3d application of the code by performing rt for a spiral galaxy model where in one case the emissivity of the young stellar population and associated dust opacity are distributed in logarithmic spiral arms and in another case are distributed in exponential discs .we found that the edge - on images , the attenuation as a function of inclination and the azimuthally average rfed profiles on the galaxy plane are approximately the same for the two models .this suggests that the spatially integrated seds of spirals can be well described by 2d models .the tests we performed have shown that , in the conditions where dust self - heating is negligible and the 3d spatial resolution is high enough to resolve emission and opacity distributions , our code can be used to calculate accurate solutions for the rfed .this characteristic is particularly important for the calculation of stochastically heated dust emission , which requires both the overall intensity and the colour of the radiation field to be calculated in an accurate way . in a future workwe will show applications of the code for the calculation of infrared emission .this will be performed using our 3d rt code coupled with the dust emission code used by popescu et al .( 2011 ) , which self - consistently calculates the stochastic emission from small grains and pah molecules . in this way, it will be possible to use our code to obtain both integrated seds and images in the mid- and fir - infrared for galaxies with arbitrary geometries .in addition , an important step will be to further optimize the code in order to make it possible to run on grids containing millions of cells in a reasonably short time .this will allow us to improve further the accuracy of the calculation for rfed within multi - scale structures spanning at least three orders of magnitude , such as from to in the case of a galaxy ism .we acknowledge support from the uk science and technology facilities council ( stfc ; grant st / j001341/1 ) .gn thanks s. dalla , k. foyle , e. kafexhiu , t. laitinen , j. steinacker for useful suggestions and/or discussions .ccp thanks the max planck institute fr kernphysik for support during a sabbatical , when this work was completed .abel , t. , wandelt , b. d. 2002 , mnras , 330 , 53 baes , m. , verstappen , j. , de looze , i. , fritz , j. , saftly , w. et al .2011 , apjs , 196 , 22 bendo , g. j. , boselli , a. , dariush , a. , pohlen , m. , roussel , h. et al .2012 , mnras , 419 , 1833 bianchi , s. , ferrara , a. , giovanardi , c. 1996 , apj , 465 , 127 bianchi , s. 
2008 , a&a , 490 , 461 bisbas , t. g. , bell , t. a. , viti , s. , yates , j. , barlow , m. j. 2012 , mnras , 427 , 2100 chakrabarti , s. , whitney , b. a. 2009 , apj , 690 , 1432 dale , d. a. , gil de paz , a. , gordon , k. d. , hanson , h. m. , armus , l. , bendo , g. j. et al . 2007 , apj , 655 , 863 dale , d. a. , aniano , g. , engelbracht , c. w. , hinz , j. l. , krause , o. et al . 2012 , apj , 745 , 95 draine , b. t. , li , a. 2007 , apj , 657 , 810 driver , s. p. , popescu , c. c. , tuffs , r. j. , liske , j. , graham , a. w. 2007 , mnras , 379 , 1022 driver , s. p. , popescu , c. c. , tuffs , r. j. , graham , a. w. et al .2008 , apj , 678 , 101 driver , s. p. , robotham , a. s. g. , kelvin , l. , alpaslan , m. , baldry , i. k. et al .2012 , mnras , 427 , 3244 dullemond , c. p. , turolla , r. 2000 , a&a , 360 , 1187 efstathiou , a. , rowan - robinson , m. 1990 , mnras , 245 , 275 ercolano , b. , barlow , m. j. , storey , p. j. , 2005 , mnras , 362 , 1038 fischer , o. , henning , th . ,yorke , h. w. 1994 , a&a , 284 , 187 gordon , k. d. , misselt , k. a. , witt , adolf n. , clayton , geoffrey c. 2001 , apj , 551 , 269 grski , k. m. , hivon , e. , banday , a. j. , wandelt , b. d. , hansen , f. k. , et al .2005 , apj , 622 , 759 grootes , m. w. , tuffs , r. j. , popescu , c. c. , pastrav , b. , andrae , e. et al .2013 , apj , 766 , 59 guhathakurta , p. , draine , b. t.,1989 , apj , 345 , 230 gunawardhana , m. l. p. , hopkins , a. m. , sharp , r. g. , brough , s. , taylor , e. et al .2011 , mnras , 415 , 1647 hummer , d. g. , rybicki , g. b. 1971 , mnras , 152 , 1 ivezic , z. , elitzur , m. 1997 , mnras , 287 , 799 ivezic , z. , groenewegen , m. a. t. , menshchikov , a. , szczerba , r. 1997 , mnras , 291 , 121 jonsson , p. 2006 , mnras , 372 , 2 kylafis , n. d. , bahcall , j. n. 1987 , apj , 317 , 637 kuiper , r. , klahr , h. , dullemond , c. , kley , w. ; henning , t. 2010 , a&a , 511 , 81 mattila , k. 1970 , a&a , 9 , 53 misiriotis , a. , popescu , c. c. , tuffs , r. , kylafis , n. d. 2001 , a&a , 372 , 775 misiriotis , a. , kylafis , n. d. , papamastorakis , j. , xilouris , e. m. 2000 , a&a , 353 , 117 odenwald , s. , newmark , j. , smoot , g. 1998 , apj , 500 , 554o pascucci , i. , wolf , s. , steinacker , j. , dullemond , c. p. , henning , th ., et al .2004 , a&a , 417 , 793 popescu , c. c. , misiriotis , a. , kylafis , n. d. , tuffs , r. j. , fischera , j. 2000 , a&a , 362 , 138 popescu , c. c. , tuffs , r. j. 2002 , mnras , 335 , 41 popescu , c. c. , tuffs , r. j. , vlk , h. j. , pierini , d. , madore , b. f. 2002 , apj , 567 , 221 popescu , c. c. , tuffs , r. j. , dopita , m. a. , fischera , j. , kylafis , n. d. , madore , b. f. 2011 , a&a , 756 , 138 popescu , c. c. , tuffs , r. j. 2013 , mnras , 436 , 1302 roark , t. , roark , b. , collins , g. w. , ii 1974 , apj , 190 , 67 robitaille , t. p. 2011 , a&a , 536 , 79 rowan - robinson , m. 1980 , apjs , 44 , 403 schechtman - rook , a. , bershady , m , a. , wood , k. 2012 , apj , 746 , 70 semionov , d. , vansevicius , v. 2005 , balta , 14 , 543 semionov , d. , vansevicius , v. 2006 , balta , 15 , 601 semionov , d. , kodaira , k. , stonkut , r. , vanseviius , v. 2006 , balta , 15 , 581 siebenmorgen , r. , kruegel , e. , mathis , j. s. 1992 , a&a , 266 , 501 silva , l. , schurer , a. , granato , g. l. , almeida , c. , baugh , c. m. et al .2011 , mnras , 410 , 2043 sodroski , t. j. , odegard , n. , arendt , r. g. , dwek , e. , weiland , j. l. , hauser , m. g. , kelsall , t. 1997 , apj , 480 , 173 steinacker , j. , henning , t. 
, bacmann , a. , semenov , d. 2003 , a&a , 401 , 405s steinacker , j. , baes , m. , gordon , k. 2013 , arxiv1303.4998s xilouris , e. m. , alton , p. b. , davies , j. i. , kylafis , n. d. , papamastorakis , j. , trewhella , m. 1998 , a&a , 331 , 894 xilouris , e. m. , byun , y. i. , kylafis , n. d. , paleologou , e. v. , papamastorakis , j. 1999 , a&a , 344 , 868 xilouris , e. m. , kylafis , n. d. , papamastorakis , j. , paleologou , e. v. , haerendel , g. 1997 , a&a , 325 , 135 weingartner , j. c. , draine , b. t. 2001 , apj , 548 , 296 witt , a. n. 1977 , apjs , 35 , 1 witt , a. n. , gordon , k. d. 1996 , apj , 463 , 681 witt , a. n. , stephens , t. c. 1974 , aj , 79 , 948 witt , a. n. , thronson , h. a. , jr . ,capuano , j. m. , jr .1992 , apj , 393 , 611
|
we present dart - ray , a new ray - tracing 3d dust radiative transfer ( rt ) code designed specifically to calculate radiation field energy density ( rfed ) distributions within dusty galaxy models with arbitrary geometries . in this paper we introduce the basic algorithm implemented in dart - ray which is based on a pre - calculation of a lower limit for the rfed distribution . this pre - calculation allows us to estimate the extent of regions around the radiation sources within which these sources contribute significantly to the rfed . in this way , ray - tracing calculations can be restricted to take place only within these regions , thus substantially reducing the computational time compared to a complete ray - tracing rt calculation . anisotropic scattering is included in the code and handled in a similar fashion . furthermore , the code utilizes a cartesian adaptive spatial grid and an iterative method has been implemented to optimize the angular densities of the rays originated from each emitting cell . in order to verify the accuracy of the rt calculations performed by dart - ray , we present results of comparisons with solutions obtained using the dusty 1d rt code for a dust shell illuminated by a central point source and existing 2d rt calculations of disc galaxies with diffusely distributed stellar emission and dust opacity . finally , we show the application of the code on a spiral galaxy model with logarithmic spiral arms in order to measure the effect of the spiral pattern on the attenuation and rfed . [ firstpage ]
|
in the last decades , drinkable water scarcity has progressively become a critical issue in several countries . hence , desalination technologies have gradually attracted increasing scientific and commercial interest . however , current desalination processes are generally energy - intensive and designed for large installations , thus requiring high capital costs . more sustainable and energy - efficient desalination methods should therefore be investigated to meet the increasing fresh water needs . in this work , a modular and low - cost distiller is suggested , modeled and tested . here , modularity stems from the possibility of realizing multiple evaporation / condensation processes ( referred to as _ stages _ in the following ) , conveniently under isobaric ambient conditions , and without the need for any ancillaries : as a result , the proposed distiller is totally static . the realized lab - scale prototype can be powered by low - grade energy ( among others , non - concentrated thermal solar energy ) , and it automatically provides a continuous flow of fresh water from a salty feed . we experimentally assess the preliminary performance of a single - and a multi - stage ( here ) lab - scale distiller . a theoretical model ( here properly validated against experiments ) is used to infer the distillate production rate obtainable by larger devices , in terms of both the number of stages and the inlet water salinity . the experimental setup considered for evaluating the performance of the proposed distiller is depicted in figure [ figure1 ] . the testing facilities consist of a laptop , a data acquisition ( daq ) board ( _ national instruments _ ) , a power supply , a scale ( balance ) and a small distiller prototype . [ figure : stage prototype of the proposed distiller . ] three thermocouples connected to the daq board allow recording the ambient temperature and the temperature drop across the distiller . the scale is used to monitor the mass change in the fresh water basin ( marked with ( 6 ) in figure [ figure1 ] ) , whereas a refractometer has been used to measure the salt concentration in the salt water basin ( marked with ( 7 ) in figure [ figure1 ] ) and to certify the purity of the fresh water during operation . in figure [ figure1 ] , the basic working principle and main elements of a distiller ( marked with ( 8 ) ) are schematically reported . our prototype consists of two highly thermally conductive thin plates ( here , aluminum squares with size cm^2^ ) , each supporting a thin layer of hydrophilic material ( here , mm thick ) . while wet , the aforementioned hydrophilic layers communicate only through their vapor phase and have the ability to keep the two liquid phases separated . the latter liquid separation can be achieved either using hydrophobic micro - porous membranes ( as commonly done in membrane distillation , md ) or leaving a small air gap between the hydrophilic layers ( membrane - free solution ) . specifically , we will work with both approaches . as an example , for practical purposes , the air gap can be easily imposed by inserting a thin rigid spacer with large porosity between the hydrophilic layers . alternatively , a frame at the boundary of the thin metallic plates , with thickness larger than that of the two hydrophilic layers , can also be used to ensure the gap without compromising the vapor mass flux ( which is reduced by spacer porosity ) .
clearly , all possible combinations including both hydrophobic membranes and air gaps are also possible .possibly , the distillation process can be driven by solar radiation converted into heat by commercially available highly absorbent materials ( e.g. tinox^^ ) .the latter solar absorber may be also covered by a transparent thermal insulator ( e.g. bubble wrap layer , as done in ) to reduce convection losses .tinox^^ may be a convenient choice as it guarantees limited infrared emissivity and thus radiative losses .for the sake of simplicity , here a planar electrical resistance is embedded in the distiller just beneath the tinox^^ layer , to mimic solar energy deposition . moreover ,only experiments with energy flux well below one - sun ( i.e. ) are carried out in order to reproduce realistic conditions where optical losses are also present .finally , a heat sink is placed at the bottom of the distiller , aiming at rejecting low - temperature heat to the ambient which acts as a low - temperature thermostat .it is worth stressing that , in our work , the heat sink purposely operates under natural convection so that it does not require any additional power supply .interestingly , regardless of the adopted approach for separating the two liquid phases held by the hydrophilic layers ( air gap or membranes ) , a multistage configuration can be easily designed and realized by adding a series of identical stages to the above set - up : this is schematically reported in figure [ figure2 ] .hence , in addition to the configuration , a configuration of the distiller has also been realized and experimentally tested .we finally stress that , in possible future applications of the proposed distiller for floating installations , a natural choice for the low - temperature thermostat could be sea water itself . owing to enhanced heat transfer, the latter is expected to deliver even better performance as compared to the current one operated in air .the mechanism underlying the distillation process of the distiller depicted in figure [ figure1 ] is detailed below . here, the top hydrophilic layer is labeled with ( e ) , which stands for evaporator ; whilst the hydrophilic layer at the bottom is labeled with ( c ) , which stands for condenser .a snapshot of half a stage is reported in figure [ figure3]a .most of the hydrophilic thin material is glued to an aluminum plate .such a layer is also endowed with an hydrophilic stripe which is responsible of mass transport from / to the stage . under operating conditions ,salty water naturally rises from the basin ( 7 ) to the top hydrophilic layer ( e ) due to capillarity . the thermal power from the top , through the aluminum plate , heats up the salty water in the hydrophilic layer ( e ) , therefore promoting vapor flux in the gap or through the micro - porous membranes ( if any ) . due to a difference in the vapor pressure , driven by a temperature difference, a net water flux is established towards the hydrophilic layer ( c ) , where water condensation happens and the corresponding latent heat becomes available to drive additional stages ( in the multi - stage configuration ) .an increasing amount of distillate water accumulates and is finally guided by the hydrophilic layer ( c ) into the fresh water basin ( 6 ) by gravity . 
during our tests , the hydrophilic layers were separated by : * air gaps imposed by polypropylene spacers with large porosity ( figure [ figure3]b ) ; * a hydrophobic polytetrafluoroethylene membrane with pore size of 0.1 µm and thickness of ( figure [ figure3]c ) ; * a hydrophobic polytetrafluoroethylene membrane with pore size of 3.0 µm and thickness of ( figure [ figure3]d ) . our prototypes were mainly realized for proof - of - concept purposes and our realizations were all far from optimal . hence , the prototypes with membranes also included small air gaps and polypropylene spacers . [ figure 3 : hydrophilic layers separated by ( b ) a polypropylene spacer , or by hydrophobic polytetrafluoroethylene membranes with pore size ( c ) 0.1 µm or ( d ) 3.0 µm . ] finally , we notice that when using hydrophobic micro - porous membranes the pore size is critical for avoiding liquid water entry . in other words , as is well known in md applications , the smaller the pore size , the higher the liquid entry pressure ( lep ) . however , as we are operating under isobaric ( ambient ) conditions , even a large pore size ( as in figure [ figure3]d ) is not problematic for our purposes . the driving force of the observed distillation process is the difference in water vapor pressure due to both temperature and salinity on the layer surfaces . the vapor pressure difference is defined as : where denotes water activity ; and are the mass fractions ( ) of salt in the feed or permeate solution , respectively ; is the vapor pressure of water ; and are the temperatures of the feed and permeate solution , respectively . the activity of a water / nacl solution can be approximately estimated by raoult 's law as : where for nacl , while and are the molar masses ( expressed in [ g / mol ] ) of sodium chloride and water , respectively . the feed water processed in the experiments is a water / nacl solution that mimics seawater salinity , namely 35 g / l ( ) ; therefore , equation ( [ raoult ] ) predicts . on the other side , the activity of the permeate solution is equal to 1 ( i.e. distilled water ) . the vapor pressure can be computed via antoine 's semi - empirical correlation : where a , b and c are the component - specific constants , in this case estimated as 8.07 , 1730.63 and 233.42 , respectively . the specific mass flow rate ( , ) through the gap is proportional to the vapor pressure difference via a permeability coefficient , which for an air gap is given by equation ( [ pgap ] ) : where is the vapor diffusion coefficient in air at the mean temperature of a stage ( ) , is the air gap thickness , while is its average temperature in kelvin . as shown in equation ( [ pgap ] ) , the gap thickness is crucial and strongly affects the permeate flux . clearly , in equation ( [ pgap ] ) , if the air gap is realized without spacers . in order to achieve high permeability and thus high mass fluxes , a trade - off between heat ( to be minimized ) and mass ( to be maximized ) transport between the layers is to be found . when hydrophobic membranes are used , more advanced formulas are typically adopted to estimate the permeability coefficient . the heat flux ( ) between the two hydrophilic layers is mainly due to water phase change and conduction heat transfer : where is the effective thermal conductivity in the gap , including conduction through air and through a possible spacer or membrane , whereas is the latent heat of vaporization ( [ kj / kg ] for water ) . a number of experiments have been performed to investigate the heat and mass transfer phenomena in the proposed distiller . in all experiments , a lab - scale device is heated by a heat flux of [ w / m^2^ ] , here conveniently provided by a planar electrical resistance , thus mimicking realistic operation conditions under one - sun ( including some optical losses ) .
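as a rough illustration of how the above relations combine into a single - stage flux estimate , the sketch below evaluates the driving vapor pressure difference from raoult 's and antoine 's expressions ( assuming the quoted constants give the pressure in mmhg for temperatures in degrees celsius , and a dissociation factor i = 2 for nacl ) and multiplies it by a purely illustrative permeability coefficient ; none of the numerical values are taken from the experiments .

```python
# Sketch: single-stage driving force and permeate flux (illustrative inputs, assumptions noted above).
from math import log10

M_NACL, M_H2O = 58.44, 18.015          # molar masses, g/mol

def water_activity(salt_mass_fraction, i=2.0):
    """Raoult-type estimate: a_w ~ 1 - i * x_NaCl, with x_NaCl the salt mole fraction."""
    n_salt = salt_mass_fraction / M_NACL
    n_water = (1.0 - salt_mass_fraction) / M_H2O
    return 1.0 - i * n_salt / (n_salt + n_water)

def vapor_pressure_pa(T_celsius, A=8.07, B=1730.63, C=233.42):
    """Antoine correlation with the constants quoted above (mmHg for T in deg C), converted to Pa."""
    p_mmhg = 10.0 ** (A - B / (C + T_celsius))
    return p_mmhg * 133.322

def permeate_flux(T_feed, T_perm, w_salt=0.035, C_m=2e-8):
    """Flux in kg/(m^2 s); C_m is an illustrative permeability in kg/(m^2 s Pa), not a measured value."""
    dp = water_activity(w_salt) * vapor_pressure_pa(T_feed) - vapor_pressure_pa(T_perm)
    return C_m * dp

# e.g. a 10 deg C drop across the gap with a feed at 50 deg C and 35 g/kg salinity
print(permeate_flux(50.0, 40.0) * 3600 * 1000, "g/(m^2 h)")
```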
finally , we discuss the results of a one - dimensional numerical model implementing the equations introduced in section [ theo ] , to validate the experimental results .the mass fluxes of distilled water were obtained by measuring the mass change over time in the fresh - water basin ( sampling period : minutes ) .each experiment was replicated twice to test the reproducibility .distillate fluxes for a single stage distiller are presented in figure [ figure4 ] .the air gap configuration produced a mass flow rate of [ g / m^2^/h ] ( see black square in the figure [ figure4 ] ) .the gray triangle is the corresponding value as predicted by the theoretical one - dimensional model . on the other hand ,the configurations with the hydrophobic membranes produced [ g / m^2^/h ] ( blue circle ) and [ g / m^2^/h ] ( red triangle ) in case of and pore sizes , respectively .( blue circle ) ; membrane with pore size ( red triangle ) .modeling result : air gap configuration ( gray triangle ) . in the inset ,a schematic of the tested single stage distiller is depicted . ] in addition , the experimental results for the distiller is presented in figure [ figure5 ] ( green square ) , together with the modeling results as a function of both the number of stages and feed water salinity ( color lines ) .a three - fold increase of the experimental mass flow rate is observed ( and theoretically predicted ) by a configuration . in summary ,the distiller produced up to [ g / m^2^/h ] . without loosing generality ,the experimental test of the configuration has been conducted using hydrophobic membranes with pore size 3 .similar tests can be also conducted using a multi - stage configuration with air gaps separating hydrophilic layers or membranes with different pore size .stage air gap configuration ( black square ) ; membrane configuration , with pore size ( blue circle ) ; membrane configuration , with pore size ( red triangle ) ; membrane configuration , with pore size ( green square ) .modeling results : membrane configuration with pore size and 35 [ g / l ] ( red solid line ) , 70 [ g / l ] ( blue dashed line ) or 135 [ g / l ] ( black dash - dot line ) nacl concentration . ] the results obtained with the one - dimensional model , validated by the and experimental results , show a rather linear scaling of the permeate flux as a function of the number of stages up to a threshold which varies with feed water salinity . beyond this threshold, the distiller performance gradually decays due to a reduction in the temperature difference across each distillation stage , which is gradually less effective in counteracting the difference in vapor pressure caused by salinity .three thermocouples were used to monitor the ambient temperature and the temperature drop between the top and bottom of the distiller . in figure[ figure6 ] , we plotted the temperature profiles obtained in the considered experiments . across the setup , the temperature gradient is close to , whilst in the case of a configuration a was observed . stage .( * b * ) membrane configuration , with pore size and .( * c * ) membrane configuration , with pore size and .( * d * ) membrane configuration , with pore size and .red , blue and black lines represent the evaporator , condenser and ambient temperatures , respectively . ]in this work , we presented a modular , low - cost and passive ( i.e. 
only driven by non - concentrated thermal solar energy ) device able to desalinate seawater exploiting the vapor pressure difference across a small gap between two hydrophilic thin layers . in our experiments , we observe that this process can be efficiently operated by a thermal power density less than and maximum temperature celsius .different gap separation means of the hydrophilic layers have been realized and experimentally tested , namely : air gap , hydrophobic membranes with pore size of 0.1 , and hydrophobic membranes with pore size of 3 . with a sub - optimal realization ,the device with best performance delivered a distillate mass flux of [ g / m^2^/h ] in a configuration , whereas in a configuration we observed a three - fold increase , namely [ g / m^2^/h ] .experimental results also served to validate a one - dimensional theoretical model of the proposed distiller .this model was used to predict the distillate mass flux as a function of the number of stages and salinity .the numerically predicted trend has been found to be almost linear up to a threshold ( which depends on the feed water salinity ) .beyond such a threshold , the distiller performance decays because of a lower temperature difference across each stage , which is gradually less effective in counteracting the vapor pressure difference imposed by salinity .we finally note that the temperature drop across the distiller might be increased by adopting a concentrated solar source . in this case, the distiller could eventually process feed water with higher salinity . however, this would also imply a higher degree of complexity due to optical concentration .p.a . initially suggested the study of a passive floating panel exploiting highly absorbing materials ( e.g. tinox ) and hydrophobic micro - porous membranes ( md ) for sea water desalination .e.c . conceived both the idea of thin hydrophilic layers separated by an air gap as a passive distiller unit ( membrane - free solution ) as well as the multi - stage idea to gain fresh water flux .p.a . developed the theoretical model .m.m . and m.f .assembled the lab - scale prototypes and conducted computations . m.m . with the help of f.v .conducted the experiments .e.c . and p.a .with the help of m.f . supervised the research .all authors contributed in writing the paper .o. edenhofer , r. pichs - madruga , y. sokona , k. seyboth , s. kadner , t. zwickel , p. eickemeier , g. hansen , s. schlmer , c. von stechow , __ , _ renewable energy sources and climate change mitigation : special report of the intergovernmental panel on climate change_. cambridge university press , 2011 .p. palenzuela , d .-alarcn - padilla , and g. zaragoza , `` large - scale solar desalination by combination with csp : techno - economic analysis of different options for the mediterranean sea and the arabian gulf , '' _ desalination _ , vol .366 , pp . 130138 , 2015 .r. saidur , e. elcevvadi , s. mekhilef , a. safari , and h. mohammed , `` an overview of different distillation methods for small scale applications , '' _ renewable and sustainable energy reviews _15 , no . 9 , pp .47564764 , 2011 .m. a. shannon , p. w. bohn , m. elimelech , j. g. georgiadis , b. j. marinas , and a. m. mayes , `` science and technology for water purification in the coming decades , '' _ nature _ , vol .452 , no .7185 , pp . 301310 , 2008 . g. ni , g. li , s. v. boriskina , h. li , w. yang , t. zhang , and g. 
chen, ``steam generation under one sun enabled by a floating structure with thermal concentration,'' _nature energy_, vol. 1, no. 16126, pp. 17, 2016.
|
Drinkable water scarcity is becoming a critical issue in several regions of the world. In this context, sustainable desalination technologies are attracting increasing interest. While traditional desalination techniques, such as reverse osmosis, may be rather electricity intensive, thermally-driven separation processes (such as membrane distillation, MD) offer the opportunity to exploit low-temperature heat efficiently. In this work, a modular thermal distiller for salty water is presented and modeled. Preliminary experiments are carried out to evaluate the performance in various configurations and under different working conditions. The relevant figures of merit are assessed on sub-optimal realizations built for proof-of-concept purposes, and further significant improvements are envisioned and experimentally tested in this work. _Keywords:_ classical physics; heat transfer; mass transfer
|
to find a lost key on a parking lot or a paper on an untidy desk are typical everyday experiences for search problems .search processes occur on many different scales , ranging from the passive diffusive search of regulatory proteins for their specific binding site in living biological cells over the search of animals for food or of computer algorithms for minima in a complex search space . herewe are interested in random , jump - like search processes .the searcher , that is , has no prior information on the location of its target and performs a random walk until encounter with the target . during a relocation along its trajectory ( a jump ) , the walker is insensitive to the target . in the words of movement ecology occupied with the movement patterns of animals ,this process is called _ blind search _ with _ saltatory motion_. it is typical for predators hunting at spatial scales exceeding their sensory range .for instance , blind search is observed for plankton - feeding basking sharks , jellyfish predators and leatherback turtles , and southern elephant seals .saltatory search is distinguished from _ cruise search _, when the searcher continues to explore its environment during relocations .the first studies on random search considered the brownian motion of the searcher as a default strategy , until shlesinger and klafter proposed that lvy flights ( lfs ) are much more efficient in the search for sufficiently sparse targets . in a markovian lf the individual displacement lengths of the walkerare power - law distributed , , where due to the second moment of the jump lengths diverges , .this lack of a length scale effects a fractal dimension of the trajectory , such that local search is interspersed by long , decorrelating excursions .this strategy avoids oversampling , the frequent return to previously visited points in space of recurrent random walk processes , such as brownian motion in one and two dimensions .the latter are indeed the relevant cases for land - based searchers . even for airborne or marine searchers ,the vertical span of their trajectories is usually much smaller than the horizontal span , rendering their motion almost fully two dimensional .the outstanding role of lfs for random search processes in one and two dimensions was formulated in the _ lf hypothesis : superdiffusive motion governed by fat - tailed propagators optimize encounter rates under specific ( but common ) circumstances : hence some species must have evolved mechanisms that exploit these properties ] , where is the fourier transform of .( [ sinkffpe ] ) is completed with the initial condition of placing the searcher at .if is positive , the drift is directed towards positive , that is , the dynamic equation ( [ sinkffpe ] ) describes the situation of fig . 1 . by rescaling of variables ( see app . [ dimensionless ] ) we obtain the dimensionless analog of eq .( [ sinkffpe ] ) , where and .the factor has the dimension of length and is chosen as the scaling factor of the lf jump length distribution , as detailed in app .[ dimensionless ] . in what follows we use the dimensionless variables throughout , but for simplicitywe omit the overlines . without loss of generalitywe assume that in the remainder of this work .integration of eq .( [ ffpedimles ] ) over the position produces the first arrival density thus , is indeed the negative time derivative of the survival probability . 
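As an aside, the heavy-tailed jump statistics underlying this dynamic equation are easy to check numerically. The short sketch below assumes the standard symmetric Lévy-stable characteristic function exp(-|k|^alpha) with unit scale, recovers the jump-length density by Fourier quadrature, and fits the log-log slope of its tail, which should approach -(1+alpha) in line with the power-law form quoted above; the value at the origin is compared with the exact Gamma(1+1/alpha)/pi. The probed x-range is an illustrative choice.

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import gamma

    def stable_pdf(x, alpha):
        """lambda(x) = (1/pi) * Int_0^inf cos(k x) exp(-k^alpha) dk (Fourier quadrature)."""
        val, _ = quad(lambda k: np.exp(-k**alpha), 0, np.inf, weight='cos', wvar=x)
        return val / np.pi

    alpha = 1.5
    xs = np.logspace(1.0, 2.0, 10)                     # probe the tail region
    pdf = np.array([stable_pdf(x, alpha) for x in xs])

    # log-log slope of the tail; should approach -(1 + alpha) for 0 < alpha < 2
    slope = np.polyfit(np.log(xs), np.log(pdf), 1)[0]
    print(f"fitted tail exponent: {slope:.2f}   expected: {-(1 + alpha):.2f}")

    # value at the origin vs. the exact Gamma(1 + 1/alpha)/pi
    f0 = quad(lambda k: np.exp(-k**alpha), 0, np.inf)[0] / np.pi
    print(f"density at x = 0: {f0:.4f}   exact: {gamma(1 + 1/alpha)/np.pi:.4f}")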
in analogy to the bias - free case it is straightforward to obtain the fourier - laplace transform of the distribution , here we express the laplace image of a function by explicit dependence on the laplace variable .integration of eq .( [ fks ] ) over the fourier variable yields where is the solution of eq .( [ sinkffpe ] ) without the sink term . asthis expression necessarily equals zero , the first arrival density can be expressed through where we use the abbreviation equation ( [ pfa ] ) without bias ( ) was obtained in ref .an important observation from eq .( [ pfa ] ) is that the first arrival density vanishes , , for any if only ( for the proof see app .[ proofflat ] and [ proofbias ] ) .thus lf search for a point - like target will never succeed for .this property reflects the transience of lfs with , where is the dimension of the embedding space . for the simulation of lfs we use the langevin equation approach , which in the discretized version with dimensionless units takes on the form where is the ( dimensionless ) position of the walker at the -th step , and is a set of random variables with lvy stable distribution and the characteristic function to obtain a normalized lvy stable distribution we employ the standard method detailed in ref . .the modeling of the search process proceeds in the following way .a walker starts from coordinate . then its position is updated every step according to the eq .( [ lediscretized ] ) until it reaches a target or modeling time exceeds some maximum simulation time limit .naturally , the target in simulations can not have size of a point , because then it will never be found .hence the target size in simulations should be small enough in order to get correspondence to results from eq .( [ pfa ] ) , but not infinitely small .we briefly digress to address an important technical issue .a brownian walker always explores the space continuously and therefore localizes any point on the line .however , in langevin equation simulations , we introduce discrete ( albeit small ) jump lengths and time steps . due to this , even for brownian motion there is always a non - vanishing probability to overshoot a point - like target .thus , even for the brownian downhill case the simulated value of probability to eventually find the target becomes less than 1 .this effect needs to be remedied by the appropriate choice of a finite target size .the tradeoff is now that the target needs to be sufficiently large to avoid the overshoot by the searcher . at the same timethe target should not be too large , otherwise inconsistencies with our theory based on a point - like target would arise .the likelihood for leapovers across the target is naturally even more pronounced for the lf case . as a consistency test for the target size used in the simulations we check the long time asymptotics of the first arrival density against the analytical form given by eq .( [ asymp ] ) .the results for this test are plotted in fig .[ pdffadiffalpha ] , showing excellent agreement between the simulations and the theoretical asymptotic behavior . in fig .[ pdffadifftargets ] we explicitly show the effect of a varying target size . 
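A compact version of this simulation procedure is sketched below. The discretised update used here, x_{n+1} = x_n + v*dt + dt^{1/alpha} * xi_n in the dimensionless units, is our reading of eq. ([lediscretized]); the noise xi_n is drawn with the Chambers-Mallows-Stuck algorithm, and the time step, target half-width, walker number and time cap are illustrative choices only.

    import numpy as np

    rng = np.random.default_rng(1)

    def stable_noise(alpha, size):
        """Standard symmetric alpha-stable variates (Chambers-Mallows-Stuck)."""
        U = rng.uniform(-np.pi / 2, np.pi / 2, size)
        W = rng.exponential(1.0, size)
        if np.isclose(alpha, 1.0):
            return np.tan(U)                           # Cauchy case
        return (np.sin(alpha * U) / np.cos(U) ** (1.0 / alpha)
                * (np.cos((1.0 - alpha) * U) / W) ** ((1.0 - alpha) / alpha))

    def first_arrival_times(alpha, x0, v=0.0, eps=0.01, dt=1e-3,
                            n_walkers=2000, max_steps=50_000):
        """Arrival times of biased LF searchers; np.inf if the target is missed."""
        x = np.full(n_walkers, float(x0))
        t_hit = np.full(n_walkers, np.inf)
        alive = np.ones(n_walkers, dtype=bool)
        for n in range(1, max_steps + 1):
            idx = np.flatnonzero(alive)
            if idx.size == 0:
                break
            # assumed discretised update: x_{n+1} = x_n + v dt + dt^{1/alpha} xi_n
            x[idx] += v * dt + dt ** (1.0 / alpha) * stable_noise(alpha, idx.size)
            hit = idx[np.abs(x[idx]) <= eps]           # finite target half-width eps
            t_hit[hit] = n * dt
            alive[hit] = False
        return t_hit

    t = first_arrival_times(alpha=1.5, x0=1.0)
    found = np.isfinite(t)
    print(f"fraction arrived within the time cap: {found.mean():.3f}")
    print(f"median arrival time of successful walkers: {np.median(t[found]):.3f}")

The returned arrival times can be histogrammed to reproduce consistency checks of the kind shown in figs. [pdffadiffalpha] and [pdffadifftargets]; as stressed above, the target size and the time step have to be balanced against each other.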
as the target sizeis successively increased , the lf scaling of the first arrival density for a point - like target , , is seen to cross over to the universal sparre - andersen law for the first passage of a symmetric random walk process in the semi - infinite domain , .we see that it is possible to choose the target size appropriately such that the results of the langevin equation simulations are consistent with the theory . .the colored curves denote simulations results .the expected asymptotic behavior is depicted by the red lines .target sizes were chosen as 0.01 for both and , and 0.0005 for .,width=302 ] and .the red lines show fits to the asymptotic power - law form . according to eq .( [ asymp ] ) the expected slope for the first arrival is .thus the smallest target size in the figure leads to the correct value . increasing target sizeseventually lead to the universal sparre andersen scaling of the first passage process.,width=8 ]we first consider the case in absence of the bias and present the solution for the first arrival density .moreover we motivate our choice for the efficiency used to compare different parameter values for the lf search process . without external bias eq .( [ pfa ] ) can be expressed in terms of the fox -function , as detailed in app .[ hfuncv0 ] .inverse laplace transform ( [ pfasolutionflat ] ) of the -function allows us to obtain the solution ( [ pfatimedomain ] ) in the time domain . from the latter expression we get the long time asymptotic behavior of , where the constant is given by eq .( [ calpha ] ) . in this waywe find one of the central results of ref . by using the analytic approach of the -function formalism .an important quantity for the following is the search reliability , defined as the cumulative arrival probability it follows from eq .( [ pfasolutionflat ] ) that without a bias , i.e. the searcher will always find the target eventually as long as . in other cases it will turn out that , that is , the searcher will not always locate the target no matter how long the search process is extended . in the brownian case , the -function in eq .( [ pfasolutionflat ] ) according to app .[ brownderiv ] and [ brownlimit ] can be simplified to the well - known result for the first arrival in laplace domain , note that in the brownian case with finite variance of relocation lengths , the process of first arrival is identical to that of the first passage .how can one define a good measure for the efficiency of a search process ?on a general level , such a definition depends on whether saltatory or cruise foraging is considered , or whether a single target is present in contrast to a fixed density of targets . for saltatory motionas considered herein a typical definition of the search efficiency is the ratio of the number of visited target sites over the total distance traveled by the searcher , this definition works well when many targets with a typical inter - target distance are present .the mean number of steps taking in the search process is equivalent to the typical time over which the process is averaged . as we here consider the case of a single target , in a first attempt to define the efficiency we could thus reinterpret definition ( [ effdef0 ] ) as the mean time to reach the target and thus take , where now would correspond to the expectation , in contrast to the situation with a fixed target density , diverges for simple brownian search on a line without bias . 
for this reason we propose a different measure for the search efficiency , namely instead of the average search time, we average over the inverse search time .this can be shown to be a useful measure for situations when is both finite or diverging . using the relation it is straightforward to show that as an example , consider the efficiency of a brownian walker without bias . with eq .( [ pfabrown ] ) we find where for this equation we restored dimensionality .this is the classical result for a normally diffusive process : increasing diffusivity of the searcher improves the search efficiency per unit time .below we demonstrate the robustness of the new characteristic for several concrete cases .we mention that the definition leads to contradictory results for the biased case , as well .this will be shown in the next section .we now decree that a given search strategy is optimal when the efficiency of the corresponding search process is maximal . in our case of lf searchwe define the optimal search as the process with the value of the stable index for fixed initial condition and fixed bias velocity leads to the highest value of .as we will see , an optimal search defined by this criterion is not ( always ) the same as the most reliable process with maximal search reliability . for lf search without an external drift the density of first arrivalis given by eq .( [ pfasolutionflat ] ) .the search efficiency is obtained by integration it over , \right)\gamma(\alpha ) .\label{efflevyflat}\end{aligned}\ ] ] thus the search efficiency decays quadratically with the initial searcher - target separation and , depending on the value of , may become non - monotonic . in the brownianlimit the efficiency is , consistent with the above result ( [ browneff ] ) . in the cauchy limit the efficiency drops to zero . , given by eq .( [ efflevyflat ] ) , for three values of : , dotted black curve ; , red dashed curve ; brownian case , blue continuous curve.,width=302 ] fig .[ effflatvarx ] shows the efficiency as function of the initial searcher - target distance , for fixed values of the power - law exponent .we observe a strong dependence on , the strongest variation being realized for the brownian case with . for close initial distances ( ) the brownian strategyis the most efficient process .however , with increasing at first lfs with become more efficient than brownian motion , and for the strategy with outperforms all the others .this behavior is expected as for longer initial separations the occurrence of long jumps increases with decreasing , and thus fewer steps lead the searcher closer to the target . for short initial separationsthe occurrence of long jumps would lead to leapovers and thus to a less efficient arrival to the target . due to the strong dependence on the initial searcher - target separation the efficiency between different strategiesshould be compared for a given value of .this will be done in the following .additional insight can be obtained from the relative efficiency for a given which is the ratio of the efficiency for some given exponent over the maximum efficiency for this initial separation for the corresponding value . 
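In practice both the efficiency and the relative efficiency can be estimated directly from the Langevin simulations. The sketch below reuses the first_arrival_times helper from the previous code fragment; walkers that never locate the target contribute zero to 1/t, and all parameter values are again illustrative rather than those used for the figures.

    import numpy as np

    def efficiency(alpha, x0, v=0.0, **kw):
        """Monte Carlo estimate of E = <1/t>; non-arrivals contribute zero."""
        t = first_arrival_times(alpha, x0, v, **kw)
        inv_t = np.where(np.isfinite(t), 1.0 / t, 0.0)
        return inv_t.mean(), inv_t.std(ddof=1) / np.sqrt(inv_t.size)

    x0 = 3.0
    alphas = (1.1, 1.25, 1.5, 1.75, 2.0)
    E = np.array([efficiency(a, x0, n_walkers=1000)[0] for a in alphas])
    rel = E / E.max()                     # relative efficiency at this x0
    for a, e, r in zip(alphas, E, rel):
        print(f"alpha = {a:4.2f}   E = {e:.4f}   E / E_max = {r:.2f}")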
in fig .[ effflatvaralpha ] we show this relative efficiency as function of the stable exponent of the jump length distribution .the value is obviously assumed at .[ effflatvaralpha ] exhibits a very rich behavior .thus , when the searcher is originally close to the target ( here ) the brownian strategy turns out to be the most efficient , and the functional form of is completely monotonic . for growing initial separation , however , the highest efficiency occurs for smaller values of .for instance , the maximum efficiency shifts from for to for .in particular , for large separations the optimal stable index approaches the value obtained earlier for different lf search scenarios . according to eq .( [ efflevyflat ] ) , displayed for the initial searcher - target separations ( green dashed curve ) , ( red dotted curve ) , ( black dashed curve ) , and ( blue continuous curve).,width=302 ] the second striking observation is that for larger initial separations the dependence of on is no longer monotonic .an implicit expression for is obtained from the relation the result can be phrased in terms of the implicit relation here denotes the digamma function . from this relation we can use symbolic mathematical evaluation to obtain the functional behavior of the optimal lvy index as function of the initial searcher - target distance . the result is shown in fig .[ effflatoptalpha ] .two distinct phenomena can be observed : first , the behavior at long initial separations demonstrates the convergence of the optimal exponent to the cauchy value .second , the optimal search is characterized by an increasing value for when the initial separation shrinks , and we observe a transition at some finite value : for initial distances between searcher and target that are smaller than some critical value , brownian search characterized by optimizes the search . in our dimensionless formulation ,we deduce from the functional behavior in fig .[ effflatoptalpha ] that . as a function of the initial searcher - target distance , as described by eq .( [ optalpha]).,width=302 ]we now consider the case when an external bias initially either pushes the searcher towards or away from the target , the downhill and uphill scenarios . in the uphill regime, we can understand that both the brownian and the lf searcher may never reach to the target .however , as we will see , due to the presence of leapovers a lf searcher may also completely miss the target when we consider the downhill scenario .we can quantify to what extent a search process will ever locate the target in terms of the search reliability defined in eq .( [ reliable ] ) .we obtain this quantity from the first arrival density .we start with the brownian case for , for which the arrival density can be calculated explicitly ( see app .[ brownderiv ] and ref . for the derivation ) . in the laplace domain, it reads where we again turned back to dimensional variables to see the explicit dependence on the diffusivity .thus , in the downhill case with we find that due to the relation the search reliability will always be unity , : in the downhill case the brownian searcher will always hit the target . in the opposite , uphill case with , the result for the search reliability has the form of a boltzmann factor ( ) and exponentially suppresses the location of the target . in this brownian case we can therefore interpret as the probability that the thermally driven searcher crosses an activation barrier of height . 
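Before generalizing to Lévy flights, the activation-barrier picture for the uphill Brownian searcher can be tested against the simulation: in dimensionless units (K = 1) the predicted search reliability is the Boltzmann factor exp(-v*x0/K), cf. eq. ([effbrowv]) below. The sketch again assumes the first_arrival_times helper defined earlier; the parameter values are illustrative.

    import numpy as np

    x0, v = 1.0, 1.0                     # v > 0 drives the walker away from the target
    t = first_arrival_times(alpha=2.0, x0=x0, v=v, eps=0.02, dt=1e-4,
                            n_walkers=1000, max_steps=200_000)
    P_sim = np.isfinite(t).mean()
    print(f"simulated reliability              P = {P_sim:.3f}")
    print(f"Boltzmann factor exp(-v x0 / K)      = {np.exp(-v * x0):.3f}")
    # the residual difference reflects the finite time step, the finite target
    # size (even Brownian steps can cross a small target in one discrete update)
    # and the finite time cap, as discussed in the text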
for the general case of lfs we obtain from eq .( [ pfa ] ) via change of variables the laplace transform of the first arrival density , where we use the abbreviation in expression ( [ pfapeclet ] ) we introduced the generalized pclet number for the case of lfs , in the brownian limit and after reinstating dimensional units we recover the standard pclet number , where the factor two is a matter of choice . in fig .[ pe ] we depict the functional behavior of the search reliability for four different values of the lvy index including the brownian case .the cumulative probability depends only on the generalized pclet number , as can be seen from expression ( [ pfapeclet ] ) when we take the relevant limit . in both panels of fig .[ pe ] the left semi - axes with negative values correspond to the downhill case , in which the searcher is initially advected in direction of the target , while the right semi - axes pertain to the uphill scenario .the continuous lines correspond to the numerical solution of eq .( [ pfa ] ) , and the symbols represent results based on langevin equation simulations . in these simulations the values of the search reliability were obtained as a ratio of the number of searchers that eventually located the target over the overall number of the released 10,000 searchers . to estimate the error of the simulated value for the search reliability, we calculated for each consecutive 1000 runs and then determined the standard deviation of the mean value of these 10 results . according to fig .[ pe ] for the case of uphill search the search reliability is worst for the brownian walker and improves continuously for decreasing value of the stable index .this is due to the activation barrier ( [ barrier ] ) faced by the brownian walker . for lfs this barrieris effectively reduced due to the propensity for long jumps .the reduction of the resulting jump length , where is the typical duration of a single jump , becomes more and more insignificant for increasing jump lengths .this is why the efficiency continues to improve until the cauchy case is reached .quantitatively , however , we realize that for increasing generalized pclet number even for lfs the value of the search reliability quickly decreases to tiny values , and that the absolute difference between the different search strategies is not overly significant . in the downhill case fig .[ pe ] demonstrates that the brownian searcher will always locate the target successfully and thus return in agreement with previous findings .in contrast , the search reliability decreases clearly with growing magnitude .this decrease worsens with decreasing stable exponent .the reason for this is the growing tendency for leapovers of lfs with decreasing .once the lf searcher overshoots the target , it is likely to drift away quickly from the target and never return to its neighborhood .overall , the functional dependence of on the generalized pclet number becomes non - trivial once . from fig .[ pe ] we conclude that if the main criterion for the search is the eventual location of the target , that is , a maximum value of the search reliability , without prior knowledge the gain for a brownian searcher in the downhill case is higher than the loss in the opposite case : if we do not know the relative initial position to the target , the brownian search algorithm will on average be more successful . in fig .[ x30 ] we now turn to the dependence of the search efficiency on the lvy index for fixed magnitude of the external bias . 
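The qualitative behavior of such efficiency-versus-alpha curves at fixed bias can be sketched with the Monte Carlo helpers introduced above (efficiency and first_arrival_times). The sign convention follows the text: the searcher starts at x0 > 0 with the target at the origin, so v < 0 is the downhill and v > 0 the uphill scenario; walker numbers and parameter values are illustrative.

    import numpy as np

    x0 = 3.0
    alphas = (1.25, 1.5, 1.75, 2.0)
    for v in (-0.2, 0.0, +0.2):          # downhill, unbiased, uphill
        row = []
        for a in alphas:
            e, _ = efficiency(a, x0, v=v, n_walkers=500)
            row.append(f"{e:.4f}")
        label = {-0.2: "downhill", 0.0: "unbiased", 0.2: "uphill  "}[v]
        print(f"{label}  v = {v:+.1f}:  " + "   ".join(row))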
in each casewe display the behavior for three different values of the initial distance between searcher and target . in the downhill scenariowe observe a remarkable non - monotonic behavior for larger value for .namely , the search efficiency drops when gets smaller than the brownian value , for which . while for the small initial separation this drop is continuous , for the larger values of this trend is turned around , and the search efficiency grows again .due to the extremely slow convergence of both the langevin equation simulations and the numerical evaluation of eq .( [ pfa ] ) despite all efforts we were not able to infer the continuation of the -curve for -values smaller than and thus , in particular , what the limiting value at the cauchy case is. what could be the reason for this non - monotonicity in the versus dependence ?similar to the existence of an optimal -value intermediate between the brownian and cauchy cases and , respectively , for the search reliability we here find a worst - case value for .this value represents a negative tradeoff of the target overshoot property and insufficient propensity to produce sufficiently long jumps to recover an accumulated activation barrier from a downstream location as seen from the target . in the uphill casethe dependence is monotonic : here long jumps become helpful to overcome the activation towards the target .thus the search efficiency increases when becomes smaller and approaches the cauchy value .the value for significantly drops with increasing value of the initial searcher - target separation . consistently with the previous observations on the brownian case fares worst and leads to the smallest value of . for a brownian searcher in the presence of an external bias ,the mean search time can be computed via for the downhill case this value is given by the classical result the diffusing searcher moves towards the target as if it were a classical particle , the search time being given as the ratio of the distance over the ( drift ) velocity .it is independent of the value of the diffusion constant . for the uphill case ( ) , we find the known result the latter value is in fact smaller than the one for the downhill case. how can this be ?the explanation of this seeming paradox comes from the qualitative difference in the nature of these averages . in the first scenario the search reliability is unity ,that is , the walker always arrives at the target . in the uphill caseonly successful walkers count , that is , the average is conditional .this explains the seeming contradiction with common sense . as we can see from this discussionis that the ready choice as a measure for the search efficiency would state that the uphill motion is more efficient than the downhill one .this definition would obviously not make much sense .we show that our definition of the search efficiency , eq .( [ eff ] ) , is a reasonable measure in this case . with the use of eqs .( [ effint ] ) and ( [ pfabrownbias ] ) we find \exp\left(-vx_0/k_2\right ) , & v\geq0\end{array}\right .. \label{effbrowv}\ ] ] indeed , we see that our expression for the efficiency shows that the downhill motion is more efficient than going uphill for the same initial separation . 
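These statements are straightforward to verify by direct quadrature. We assume the time-domain first-arrival density of the biased Brownian case to be the image-method result referred to in the text and in app. [brownderiv] (cf. eq. (3.2.13) of Redner's book), p(t) = x0 (4 pi K t^3)^{-1/2} exp[-(x0 + v t)^2/(4 K t)], with v > 0 pointing away from the target; reliability, efficiency and the downhill mean search time then follow from one-dimensional integrals.

    import numpy as np
    from scipy.integrate import quad

    K = 1.0

    def p_fa(t, x0, v):
        """Assumed image-method first-arrival density for biased Brownian motion."""
        return x0 / np.sqrt(4 * np.pi * K * t**3) * np.exp(-(x0 + v * t)**2 / (4 * K * t))

    def reliability_and_efficiency(x0, v):
        P, _ = quad(p_fa, 0, np.inf, args=(x0, v))            # search reliability
        E, _ = quad(lambda t: p_fa(t, x0, v) / t, 0, np.inf)  # efficiency <1/t>
        return P, E

    x0 = 1.0
    for v in (-0.5, 0.0, +0.5):                               # downhill, none, uphill
        P, E = reliability_and_efficiency(x0, v)
        print(f"v = {v:+.1f}:  reliability P = {P:.3f}   efficiency E = {E:.3f}")

    # the downhill mean search time should reproduce the classical x0/|v|
    t_mean, _ = quad(lambda t: t * p_fa(t, x0, -0.5), 0, np.inf)
    print(f"downhill <t> = {t_mean:.3f}   (x0/|v| = {x0/0.5:.3f})")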
from numerical evaluation of eq .( [ effbrowv ] ) , for three values of the initial searcher - target separation : ( dotted black curve ) , ( red dashed curve ) , and ( blue continuous curve).,width=8 ] calculated from eq .( [ effbrowv ] ) for three values of the drift velocity : ( dotted black curve ) , ( blue dashed curve ) , and ( red continuous curve).,width=8 ] in figs .[ effbiasbrowx ] and [ effbiasbrowv ] the efficiency is plotted for different values of the drift velocity and the initial separation of searcher and target , respectively . as expected , the increase of the downhill velocity leads to an efficiency growth , and vice versa for the opposite case . by magnitude of the -dependence ,the decrease in the search efficiency for the uphill case is much more pronounced than the increase in efficiency for the downhill case .hence , the dependence on the initial distance becomes increasingly asymmetric . in the limit of a weak external biaswe can obtain analytical approximations for the search efficiency .namely , for sufficiently small values of the generalized pclet number and nonzero values of the laplace variable the denominators in both integrals of eq .( [ pfapeclet ] ) can be expanded into series .the first order expansion reads where we define the integrals appearing in expression ( [ pfaexp ] ) can be computed by use of the fox -function technique , as detailed in app .[ appgenexp ] . from the result ( [ pfaexpansiongen ] )we obtain the following expression for the search efficiency , , \label{efflevybias}\ ] ] where the first term in the square brackets corresponds to the result for the case without drift ( ) , eq .( [ efflevyflat ] ) . when , the brownian behavior in the small bias limit is recovered , namely , , consistent with the small expansion of eq .( [ effbrowv ] ) ) . from eq .( [ pfaexpansiongen ] ) it follows that for the brownian case , the first arrival density in the laplace domain has the approximate form which is valid for , i.e. , for short times .after transforming back to the time domain , we find as shown in app .[ brownderiv ] .this result corresponds to the short time and small pclet number limit of the general expression for reported by redner .thus our expansion ( [ pfaexpansiongen ] ) works only at short times .however , the approximate expression ( [ efflevybias ] ) for the search efficiency itself turns out to work remarkably well , as shown in fig .[ approxcomp1 ] . here , the behavior described by eq .( [ efflevybias ] ) is compared with results of direct numerical integration of eq .( [ pfa ] ) over .we see an almost exact match for an initial searcher - target separation .instead , for the agreement becomes worse ( not shown here ) .the explanation is due to the fact that for small initial separations short search times dominate the arrival statistic , while for the arrival is shifted to longer times , and the approximation underlying eq .( [ efflevybias ] ) does no longer work well . ) and from numerical integration of expression ( [ pfa ] ) over , for the initial searcher - target separation .results are shown for the cases of zero drift as well as uphill and downhill drift.,width=8 ] the presence of an external bias substantially changes the functional form of the efficiency as compared to the unbiased situation .[ effbiaslevy1 ] shows the search efficiency as function of the initial position for two different drift velocities and for a variety of values of the power - law exponent . 
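The direct numerical integration of eq. ([pfa]) mentioned above can be sketched as follows. We assume the kernel to have the form suggested by the expressions of app. [proofbias], namely pfa(s) = N(s)/D(s) with N(s) = Int dk [ (s+|k|^alpha) cos(k x0) - v k sin(k x0) ] / [ (s+|k|^alpha)^2 + v^2 k^2 ] and D(s) = Int dk (s+|k|^alpha) / [ (s+|k|^alpha)^2 + v^2 k^2 ]; this is our reading of eq. ([pfapeclet]) and is stated here as an assumption. The efficiency then follows from E = <1/t> = Int_0^inf pfa(s) ds, truncated to a finite s-grid; the parameter values are illustrative.

    import numpy as np
    from scipy.integrate import quad, trapezoid

    def pfa_laplace(s, alpha, x0, v):
        """Numerical evaluation of the assumed Laplace-domain first-arrival density."""
        den = lambda k: (s + k**alpha) / ((s + k**alpha)**2 + (v * k)**2)
        num_c = lambda k: (s + k**alpha) / ((s + k**alpha)**2 + (v * k)**2)
        num_s = lambda k: v * k / ((s + k**alpha)**2 + (v * k)**2)
        D, _ = quad(den, 0, np.inf, limit=200)
        Nc, _ = quad(num_c, 0, np.inf, weight='cos', wvar=x0, limit=200)
        Ns, _ = quad(num_s, 0, np.inf, weight='sin', wvar=x0, limit=200)
        return (Nc - Ns) / D

    def efficiency_laplace(alpha, x0, v, s_grid=np.logspace(-4, 3, 141)):
        """E = <1/t> = Int_0^inf pfa(s) ds, evaluated on a truncated log-grid."""
        vals = np.array([pfa_laplace(s, alpha, x0, v) for s in s_grid])
        return trapezoid(vals, s_grid)

    x0 = 5.0
    for v in (-0.05, 0.0, +0.05):
        for alpha in (1.5, 2.0):
            E = efficiency_laplace(alpha, x0, v)
            print(f"v = {v:+.2f}  alpha = {alpha:.2f}  E = {E:.4f}")

As a consistency check of the assumed kernel, the unbiased Brownian case (alpha = 2, v = 0) reduces to pfa(s) = exp(-x0 s^{1/2}), so that the truncated integral should come out close to 2/x0^2 for the values above.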
as expected from what we said before , the dependencies of the search efficiency with respect to positive and negative initial separations is asymmetric , as this corresponds to the difference between uphill and downhill cases elaborated above .increasing magnitude of the bias effects a more pronounced asymmetry between values with the same absolute value . for the downhill case the advantage of the brownian search over lf search persists for all values of .as expected , the efficiency drops , however , the general behavior is similar for all values . for the uphill case we observe a remarkable crossing of the curves . for small initial separations brownian search efficiency is highest . herethe activation barrier is sufficiently small such that the continuous brownian searcher without leapovers locates the target most efficiently .when goes to increasingly negative values , successively lfs with smaller values become more efficient . in terms of the efficiencywe see that for sufficiently large barriers , that is , when the target is initially separated by a considerable uphill distance from the searcher , lfs with smaller fare dramatically better than processes with larger . as a function of initial searcher - target distance with bias velocity ( black dashed line ) and without bias ( red continuous line).,height=241 ] we further illustrate the behavior of the search efficiency by studying its functional dependence on the stable index for different initial distances between searcher and target as well as for different drift velocities in fig . [ efflevyvaralphas ] .thus , for short initial separation shown in fig . [ efflevyvaralphas ] on the left , the brownian searcher is always the most efficient for all cases : unbiased , downhill , and uphill . for larger as shown in the right panel of fig .[ efflevyvaralphas ] , the situation changes : in the downhill case the brownian searcher still fares best . however , already in the unbiased case the lf searchers produce a higher efficiency .an interesting fact is the non - monotonicity of the behavior of the search efficiency , leading to an optimal value for the stable index , whose value depends on the strength of the bias ( and the initial separation ) .this is shifting towards the cauchy value for increasing uphill bias .we can also find a qualitative argument for the optimal value of the power - law index in the small bias limit . if we denote , then . in comparison to the unbiased case ,that is , the efficiency is reduced in the uphill case and increased in the downhill case , as it should be .moreover , the correction due to the bias is more pronounced for larger values of .hence the optimal necessarily shifts to larger values for the downhill case in comparison with the unbiased situation , and vice versa in the uphill case .this can be perfectly illustrated with fig .[ newplot ] .the plot shows that application of a bias apparently breaks a symmetry in terms of initial position of a searcher : optimal values increase for downhill side and drop for uphill side .thus , the range of values where the brownian motion is optimal is effectively shifted .we briefly mention a different way to approach the first arrival problem in terms of an implicit expression for the corresponding density . from eq .( [ pfa ] ) , by inverse laplace transform we find with the functions defined in app .[ implicitapp ] , we rewrite this relation in the form such that we arrive at the simple form in terms of the laplace transforms . 
this is a familiar form for the first passage density for continuous processes , and is also known for the first arrival of lfs . for numerical evaluation or small bias expansionsthis expression turns out to be useful .we generalized the prominent lvy flight model for the random search of a target to the case of an external bias .this bias could represent a choice of the searcher due to some prior experience , a bias in an algorithmic search space , or simply an underwater current or airflow . to compare the efficiency of this biased lf search for different initial searcher - target separations and values of the external bias, we introduced the search efficiency in terms of the mean of the inverse search time , .we confirmed that this measure is meaningful and in fact more consistent than the traditional definition in terms of the inverse mean search time , .as a second measure for the quality of the search process we introduced the search reliability , the cumulative arrival probability .when this measure is unity , the searcher will ultimately always locate the target . when it is smaller than unity , the searcher has a finite chance to miss the target . as shown here, high search reliability does not always coincide with a high search efficiency .depending on what we expect from a search process , either measure may be more relevant . in terms of the efficiency we saw that even in absence of a bias the optimal strategy crucially depends on the initial separation between the searcher and the target . for small brownian searcher is more efficient , as it can not overshoot the target . with increasing , however , the lf searcher needs a smaller number of steps to locate the target and thus becomes more efficient . in the presence of a biasthere is a strong asymmetry depending on the direction of the bias with respect to the initial location of the searcher and the target . for the downhill scenario the brownian searcher always fares better , as it is advected straight to the target while the lf searcher may dramatically overshoot the target in a leapover event and then needs to makes its way back to the target , against the bias .the observed behaviors can be non - monotonic , leading to an optimal value for the power - law exponent . for strong uphill bias and large initial separationthe optimal value is unity , for short separations and downhill scenarios the brownian limit is best .there exist optimal values for in the entire interval , depending on the exact parameters .the search reliability for a given value of solely depends on the generalized pclet number . for unbiased searchthe searcher will always eventually locate the target , that is , the search efficiency attains the value of unity . in the presence of a bias , unity is returned for the search reliability for a brownian searcher in the downhill case .it decays exponentially for the uphill case . for lf searchers with smaller than two, the probability of leapovers reduces the value of the search reliability in the downhill case . in the opposite , uphill scenariothe search reliability is larger for lf searchers compared to the brownian searcher .the absolute gain in this case , however , was found to be smaller than the loss to a brownian competitor in the downhill case . 
without prior knowledge of the biasa brownian search strategy may turn out to be overall advantageous .we also found a non - monotonicity of the search reliability as function of the initial searcher - target separation .it will be interesting to see whether our results for both the search efficiency and reliability under an external bias turn out similarly for periodic boundary conditions relevant for finite target densities .we note here that we analyzed lf search in one spatial dimension . what would be expected if the search space has more dimensions ? for regular brownian motion we know that it remains recurrent in two dimensions , that is , the sample path is space - filling in both one and two dimensions . on the other hand lfs with recurrent in one dimension but always transient in two dimensions .hence in two dimensions lfs will even more significantly reduce the oversampling of a brownian searcher . at the same time , however , the search reliability will go to zero . in both one and two dimensions ( linearly or radially )lfs are distinct due to the possibility of leapovers , owing to which the target localization may become less efficient than for brownian search .many search processes indeed fall in the category of ( effectively ) one or two dimensions .for example , they are one - dimensional in streams , along coastlines , or at forest - meadow and other borders . for ( relatively ) unbounded search processes as performed by birds or fish , the motion in the vertical dimension shows a much smaller span than the radial horizontal motion , and thus becomes effectively two dimensional .if we modify the condition of blind search and allow the walker to look out for prey while relocating , in one dimension this would obviously completely change the picture in favor of lfs with their long unidirectional steps .however , in two dimensions the radial leapovers would still impede the detection of the target unless it is exactly crossed during a step .so what remains of the lf hypothesis ?conceptually , it is certainly a beautiful idea : a scale - free process reduces oversampling and thus scans a larger domain .if the target has an extended width , for instance , a large school instead of a single fish , lfs will then optimize the search under certain criteria .however , even when the stable index is larger than unity , in two or three dimensions an lf may also never reach the target , due to its patchy albeit scale - free exploration of the search space .thus , even under lf - friendly conditions such as extremely sparse targets and/or uphill search , the superiority of lfs over other search models depends on the exact scenario .for instance , whether it is important that the target is eventually located with certainty , or whether in an ensemble of equivalent systems only sufficiently many members need a quick target localization , for instance , the triggering of some gene expression process responding to a lethal external signal in the cells of a biofilm .the lf hypothesis even in the case of blind search without any prior knowledge is therefore not universal , and depending on the conditions of initial searcher target separation or the direction of a naturally existing gradient with respect to the location of the target the regular brownian motion may be the best search strategy .lfs are most efficient under the worst case conditions of blind search for extremely sparse targets and , as shown here , for uphill motion . 
while very rare targets certainly exist in many scenarios, we should qualify the result for the uphill motion .the above uphill lf scenario holds for abstract processes such as the blind search of computer algorithms in complex landscapes or for the topology - mediated lfs in models of gene regulation . for the search of animals moving against a physical air or water stream ,however , we have to take into consideration that any motion against a gradient requires a higher energy expenditure .unless the gradient is very gentle , this aspect relativizes the lf hypothesis further .having said all this , one distinct advantage of spatially scale free search processes remains .namely , they are more tolerant to gradually shifting environmental conditions , for instance , a change in the target distribution , or when the searcher is exposed to a new patch with conditions unknown to him . this point is often neglected in the analysis of search processes .a more careful study of this point may in fact turn back the wheel in favor of lf search .it should be noted that lfs are processes with a diverging variance , and may therefore be considered unphysical .there exists the closely related superdiffusive model of lvy walks , they have a finite variance due to a spatiotemporal coupling introducing a finite travel velocity , compare , for instance , ref .this coupling penalizes long jumps .however , both models converge in the sense that the probability density function of a lvy walker displays a growing lvy stable portion in its center , limited by propagating fronts .the trajectory of such a lvy walk appears increasingly similar to an lf : local search interspersed by decorrelating long excursions .we expect that at least qualitatively our present findings remain valid for the case of lvy walks. it will be interesting to investigate this quantitative statement in more detail .vvp wishes to acknowledge financial support from deutsche forschungsgemeinschaft ( project no .pa 2042/1 - 1 ) as well as discussions with j. schulz about simulation of random variables , a. cherstvy for help with numerical methods in mathematica and r. klages for pointing out ref .rm acknowledges support from the academy of finland within the fidipro scheme .avch acknowledges daad for financial support .we here show how to consistently introduce dimensionless units in the fractional fokker - planck equation . if we denote the dimensionless time and position coordinate respectively by and , such that and with the dimensional parameters and defined below .then we can rewrite eq .( [ sinkffpe ] ) in the form note that we are dealing with dimensional density functions so that and thus equation ( [ a1 ] ) then assumes the form where .we choose the parameters and as where and correspond to the scaling factors of the jump length and waiting time distributions of the continuous time random walk .these appear in the fourier and laplace transforms of the jump length and waiting time densities .the respective expansions used in the derivation of the continuous time random walk model for lvy flights are the diffusion coefficient in the fractional fokker - planck equation is then expressed in terms of these parameters as we thus obtain the dimensionless dynamic equation ( [ ffpedimles ] ) .to obtain the search reliability via the relation one first integrates eq .( [ pfa ] ) over and then takes the limit . if and , both integrals in the numerator and denominator of eq .( [ pfa ] ) converge . 
in this and the next appendices we prove that for and .hence , which means that the searcher never reaches the target in the case .taking in eq .( [ pfa ] ) we have for the integral in the denominator diverges at infinity .let us consider the integral in the numerator .we notice that ) , \label{cosint}\ ] ] for .thus the numerator of expression ( [ numer ] ) converges for . since the denominator diverges in this range , for all values of .thus , the search reliability vanishes , , and the searcher never reaches its target . the limiting case needs separate attention .we observe that the integral ( [ cosint ] ) diverges .for , expression for .the integrals are which logarithmically diverges for , and the second term in eq .( [ cossin ] ) converges .thus , in the ratio over the divergent integral ( [ logint ] ) it can be neglected .the first term can be modified to when the second term in eq .( [ cosint ] ) is the cosine integral at , and it vanishes .altogether , for any finite /s)}=0,\end{aligned}\ ] ] which completes the proof .let us start from the cauchy case .the expression for the first arrival density follows from eq .( [ pfa ] ) , )-vk\sin(kx_0 ) \}\overline{\beth}dk}{\int_{-\infty}^{\infty}(s+|k|)\overline{\beth}dk } \label{alpha1ratio}\ ] ] with alternatively , ) -vk\sin(kx_0)\}\overline{\beth}dk}{\int_0^a(s+|k|)\overline{\beth}dk}. \label{c2}\ ] ] let us first consider the integral in the denominator , at only the first term is significant , and it diverges logarithmically .the integral in the numerator converges due to the oscillating functions in the integrands . with the diverging denominator and the converging numerator in eq .( [ c2 ] ) , we have that for any finite . the search reliability vanishes . for in eq .( [ pfa ] ) with we have , and the proof is analogous to the case just considered .without a bias , eq .( [ pfa ] ) takes on the form where the abbreviation is defined in eq .( [ aleph ] ) .the integral in the denominator yields the integral in the numerator can be obtained in terms of the fox -function technique . since , \label{xalphaplusone}\ ] ] we find that }{1+y^{\alpha}}dk=\frac{\sqrt{\pi}}{\alpha}\frac { 1}{sx_0^\alpha}h^{12}_{31}\left[\frac{2}{s^{1/\alpha}x_0}\left|\begin{array}{l } ( 1/2,1/2),(0,1/\alpha),(0,1/2)\\(0,1/\alpha)\end{array}\right.\right ] , \label{cosnumer}\ ] ] where we used eq .( [ xalphaplusone ] ) and the integral ( 2.25.2.4 ) from ref . . transforming the -function by help of the properties ( 1.3 ) and ( 1.5 ) from ref . we obtain /\alpha,1/\alpha ) \\(0,1/2),([\alpha-1]/\alpha,1/\alpha),(1/2,1/2)\end{array}\right.\right ] , \label{pfasolutionflat}\ ] ] using the properties of the laplace transform of the -function ( see chapter 2 in ref . ) we get the first arrival density in the time domain , /\alpha,1),(1/2,\alpha/2)\end{array}\right.\right ] .\label{pfatimedomain}\end{aligned}\ ] ] expansion of eq .( [ pfatimedomain ] ) in the long - time limit yields eq .( [ asymp ] ) , where /2)\gamma(2-\alpha ) \gamma(2 - 1/\alpha)}{\pi^2(\alpha-1 ) } \label{calpha}.\ ] ]it is instructive to obtain the well - known first arrival density in the brownian case directly from eq .( [ pfa ] ) . for , eq .( [ pfa ] ) assumes the form where the denominators in these integrals are quadratic polynomials in and hence can be rewritten as , where then both integrals can be easily calculated by the method of residues , and we arrive at eq .( [ pfabrownbias ] ) . 
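The divergence argument of this appendix can be made tangible numerically. Assuming the unbiased kernel aleph = 1/(s + |k|^alpha), as eq. ([aleph]) suggests, the numerator Int cos(k x0) aleph dk saturates at a finite value, while the denominator Int aleph dk grows without bound with the cutoff a once alpha <= 1; the ratio, and with it the search reliability, is therefore driven to zero, whereas for alpha > 1 it settles at a value close to one for small s. The cutoffs and the value of s below are illustrative.

    import numpy as np
    from scipy.integrate import quad

    S, X0 = 1e-3, 1.0

    def numerator(alpha, kmax=200.0):
        """Int_0^kmax cos(k X0) / (S + k^alpha) dk (oscillatory quadrature)."""
        val, _ = quad(lambda k: 1.0 / (S + k**alpha), 0, kmax,
                      weight='cos', wvar=X0)
        return val

    def denominator(alpha, cutoff):
        """Int_0^cutoff 1 / (S + k^alpha) dk; diverges with the cutoff for alpha <= 1."""
        val, _ = quad(lambda k: 1.0 / (S + k**alpha), 0, cutoff, limit=500)
        return val

    for alpha in (0.8, 1.0, 1.5):
        num = numerator(alpha)
        ratios = [num / denominator(alpha, a) for a in (1e2, 1e4, 1e6)]
        print(f"alpha = {alpha}: ratio at cutoff 1e2, 1e4, 1e6 ->",
              "  ".join(f"{r:.3f}" for r in ratios))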
at short times ( )( [ pfabrownbias ] ) yields this result can be obtained by first expanding eq .( [ brown ] ) at small pclet numbers and then integrating each of the terms over .indeed , from eq .( [ brown ] ) we get with the three integrals in eq .( [ pfapecletbr ] ) become which yields the result ( [ pfaapproxbrownian ] ) .finally , we note that the inverse laplace transform of eq .( [ pfabrownbias ] ) leads to the expression in time domain , this result coincides with the solution obtained by either the green s function technique or the images method in ref . ( see eq .( 3.2.13 ) there ) .we show here how expansion ( [ pfaexp ] ) is obtained in terms of -functions .two out of three integrals in eq .( [ pfaexp ] ) were computed above in app .[ hfuncv0 ] as eqs .( [ v0denom ] ) and ( [ cosnumer ] ) .the last unknown integral from expression ( [ pfaexp ] ) can be computed in a similar way , \\ & = & \frac{8\mathrm{pe}_{\alpha}\sqrt{\pi}}{\alpha}\left(sx_0^{\alpha}\right)^2 h^{12}_{31}\left[2s^{1/\alpha}x_0\left|\begin{array}{l}(-1/2,1/2),(-1,1/\alpha ) , ( 0,1/2)\\(0,1/\alpha)\end{array}\right.\right].\end{aligned}\ ] ] with these results we obtain the following expression in laplace space , \right.\\ & & \left.-2^{{2-\alpha}}\mathrm{pe}_{\alpha}h^{12}_{31}\left[\frac{2}{s^{1/\alpha } x_0}\left|\begin{array}{l}(\alpha/2,1/2),(1/\alpha,1/\alpha),([\alpha+1]/2,1/2)\\ ( [ \alpha+1]/\alpha,1/\alpha)\end{array}\right.\right]\right ) .\label{pfaexpansiongen}\end{aligned}\ ] ] inverse laplace transform of eq .( [ pfaexpansiongen ] ) yields /\alpha,1),(1/2,\alpha/2)\end{array}\right.\right]\right.\\ & & \left.-\mathrm{pe}_{\alpha}h^{21}_{23}\left[\frac{x_0^{\alpha}}{2^{\alpha}t}\left| \begin{array}{l}(-1/\alpha,1),(0,1)\\(1-\alpha/2,\alpha/2),([\alpha-1]/\alpha,1 ) , ( [ 1-\alpha]/2,\alpha/2)\end{array}\right.\right]\right ) .\label{pfaexpansiongent}\end{aligned}\ ] ]we represent the first arrival density from eq .( [ pfaexpansiongen ] ) as , where the first and second contribution correspond to the first and second terms in the expression ( [ pfaexpansiongen ] ) .then for the order of -function is reduced by use of the properties 1.2 and 1.3 from chapter 1 in ref . ) as well as its definition via the mellin transform . this procedure yields (1/2,1/2 ) \end{array}\right.\right]=\frac{1}{2\sqrt{\pi}}h^{02}_{20}\left[\frac{2k_2^{1/2 } } { s^{1/2}x_0}\left|\begin{array}{l}(1,1/2),(1/2,1/2)\\[0.2cm]\rule{1.2cm}{0.02 cm } \end{array}\right.\right]=\\ & = & \frac{1}{2\sqrt\pi}h^{20}_{02}\left[\frac{s^{1/2}x_0}{2k_2^{1/2}}\left| \begin{array}{l}\rule{1.2cm}{0.02cm}\\[0.2cm](0,1/2),(1/2,1/2)\end{array}\right . 
\right]=h^{10}_{01}\left[\frac{s^{1/2}x_0}{k_2^{1/2}}\left|\begin{array}{l } \rule{1.2cm}{0.02cm}\\[0.2cm](0,1)\end{array}\right.\right]=\exp\left(- \frac{s^{1/2}{x_0}}{k_2^{1/2}}\right).\end{aligned}\ ] ] similar steps for lead to the result (3/2,1/2)\end{array } \right.\right]=-\mathrm{pe}_2\exp\left(-\frac{s^{1/2}{x_0}}{k_2^{1/2}}\right).\end{aligned}\ ] ] thus , which is the expansion of the general solution in the brownian case , expression ( [ pfaapproxbrownian ] ) .the same result can be obtained by calculations in -space (0,1),(1/2,1),(1/2,1)\end{array}\right .\right]=\frac{x_0}{\sqrt{4\pi t^3}}\exp\left(-\frac{x_0 ^ 2}{4k_2t}\right),\end{aligned}\ ] ] and (0,1),(1/2,1),(-1/2,1 ) \end{array}\right.\right]=-\frac{\mathrm{pe}_2x_0}{\sqrt{4\pi t^3}}\exp\left(- \frac{x_0 ^ 2}{4k_2t}\right).\end{aligned}\ ] ]the expressions for and in eq .( [ alter ] ) can be obtained by help of standard properties of -function and the identification for the exponential function , (0,1 ) \end{array}\right.\right].\ ] ] consequently , (0,1)\end{array}\right.\right]dk = \frac{\sqrt{\pi}}{|vt|}h^{11}_{21}\left[t\left(\frac{2}{|vt|}\right)^{\alpha } \left|\begin{array}{l}(1/2,\alpha/2),(0,\alpha/2)\\[0.2cm](0,1)\end{array}\right .\right]\end{aligned}\ ] ] and (0,1)\end{array}\right.\right ] .\label{h2tapp}\end{aligned}\ ] ] to construct the expression ( [ ratio ] ) for the first arrival density , we need the laplace transforms of the functions . for find (1/2 , \alpha/2),(1,\alpha/2)\end{array}\right.\right]\right\}\\ & = & \frac{\sqrt{\pi}}{2}s^{1/\alpha-1}h^{12}_{22}\left[s^{1-\alpha}\left(\frac{|v| } { 2}\right)^{\alpha}\left|\begin{array}{l}(1/\alpha,\alpha-1),(1 - 1/\alpha,1)\\[0.2 cm ] ( 0,\alpha/2),(1/2,\alpha/2)\end{array}\right.\right]\end{aligned}\ ] ] using the expansion of the -function at small arguments we find at which is exactly the same result as one can get by direct computation of the integral and subsequent laplace transform . at from eq .( [ h2tapp ] ) we get by direct laplace transform (0,1)\end{array } \right.\right ] = \frac{\sqrt{\pi}}{\alpha x_0s}h^{12}_{31}\left[\frac{2}{s^{1/\alpha}x_0}\left| \begin{array}{l}(0,1/\alpha),(1/2,1/2),(0,1/2)\\[0.2cm](0,1/\alpha)\end{array } \right.\right]\end{aligned}\ ] ] and hence (0,1/\alpha)\end{array}\right.\right]\\ & = & \frac{\sqrt{\pi}}{2\gamma(1/\alpha)\gamma(1 - 1/\alpha ) } h^{12}_{31}\left[\frac{2}{s^{1/\alpha}x_0}\left| \begin{array}{l}(1/\alpha,1/\alpha),(1,1/2),(1/2,1/2)\\[0.2cm](1/\alpha,1/\alpha ) \end{array}\right.\right].\end{aligned}\ ] ] we see that this expression is different from eq .( [ pfasolutionflat ] ) in the order of the first two brackets in the top row of the -function .however , these brackets can be exchanged due to property 1.1 of the -function in ref .thus , the -function solution for the unbiased case ( ) is obtained correctly .now let us derive the result for any in the limit .for that purpose we note that .\end{aligned}\ ] ] for the reduction formula for -functions ( property 1.2 of ref . ) yields where we restored the brownian diffusivity .alternatively , \right)\ ] ] since , \right).\ ] ] inverse laplace transform of the latter relation produces eq .( [ alpha2 ] ) .j. elf , g. w. li , and x. s. xie , science * 316 * , 1191 ( 2007 ) ; p. hammar et al , _ ibid . _ * 336 * , 1595 ; m. bauer and r. metzler , plos one * 8 * , e53956 ( 2013 ) ; o. pulkkinen and r. metzler , phys .lett . * 110 * , 198101 ( 2013 ) ; t. e. kuhlman and e. c. cox , mol .* 8 * , 610 ( 2012 ) .g. e. viswanathan , m. g. e. da luz , e. p. 
raposo , and h. e. stanley , the physics of foraging .an introduction to random searches and biological encounters ( cambridge university press , new york , ny , 2011 ) .o. bnichou , m. coppey , m. moreau , p. h. suet , and r. voiturierz , phys .94 , 198101 ( 2005 ) ; c. loverdo , o. bnichou , m. moreau , and r. voiturierz , nature phys . 4 , 134 ( 2008 ) ; compare also s. benhamou , ecology 88 , 1962 ( 2007 ) ; a. reynolds , physica a 388 , 561 ( 2009 ) .a. v. chechkin , o. yu .sliusarenko , r. metzler , and j. klafter , phys rev .e * 75 * 041101 ( 2007 ) ; o. yu .sliusarenko , v. yu .gonchar , a. v. chechkin , i. m. sokolov , and r. metzler , ibid . *81 * , 041119 ( 2010 ) .
|
We study the efficiency of random search processes based on Lévy flights with power-law distributed jump lengths in the presence of an external drift, for instance an underwater current, an airflow, or simply a bias of the searcher based on prior experience. While Lévy flights turn out to be efficient search processes when the target lies upstream of the starting point, regular Brownian motion is advantageous in the downstream scenario. This is caused by the occurrence of leapovers, due to which Lévy flights typically overshoot a point-like or small target. Extending our recent work on biased Lévy flight search [V. V. Palyulin, A. V. Chechkin, and R. Metzler, Proc. Natl. Acad. Sci. USA *111*, 2931 (2014)], we establish criteria for when the combination of the external stream and the initial distance between the starting point and the target favors Lévy flights over regular Brownian search. Contrary to the common belief that Lévy flights with Lévy index α = 1 (i.e., Cauchy flights) are optimal for sparse targets, we find that the optimal value of α may range over the entire interval 1 ≤ α ≤ 2 and include Brownian motion as the overall most efficient search strategy.
|
entanglement , arguably the most spectacular and counterintuitive manifestation of quantum mechanics , is observed in composite quantum systems .it signifies the existence of non - local correlations between measurements performed on particles that have interacted in the past , but now are located arbitrarily far away .we say that a two - particle state is entangled , or non - separable , if it can not be written as a simple tensor product of two states which describe the first and the second subsystems , respectively , but only as a superposition of such states : . when two systems are entangled , it is not possible to assign them individual state vectors .the intriguing non - classical properties of entangled states were clearly illustrated by einstein , podolsky and rosen ( epr ) in 1935 .these authors showed that quantum theory leads to a contradiction , provided that we accept ( i ) the reality principle : if we can predict with certainty the value of a physical quantity , then this value has physical reality , independently of our observation ; is an eigenstate of an operator , namely , , then the value of the observable is , using the epr language , an element of physical reality . ]( ii ) the locality principle : if two systems are causally disconnected , the result of any measurement performed on one system can not influence the result of a measurement performed on the second system ., where and are the space and time separations of the two events in some inertial reference frame and is the speed of light ( the two events take place at space - time coordinates and , respectively , and , ) . ]the epr conclusion was that quantum mechanics is an incomplete theory .the suggestion was that measurement is in reality a deterministic process , which merely appears probabilistic since some degrees of freedom ( hidden variables ) are not precisely known .of course , according to the standard interpretation of quantum mechanics there is no contradiction , since the wave function is not seen as a physical object , but just as a mathematical tool , useful to predict probabilities for the outcome of experiments .the debate on the physical reality of quantum systems became the subject of experimental investigation after the formulation , in 1964 , of bell s inequalities .these inequalities are obtained assuming the principles of realism and locality .since it is possible to devise situations in which quantum mechanics predicts a violation of these inequalities , any experimental observation of such a violation excludes the possibility of a local and realistic description of natural phenomena . 
in short, bell showed that the principles of realism and locality lead to experimentally testable inequality relations in disagreement with the predictions of quantum mechanics .many experiments have been performed in order to check bell s inequalities ; the most famous involved epr pairs of photons and was performed by aspect and coworkers in 1982 .this experiment displayed an unambiguous violation of a bell s inequality by tens of standard deviations and an excellent agreement with quantum mechanics .more recently , other experiments have come closer to the requirements of the ideal epr scheme and again impressive agreement with the predictions of quantum mechanics has always been found .nonetheless , there is no general consensus as to whether or not these experiments may be considered conclusive , owing to the limited efficiency of detectors .if , for the sake of argument , we assume that the present results will not be contradicted by future experiments with high - efficiency detectors , we must conclude that nature does not experimentally support the epr point of view . in summary ,the world is not locally realistic .i should stress that there is more to learn from bell s inequalities and aspect s experiments than merely a consistency test of quantum mechanics .these profound results show us that _ entanglement is a fundamentally new resource _ , beyond the realm of classical physics , and that it is possible to experimentally manipulate entangled states . a major goal of quantum information science is to exploit this resource to perform computation and communication tasks beyond classical capabilities .entanglement is central to many quantum communication protocols , including quantum dense coding , which permits transmission of two bits of classical information through the manipulation of only one of two entangled qubits , and quantum teleportation , which allows the transfer of the state of one quantum system to another over an arbitrary distance .moreover , entanglement is a tool for secure communication .finally , in the field of quantum computation entanglement allows algorithms exponentially faster than any known classical computation . for any quantum algorithm operating on pure states , the presence of multipartite( many - qubit ) entanglement is necessary to achieve an exponential speedup over classical computation .therefore the ability to control high - dimensional entangled states is one of the basic requirements for constructing quantum computers .random numbers are important in classical computation , as probabilistic algorithms can be far more efficient than deterministic ones in solving many problems .randomness may also be useful in quantum computation ._ random pure states _ of dimension are drawn from the uniform ( haar ) measure on pure states is the unique measure on pure -level states invariant under unitary transformations .it is a uniform , unbiased measure on pure states . for a single qubit ( ), it can be simply visualized as a uniform distribution on the _ bloch sphere _the generic state of a qubit may be written as where the states of the _ computational basis _ are eigenstates of the pauli operator .the qubit s state can be represented by a point on a sphere of unit radius , called the bloch sphere .this sphere is parametrized by the angles and can be embedded in a three - dimensional space of cartesian coordinates .+ we also point out that ensembles of _ random mixed states _ are reviewed in . 
]the entanglement content of random pure quantum states is almost maximal and such states find applications in various quantum protocols , like superdense coding of quantum states , remote state preparation , and the construction of efficient data - hiding schemes .moreover , it has been argued that random evolutions may be used to characterize the main aspects of noise sources affecting a quantum processor .finally , random states may form the basis for a statistical theory of entanglement .while it is very difficult to characterize the entanglement properties of a many - qubit state , a simplified theory of entanglement might be possible for random states .the preparation of a random state or , equivalently , the implementation of a random unitary operator mapping a fiducial -qubit initial state , say , onto a typical ( random ) state , requires a number of elementary one- and two - qubit gates exponential in the number of qubits , thus becoming rapidly unfeasible when increasing . on the other hand , pseudo - random states approximating to the desired accuracythe entanglement properties of true random states may be generated efficiently , that is , polynomially in . in a sense ,pseudo - random states play in quantum information protocols a role analogous to pseudo - random numbers in classical information theory .random states can be efficiently approximated by means of random one- and two - qubit unitaries or by deterministic dynamical systems ( maps ) in the regime of quantum chaos .these maps are known to exhibit certain statistical properties of random matrices and are efficient generators of multipartite entanglement among the qubits , close to that expected for random states .note that in this case deterministic instead of random one- and two - qubit gates are implemented , the required randomness being provided by deterministic chaotic dynamics . a related crucial question , which i shall discuss in this review , is whether the generated entanglement is robust when taking into account unavoidable noise sources affecting a quantum computer . that is , decoherence or imperfections in the quantum computer hardware , that in general turn pure states into mixtures , with a corresponding loss of quantum coherence and entanglement content .this paper reviews previous work concerning several aspects of the relationship between entanglement , randomness and chaos .in particular , i will focus on the following items : ( i ) the robustness of the entanglement generated by quantum chaotic maps when taking into account the unavoidable noise sources affecting a quantum computer ( sec .[ sec : stabilitymultipartite ] ) ; ( ii ) the detection of the entanglement of high - dimensional ( mixtures of ) random states , an issue also related to the question of the emergence of classicality in coarse grained quantum chaotic dynamics ( sec .[ sec : detect ] ) ; ( iii ) the decoherence induced by the coupling of a system to a chaotic environment , that is , by the entanglement established between the system and the environment ( sec .[ sec : chaoticenvironments ] ) . 
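as a concrete illustration of the haar measure invoked above, the short python sketch below draws random pure states by normalizing vectors of independent complex gaussian amplitudes — a standard recipe that reproduces the unitarily invariant measure — and checks, for a single qubit, that the resulting bloch vectors cover the bloch sphere uniformly. the sample sizes are arbitrary.

import numpy as np

rng = np.random.default_rng(1)

def haar_random_states(d, n_samples):
    """rows are pure states drawn from the unitarily invariant (haar) measure on C^d."""
    z = rng.standard_normal((n_samples, d)) + 1j * rng.standard_normal((n_samples, d))
    return z / np.linalg.norm(z, axis=1, keepdims=True)

# single-qubit check: for haar-random states the bloch vector is uniform on the sphere,
# so its z component is uniform in [-1, 1] (mean 0, second moment 1/3)
psi = haar_random_states(2, 200000)
z = np.abs(psi[:, 0])**2 - np.abs(psi[:, 1])**2
print("<z> =", z.mean(), "  <z^2> =", (z**2).mean())

# the same recipe gives a random n-qubit state, here n = 8 (d = 256)
psi8 = haar_random_states(2**8, 1)[0]
print("norm of the 8-qubit state:", np.linalg.norm(psi8))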
in order to make this paper accessible also to readers without a background in quantum information science and/or in quantum chaos , basic concepts and tools concerning bipartite and multipartite entanglement ,random and pseudo - random quantum states and quantum chaos maps are discussed in the remaining sections and appendixes .in this section , we show that _ for pure states _ a good measure of bipartite entanglement exists : the von neumann entropy of the reduced density matrices . given a state described by the density matrix ,its von neumann entropy is defined as note that , here as in the rest of the paper , all logarithms are base unless otherwise indicated .first of all , a few definitions are needed : _ entanglement cost _ : let us assume that two communicating parties , alice and bob , share many einstein - podolsky - rosen ( epr ) pairs ] and that they wish to prepare a large number of copies of a given bipartite pure state , using only local operations and classical communication . if we call the minimum number of epr pairs necessary to accomplish this task , we define the entanglement cost as the limiting ratio , for ._ distillable entanglement _ : let us consider the reverse process ; that is , alice and bob share a large number of copies of a pure state and they wish to concentrate entanglement , again using only local operations supplemented by classical communication .if denotes the maximum number of epr pairs that can be obtained in this manner , we define the distillable entanglement as the ratio in the limit .it is clear that .otherwise , we could employ local operations and classical communication to create entanglement , which is a non - local , purely quantum resource ( it would be sufficient to prepare states from epr pairs and then distill epr states ) . furthermore , it is possible to show that , asymptotically in , the entanglement cost and the distillable entanglement coincide and that the ratios and are given by the reduced single - qubit von neumann entropies .indeed , we have where and are the von neumann entropies of the reduced density matrices and , respectively .therefore , the process that changes copies of into copies of is asymptotically reversible .moreover , it is possible to show that it is faithful ; namely , the change takes place with unit fidelity when . provides a measure of the distance between two , generally mixed , quantum states and : the fidelity of a pure state and an arbitrary state is given by which is the square root of the overlap between and . finally , the fidelity of two pure quantum states and is defined by , with when coincides with and when and are orthogonal .for further discussions on this quantity see , e.g. , .the average fidelity between random states is studied in . ]the proof of this result can be found in .we can therefore quantify the entanglement of a bipartite pure state as it ranges from for a separable state to for maximally entangled two - qubit states ( the epr states ) .hence , it is common practice to say that the entanglement of an epr pair is _ ebit_. the fact that is easily derived from the schmidt decomposition . _ the schmidt decomposition theorem _ : given a pure state of a bipartite quantum system , there exist orthonormal states for and for such that with positive real numbers satisfying the condition ( for a proof of this theorem see , e.g. 
, ) .it is important to stress that the states and depend on the particular state that we wish to expand .the reduced density matrices and have the same non - zero eigenvalues .their number is also the number of terms in the schmidt decomposition ( [ schmidtdec ] ) and is known as the _ schmidt number _ ( or the _ schmidt rank _ ) of the state . a separable pure state , which by definition can be written as has schmidt number equal to one .thus , we have the following entanglement criterion : a bipartite pure state is entangled if and only if its schmidt number is greater than one .for instance , the schmidt number of the epr state ( [ eprstate ] ) is .it is clear from the schmidt decomposition ( [ schmidtdec ] ) that if , and denote the dimensions of the hilbert spaces , and , with and , we have a maximally entangled state of two subsystems has equally weighted terms in its schmidt decomposition and therefore its entanglement content is ebits .for instance , the epr state ( [ eprstate ] ) is a maximally entangled two - qubit state .note that a maximally entangled state leads to a maximally mixed state .the purity of state described by the density matrix is defined as we have the purity is much easier to investigate analytically than the von neumann entropy .moreover , it provides the first non - trivial term in a taylor series expansion of the von neumann entropy about its maximum ., with and , we obtain (\rho_a).\ ] ] ] the purity ranges from for maximally entangled states to for separable states .one can also consider the _ participation ratio _ , which is the inverse of the purity .this quantity is bounded between and and is close to if a single term dominates the schmidt decomposition ( [ schmidtdec ] ) , whereas if all terms in the decomposition have the same weight ( ) .the participation ratio represents the effective number of terms in the schmidt decomposition .a natural extension of the discussion of this section is to consider _ bipartite mixed states _ , ) , instead of pure states .however , mixed - state entanglement is not as well understood as pure - state bipartite entanglement and is the focus of ongoing research ( for a review , see , e.g. , refs . ) . by definition , a ( generally mixed ) stateis said to be separable if it can be prepared by two parties ( alice and bob ) in a `` classical '' manner ; that is , by means of local operations and classical communication .this means that alice and bob agree over the phone on the local preparation of the two subsystems and .therefore , a mixed state is separable if and only if it can be written as where and are density matrices for the two subsystems .a separable system always satisfies bell s inequalities ; that is , it only contains classical correlations .given a density matrix , it is in general a non - trivial task to prove whether a decomposition as in ( [ sepdecomposition ] ) exists or not .we therefore need separability criteria that are easier to test .two useful tools for the detection of entanglement , the peres criterion and entanglement witnesses , are reviewed in appendix [ app : separability ] .a simple argument helps understanding why the bipartite entanglement content of a pure random state is almost maximal . in a given basis the density matrix for the state is written as follows : where are the components of the state in the basis . in the case of a random state the components are uniformly distributed , with amplitudes and random phases . 
here is the hilbert space dimension and the value of the amplitudes ensures that the wave vector is normalized . the density matrix can therefore be written as where is a zero diagonal matrix with random complex matrix elements of amplitude .suppose now that we partition the hilbert space of the system into two parts , and , with dimensions and , where .without loss of generality , we take the first subsystem , , to be the one whose dimension is not larger : . the reduced density matrix is defined as follows : .using eq .( [ eq : rhototrandom ] ) , we obtain where is a zero diagonal matrix with matrix elements of ( sum of terms of order with random phases ) . neglecting in ( [ eq : rhoa ] ) ,the reduced von neumann entropy of subsystem is given by , the maximum entropy that the subsystem can have .the exact mean value of the bipartite entanglement is given by page s formula , obtained by considering the ensemble of random pure states drawn according to the haar measure on : where denotes the ( ensemble ) average over the uniform haar measure .for , is close to its maximum value .note that , if we fix and let , then tends to .remarkably , if we consider the _ thermodynamic limit _ , that is , we fix and let , then the reduced von neumann entropy concentrates around its average value ( [ epage ] ) .this is a consequence of the so - called _ concentration of measure _ phenomenon : the uniform measure on the -sphere in ( parametrized , for instance , by angles in the hurwitz parametrization ) concentrates very strongly around the equator when is large : any polar cap smaller than a hemisphere has relative volume exponentially small in .this observation implies , in particular , the concentration of the entropy of the reduced density matrix around its average value .this in turn implies that when the dimension of the quantum system is large it is meaningful to apply statistical methods and discuss typical ( entanglement ) behavior of random states , in the sense that almost all random states behave in essentially the same way . for random states ,the average value of the purity of the reduced density matrices and is given by lubkin s formula : note that , if we fix and let , then tends to its minimum value . if we fix and let , then . for large , the variance so that the relative standard deviation tends to zero in the thermodynamic limit ( at fixed ) . for _ balanced bipartition _ , corresponding to , we have note that the fact that when is again a consequence of the concentration of measure phenomenon . a derivation of eqs .( [ eq : lubkin ] ) and ( [ eq : variancelubkin ] ) is presented in appendix [ app : lubkin ] .the generation of a random state is exponentially hard . indeed, starting from a fiducial -qubit state one needs to implement a typical ( random ) unitary operator ( drawn from the haar measure on ) to obtain .since is determined by real parameters ( for instance , the angles of the hurwitz parametrization ) , its generation requires a sequence of elementary one- and two - qubit gates whose length grows exponentially in the number of qubits .thus , the generation of random states is unphysical for a large number of qubits . on the other hand, one can consider the generation of pseudo - random states that could reproduce the entanglement properties of truly random states . in refs .
it has been proven that the average entanglement of a typical state can be reached to a fixed accuracy within elementary quantum gate .this proof holds for a random circuit such that is the product , of a sequence of two - qubit gates independently chosen at each step as follows : * a pair of integers , with is chosen uniformly at random from ; * single - qubit gates ( unitary transformations ) and , drawn independently from the haar measure on , are applied ; * a gate with control qubit and target qubit is applied .gate acts on the states latexmath:[ ] , {0,x}=[m^{(2)}_{ij}]_{x,0}=0 ] for .the matrix has an eigenvalue equal to ( with multiplicity ) and all the other eigenvalues smaller than .the eigenspace corresponding to the unit eigenvalue of matrix is spanned by the column vectors the asymptotic equilibrium state is however uniquely determined by the constraints and , which impose ^ 2=\frac{1}{n^2},\;\ ; x_1\equiv \sum_{\alpha_0, ... ,\alpha_{n_q-1}\ne ( 0, ... ,0 ) } [ c_t^{(\alpha_0, ... ,\alpha_{n_q-1})}]^2=\frac{n-1}{n^2}.\ ] ] finally , we obtain and , after substitution of the components of this vector into eq .( [ puritypauli ] ) , ,\ ] ] which immediately leads to lubkin s formula ( [ eq : lubkin ] ) .the characterization and quantification of multipartite entanglement is a challenging open problem in quantum information science and many different measures have been proposed . to grasp the difficulty of the problem ,let us suppose to have parties composing the system we wish to analyze . in order to obtain a complete characterization of multipartite entanglement , we should take into account all possible non - local correlations among all parties .it is therefore clear that the number of measures needed to fully quantify multipartite entanglement grows exponentially with the number of qubits . therefore , in ref . it has been proposed to characterize multipartite entanglement by means of a function rather than with a single measure .the idea is to look at the probability density function of bipartite entanglement between all possible bipartitions of the system . for pure statesthe bipartite entanglement is the von neumann entropy of the reduced density matrix of one of the two subsystems : .it is instructive to consider the smallest non - trivial instance where multipartite entanglement can arise : the three - qubit case .here we have three possible bipartitions , with qubit and qubits . for a ghz state , we obtain for all bipartitions , and therefore namely there is maximum multipartite entanglement , fully distributed among the three qubits . of qubits ,maximally multipartite entangled states , that is , pure states for which the entanglement is maximal for each bipartition , is discussed in . ]note that in this case for all bipartitions , and therefore where , namely the distribution is peaked but the amount of multipartite entanglement is not maximal .as a last three - qubit example , let us consider the state latexmath:[\[|\psi\rangle=\frac{1}{\sqrt{2}}(|000\rangle+ ( bell ) state , while the third one is factorized .in this case , if subsystem is one of the first two qubits , otherwise . hence , namely the entanglement can be large but the variance of the distribution is also large . for sufficiently large systems ( ) , it is reasonable to consider only balanced bipartitions , i.e. 
, with ( ) , since the statistical weight of unbalanced ones ( ) becomes negligible .if the probability density has a large mean value ( denotes the average over balanced bipartitions ) and small relative standard deviation , we can conclude that genuine multipartite entanglement is almost maximal ( note that is bounded within the interval ] ( we set ) and the discrete time measures the number of map iterations . in the following iwill always consider map ( [ eq : quantmap ] ) on the torus , , where . with an -qubit quantum computer we are able to simulate the quantum sawtooth map with levels ; as a consequence , takes equidistant values in the interval , while ranges from to ( thus setting ) .we are in the quantum chaos regime for map ( [ eq : quantmap ] ) when or ; in particular , in the following i will focus on the case .there exists an efficient quantum algorithm for simulating the quantum sawtooth map .the crucial observation is that the operator in eq .[ eq : quantmap ] can be written as the product of two operators : and , that are diagonal in the - and in the -representation , respectively .therefore , the most convenient way to classically simulate the map is based on the forward - backward fast fourier transform between and representations , and requires operations per map iteration . on the other hand, quantum computation exploits its capacity of vastly parallelize the fourier transform , thus requiring only one- and two - qubit gates to accomplish the same task . in brief , the resources required by the quantum computer to simulate the sawtooth map are only logarithmic in the system size , thus admitting an exponential speedup , as compared to any known classical computation .the sawtooth map and the quantum algorithm for its simulation are discussed in details in appendix [ sec : sawmap ] .let us first compute the average bipartite entanglement as a function of the number of iterations of map ( [ eq : quantmap ] ) .numerical data in fig .[ fig : entgen ] exhibit a fast convergence , within a few kicks , of this quantity to the value expected for a random state according to page s formula ( note that this result is obtained from eq .( [ epage ] ) in the special case ) . precisely , as shown in the inset of fig .[ fig : entgen ] , converges exponentially fast to , with the time scale for convergence .therefore , the average entanglement content of a true random state is reached to a fixed accuracy within map iterations , namely quantum gates .i stress that in our case a deterministic map , instead of random one- and two - qubit gates as in ref . , is implemented .of course , since the overall hilbert space is finite , the above exponential decay in a deterministic map is possible only up to a finite time and the maximal accuracy drops exponentially with the number of qubits .i also note that , due to the quantum chaos regime , properties of the generated pseudo - random state do not depend on initial conditions , whose characteristics may even be very far from randomness ( e.g. , simulations of fig . [fig : entgen ] , start from a completely disentangled state ) . 
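the procedure just described can be condensed into a short python sketch: the floquet operator ( [ eq : quantmap ] ) is applied to a momentum eigenstate by switching between the momentum and angle representations (an explicit unitary change of basis is used here instead of the fft, for clarity), and the entropies of all balanced bipartitions of the qubit register are compared with the asymptotic random-state (page) value n_q/2 - 1/(2 ln 2). the kick strength K = 1.5, the torus with L = 1, the phase conventions of the floquet operator and the number of iterations are assumptions made for the illustration, not the parameters of the figures.

import numpy as np
from itertools import combinations

nq = 8                          # number of qubits (assumption)
N = 2**nq                       # number of levels of the sawtooth map
L = 1                           # torus size (assumption)
T = 2.0 * np.pi * L / N         # effective planck constant
K = 1.5                         # rescaled kick strength, K > 0 -> chaotic (assumption)
k = K / T

n = np.arange(N) - N // 2                          # momentum eigenvalues
theta = 2.0 * np.pi * np.arange(N) / N             # angle grid
F = np.exp(1j * np.outer(theta, n)) / np.sqrt(N)   # momentum -> angle change of basis

def sawtooth_kick(psi):
    """one iteration of the quantum sawtooth map (standard phase conventions assumed)."""
    psi = np.exp(-0.5j * T * n**2) * psi                 # free rotation, diagonal in momentum
    psi = F @ psi                                        # to the angle representation
    psi = np.exp(0.5j * k * (theta - np.pi)**2) * psi    # sawtooth kick, diagonal in angle
    return F.conj().T @ psi                              # back to the momentum representation

def balanced_entropies(psi):
    """von neumann entropies (in bits) of all balanced bipartitions of the register."""
    tensor = psi.reshape((2,) * nq)
    result = []
    for subset in combinations(range(nq), nq // 2):
        rest = tuple(q for q in range(nq) if q not in subset)
        m = tensor.transpose(subset + rest).reshape(2**(nq // 2), -1)
        lam = np.linalg.svd(m, compute_uv=False)**2
        lam = lam[lam > 1e-12]
        result.append(float(-(lam * np.log2(lam)).sum()))
    return result

psi = np.zeros(N, complex)
psi[N // 2] = 1.0                       # momentum eigenstate n = 0: a product state
for _ in range(30):
    psi = sawtooth_kick(psi)

entropies = balanced_entropies(psi)
page = nq / 2 - 1.0 / (2.0 * np.log(2.0))   # asymptotic value for a balanced bipartition
print("mean entropy over balanced bipartitions:", np.mean(entropies))
print("page estimate:", page)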
) , and recursively applying the quantum sawtooth map ( [ eq : quantmap ] ) at and , from bottom to top , .dashed lines show the theoretical values of eq .( [ eq : entpure ] ) .inset : convergence of to the asymptotic value in eq .( [ eq : entpure ] ) ; time axis is rescaled with .this figure is taken from ref ..,width=377 ] as discussed above , multipartite entanglement should generally be described in terms of a function , rather than by a single number .i therefore show in fig [ fig : isto_eps0 ] the probability density function for the entanglement of all possible balanced bipartitions of the state . this function is sharply peaked around , with a relative standard deviation that drops exponentially with ( see the inset of fig .[ fig : isto_eps0 ] ) and is small ( ) already at .for this reason , we can conclude that multipartite entanglement is large and that it is reasonable to use the first moment of for its characterization .the corresponding probability densities for random states is also calculated ( dashed curves in fig .[ fig : isto_eps0 ] ) ; their average values and variances are in agreement with the values obtained from states generated by the sawtooth map . as we have remarked in sec .[ sec : entrandom ] , the fact that for random states the distribution is peaked around a mean value close to the maximum achievable value is a manifestation of the concentration of measure phenomenon in a multi - dimensional hilbert space . , after iterations of map ( [ eq : quantmap ] ) at .various histograms are for different numbers of qubits : from left to right ; dashed curves show the corresponding probabilities for random states .inset : relative standard deviation as a function of ( full circles ) and best exponential fit ( continuous line ) ; data and best exponential fit for random states are also shown ( empty triangles , dashed line ) .this figure is taken from ref .in order to assess the physical significance of the generated multipartite entanglement , it is crucial to study its stability when realistic noise is taken into account .hereafter i model quantum noise by means of unitary noisy gates , that result from an imperfect control of the quantum computer hardware . the noise model of ref . is followed .one - qubit gates can be seen as rotations of the bloch sphere about some fixed axis ; i assume that unitary errors slightly tilt the direction of this axis by a random amount .two - qubit controlled phase - shift gates are diagonal in the computational basis ; i consider unitary perturbations by adding random small extra phases on all the computational basis states .hereafter i assume that each noise parameter is randomly and uniformly distributed in the interval ] .starting from a given initial state , the quantum algorithm for simulating the sawtooth map in presence of unitary noise gives an output state that differs from the ideal output . here stands for all the noise parameters , that vary upon the specific noise configuration ( is proportional to the number of gates ) .since we do not have any a priori knowledge of the particular values taken by the parameters , the expectation value of any observable for our -qubit system will be given by ] of the reduced density matrix . clearly , for states like the one in eq .( [ eq : initial ] ) , we have , .as the total system evolves , we expect that decreases , while grows up , thus meaning that the two - qubit system is progressively losing coherence . 
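anticipating the random phase-kick approximation introduced in the next paragraph, the python sketch below applies the same random rotation about the z axis to both qubits, averages over the kick angle, and follows the loss of entanglement through the wootters concurrence (the construction recalled in the final appendix). the bell-state initial condition, the value epsilon = 0.2 and the discretization of the angle average are illustrative assumptions, not the parameters of eq. ( [ eq : initial ] ).

import numpy as np

sy = np.array([[0.0, -1j], [1j, 0.0]])
syy = np.kron(sy, sy)

def concurrence(rho):
    """wootters concurrence of a two-qubit density matrix."""
    rho_tilde = syy @ rho.conj() @ syy
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def phase_kick(rho, eps, n_angles=256):
    """average of U(theta) rho U(theta)^dagger over the kick angle theta, with
    U(theta) = exp(-i eps cos(theta) sigma_z) applied to both qubits."""
    out = np.zeros_like(rho)
    for theta in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
        phase = eps * np.cos(theta)
        u1 = np.diag([np.exp(-1j * phase), np.exp(1j * phase)])
        u = np.kron(u1, u1)
        out += u @ rho @ u.conj().T
    return out / n_angles

# illustrative initial condition: the bell state (|00> + |11>)/sqrt(2)
psi = np.array([1.0, 0.0, 0.0, 1.0], dtype=complex) / np.sqrt(2.0)
rho = np.outer(psi, psi.conj())
eps = 0.2
for t in range(11):
    print(f"kick {t:2d}   concurrence = {concurrence(rho):.4f}")
    rho = phase_kick(rho, eps)

for this initial state each kick multiplies the coherence, and hence the concurrence, by the bessel factor j0(4 eps) (about 0.85 for eps = 0.2), so the printed values should decay roughly geometrically.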
if the kicked rotator is in the chaotic regime and in the semiclassical region , it is possible to drastically simplify the description of the system in eq .( [ eq : hammodel ] ) by using the _ random phase - kick _ approximation , in the framework of the kraus representation formalism .since , to a first approximation , the phases between two consecutive kicks in the chaotic regime can be considered as uncorrelated , the interaction with the environment can be simply modeled as a phase - kick rotating both qubits through the same random angle about the -axis of the bloch sphere .this rotation is described in the basis by the unitary matrix \otimes \left [ \begin{array}{cc } e^{- i \epsilon \cos \theta } & 0 \\ 0 & e^{i \epsilon \cos \theta } \end{array } \right],\ ] ] where the angle is drawn from a uniform random distribution in .the one - kick evolution of the reduced density matrix is then obtained after averaging over : in order to assess the validity of the random phase - kick approximation , model ( [ eq : hammodel ] ) is numerically investigated in the classically chaotic regime and in the region in which the environment is a semiclassical object . under these conditions , we expect that the time evolution of the entanglement can be accurately predicted by the random phase model .such expectation is confirmed by the numerical data shown in fig .[ fig : confr_es ] .even though differences between the two models remain at long times due to the finite number of levels in the kicked rotator , such differences appear at later and later times when ( ) .the parameter has been chosen much greater than one , so that the classical phase space of the kicked rotator can be considered as completely chaotic .note that the value is chosen to completely wipe off memory effects between consecutive and next - consecutive kicks ( see ref . for details ) .( main figure ) and entanglement ( inset ) as a function of time at , , , .the thin curves correspond to different number of levels for the environment ( the kicked rotator ) ( from bottom to top in the main figure and vice versa in the inset ) .the thick curves give the numerical results from the random phase model ( [ eq : randomphase ] ) .] i point out that the random phase model can be derived from the caldeira - leggett model with a pure dephasing coupling , with coupling constant to the -th oscillator of the environment , whose coordinate operator is .this establishes a direct link between the chaotic single - particle environment considered in this paper and a standard many - body environment .the role of entanglement as a resource in quantum information has stimulated intensive research aimed at unveiling both its qualitative and quantitative aspects .the interest is first of all motivated by experimental implementations of quantum information protocols .decoherence , which can be considered as the ultimate obstacle in the way of actual implementation of any quantum computation or communication protocol , is due to the entanglement between the quantum hardware and the environment .the decoherence - control issue is expected to be particularly relevant when the state of the quantum system is _ complex _ , namely when it is characterized by a large amount of multipartite entanglement .it is therefore important , for applications but also in its own right , to scrutinize the robustness and the multipartite features of relevant classes of entangled states . 
in this context, random states play an important role , both for applications in quantum protocols and in view of a , highly desirable , statistical theory of entanglement .such studies have deep links with the physics of complex systems . in classical physics , a well defined notion of complexity , based on the exponential instability of chaos , exists , and has profound links with the notion of algorithmic complexity : in terms of the symbolic dynamical description , almost all orbits are random and unpredictable . on the other hand , in spite of many efforts ( see and references therein )the transfer of these concepts to quantum mechanics still remains elusive .however , there is strong numerical evidence that quantum motion is characterized by a greater degree of stability than classical motion ( see ) .this has important consequences on the stability of quantum algorithms ; for instance , the robustness of the multipartite entanglement generated by chaotic maps and discussed in sec .[ sec : stabilitymultipartite ] is related to the power - law decay of the fidelity time scales for quantum algorithms which , in turn , is a consequence of the discreteness of the phase space in quantum mechanics . if we consider the chaotic classical motion ( governed by the liouville equation ) of some phase - space density , smaller and smaller scales are explored exponentially fast .these fine details of the density distribution are rapidly lost under small perturbations . in quantum mechanics, there is a lower limit to this process , set by the size of the planck cell , and this reduces the complexity of quantum motion as compared to classical motion . finally ,the fundamental , purely quantum notion of entanglement is expected to play a crucial role in characterizing the complexity of a quantum system .i believe that studies of complexity and multipartite entanglement will shed some light on series of very important issues in quantum computation and in critical phenomena of quantum many - body condensed matter physics .the peres criterion provides a necessary condition for the existence of decomposition ( [ sepdecomposition ] ) , in other words , a violation of this criterion is a sufficient condition for entanglement .this criterion is based on the _ partial transpose _ operation . introducing an orthonormal basis in the hilbert space associated with the bipartite system , the density matrix has matrix elements .the partial transpose density matrix is constructed by only taking the transpose in either the latin or greek indices ( here latin indices refer to alice s subsystem and greek indices to bob s ) . 
for instance , the partial transpose with respect to alice is given by since a separable state can always be written in the form ( [ sepdecomposition ] ) and the density matrices and have non - negative eigenvalues , then the overall density matrix also has non - negative eigenvalues .the partial transpose of a separable state reads since the transpose matrices are hermitian non - negative matrices with unit trace , they are also legitimate density matrices for alice .it follows that none of the eigenvalues of is non - negative .this is a necessary condition for decomposition ( [ sepdecomposition ] ) to hold .it is then sufficient to have at least one negative eigenvalue of to conclude that the state is entangled .it can be shown that for composite states of dimension and , the peres criterion provides a necessary and sufficient condition for separability ; that is , the state is separable if and only if is non - negative .however , for higher dimensional systems , states exist for which all eigenvalues of the partial transpose are non - negative , but that are non - separable .these states are known as _ bound entangled states _ since they can not be distilled by means of local operations and classical communication to form a maximally entangled state .i stress that the peres criterion is more sensitive than bell s inequality for detecting quantum entanglement ; that is , there are states detected as entangled by the peres criterion that do not violate bell s inequalities ( ). a convenient way to detect entanglement is to use the so - called entanglement witnesses . by definition ,an entanglement witness is a hermitian operator such that for all separable states while there exists at least one state such that .therefore , the negative expectation value of is a signature of entanglement and the state is said to be detected as entangled by the witness . the existence of entanglement witnesses is a consequence of the _ hahn - banach theorem : _ given a convex and compact set and , there exists a hyperplane that separates from .this fact is illustrated in fig .[ fig : witness ] .the set of separable states is a subset of the set of all possible density matrices for a given system .the dashed line represents a hyperplane separating an entangle state from .the optimized witness ( represented by a full line ) is obtained after performing a parallel transport of the above hyperplane , so that it becomes tangent to the set of separable states .therefore , the optimized witness detects more entangled states than before parallel transport .note that , in order to fully characterize the set of separable states one should find all the witnesses tangent to .the concept of entanglement witness is close to experimental implementations and detection of entanglement by means of entanglement witnesses has been realized in several experiments .the more negative expectation value of entanglement witness we find , the easier it is to detect entanglement of such a state .the expectation value of also provides lower bounds to various entanglement measures .finally , it is interesting to note that violation of bell s inequalities can be rewritten in terms of non - optimal entanglement witnesses .in general , classification of entanglement witnesses is a hard problem .however , much simpler is the issue with the so - called decomposable entanglement witnesses . 
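before turning to decomposable witnesses, the peres criterion itself is immediate to implement. the python sketch below computes the partial transpose of a two-qubit isotropic (werner-like) state and monitors its smallest eigenvalue; for 2 x 2 systems the criterion is also sufficient, so the eigenvalue becomes negative exactly where the state is entangled (p > 1/3 for this family). the one-parameter family used here is an example chosen only for illustration.

import numpy as np

def partial_transpose(rho, d_a=2, d_b=2):
    """partial transpose with respect to the second (bob's) subsystem."""
    r = rho.reshape(d_a, d_b, d_a, d_b)          # indices (i, mu, j, nu)
    return r.transpose(0, 3, 2, 1).reshape(d_a * d_b, d_a * d_b)

# one-parameter family: p |phi+><phi+| + (1 - p) * identity / 4
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
proj = np.outer(phi, phi)
for p in (0.1, 0.3, 1.0 / 3.0, 0.4, 0.7, 1.0):
    rho = p * proj + (1.0 - p) * np.eye(4) / 4.0
    min_eig = np.linalg.eigvalsh(partial_transpose(rho)).min()
    verdict = "entangled" if min_eig < -1e-12 else "ppt"
    print(f"p = {p:.3f}   min eigenvalue of rho^(T_B) = {min_eig:+.4f}   {verdict}")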
by definition , a witness is called decomposable if that is , with positive semidefinite operators .decomposable entanglement witnesses can only detect entangled states with at least one negative eigenvalues of .therefore , similarly to the peres criterion , decomposable witnesses do not detect bound entangled states .note , however , that entanglement witnesses are closer to experimental implementations than the peres criterion , as full tomographic knowledge about the state is not needed .let us write a -level random state in the form where are independent random variables uniformly distributed in and is a random point uniformly distributed on the unit hypersphere , with distribution function with normalization constant to be determined later. given a bipartition of the hilbert space of the system into two parts , and , with dimensions and , the purity reads ,\ ] ] where . following ,i split in two parts : where , \label{eq : x}\ ] ] where means that equal indexes are banned in the sum . since we obtain , where i have used for all and for all with .i now evaluate the marginal distribution in particular , the normalization condition allows us to determine .thus , we obtain after substitution of eqs .( [ eq : pr0 ] ) and ( [ eq : pr0pr1 ] ) into ( [ eq : pm ] ) we readily obtain lubkin s formula ( [ eq : lubkin ] ) .the variance can be computed with the same technique as above .however , to obtain the variance ( [ eq : variancelubkin ] ) for large it is suffcient to replace in eqs .( [ eq : x ] ) and ( [ eq : m ] ) with its mean value : we can see from eqs .( [ eq : x ] ) and ( [ eq : m ] ) that and are sums of terms of order . therefore , the central limit theorem implies that , for large , the purity tends to a gaussian distribution with mean and variance .finally , i note that all moments of the purity have been recently computed .the sawtooth map is a prototype model in the studies of classical and quantum dynamical systems and exhibits a rich variety of interesting physical phenomena , from complete chaos to complete integrability , normal and anomalous diffusion , dynamical localization , and cantori localization .furthermore , the sawtooth map gives a good approximation to the motion of a particle bouncing inside a stadium billiard ( which is a well - known model of classical and quantum chaos ) .the sawtooth map belongs to the class of periodically driven dynamical systems , governed by the hamiltonian where are conjugate action - angle variables ( ) .this hamiltonian is the sum of two terms , , where is just the kinetic energy of a free rotator ( a particle moving on a circle parametrized by the coordinate ) , while represents a force acting on the particle that is switched on and off instantaneously at time intervals .therefore , we say that the dynamics described by hamiltonian ( [ sawham ] ) is _ kicked_. the corresponding hamiltonian equations of motion are \displaystyle \dot\theta = \frac{\partial{h}}{\partial{n } } = n \ , .\end{array } \right.\ ] ] these equations can be easily integrated and one finds that the evolution from time ( prior to the -th kick ) to time ( prior to the -th kick ) is described by the map \displaystyle \theta_{t+1}= \theta_t + t{n}_{t+1 } \ , , \end{array } \right .\label{sawmap}\ ] ] where the discrete time measures the number of map iterations and is the force acting on the particle . 
in the following ,we focus on the special case .this map is called the _ sawtooth map _ , since the force has a sawtooth shape , with a discontinuity at .for such a discontinuous map the conditions of the kolmogorov - arnold - moser ( kam ) theorem are not satisfied and , for any , the motion is not bounded by kam tori . by rescaling ,the classical dynamics is seen to depend only on the parameter . indeed , in terms of the variables map ( [ sawmap ] ) becomes \displaystyle { \theta}_{t+1 } = \theta_t + { i}_{t+1 } \ , .\end{array } \right .\label{sawmap2}\ ] ] the sawtooth map exhibits sensitive dependence on initial conditions , which is the distinctive feature of classical chaos : any small error is amplified exponentially in time . in other words , two nearby trajectoriesseparate exponentially , with a rate given by the maximum lyapunov exponent , defined as where ^ 2+[\delta \theta_t]^2}$ ] . to compute and , we differentiate map ( [ sawmap2 ] ) , obtaining = m \left [ \begin{array}{c } \delta i_t \\ \delta\theta_t \end{array } \right ] = \left [ \begin{array}{c@{\quad}c } 1 & k \\ 1 & 1+k \end{array } \right ] \left [ \begin{array}{c } \delta i_t \\ \delta\theta_t \end{array } \right ] .\label{tangmap}\ ] ] the iteration of map ( [ tangmap ] ) gives and as a function of and ( and represent a change of the initial conditions ) .the stability matrix has eigenvalues , which do not depend on the coordinates and and are complex conjugate for and real for and .thus , the classical motion is stable for and completely chaotic for and . for , in , and therefore the maximum lyapunov exponent is .similarly , we obtain for . in the stable region , . the sawtooth map can be studied on the cylinder [ , or on a torus of sinite size ( , where is an integer , to assure that no discontinuities are introduced in the second equation of ( [ sawmap2 ] ) when is taken modulus ) .although the sawtooth map is a deterministic system , for and the motion along the momentum direction is in practice indistinguishable from a random walk .thus , one has normal diffusion in the action ( momentum ) variable and the evolution of the distribution function is governed by a fokker planck equation : the diffusion coefficient is defined by where , and denotes the average over an ensemble of trajectories . if at time take a phase space distribution with initial momentum and random phases , then the solution of the fokker planck equation ( [ fokkerplanck ] ) is given by .\ ] ] the width of this gaussian distribution grows in time , according to for , the diffusion coefficient is well approximated by the random phase approximation , in which we assume that there are no correlations between the angles ( phases ) at different times . hence , we have where is the change in action after a single map step . for diffusionis slowed , due to the sticking of trajectories close to broken tori ( known as cantori ) , and we have ( this regime is discussed in ref .for the motion is stable , the phase space has a complex structure of elliptic islands down to smaller and smaller scales , and one can observe anomalous diffusion , that is , , with ( see ref .the cases are integrable . the quantum version of the sawtooth map is obtained by means of the usual quantization rules , and ( we set ) .the quantum evolution in one map iteration is described by a unitary operator , called the floquet operator , acting on the wave vector : |\psi\rangle_t \ , , \label{sawq}\ ] ] where is hamiltonian ( [ sawham ] ) . 
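before turning to the explicit form of the floquet operator, the classical picture above can be checked numerically. the python sketch below iterates the rescaled map ( [ sawmap2 ] ) with f(θ) = θ - π on [0, 2π), recovers the lyapunov exponent ln[(2 + K + sqrt(K² + 4K))/2] from the (constant) tangent map, and compares the growth of the action variance with the random-phase estimate π²K²/3 per step; the value K = 1.5 and the convention <(ΔI)²> ≈ D t are assumptions made for the illustration.

import numpy as np

K = 1.5                                  # rescaled kick strength (assumption, K > 1)
rng = np.random.default_rng(2)

def sawtooth_step(I, theta):
    """one iteration of the rescaled classical sawtooth map with F(theta) = theta - pi."""
    I = I + K * (theta - np.pi)
    theta = (theta + I) % (2.0 * np.pi)
    return I, theta

# lyapunov exponent: the tangent map is constant, so the growth rate of a tangent
# vector reproduces the log of the largest eigenvalue of [[1, K], [1, 1 + K]]
M = np.array([[1.0, K], [1.0, 1.0 + K]])
v = np.array([1.0, 0.0])
lyap, steps = 0.0, 10000
for _ in range(steps):
    v = M @ v
    norm = np.linalg.norm(v)
    lyap += np.log(norm)
    v /= norm
print("lyapunov (numerical):", lyap / steps,
      "  analytic:", np.log((2.0 + K + np.sqrt(K**2 + 4.0 * K)) / 2.0))

# diffusion of the action for an ensemble with random initial phases (cylinder geometry)
n_traj, n_kicks = 5000, 200
I = np.zeros(n_traj)
theta = rng.uniform(0.0, 2.0 * np.pi, n_traj)
for _ in range(n_kicks):
    I, theta = sawtooth_step(I, theta)
print("D (numerical):", np.var(I) / n_kicks,
      "  random-phase estimate:", np.pi**2 * K**2 / 3.0)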
since the potential is switched on only at discrete times , it is straightforward to obtain which for the sawtooth map is just eq .( [ eq : quantmap ] ) .it is important to emphasize that , while the classical sawtooth map depends only on the rescaled parameter , the corresponding quantum evolution ( [ sawquantum ] ) depends on and separately .the effective planck constant is given by .indeed , if we consider the operator ( is the quantization of the classical rescaled action ) , we have =t[\hat{\theta},\hat{n}]=i t = i \hbar_{\rm eff}.\ ] ] the classical limit is obtained by taking and , while keeping constant . in the quantum sawtooth map model one can observe important physical phenomena like dynamical localization . indeed , due to quantum interference effects , the chaotic diffusion in momentum is suppressed , leading to exponentially localized wave functions .this phenomenon was first found and studied in the quantum kicked - rotator model and has profound analogies with anderson localization of electronic transport in disordered materials .dynamical localization has been observed experimentally in the microwave ionization of rydberg atoms and in experiments with cold atoms . in the quantum sawtooth mapalso cantori localization takes place : in the vicinity of a broken kam torus , a cantorus starts to act as a perfect barrier to quantum wave packet evolution , if the flux through it becomes less than . in the following ,we describe an exponentially efficient quantum algorithm for simulation of the map ( [ eq : quantmap ] ) .it is based on the forward/ backward quantum fourier transform between momentum and angle bases .such an approach is convenient since the floquet operator , introduced in eq .( [ eq : quantmap ] ) , is the product of two operators , and , diagonal in the and representations , respectively .this quantum algorithm requires the following steps for one map iteration : * apply to the wave function . in order to decompose the operator into one- and two - qubit gates ,we first of all write in binary notation : with . here is the number of qubits , so that the total number of levels in the quantum sawtooth map is . from this expansion, we obtain this term can be put into the unitary operator , giving the decomposition , \label{ukdec}\ ] ] which is the product of two - qubit gates ( controlled phase - shift gates ) , each acting non - trivially only on the qubits and . in the computational basis , where is a diagonal matrix : .\ ] ] * the change from the to the representation is obtained by means of the quantum fourier transform , which requires ( single - qubit ) hadamard gates and ( two - qubit ) controlled phase - shift gates ( see , e.g. , ) . 
* in the representation , the operator has essentially the same form as the operator in the representation and can therefore be decomposed into controlled phase - shift gates , similarly to eq .( [ ukdec ] ) .* return to the initial representation by application of the inverse quantum fourier transform .thus , overall , this quantum algorithm requires gates per map iteration ( controlled phase - shifts and hadamard gates ) .this number is to be compared with the operations required by a classical computer to simulate one map iteration by means of a fast fourier transform .thus , the quantum simulation of the quantum sawtooth map dynamics is exponentially faster than any known classical algorithm .note that the resources required to the quantum computer to simulate the evolution of the sawtooth map are only logarithmic in the system size . of course , there remains the problem of extracting useful information from the quantum computer wave function .for a discussion of this problem , see refs .finally , i point out that the quantum sawtooth map has been recently implemented on a three - qubit nuclear magnetic resonance ( nmr)-based quantum processor .any state can be decomposed as a convex combination of projectors onto pure states : the entanglement of formation is defined as the mean entanglement of the pure states forming , minimized over all possible decompositions : where the ( bipartite ) entanglement of the pure states is measured according to eq .( [ entbipure ] ) . the entanglement of formation of a generic two - qubit state can be evaluated in a closed form following ref .first of all we compute the _ concurrence _ , defined as , where the s are the square roots of the eigenvalues of the matrix , in decreasing order . here is the spin flipped matrix of , and it is defined by ( note that the complex conjugate is taken in the computational basis ) .once the concurrence has been computed , the entanglement of formation is obtained as , where is the binary entropy function : , with .the concurrence is widely investigated in condensed matter physics , in relation to the general problem of the behavior of entanglement across quantum phase transitions . for studies of the relation between entanglement and integrability to chaos crossover in quantum spin chain ,see , and references therein .while working on the topics discussed in this review paper , i had the pleasure to collaborate with dima averin , gabriel carlo , giulio casati , rosario fazio , giuseppe gennaro , jae weon lee , carlos meja - monasterio , simone montangero , massimo palma , toma prosen , alessandro romito , davide rossini , dima shepelyansky , valentin sokolov , oleg zhirov and marko nidari .i would like to express my gratitude to all of them .preprint arxiv : quant - ph/0702225v2 . . .i : basic concepts ( world scientific , singapore , 2004 ) ; vol .ii : basic tools and special topics ( world scientific , singapore , 2007 ) .( cambridge university press , cambridge , 2000 ) . .( addison - wesley , reading , massachusetts , 1994 ) .( cambridge university press , cambridge , 2006 ) . ; ; ; . . . . . . . . . . . . .( 2nd ed . )( springer - verlag , 2000 ) .( cambridge university press , cambridge , 1999 ) . . . . . .proceedings of the `` e. fermi '' varenna school on , varenna , italy , 5 - 15 july 2005 , edited by casati g. , shepelyansky d.l ., zoller p. benenti g. ( ios press and sif , bologna , 2006 ) ; reprinted in . . . . .a noisy gates model close to experimental implementations is discussed in , . . . . . . . . 
. . .( wiley - vch , weinheim , 1998 ) .( 2nd ed . )( world scientific , singapore , 1999 ) . . . .( 2nd ed . )( springer - verlag , 1992 ) .( 2nd ed . )( springer - verlag , 1997 ) . . ,phys . today .april 1983 , pag .preprint arxiv:0807.2902v1 [ nlin.cd ] . ; . .. ; for a review see , e.g. , . . , , and references thereinproceedings of the `` e. fermi '' varenna school on , varenna , italy , 5 - 15 july 2005 , edited by casati g. , shepelyansky d.l ., zoller p. benenti g. ( ios press and sif , bologna , 2006 ) . . . , and references therein . . . . .
|
entanglement is not only the most intriguing feature of quantum mechanics , but also a key resource in quantum information science . entanglement is central to many quantum communication protocols , including dense coding , teleportation and quantum protocols for cryptography . for quantum algorithms , multipartite ( many - qubit ) entanglement is necessary to achieve an exponential speedup over classical computation . the entanglement content of random pure quantum states is almost maximal ; such states find applications in various quantum information protocols . the preparation of a random state or , equivalently , the implementation of a random unitary operator , requires a number of elementary one- and two - qubit gates that is exponential in the number of qubits , thus becoming rapidly unfeasible when increasing . on the other hand , pseudo - random states approximating to the desired accuracy the entanglement properties of true random states may be generated efficiently , that is , polynomially in . in particular , quantum chaotic maps are efficient generators of multipartite entanglement among the qubits , close to that expected for random states . this review discusses several aspects of the relationship between entanglement , randomness and chaos . in particular , i will focus on the following items : ( i ) the robustness of the entanglement generated by quantum chaotic maps when taking into account the unavoidable noise sources affecting a quantum computer ; ( ii ) the detection of the entanglement of high - dimensional ( mixtures of ) random states , an issue also related to the question of the emergence of classicality in coarse grained quantum chaotic dynamics ; ( iii ) the decoherence induced by the coupling of a system to a chaotic environment , that is , by the entanglement established between the system and the environment . [ 1999/12/01 v1.4c il nuovo cimento ]
|
the race to understanding and explaining the emergence of complex structures and behavior has a wide variety of participants in contemporary science , whether it be sociology , physics , biology or any of the other major sciences .one of the more well - trodden paths is one where evolutionary processes play an important role in the emergence of complex structures .when various organisms compete over limited resources , complex behavior can be beneficial to outperform competitors .but the question remains : can we _ quantify _ the change of complexity throughout evolutionary processes ?the experiment we undertake in this paper addresses this question through an empirical approach . in very general terms , we simulate two simple organisms on a computer that compete over limited available resources . since this experiment takes place in a rather abstract modeling setting , we will use the term _ processes _ instead of organisms from now on .two competing processes evolve over time and we measure how the complexity of the emerging patterns evolves as the two processes interact . the complexity of the emerging structures will be compared to the complexity of the respective processes as they would have evolved without any interaction . when setting up an experiment , especially one of an abstract nature , a subtle yet essential question emerges . _ what _ exactly defines a particular process and how does the process distinguish itself from the background ?for example , is the shell of a hermit crab part of the hermit crab even though the shell itself did now grow from it ?do essential bacteria that reside in larger organisms form part of that larger organism ? where is the line drawn between an organism and its environment ? throughout this paper , we assume that a process is only separate from the background in a behavioral sense .in fact , we assume processes are different from backgrounds in a physical sense only through resource management. we will demonstrate how these ontological questions are naturally prompted in our very abstract modeling environment .as mentioned already , we wish to investigate how complexity evolves over time when two processes compete for limited resources . we chose to represent competing processes by cellular automata as defined in the subsection below .cellular automata ( ca / cas ) represent a simple parallel computational paradigm that admit various analogies to processes in nature . in this section, we shall first introduce elementary cellular automata to justify how they are useful for modeling interacting and competing processes .we will consider _ elementary cellular automata _( eca / ecas ) which are simple and well - studied cas .they are defined to act on a one - dimensional discrete space divided into countably infinite many cells .we represent this space by a copy of the integers and refer to it as a _ tape _ or simply _ row_. each cell in the tape can have a particular color . in the case of our ecaswe shall work with just two colors , represented for example by and respectively .we sometimes call the distributions of and over the tape the _ output space _ or _ spatial configuration_. thus , we can represent an output space by a function from the integers to the set of colors , in this case . instead of writing , we shall often write and call the color of the -th cell of row .an eca will act on such an output space in discrete time steps and as such can be seen as a function mapping functions to functions . 
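anticipating the radius-one locality and the three-colour global rule construction made precise in the next paragraphs, a minimal python sketch of such maps is the following. the rule numbers (110 and 30), the row, the periodic boundary condition and the decision to send the 12 mixed triplets to a fixed filler colour are all illustrative assumptions; the paper instead enumerates every possible assignment of those triplets.

from itertools import product

def eca_table(rule, one=1):
    """lookup table of an elementary ca in wolfram's numbering; 'one' is the symbol
    playing the role of the non-white colour."""
    table = {}
    for i, triplet in enumerate(product((1, 0), repeat=3)):      # 111, 110, ..., 000
        out = (rule >> (7 - i)) & 1
        key = tuple(one if c else 0 for c in triplet)
        table[key] = one if out else 0
    return table

def global_rule(rule_a, rule_b, filler=0):
    """three-colour rule extending eca a (colours {0,1}) and eca b (colours {0,2});
    both rules must map the all-white triplet to white for the extension to be
    consistent (true for 110 and 30).  the 12 triplets mixing colours 1 and 2 are
    sent to 'filler' here, although any assignment defines a legitimate global rule."""
    table = eca_table(rule_a, one=1)
    table.update(eca_table(rule_b, one=2))
    for key in product((0, 1, 2), repeat=3):
        table.setdefault(key, filler)
    return table

def evolve(row, table, steps):
    """iterate a row with periodic boundary conditions, returning every time step."""
    rows = [tuple(row)]
    for _ in range(steps):
        row = tuple(table[(row[i - 1], row[i], row[(i + 1) % len(row)])]
                    for i in range(len(row)))
        rows.append(row)
    return rows

table = global_rule(110, 30)
row = [0, 1, 0, 0, 2, 0, 1, 1, 0, 2, 2, 0, 1, 0, 2, 0]
for r in evolve(row, table, 8):
    print("".join(".xo"[c] for c in r))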
in our case : if some eca acts on some initial row we will denote the output space after time - steps by so that . likewise , we will denote the cell at time by .our ecas are entirely defined in a local fashion in that the color of cell at time will only depend on and its two direct neighboring cells and ( in more general terminology , we only consider radius - one cas ( ) ) .thus , an eca with just two colors in the output space is entirely determined by its behavior on three adjacent cells .since each cell can only have two colors , there are such possible triplets so that there are possible different ecas .however , we will not consider all ecas for this experiment for computational simplicity .instead , we only consider the 88 eca that are non - equivalent under horizontal translation , 1 and 0 exchanges , or any combination of the two . in our experiment , the entire interaction will be modeled by a particular ca that is a combination of two ecas and something else .so far we have decided to model a process by an eca with a color 0 ( white ) and a non - white color 1 .a process is modeled by the evolution of the white and non - white cells throughout time steps as governed by the particular eca rule .once we have made this choice , it is natural to consider a different process in a similar fashion .that is , we model also by an eca . to tell and apart on the same grid we choose to work on the alphabet whereas the alphabet of was .both and will use the same symbol 0 ( white ) .the next question is how to model the interaction between and .we will do so by embedding both cas in a setting where we have a global ca with three colors .we will choose in such a fashion that restricted to the alphabet is just while restricted to the alphabet is . of course these two requirementsdo not determine for given and .thus , this leaves us with a difficult modeling choice reminiscent to the ontological question for organisms : what to do for triplets that contain all three colors or are otherwise not specified by or .since there are 12 such triplets , , , , , , , , , , , and .] we have different ways to define given and .given ecas and as above , we call any such that extends and a corresponding _ global rule_.since there are 88 unique ecas , there are 3916 unique combinations of and , which results in possible globals rules .an online program illustrating interacting cellular automata via a global rule can be visited at http://demonstrations.wolfram.com / competingcellularautomata/. let us go back to the main purpose of the paper .we wish to study two competing processes that evolve over time and we want to measure how the complexity of the emerging patterns evolves as the two processes interact .the complexity of the emerging structures will be compared to the complexity of the respective processes as they would have evolved without interacting with each other .the structure of an experiment readily suggests itself .pick two ecas and defined on the alphabets and respectively . measure the typical complexities and generated by and respectively when they are applied in an isolated setting .next , pick some corresponding global rule and measure the typical complexity that is generated by .once this ` typical complexity ' is well - defined , the experiment can be run and can be compared to and .the question is how to interpret the results .if we see a change in the typical complexity there are three possible reasons this change can be attributed to : 1 . 
an evolutionary process triggered by the interaction between and intrinsic complexity injection due the nature of how is defined on the 12 previously non - determined tuples ; 3 .an intrinsic complexity injection due to scaling the alphabet from size 2 to size 3 . in case of the second reason , an analogy to the cosmological constant is readily suggested .recall that is supposed to take care of the modeling of the background so to say , where none of or is defined but only their interaction .thus , a possible increase of complexity / entropy is attributed in the second reason to some intrinsic entropy density of the background .we shall see how the choice of will affect the change of complexity upon interaction . before we can further describe the experiment, we first need to decide on how to measure complexity . in a qualitative characterization of complexityis given for dynamical processes in general and ecas in particular .more specifically , wolfram described four classes for complexity which can be characterized as follows : * class 1 .symbolic systems which rapidly converge to a uniform state .examples are eca rules 0 , 32 and 160 . *symbolic systems which rapidly converge to a repetitive or stable state .examples are eca rules 4 , 108 and 218 . * class 3 .symbolic systems which appear to remain in a random state .examples are eca rules 22 , 30 , 126 and 189 . *symbolic systems which form areas of repetitive or stable states , but which also form structures that interact with each other in complicated ways .examples are eca rules 54 and 110 .it turns out that one can consider a quantitative complexity measure that largely generalizes the four qualitative complexity classes as given by wolfram . in our definition of doingso we shall make use of de bruijn sequences although there are different approaches possible . a _de bruijn _ sequence for n colors is any sequence of n colors so that any of the possible string of length with colors is actually a substring of . in this sense , de bruijn sequences are sometimes considered to be semi - random . using de bruijn sequences of increasing length for a fixed number of colors we can parametrize our input and as such it makes sense to speak of asymptotic behavior .now , one can formally characterize wolfram s classes in terms of kolmogorov complexity as was done in . for large enough input and for a sufficiently long time evaluation one can use kolmogorov complexity to assign a complexity measure of a system for input and runtime ( see ) . a typical complexity measure for then be defined by : as a first approximation , one can use cut - off values of these outcomes to classify into one of the four wolfram classes . in the experiment we tested the influence of global rules ( grs ) on two interacting ecas and . of the possiblegrs we explored a total number of grs representing a share of 80% . for each combination of the two ecas and with one gr, we considered 100 different initial conditions of length 26 corresponding to the first 100 ( in the canonical enumeration order ) de bruijn sequences of this length over the alphabet .each execution was evaluated for 60 timesteps .we use the method described in section [ section : complexitycharacterization ] to determine the complexity of a ca state evolution for two ecas with mixed neighborhoods evolving under a gr .specifically , we used the _ mathematica _v. 
10.0 ` compress ` function as an approximation of the kolmogorov complexity as first suggested in .it is important to note the number of computational hours it took to undergo this experiment . for each exploring 100 different initial conditions and combinations of and , there were a total of billion ca executions .even with one numerical value per ca execution , the total data generated for this experiment was 1.3 terabytes on hard drive .about hours of computational time were used total on about 200 jobs at a time , each gr on a different core , which was 200 grs .sometimes up to 500 jobs in parallel .each job took about 36 hours to complete . each complexity estimate is normalized by subtracting the compression value of a string of 0 s of equal length . for most of the ca instances ,this is a good approximation of the long - term effects of a gr .as mentioned in section [ section : complexitycharacterization ] we can use these compression values to determine / approximate wolfram s complexity classes according to critical thresholds values .the thresholds were trained according to the ecas we used for the interactions . using these compression methods for each run, we determined if the output was class 1 , 2 , 3 or 4 .the outcome is organized according to the complexity classes of the constituent cas .for the best clarity , the results are represented in heat maps in the next section .each output class ( 1 - 4 ) is represented by its own heat map , so each figure will have four heat maps to account for the different classes of outputs .each map is composed of a four - by - four grid whose axes describe the complexity classes of and , as shown .thus , of all of the runs ( ecas and interaction , gr , and initial condition ) that yield a class 1 output are represented under the heat map labeled _ class 1_. for example , in figure 1 , about 11% of all class 1 outputs were generated by class 1 and class 1 interactions . here the more intense color represents the more densely populated value for a particular class .the outputs of every possible grs are accumulated and represented in figure 1 . in general, this figure shows the change in complexity when a gr is used to determine the interaction between two ecas .most of the outputs for each complexity class were generated with class 1 eca interactions , which was least expected .there are several interesting examples of grs that affect the classification of behavior of the ca interaction state output .the heat maps of three grs in particular are shown in figures 2 - 4 .as shown in the figure captions , grs are enumerated according to their position in the ` tuples ` function .that is to say , we enumerate the grs by generating all tuples with 3 symbols in lexicographical order . over half of all outputs from with gr 77399were complexity class 4 from class 1 and interactions . 
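As a point of reference for the complexity estimate described above, a minimal sketch follows. It substitutes `zlib` from the Python standard library for Mathematica's `Compress`, so the absolute values (and therefore any trained thresholds) differ, but the normalization against an all-zero string of equal length is the same idea; all names here are illustrative.

```python
import zlib

def compressed_length(cells):
    """Length of the zlib-compressed byte string of a flattened CA output,
    used here as a rough stand-in for its Kolmogorov complexity."""
    return len(zlib.compress(bytes(cells), 9))

def normalized_complexity(cells):
    """Subtract the compressed length of an all-zero string of equal
    length, as done for every complexity estimate in the experiment."""
    return compressed_length(cells) - compressed_length([0] * len(cells))

# Example: flatten a space-time diagram (a list of rows) and estimate its
# complexity; cut-off values on such estimates can then be used to
# approximate Wolfram's four classes, with thresholds calibrated against
# the ECAs used, as described in the text.
diagram = [[0, 1, 1, 0, 1, 0, 0, 1]] * 60
flat = [c for row in diagram for c in row]
print(normalized_complexity(flat))
```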
the majority of complexity class 3 outputs were generated from complexity class 2 and interactions .gr 72399 had outputs that were 65 the actual outputs for some of these grs are shown in figure 5 .note that most and rules are complexity class 1 or 2 .it is unexpected that the biggest increases in complexity arise from and complexity class 1 interactions .this suggests that complexity is intrinsic to the grs rather than the ecas themselves .we suspect that interacting class 1 ecas readily accept complexity through the rules of their interactions .this is not as prevalent in any of the other complexity classes of and interactions .likely , if and have more complexity without interaction , then they are more robust to any complexity changes introduced by the gr . in all cases , complexity increases orremains the same by introducing an interaction rule via a gr .the most interesting cases are when global rules increase the complexity of the output by entire classes .there are cases where mixed neighborhoods are present and sustained throughout the output , which is a form of emergence through the interaction gr rule . because we only used a short number of time steps per execution , it is unclear whether these mixed neighborhoods eventually die out or not , it is nonetheless a case of intermediate emergence from a global rule .we have found interesting cases where global rules seem to drastically change the complexity of an interacting ca output . some originally class 3 ecas , for example ,were found to be too fragile under most global rules , while some other are more resilient .most importantly , the greatest increases in complexity occur when the interacting eca are both class 1 , which is true for the majority of all possible global rules .although we still have yet to understand the mechanisms behind these results , we are confident that further analysis will be important in understanding the emergence of complex structures .we want to acknowledge the asu saguaro cluster where most of the computational work was undertaken .chaitin , on the length of programs for computing finite binary sequences : statistical considerations , _ journal of the acm _ , 16(1):145159 , 1969 .chandler , `` cellular automata with global control '' http://demonstrations.wolfram.com/cellularautomatawithglobalcontrol/ , wolfram demonstrations project , published : may 9 , 2007 . s. wolfram , _ a new kind of science _ , wolfram media , champaign , il .h. zenil , compression - based investigation of the behaviour of cellular automata and other systems , _ complex systems , _( 19)2 , 2010 . h. zenil and e. villarreal - zapata , asymptotic behaviour and ratios of complexity in cellular automata rule spaces , _ international journal of bifurcation and chaos _ vol . 13 , no . 9 , 2013 .
|
Can we quantify the change of complexity throughout evolutionary processes? We attempt to address this question through an empirical approach. In very general terms, we simulate two simple organisms on a computer that compete over limited available resources. We implement global rules that determine the interaction between two elementary cellular automata on the same grid. Global rules change the complexity of the state-evolution output, which suggests that some complexity is intrinsic to the interaction rules themselves. The largest increases in complexity occurred when the interacting elementary rules had very little complexity of their own, suggesting that such rules accept complexity only through interaction. We also found that some class 3 or 4 CA rules are fragile under global rules while others are more robust, suggesting intrinsic properties of the rules that are independent of the choice of global rule. We provide statistical mappings of elementary cellular automata exposed to global rules and different initial conditions onto different complexity classes.

Keywords: behavioral classes; emergence of behavior; cellular automata; algorithmic complexity; information theory
|
understanding dust dynamics in protoplanetary disks is an important topic in the study of planet formation , both theoretically and observationally . on one hand ,planetary cores must be built up from micron - sized interstellar dust grains , which need to gradually grow in size within a gaseous environment .especially challenging is how mm / cm - sized particles can effectively sediment towards the mid - plane of a protoplanetary disk and coalesce to form km - sized planetesimals ( * ? ? ?* and references therein ) .thanks to the advance in telescopes , on the other hand , it has become possible for observations to resolve the distribution of mm / cm - sized particles in nearby protoplanetary disks , from their thermal emission in sub - millimeter and radio wavelengths or polarized scattered light in the near infrared .for instance , used the atacama large millimeter / centimeter array ( alma ) to locate large - scale , lopsided concentration of mm - sized pebbles in the transition disk around oph irs 48 . found almost perfectly axisymmetric , ring - like distribution of pebbles around hl tau .the strategic explorations of exoplanets and disks with subaru ( seeds ) project found a wealth of morphologically diverse structures like spiral arms and rings in an array of protoplanetary disks .these structures have also been detected with the very large telescope ( vlt ; see , e.g. , ) .to gain more insight into the physical processes at work in these protoplanetary disks , numerical simulations modeling both gas and solid particles are often enlisted given the complexity of such a system .for example , numerical simulations were used to demonstrate how pebbles / boulders could spontaneously concentrate themselves in a gaseous disk via the streaming instability , to the extent that planetesimal formation is triggered ( ; ; schfer , yang , & johansen , in preparation ) .simulations have also been used to study how dust is trapped within vortices , which could explain the observed lopsided dust concentration in some transition disks . using simulations of a protoplanetary disk with planet - induced gaps ,the axisymmetric dust distribution observed in the hl tau disk can be reproduced . by matching simulation models with the spiral structures in an observed protoplanetary disk , the mass and orbit of any potentially unseen planetcan also be inferred . for a solid particle of sizeless than m in a protoplanetary disk , the dominant interaction between the particle and its surrounding gas is via their mutual drag force instead of their mutual gravity ( see , e.g. , * ? ? ?* ; * ? ? ?the main tendency of the drag force is to reduce the relative velocity between the gas and the particle exponentially with time , the strength of which can be characterized by the stopping time . when the collective motion of a swarm of identical solid particles is considered , the time constant of the exponential decay due to this mutual drag force is given by , where is the local solid - to - gas density ratio ( see sections [ ss : asol ] and [ ss : unistr ] ) .therefore , for any numerical method explicitly integrating a particle - gas system with mutual drag interaction , stability criterion requires that the time step must be less than this time constant at all time . 
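The severity of this constraint can be seen in a toy calculation. For two velocities coupled only by the mutual drag, the velocity difference obeys d(v - u)/dt = -(1 + eps)(v - u)/t_s, so an explicit forward-Euler update is stable only if the time step resolves t_s/(1 + eps). The sketch below is illustrative only and is not the scheme developed later in this paper.

```python
import numpy as np

def explicit_drag(u0, v0, eps, t_s, dt, n_steps):
    """Forward-Euler integration of gas (u) and particle (v) velocities
    coupled only by their mutual drag force."""
    u, v = u0, v0
    for _ in range(n_steps):
        du = +eps * (v - u) / t_s   # back-reaction of the particles on the gas
        dv = -(v - u) / t_s         # drag on the particles
        u, v = u + dt * du, v + dt * dv
    return u, v

# The velocity difference decays as exp[-(1 + eps) t / t_s], so the explicit
# update is stable only for dt < 2 t_s / (1 + eps).  With a solid-to-gas
# ratio of 100 the limit is already about 0.02 t_s:
print(explicit_drag(0.0, 1.0, eps=100.0, t_s=1.0, dt=0.05, n_steps=50))
print(explicit_drag(0.0, 1.0, eps=100.0, t_s=1.0, dt=0.001, n_steps=2500))
# The first call diverges; the second relaxes both velocities to the common
# center-of-mass value (u0 + eps*v0)/(1 + eps), about 0.990 here.
```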
depending on the value of this time constant , ,the mutual drag force can become extremely stiff in particular regimes .firstly , for particles of sizes less than about the mean free path of the surrounding gas , the stopping time is linearly proportional to the size of the particles .hence the smaller the particles , the stiffer the mutual drag force becomes . as a reference point, is on the order of local keplerian orbital period for mm - sized particles embedded at 1au in the mid - plane of a minimum mass solar nebula model ( e.g. , * ? ? ?secondly , the stronger the local concentration of solid particles , the higher the maximum solid - to - gas density ratio and yet again the stiffer the mutual drag force is .it has been suggested that the mm - sized chondrules ubiquitous in the solar system were formed in a solid - rich environment , with a density of solids roughly 100 times the background gas density or more . moreover , for sedimented layer of particles marginally coupled to the surrounding gas , the maximum local solid - to - gas density ratio in the saturated state of the streaming turbulence can be as high as . therefore ,either effect or a combination of both can render the time step so short that the computational cost of the simulation model becomes intractable .a classic approach to treat stiff source terms is to operator split these terms out and use a dedicated method to integrate them with reasonable accuracy and efficiency ( e.g. , * ? ? ?* ; * ? ? ?when considering two - fluid approximation for a particle - gas system with mutual drag interaction , this leads to a system of ordinary differential equations for the drag force without any spatial coupling .it is then relatively straightforward to integrate this system strictly locally , either with analytical formulas or with numerical methods .this approach and similar ones have been implemented in eulerian grid - based schemes and in smoothed particle hydrodynamics . instead of the two - fluid approximation ,however , it is typically preferable to use lagrangian solid particles to model a particle - gas system .the exact information of position and velocity carried by each particle better samples the distribution of particles in phase space .the ability of sampling the velocity distribution of particles is especially important in the study of collisional evolution of solids , both large ( e.g. , * ? ? ?* ) and small ( e.g. , * ? ? ?lagrangian particles not only help measure their velocity dispersion , but also can directly be used to predict the collision parameters , neither of which is readily available in the fluid approximation . 
moreover , in the drag - dominated regime , particles do not dynamically equilibrate among themselves due to lack of direct interaction , and thus the effective pressure of particles is virtually zero .this implies that any spatial variation in the fluid description of particles is connected by a contact discontinuity , which is not trivial to be traced numerically accurately , especially when the contrast is significant .this issue is absent when using lagrangian particles .despite these advantages of employing lagrangian solid particles along with eulerian gas , major difficulties arise for the direct integration of the mutual drag force between gas and particles ( see section [ s : algm ] ) .the most difficult of these is that the presence of the particles can make the system of equations globally coupled , much like a diffusion equation , with which the temporal solution for any cell depends on the initial conditions for all other cells . with the diffusion equation , an initial delta functionis immediately broadened in time and becomes a gaussian distribution which is nonzero for the whole domain .surprisingly , using the standard particle - mesh approach to compute the mutual drag force in a particle - gas system also induces this property for the connected domain covered by the particles , as we show in appendix [ s : ns ] .consider , for example , a one - dimensional grid of gas with uniformly distributed particles . suddenly pushing one particle on one side of the domainwould drag not only its surrounding gas , but also all of the particles via their intermediate gas cells , up to the particles on the other side of the domain .though this effect of global propagation of information diminishes exponentially with distance , it indicates that using the particle - mesh method , the coupling between the gas and the particles is more complex than simple local particle - gas pairs .while there exist standard numerical techniques to treat a diffusion equation efficiently , e.g. , the crank nicolson method and the spectral method , the incongruence between the lagrangian and the eulerian descriptions makes these methods not applicable to the particle - gas systems with mutual drag force .therefore , the focus has been only on the direct integration of the drag force on the particles without operator splitting the drag force on the gas , as in , , and .this approach only relieves the time - step constraint due to small particles , while it remains problematic as strong solid concentration occurs .note also that in as well as in , an artificial increase of the stopping time is implemented for those cells with high local solid - to - gas density ratios in order to circumvent the time - step constraint , and the numerical accuracy of this approach has not been systematically demonstrated yet . in this work ,we devise a numerical algorithm that effectively disentangles the system of equations for the mutual drag force and allows for its direct integration on a cell - by - cell basis .for each cell , then , we use an analytical solution to assist in predicting the velocities of the gas and the particles at the next time step , so that the time - step constraint posed by the mutual drag force is lifted .this algorithm is described in detail in section [ s : algm ] . 
to validate our algorithm ,we use an extensive suite of benchmarks with known solutions in one , two , and three dimensions , presented in sections [ s:1d][s:3d ] , respectively .finally , we discuss the generality and possible applications of our algorithm in section [ s : conc ] .to begin with , we consider a system of eulerian gas and lagrangian solid particles moving in a differentially rotating disk in a rotating frame . in addition to the coriolis force , the centrifugal force , and the external axisymmetric gravitational potential , the gas andeach of the particles interact with their mutual drag force .we further adopt the local - shearing - sheet approximation , in which the origin of the frame is located at an arbitrary distance away from the rotation axis with the - , - , and -axes of the frame constantly in the radial , azimuthal , and vertical directions , respectively , and the frame co - rotates with the disk at the local angular frequency at its origin .we include also a constant -acceleration to the gas from , e.g. , a background pressure gradient , and a vertically varying -acceleration on both the gas and the particles due to , e.g. , the vertical component of the central gravity . without loss of generality ,we assume an isothermal equation of state for the gas with being the speed of sound . then the governing equations for the gas read the equations of motion for the particles read in the above equations , and are respectively the density and the velocity of the gas at grid point , and is the velocity of the -th particle which is located at ; see figure [ f : pm]a .the constant local angular velocity is parallel to the -axis , and is the dimensionless shear parameter at radial distance from the rotation axis , which is for a keplerian potential .both and are measured relative to the background shear velocity at their respective locations .the parameter is the stopping time of the mutual drag force between the gas and each of the particles . for simplicity , and more importantly , for a clean demonstration of our algorithm , we assume is a constant in this work and discuss the possibility of its generalization in section [ s : conc ] .the remaining variables are and : the former is some averaged particle - to - gas density ratio `` perceived '' by the gas in cell from the -th particle , while the latter is some averaged velocity of the surrounding gas `` perceived '' by the -th particle .we then operator split out the source terms for the mutual drag force , the rotation / shear , the external radial acceleration for the gas , and the external vertical acceleration for the particles from the full system of equations ( see appendix [ s : os ] ) . as a result , the two independent systems of equations now read [ e : hypsys ] and [ e : src ] the system of equations consists of the usual euler equations for fluid dynamics and an ordinary differential equation for the positions of the particles , both of which can be integrated with any standard technique .our task is therefore to devise a numerical algorithm to solve the system of equations without any time - step constraint due to the mutual drag force between the gas and the particles . even after the operator splitting, the system of equations still presents several major difficulties , as mentioned in section [ s : intro ] .firstly , the rather loose definitions of and are a manifestation of the distinctly different formalisms of the gas and the solid particles , i.e. , eulerian vs. 
lagrangian .their design is therefore one of the central factors that determine the accuracy as well as the efficiency of the algorithm concerning the mutual drag force . secondly , to relieve the time - step constraint posed by both small stopping time and large solid - to - gas density ratio , the characteristic velocity curves followed by each cell of gas and each particle need to be accurately captured by the numerical method .this is best achieved by solving the system of equations simultaneously in the same integrator , given the particle - gas coupling via the mutual drag force .worst of all , the density ratio usually includes all particles within a definite distance to cell , while the velocity is usually a weighted average from the surrounding cells around the -th particle .this implies that all cells by equation and all particles by equation are more than likely to be completely coupled via and , particularly when the particles are roughly uniformly distributed throughout the computational domain .these couplings make the solutions for the whole velocity field of the gas and all the particle velocities as a function of time dependent on each other ( see also appendix [ s : ns ] ) . in order to solve this system efficiently ,an implicit numerical method is usually employed , since the method is unconditionally stable against time step . however , even with an implicit method , the matrix representing the system of equations is huge , with the total number of cells plus particles on each side , and the inversion of this matrix can be a prohibitive computational task , since it can not be organized in simple band diagonal form ., this approach does not relieve the time - step constraint due to high local solid - to - gas density ratios .] in the following subsections , we construct our numerical algorithm with these difficulties in mind . in order to proceed , we first conceive the simplest possible scenario to solve the system of equations , which leads to the nearest - grid - point ( ngp ) scheme of the particle - mesh method . in this scheme , each individual cell containing some number of particles is considered independently ; the gas in one cell interacts only with the particles inside and vice versa . as a result ,the gas and the particles in a cell do not couple with those in any of the neighboring cells .this implies that and , where is the mass of the -th particle and is the volume of the cell . 
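Before writing down the full solutions, it is instructive to strip away rotation, shear and the external accelerations and consider the drag-only part of this cell-local problem, which already has a simple closed form: the center-of-mass velocity of the cell is conserved, the difference between the gas velocity and the mass-weighted mean particle velocity decays at the rate (1 + eps_tot)/t_s, and each particle's deviation from that mean decays at the rate 1/t_s. A sketch in our own notation (not the full update implemented in the code) is:

```python
import numpy as np

def ngp_drag_update(u, v, eps, t_s, dt):
    """Exact drag-only update for one cell in the NGP picture.

    u    : gas velocity in the cell (one component)
    v    : array of particle velocities in the cell
    eps  : array of particle-to-gas density ratios eps_j = m_j/(rho_g V)
    t_s  : stopping time (assumed equal for all particles)
    dt   : time step, which may be arbitrarily large

    Rotation, shear and the external accelerations of the full scheme are
    deliberately left out; only the mutual-drag structure is illustrated.
    """
    eps_tot = eps.sum()
    p = (eps * v).sum() / eps_tot                # mass-weighted particle mean
    v_cm = (u + eps_tot * p) / (1.0 + eps_tot)   # conserved by the drag pair

    decay_pair = np.exp(-(1.0 + eps_tot) * dt / t_s)   # gas vs. particle mean
    decay_ind = np.exp(-dt / t_s)                       # particle vs. mean

    diff = (p - u) * decay_pair
    u_new = v_cm - eps_tot * diff / (1.0 + eps_tot)
    p_new = v_cm + diff / (1.0 + eps_tot)
    v_new = p_new + (v - p) * decay_ind
    return u_new, v_new
```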
with these simplifications ,the system of equations can be solved analytically .the vertical direction is independent of the other directions , and the solutions for it read ,\label{e : uzt}\\ v_{j , z}(t ) & = \left[v_{j , z}(0 ) + u_z(0)\left(\frac{1 - e^{-{\epsilon_{tot}}\tau}}{{\epsilon_{tot}}}\right)\right]e^{-\tau } + \alpha t + \nonumber\\&\quad \left(u_z - \frac{\alpha t_s}{{1 + { \epsilon_{tot}}}}\right ) \left[1 - e^{-\tau}\left(1 - \frac{1 - e^{-{\epsilon_{tot}}\tau } } { { \epsilon_{tot}}}\right)\right]\nonumber\\&\quad + \left[g_z(z_{p , j } ) - \alpha\right]t_s\left(1 - e^{-\tau}\right ) \label{e : vzt},\end{aligned}\ ] ] where is the total solid - to - gas density ratio in the cell , is the number of -folding times for the drag force at , is the vertical center - of - mass acceleration , and is the vertical component of the initial center - of - mass velocity of the particle - gas system .the first term in equations and is the decaying mode from the initial velocities of the gas and the particles .the second term represents the center - of - mass motion of the system .the remaining terms stem from the coupling between the gas and the particles due to the mutual drag force and determine the terminal velocities relative to the center - of - mass motion .solving the system of equations for the horizontal directions is more involved because the coriolis force and the shear couple the - and -components of the velocities .nevertheless , the analytical solutions can still be found and are the first term in the above equations denotes the equilibrium velocities , which are calculated by where is the dimensionless stopping time , or the stokes number in the context of a rotating disk . when the shear parameter and , the keplerian frequency , these are simply the nakagawa sekiya hayashi ( nsh ) equilibrium solutions .the vectors and are defined by + \sum_j{\tilde{\epsilon}_{kj}}\left[{\boldsymbol{v}}_j(0 ) - \tilde{{\boldsymbol{v}}}\right]}{{1 + { \epsilon_{tot}}}},\\ { \boldsymbol{v } } & \equiv \frac{\left[{\boldsymbol{u}}(0 ) - \tilde{{\boldsymbol{u}}}\right ] - { \epsilon_{tot}}^{-1}\sum_j{\tilde{\epsilon}_{kj}}\left[{\boldsymbol{v}}_j(0 ) - \tilde{{\boldsymbol{v}}}\right ] } { { 1 + { \epsilon_{tot}}}},\end{aligned}\ ] ] which are respectively the initial center - of - mass velocity of the system and the weighted velocity difference between the gas and the particles , measured with respect to the equilibrium state .the constant is defined by and is the epicycle frequency .hence the second term in equations describes the constant , in - phase , epicycle motion of the bulk system , while the third term depicts the decaying , out - of - phase , epicyclic mode of the velocity difference between the gas and the particles .finally , the vectors are defined by ,\ ] ] and thus the last term in equations and denotes the decaying , epicyclic mode of the velocity difference of each individual particle relative to the center - of - mass of the particle system .it should be clear now what the advantages of operator splitting not only the mutual drag force but also the other source terms , as in equations , are .these source terms do not depend on any spatial derivative of the field variables or differences between particles , and an equilibrium state should be preserved numerically when they cooperate in balance .hence the equilibrium velocities and inherent in equations are important in guaranteeing that the equilibrium state can be maintained down to the machine precision .- folding times 
is required for these seeds to grow to appreciable amplitude compared to the dynamical timescale of many models of interest , these errors do not pose a serious issue .] note that the hydrostatic equilibrium is on the contrary determined by the system of equations , or the like , which should be independently maintained by the other integrator given the operator split . also with equations , all the epicyclic modes due to the coupling between the gas and the particles are accurately followed in time .most importantly , the velocity damping due to mutual drag force can be predicted with a time step of arbitrary size using the solutions , and thus the time - step constraint posed by the drag timescale in either direction can be relieved .even though the ngp scheme is simple and easy to implement , it is not suitable in many circumstances and a higher - order particle - mesh scheme is usually desirable . with the ngp scheme, each particle only interacts with the gas at the nearest grid point , so the particle behaves as if sitting at the grid point and the significance of its positional information within the cell is lost .moreover , the mass and the momentum density fields sampled via the particles are prone to be overwhelmed by the poisson noise . to have a 1% sensitivity , at least particles are required in each cell . on the other hand , any higher - order interpolation scheme that utilizes the positional information of the particlesdrastically overturns the situation ; as less as one particle per cell is sufficient to describe a signal of arbitrarily low amplitude .therefore , it is necessary to incorporate any particle - mesh scheme of choice into the solutions discussed in section [ ss : asol ] . to achieve this ,the solid - to - gas density ratio contributed by the -th particle in the cell at should be generalized to where is the weight function of the particle - mesh scheme in question .more often than not , the weight function has a physical interpretation ( e.g. , * ? ? ?* ) ; see figure [ f : pm]b .for example , in the cloud - in - cell ( cic ) scheme , the mass of each particle is distributed uniformly in a rectangular box of the cell size that is centered at the particle , i.e. , a uniform rectangular `` particle cloud '' . in the triangular - shaped - cloud ( tsc )scheme , each dimension of the particle cloud is doubled and the mass of the particle is distributed non - uniformly with a density peak at the cloud center and zero density at the cloud boundary . this scheme is so named since the density profile of the particle cloud resembles an isosceles triangle when viewed at the cloud center along any of the coordinate directions . the density function of the ngp scheme in this interpretation is simply a delta function .hence the term in equation can be interpreted as the fraction of the mass of the -th `` particle cloud '' that is enclosed by the cell at and is treated as part of the `` particle fluid '' in the cell .guided by this interpretation of equation , we have the following proposition to update the velocities of the particles . 1 .split each particle cloud into multiple sub - clouds and distribute them into the surrounding cells according to equation .each sub - cloud has the same initial velocity as their parent particle .we denote the velocity of the sub - cloud at cell as and thus for all .see figure [ f : pm]b .2 . for each cell ,treat the gas and all the sub - cloud inside as a multi - fluid system .hence identify as and as in equations .3 . 
find the velocity changes at time from the analytical solutions in equations , , and .see figures [ f : pm]c and d.[dvjk ] 4 .after all cells are integrated , collect the momentum changes of the sub - clouds back to the parent particles : 5 .update the velocities of the particles by .see figure [ f : pm]e . essentially , this procedure decouples the coupled system of equations and makes it possible to conduct the integrations on a cell - by - cell basis , and hence substantially reduces the amount of computational work required , as discussed in the introduction of this section .we note that the procedure outlined above is consistent with standard particle - mesh interpolation .since the momentum change of an individual particle is the sum of the momentum changes of its sub - clouds , \nonumber\\ & = g_z(z_{p , j}){\hat{{\boldsymbol{e}}}}_z - 2{\boldsymbol{\omega}}\times{\boldsymbol{v}}_j + q \omega v_{j , x } { \hat{{\boldsymbol{e}}}}_y + \nonumber\\&\qquad \frac{\sum_k w({\boldsymbol{r}}_k - { \boldsymbol{r}}_{p , j}){\boldsymbol{u}}_k - { \boldsymbol{v}}_j}{t_s } , \label{e : pmgas}\end{aligned}\ ] ] where is the gas velocity at . by comparing this with equation , it can be seen that , which proves that the gas velocity experienced by the particle is the standard particle - mesh interpolation from its surrounding cells . in principle , the gas velocity in cell at time can be similarly obtained along with step [ dvjk ] above via the analytical solutions in equations , , and , and thus can be updated directly . as will be shown in section [ ss : lsi ], the growth rate of a linear mode for the streaming instability indeed converges with resolution using this approach .however , the convergence rate is relatively poor compared with that using the explicit integration . in the next subsection , we devise further steps for the update of the gas velocity that significantly improve the benchmarks .the reason for the relatively poor performance of directly using the analytical solutions to update the gas velocity after the steps described in section [ ss : uppar ] is that this approach remains local from the perspective of the gas .even though the gas in a cell receives sub - particle clouds from the surrounding cells , it does not interact with the neighboring gas .this can be seen in the system of equations with the substitutions and , where the gas velocity in cell is the sole state variable for the gas and no coupling for gas velocities between different cells exists . 
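Before continuing with the gas update, the particle side of the procedure (steps 1-5) can be sketched in one dimension as follows, reusing the drag-only cell update from the earlier sketch. The real scheme also carries the rotation, shear and external-force terms and works in three dimensions; the TSC weights below are the standard quadratic-spline weights over the three nearest cells, and all function names are ours.

```python
import numpy as np

def tsc_weights(x, dx):
    """Nearest grid index and triangular-shaped-cloud weights of a particle
    at position x over the cells (i-1, i, i+1); the weights sum to one."""
    i = int(np.rint(x / dx))
    d = x / dx - i                       # offset from the nearest cell center
    return i, np.array([0.5 * (0.5 - d) ** 2,
                        0.75 - d * d,
                        0.5 * (0.5 + d) ** 2])

def update_particles(u, rho_g, x_p, v_p, m_p, dx, t_s, dt):
    """Steps 1-5 in one dimension: split each particle into sub-clouds,
    apply the exact drag-only cell update, and collect the momentum changes
    back onto the parent particles.  The gas velocities are not updated
    here; that is the separate step discussed in the text."""
    n = u.size
    members = [[] for _ in range(n)]     # per cell: (particle id, weight, eps)
    for j, (x, m) in enumerate(zip(x_p, m_p)):
        i, w = tsc_weights(x, dx)
        for s, wk in zip((-1, 0, 1), w):
            k = (i + s) % n              # periodic wrapping, for brevity
            members[k].append((j, wk, wk * m / (rho_g[k] * dx)))

    dv = np.zeros_like(v_p)
    for k in range(n):
        if not members[k]:
            continue
        ids = np.array([j for j, _, _ in members[k]])
        wts = np.array([w for _, w, _ in members[k]])
        eps = np.array([e for _, _, e in members[k]])
        # ngp_drag_update is the drag-only helper from the earlier sketch.
        _, v_new = ngp_drag_update(u[k], v_p[ids], eps, t_s, dt)
        # Each sub-cloud carries the mass fraction w of its parent, so the
        # collected momentum change reduces to a weighted velocity change.
        np.add.at(dv, ids, wts * (v_new - v_p[ids]))
    return v_p + dv
```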
in reality , however , the gas in neighboring cells should couple via the drag force with the interpenetrating particle clouds as interpreted in the particle - mesh method .this missing coupling can be remedied , as inspired by the algorithm suggested in for distributing the back reaction of the drag forces from particles to gas .we note that the velocity change of each particle acquired by the steps in section [ ss : uppar ] contains all the information of the mutual drag force between the particle and the surrounding gas .that is , the particle has sampled the spatial variation in the velocity field of the gas to determine its own velocity change , as shown in equation .this process can then be reversed since the mutual drag force forms an action - reaction pair .each particle can be considered now as a unified particle cloud undergoing a momentum change instead of a group of independent sub - clouds .this momentum change can then be redistributed onto the grid by standard particle - mesh assignment .see figure [ f : pm]f .one more difficulty remains , though , since additional source terms are included in our system and thus the total momentum of the gas and particles in each cell is not conserved .this difficulty can be resolved with the center - of - mass frame approach , in which the mutual drag force cancels out . combining equations , , and gives where is the center - of - mass velocity .equation can be analytically integrated and gives the change at time as with since are known , equation can be rearranged to find the velocity change of the gas ( figure [ f : pm]f ) .this completes our algorithm . as a final note ,the sheared periodic boundary conditions for the local - shearing - sheet approximation require some attention in our algorithm . for the eulerian description , they state that , where is any of the dynamical fields , is the radial dimension of the computational domain , and is the time at the beginning of each time step instead of the size of a time step used liberally in the previous subsections .note , however , that our algorithm completely decouples the gas fields and no simultaneous information in any pair of adjacent cells is needed in any of our steps . on the other hand ,all of the coupling is achieved via the splitting of the particle `` clouds '' .the only place that the boundary conditions ( as well as domain decomposition in parallel computing ) are required , then , is in the particle - mesh weight function , specifically in equations , , and .therefore , the radial boundary conditions can simply be executed by shifting the positions of the particles near the radial boundaries by [ e : parbc ] when these particles are cast into the other side of the boundary , in which the upper / lower sign is taken for the left / right boundary .- coordinate into the limits of the ghost cell a particle is sent to , by applying the azimuthal periodicity , where is the azimuthal size of the computational domain . 
]the azimuthal and the vertical boundary conditions for our algorithm can be similarly implemented by revising equations .all the other properties of the particles remain unchanged .we note that this approach also eliminates the need of interpolation as required in the implementation of , the latter of which introduces additional numerical errors near the radial boundaries in particle - mesh assignment .we have implemented our algorithm in the pencil code , a high - order finite - difference simulation code for astrophysical fluids and particles .the code employs sixth - order centered differences in space and third - order runge kutta integration in time .the system of equations is operator split out of the runge kutta steps and thus separately integrated by the algorithm described in this section . throughout this work ,we restrict ourselves to the tsc weight function , with which the interpolation error is of second order in cell size . in the following ,we validate the algorithm as well as our implementation on several systems with known solutions .sedimentation of solid particles towards the mid - plane of a gas disk is one of the most important topics in the theory of planet formation .it is considered to be the first process in the core accretion scenario , creating a dense layer of solid materials that can later concentrate and become seeds of planetary cores .the degree of sedimentation intimately couples with the gas dynamics , especially in turbulent disks , and thus numerical simulations are often required .we hereby use a simple form of the sedimentation process as the first benchmark against our integrator .we consider a single particle moving vertically through a stationary gas .the particle undergoes a gravitational acceleration of the form , where is the vertical natural frequency , and the gas drag of stopping time .the equation of motion for the particle is then this system is the well - known damped harmonic oscillator , and its analytical solutions are readily available . using our algorithm , equation is equivalently being operator split into two separate equations as the first of which is in the runge kutta integrator and thus any integrator of order at least one renders this solution . ] while the second is in the split integrator of section [ s : algm ] .note that in this case the mass of the particle is effectively zero so that the gas is unaffected and remains stationary .the particle is released at a height of at rest , i.e. , and .we integrate this system with both the godunov and the strang splitting methods , which are formally first- and second - order accurate , respectively ( see appendix [ s : os ] ) .figure [ f : oscex ] compares the numerical and the analytical solutions for the cases of a simple harmonic ( ) , underdamped ( ) , critically damped ( ) , and over - damped ( ) oscillator .we use a fixed and unusually large time step of to highlight the numerical errors in the comparison .even with such a large time step , the numerical solutions agree reasonably well with the analytical ones , especially for more highly damped systems .the strang splitting does perform better than the godunov splitting in particle position , but no appreciable difference appears in particle velocity between the two splitting methods . 
in any case , dispersive errors do exist in both position and velocity for the case of oscillatory systems .however , no diffusive errors exist in either variable , and having this property is important in accurately establishing the scale height of the particle layer , which is one of the critical factors in driving planet formation .finally , notice that although the time step is much larger than the stopping time in the over - damped system and thus the initial acceleration of the particle is not resolved , the numerical solution still accurately captures the terminal speed at the very first time step , and even more so at later times .figure [ f : oscerr ] demonstrates the accuracy and convergence properties of our algorithm for two over - damped harmonic oscillators , one with and the other with .we cover a wide range of time step so that both and regimes are included . for the errors in position , the godunov splitting shows the expected first - order convergence .on the other hand , the strang splitting shows the expected second - order convergence only for small while first - order convergence for large , the latter of which might be due to the unresolved initial acceleration of the particle .the transition occurs at , and it seems that the error approaches the same asymptote towards small irrespective of the stopping time .nevertheless , the strang splitting is indeed more accurate in position than the godunov splitting , albeit only slightly at large time step . for the errors in velocity, there exists no difference between the godunov and the strang splittings , which is consistent with figure [ f : oscex ] , and this property does not depend on the stopping time .similar to the convergence in position with the strang splitting , the convergence in velocity for both splittings shows first order for large and second order for small , and the transition occurs at . in any case , the smaller the stopping time , the more accurate the results are at any given time step .note that it is never stable to integrate this system explicitly with , which is the regime of interest in this work .furthermore , in a typical model , one usually operates on the range .hence we consider these results to be fairly accurate .we next consider the interpenetrating streaming motions between uniform gas and uniformly distributed particles .it is the same test performed by , in which it was called particle - gas deceleration test , and it is also the linear - drag case of the dustybox suite presented by . in this scenario ,the system of equations reads where and are the velocities of the gas and the particles , respectively , and is the constant solid - to - gas density ratio . with the initial conditions and , the solutions for the velocities are ,\label{e : usu}\\ v(t ) & = v_0 e^{-(1+\epsilon)t / t_s } + u_0\left[1 - e^{-(1+\epsilon)t / t_s}\right],\label{e : usv}\end{aligned}\ ] ] where is the center - of - mass velocity .the displacement for each of the particles is given by + u_0 t.\label{e : uss}\end{aligned}\ ] ] in the center - of - mass frame , .the only relevant scales of time and velocity in this system are the stopping time and the speed of sound , respectively . hence the time and all the velocities can be normalized by these two scales , and this in turn fixes the length scale at . 
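The structure of this benchmark is simple enough to reproduce in outline. In the sketch below, an explicit Euler position drift stands in for the Runge-Kutta part of the code, while the gravity-plus-drag kick uses the exact solution of dv/dt = g - v/t_s at frozen z, so the step remains stable for time steps much larger than t_s; the reference comes from a high-accuracy direct integration of the damped oscillator. Parameter values are illustrative and are not meant to reproduce the figures.

```python
import numpy as np
from scipy.integrate import solve_ivp

def split_oscillator(z0, v0, omega, t_s, dt, t_end):
    """Godunov-split integration of a vertically settling particle: drift
    the position explicitly, then apply gravity plus drag with the exact
    solution of dv/dt = -omega**2 * z - v/t_s at frozen z."""
    z, v, t = z0, v0, 0.0
    out = [(t, z, v)]
    while t < t_end - 1e-12:
        z += dt * v                                        # drift
        g = -omega ** 2 * z
        v = g * t_s + (v - g * t_s) * np.exp(-dt / t_s)    # exact kick
        t += dt
        out.append((t, z, v))
    return np.array(out)

# Reference: accurate direct integration of the damped harmonic oscillator.
omega, t_s = 1.0, 0.25                                     # over-damped case
ref = solve_ivp(lambda t, y: [y[1], -omega**2 * y[0] - y[1] / t_s],
                (0.0, 10.0), [1.0, 0.0], rtol=1e-10, atol=1e-12,
                dense_output=True)
num = split_oscillator(1.0, 0.0, omega, t_s, dt=0.5, t_end=10.0)
err = np.max(np.abs(num[:, 1] - ref.sol(num[:, 0])[0]))
print("max position error with dt = 2 t_s:", err)
```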
to test this system with our algorithm , we set up a one - dimensional , periodic grid of gas and uniformly distributed lagrangian particles .the computational domain has a length of so that a time step greater than the stopping time can be covered in the test .we allocate one particle at the center of each cell .the particles have an initial velocity of while the gas has a uniform initial velocity of , i.e. , the gas and the particles move in opposite directions .the errors are measured at a fixed final time of , and the cell size is varied to test the numerical convergence .we note that for , it takes exactly one time step to reach .finally , we experiment with a wide range of solid - to - gas density ratio , which is the only free parameter in the system .we find that the final velocities in all cases are accurate to the analytical solutions , equations and , close to the machine precision .although the velocities do not exactly remain uniform , the noise introduced is close to the machine - precision level .only slight increase in the noise level can be observed for high resolutions and thus small time steps , and this can be attributed to the round - off errors .the high degree of accuracy in velocities is hence not surprisingly due to our analytical integration of the velocities in section [ ss : asol ] , assisted by the high degree of uniform motion .therefore , the only error remains to be considered is the displacement of the particles .similar to the velocities , the noise in the displacements of the particles is small and close to the machine - precision level , and thus the separation between adjacent particles remain highly constant . figure [ f : userr ] shows the absolute error in the mean displacement of the particles as a function of cell size , and it can be seen that numerical convergence is achieved in a wide range of solid - to - gas density ratio with both the godunov and the strang splittings .a few features can be observed from figure [ f : userr ] .firstly , the accuracy is relatively insensitive to the solid - to - gas density ratio for , while it improves appreciably for .secondly , the godunov splitting demonstrates the expected first - order convergence .the strang splitting , on the other hand , only achieves the second - order convergence when is sufficiently small such that the mutual drag timescale of is resolved .this behavior is similar to what is found in section [ ss : osc ] .thirdly , the strang splitting renders more accurate displacements when .however , the difference between the strang and the godunov splittings significantly reduces when .finally , we note that in all cases , the errors are all below the cell size , and hence the algorithm gives more precise displacements of the particles than what the resolution can provide for .in the local - shearing - sheet approximation ( see , e.g. , equation or ) , the existence of the shear advection terms , where is any dynamical field variable , makes }$ ] with a time - dependent -wavenumber and a constant -wavenumber a natural choice of the basis function .this basis depicts a two - dimensional wave , in which the power in the azimuthal direction feeds the power in the radial direction , winding up the structure into a tighter and tighter spiral wave towards trailing morphology . substituting this basis into the two - fluid description of the particle - gas system without background radial gas pressure gradient ( i.e. 
, in equation ) , derived a set of ordinary differential equations for the ( complex ) amplitudes of the wave : [ e : swav ] ,\\ { \frac{\mathrm{d}\hat{u}_x}{\mathrm{d}t } } & = 2\omega\hat{u}_y - \frac{\epsilon_0}{t_s}\left(\hat{u}_x - \hat{v}_x\right ) - \frac{i k_x(t)c_s^2}{\rho_{g,0}}\hat{\rho}_g,\\ { \frac{\mathrm{d}\hat{u}_y}{\mathrm{d}t } } & = -(2 - q)\omega\hat{u}_x - \frac{\epsilon_0}{t_s}\left(\hat{u}_y - \hat{v}_y\right ) - \frac{i k_y c_s^2}{\rho_{g,0}}\hat{\rho}_g,\\ { \frac{\mathrm{d}\hat{\rho}_p}{\mathrm{d}t } } & = -\rho_{p,0}\left[i k_x(t)\hat{v}_x + i k_y\hat{v}_y\right],\\ { \frac{\mathrm{d}\hat{v}_x}{\mathrm{d}t } } & = 2\omega\hat{v}_y - \frac{1}{t_s}\left(\hat{v}_x - \hat{u}_x\right),\\ { \frac{\mathrm{d}\hat{v}_y}{\mathrm{d}t } } & = -(2 - q)\omega\hat{v}_x - \frac{1}{t_s}\left(\hat{v}_y - \hat{u}_y\right),\end{aligned}\ ] ] where and are the background uniform densities of the gas and the particles , respectively , , and is the isothermal speed of sound .this system of equations can be readily integrated numerically , and its solution serves as a convenient analytical benchmark to validate our implementation of the sheared periodic boundary conditions as well as the mutual drag force . applying our algorithm to this shear - wave test , we evolve the particle - gas system for a square domain in the -plane with a single mode and . we use a 64 grid and a shear parameter of , and allocate one particle per cell .the initial conditions are such that while .note that in this case , it takes 22 for the -wavenumber to reach the nyquist frequency , when the numerical diffusion and/or aliasing becomes significant . in what follows , we vary the values of and to assess the performance of our algorithm , where .we only present the results with the godunov splitting method and note that the strang splitting only slightly improves the accuracy .first we consider the case of and , which makes the test exactly the same as was done in .figure [ f : swav1 ] shows the comparison between the analytical solutions obtained from integrating equations and the measurements of the amplitudes on the simulation data obtained with our algorithm .all the amplitudes of the shear wave from the simulation agree well with the analytical solutions for several orbital periods , with some minor deviation in the velocity field of the particles for .note that the time steps used here are significantly larger than those used in .next we probe the case of small particles with and , the result of which is shown in figure [ f : swav2 ] . in this case , there exists an initial abrupt jump in the and the fields , followed by a smooth oscillatory evolution in the amplitudes as in the previous case .an integration with the explicit method would require an extremely small time step to resolve and accurately capture this initial jump ; it is not even numerically stable if the time step is larger than the width of this feature .our algorithm , in contrast , accurately predicts the velocity fields with a time step much longer than the timescale of this feature . in spite of some initial minor deviation of the density fields ,the solutions with our algorithm closely follow the analytical ones up to , after which some noticeable deviation appears in both the gas and the particle fields . finally, we barge into the solid - dominated regime with and , as shown in figure [ f : swav3 ] . 
similar to the previous case ,a yet larger initial jump occurs in the field but a lesser one in the fields , and our algorithm accurately finds the first velocity fields with one long time step . although the algorithm also captures the density field of the particles relatively accurately , a significant error exists in the density field of the gas at the very first time step .this error affects the accuracy of the subsequent evolution of the shear wave , the most prominent of which is in the frequency of the oscillation in the amplitude of each field .nevertheless , the general evolution of this shear wave is still reproduced by our algorithm up to , where especially noticeable are the maximum amplitudes achieved in each oscillation of the fields .this last case highlights that some inaccuracy in density fields can occur when there exists unresolved transient behavior in velocity , a situation reminiscent of the sedimentation benchmark presented in section [ ss : osc ] .the magnitude of this numerical error depends on how strong the change in velocity is , and in this test problem , increases with increasing background solid - to - gas density ratio .note that this kind of transient behavior often stems from initial conditions which are significantly out of equilibrium , as in this case , or impulses imposed onto the system in its course of evolution .once the impulse subsides and the system resumes smooth evolution , i.e. , one that is resolved by the numerical time steps , our algorithm should exhibit high degree of accuracy , as demonstrated in the first case ( or even the second case ) of this section as well as other benchmarks presented in this work . on the other hand , if the transients are of importance , one can always restrict the time steps so that those can be accurately captured . as demonstrated in figure [f : swav4 ] , simply resolving the initial transient jump by four time steps makes the evolution of all fields accurate up to , a similar performance achieved in the earlier cases of figures [ f : swav1 ] and [ f : swav2 ] .an important discovery in the theory of planet formation was that of the streaming instability by .this instability efficiently concentrates centimeter / meter - sized solid particles in a protoplanetary gas disk to trigger gravitational collapse and form kilometer - sized planetesimals , circumventing the problematic fast radial drift of the solid particles ( e.g. , * ? ? ?the mutual drag force between the gas and the solid particles moving in a differentially rotating disk is an essential ingredient in this instability , and hence the results of the analysis carried out by , serendipitously , can be used as a rigorous touchstone to validate any numerical algorithm concerning the integration of this kind of the particle - gas system .this is the goal of this section .the streaming instability is a local , two - dimensional , axisymmetric instability with interpenetrating gas and particles under differential rotation . 
ignoring vertical gravity and assuming an isothermal equation of state for the gas , have performed linear analysis on this system and found the growth rate and wave speed of this instability and the corresponding eigenvector as a function of the wavenumber of the perturbation as well as all the relevant dimensionless parameters for the system .the eigensystem was expressed in the frame of the center - of - mass velocity of the particle - gas system , which was not convenient for direct comparison with numerical simulations , and hence have transformed these solutions back into the local - shearing - sheet frame and expressed them as a standing wave in the vertical direction while propagating in the horizontal direction .the resulting eigenfunction is either even e^{st}\cos k_z z,\label{e : lsie}\end{aligned}\ ] ] or odd e^{st}\sin k_z z,\label{e : lsio}\end{aligned}\ ] ] in the vertical direction , where is the complex amplitude , is the wavenumber , and is the complex angular frequency .the vertical component of the velocities of the gas and the particles assumes the odd parity , while all other dynamical fields assume the even parity . on top of the background equilibrium state as given in equations and with an initially small amplitude for the perturbations , equations and then serve as both the initial conditions and the analytical solutions against which our algorithm is validated .four eigensystems have been published in the literature , all of which are the ( nearly ) fastest growing modes at the respective set of the dimensionless parameters .we first test the lina and the linb modes in .they have the same dimensionless stopping time of as well as the same background radial gas pressure gradient of , where is the isothermal speed of sound and is the local keplerian angular frequency .on the other hand , the lina mode has a background solid - to - gas density ratio of , while the linb mode has . in our tests ,the computational domain is such that one wavelength of the mode is fit in each direction .we use one particle per cell , and use the method described in appendix c of to seed a perturbation of in the density field of the particles .given their positions , the initial velocities of the particles are then set by the perturbation equations and as well as the equilibrium drift velocity equations and .it is trivial to set the initial density and velocity field of the gas using equations and as well as equations and .we measure the complex amplitude of the fourier mode as a function of time in our simulation data , and then use the linear regression on the magnitude of the amplitude to find the ( exponential ) growth rate .we vary the resolution to seek the convergence of our measured rate as compared to the analytical one . for comparison purposes , we use three different integration schemes for this test .one is the explicit integration , as originally used by .the other two are our algorithm either with or without particle - mesh back - reaction ( pmbr ) to the gas velocity field , as described in section [ ss : upgas ] , and we only present the results with the godunov splitting here .the results are shown in figures [ f : lina ] and [ f : linb ] . as can be seen ,the growth rates for all three schemes converge to the theoretical one with resolution . 
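for concreteness , the growth - rate measurement described above can be sketched as follows ; the array shapes , mode indices , and field names are illustrative assumptions rather than the actual analysis code .

import numpy as np

def mode_amplitude(field, i_x=1, i_z=1):
    # complex amplitude of a single fourier mode of a two-dimensional snapshot;
    # the indices select the mode that was seeded (an assumption for this sketch)
    return np.fft.fft2(field)[i_z, i_x] / field.size

def growth_rate(times, snapshots, i_x=1, i_z=1):
    # exponential growth rate from a linear fit of log|amplitude| versus time
    amps = np.array([abs(mode_amplitude(f, i_x, i_z)) for f in snapshots])
    slope, _ = np.polyfit(times, np.log(amps), 1)
    return slope

the fitted slope is then compared with the theoretical growth rate for each field and each resolution .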
for the lina mode , our algorithm _ with pmbr _ achieves virtually the same performance as the explicit integration . while it requires a resolution of around 32 - 64 points per wavelength for the growth rate of the field to approach within 5% of the theoretical one , it requires around 16 - 32 points per for those of the , , and fields , around 16 points per for that of the field , and only around 4 - 8 points per for that of the , , and fields . albeit convergent , our algorithm _ without pmbr _ has a much slower convergence rate , requiring a resolution of around 64 - 128 points per for the growth rates of most fields , except around 32 - 64 points per for those of the and fields and around 16 - 32 points per for that of the field . as for the linb mode , our algorithm _ with pmbr _ again attains growth rates similar to those achieved by the explicit integration . a resolution of around 32 - 64 points per is required , except for the field , which requires only a resolution of around 16 - 32 points per . our algorithm _ without pmbr _ remains inferior , requiring a resolution of around 128 - 256 points per in all fields except for the field , which requires a resolution of around 64 - 128 points per . the other two modes are the linc and the lind modes from . these modes push the limit of the dimensionless stopping time down to and , respectively . they have the same solid - to - gas density ratio of , as well as the same background radial gas pressure gradient as do the lina and the linb modes . our test results are shown in figures [ f : linc ] and [ f : lind ] . again , the growth rates for all three schemes demonstrate convergence towards the theoretical one . moreover , the performance of our algorithm _ with pmbr _ remains essentially the same as that of the explicit integration for all fields . for the linc mode , a resolution of around 32 - 64 points per wavelength is required for all the velocity fields , while a resolution of around 16 - 32 points per and that of around 128 - 256 points per are required for the and the fields , respectively . for the lind mode , a resolution of around 64 - 128 points per is required for all the velocity fields , while that of around 128 - 256 points per is required for both the density fields . we note that the performance of these integrators for these modes is relatively better in comparison with that reported in . as for our algorithm _ without pmbr _ , the convergence is again much slower than that of the other two schemes , and it does not even converge to within 5% of the theoretical rate at a resolution of 512 points per for the lind mode , except for the density fields , which require a resolution of around 256 points per . these results demonstrate the excellent performance of our algorithm in reproducing the linear modes of the streaming instability , especially over a wide range of the dimensionless stopping time . moreover , they justify the necessity of the implementation of pmbr to the gas velocity in our algorithm as described in section [ ss : upgas ] . we note here that there exists no noticeable improvement in the test results when using our algorithm with the strang splitting method . in the next subsection , we continue to explore our algorithm in the nonlinear saturation of the streaming instability .
in a companion paper to , investigated the nonlinear saturation of the streaming instability and found two distinctive patterns that could develop in this stage .one is for marginally coupled solid particles with , while the other is for strongly coupled solid particles with .the particles initially concentrate themselves into short stripes that are close to horizontal but tilt alternately in the vertical direction with a separation consistent with the wavelength of the fastest growing mode of the streaming instability .these stripes then undergo inverse cascade and merge themselves into larger and larger stripes that tilt more and more towards the vertical direction . in the fully saturated stage ,long , strongly concentrated filaments with a wide separation float either upwards or downwards with much reduced radial drift motion . on the other hand ,the particles initially create numerous small voids by random , locally divergent motions .these voids then undergo inverse cascade and merge themselves into larger and larger voids , driving the particles into their rims . in the fully saturated stage ,voids of various sizes move around in random directions , while the particles stream along the alleyways in between these voids . using our algorithm along with the godunov splitting method , we rerun model ba and model ab in .model ba contains marginally coupled particles with and has a solid - to - gas density ratio of , while model ab contains strongly coupled particles with and has a solid - to - gas density ratio of .both models have a background radial gas pressure gradient of . adopting the same computational domains and resolutions as used in , we construct a 256 grid with a domain of 2 and 0.1.1 for model ba and model ab , respectively , where is the scale height of the gas .we allocate on average one particle per cell , which is sufficient in capturing the overall density distribution function of particles at a fixed resolution , as has been shown by .the initial velocities of the gas and the particles are those under mutual drag equilibrium , equations . the initial density field of the gas is uniform , while the initial positions of the particles are random to seed the streaming instability at all scales .the resulting evolutions of the density field of the particles are shown in figure [ f : nsi ] .as can be seen , we have reproduced the two aforementioned distinct patterns in the saturated state of the streaming instability , with quantitative similarity as the same models in .this again illustrates the consistency between the original explicit integration and our algorithm in this work .finally , we explore our algorithm with its full three - dimensional glory . to the best of our knowledge , there exists no appropriate analytical model that contains all the physics considered in our algorithm .we therefore resort to our earlier nonlinear simulations published in as our base for comparison , which employed the technique of explicit integration . in , we systematically studied the dependence of the streaming turbulence in a sedimented layer of solid particles on the computational domain of the simulations . 
with particles of dimensionless stopping time and a background gas pressure gradient of , where is the isothermal speed of sound and is the local keplerian angular frequency, we found that multiple radial filamentary concentrations of solids can be driven by the streaming instability and that the characteristic separation between adjacent filaments is on the order of 0.2 , with being the vertical scale height of the gas . using our algorithm along with the godunov splitting method, we rerun otherwise exactly the same model in that has a computational domain of 1.6.6.2 and a resolution of 160 points per . figure [ f : sedsig ] compares the resulting radial concentrations of solids between the simulations in and in this work .as can be seen in the comparison , we obtain excellent agreement between the two simulations . despite minor differences in the stochastic erosion and accretion of solids from and onto the filaments ,the total number of filaments , their radial drifts , and the magnitude of the solid density are all quantitatively and evolutionarily similar .this experiment demonstrates the consistency between the two techniques in three dimensions and yet again the robustness of our algorithm . in figure [f : seddt ] , we compare the time steps used in the two simulations . for the simulation in ,the time steps were initially determined by the hydrodynamic courant condition but soon dropped by roughly one order of magnitude beginning at , where is the local orbital period .this is predominantly limited by the drag time , which is the timescale for the exponential decay in the relative velocity between the gas and the particles due to their mutual drag interactions , as discussed in section [ s : intro ] . in the pencil code , it is approximated by , where is the gas density in the cell , is the volume of the cell , is the total number of ( super-)particles in the cell , is the mass of a ( super-)particle , and is the physical stopping time .note that we used 100% of as our time - step limiter in , for cells with high local solid - to - gas density ratios ; see section [ s : intro ] . ]which , although speeds up the simulations , could amount to a relative error of roughly 10% in relative velocity in the worst - case scenario .the pencil code defaults to use 20% of as a time - step limiter to guarantee better accuracy , and thus we also show the time steps that would have been used in this case in figure [ f : seddt ] . as can be seen , the drag time begins to dominate earlier at and the time steps drop by another order of magnitude than those used in . on the other hand ,our algorithm is not limited by the drag time at all .as shown in figure [ f : seddt ] , the time steps remain almost constant and are only determined by the hydrodynamic courant condition . dominates the time - step limiter , the courant number used is irrelevant .] these time steps are in drastic contrast to those used in the simulation of , the latter of which are more than one order of magnitude smaller , and would have been more than two orders of magnitude smaller if 20% of were adopted .therefore , this experiment also illustrates the exceptional efficiency of our algorithm , which we set out to achieve in this work .in summary , we have devised an accurate , efficient numerical algorithm to directly integrate the mutual drag force in a system of eulerian gas and lagrangian solid particles . 
despite the entanglement between the gas and the particles due to the conventional particle - mesh construct, we have effectively decomposed the globally coupled system of equations for the mutual drag force and been able to integrate this system on a cell - by - cell basis .analytical solution exists for the temporal evolution of each cell , which we use to achieve the highest degree of accuracy .this solution relieves the time - step constraint posed by the mutual drag force , making simulation models with small particles and/or strong local solid concentration significantly more amenable .we have used an extensive suite of benchmarks with known solutions in one , two , and three dimensions to validate our algorithm and found satisfactory consistency in all cases .even though the strang splitting is formally higher order , we find that its use with our algorithm does not offer significant advantage over the godunov splitting , especially in multidimensional models . in our one - dimensional benchmarks ,both splittings predict virtually the same velocities , and hence they give the same accuracy in velocities and have the same behavior in numerical convergence . as for the accuracy in particle positions, although the strang splitting demonstrates expected second - order convergence for small time steps , it degrades to first - order convergence and does not give noticeably improved accuracy than the godunov splitting for large time steps . in our multidimensional benchmarks , on the other hand , no appreciable difference in either density or velocity fields for the gas and the particles exists between the two splitting methods .we emphasize that since our objective is to relieve the time - step constraint due to the mutual drag force , the use of large time steps is of more interest here .moreover , note that since the strang splitting requires one more run of either our algorithm or the other integrator than the godunov splitting ( see appendix [ s : os ] ) , the former is significantly more expensive than the latter , let alone other operator splitting methods of even higher orders .therefore , we find that employing the godunov splitting with our algorithm is sufficient and more economical for practical purposes. the simulation models to which our algorithm can be applied are more general than what have been presented in this work .first of all , our algorithm only concerns the mutual drag force as well as the rotation / shear - related source terms and the background accelerations .all other physical processes are consolidated in the other half of the operator - split system of equations as in equations and are integrated independently of our algorithm . in this regard ,the particle - gas system that can be considered using our algorithm is unconstrained and potentially arbitrary .moreover , our algorithm is not limited to the local - shearing - sheet approximation .for instance , a non - rotating inertial frame , either rectangular or curvilinear , is simply a degenerate case of equations with the angular frequency , and thus the same solution of section [ ss : asol ] applies except that the equilibrium velocities ( equations ) need to be modified accordingly . 
as for a rotating cylindrical coordinate system , the shear acceleration terms in equations are replaced by the centrifugal force as well as the external gravity , but an analytical counterpart of the solution in section [ ss : asol ] can still be obtained from the modified system of equations .lastly , note also that it is not required for the particles to have the same mass , as shown in equation .some elaboration is needed when generalizing our procedure for a system with particles of various stopping times . a closed - form analytical solution as that in section [ ss : asol ]is only possible when the stopping time is a constant for all particles . for the case of independent stopping time for each particle ,nevertheless , it becomes merely a matter of adopting a numerical method that can accurately and efficiently approximate the solution of the corresponding system of equations in place of the analytical solution of section [ ss : asol ] .given the stiffness of the mutual drag force , an implicit method is likely to be required for this purpose .however , since we have effectively decoupled the system and made it possible to integrate it on a cell - by - cell basis , the individual integration task for each cell does not amount to an unmanageable size of matrix inversion , for instance .therefore , we expect that this extra complexity would not lead to noxious reduction in computational efficiency with our algorithm .having the time - step constraint due to stiff mutual drag force removed , our algorithm presented in this work will prove to be useful in the study of dust dynamics in protoplanetary disks .specifically , a model containing numerous mm / cm - sized pebbles and/or with high mass loading of solids can be simulated as efficiently as a model containing m - sized boulders without strong concentration of them .in other words , it becomes feasible to evolve small solid particles interacting with a gas disk in relatively high numerical resolution and long simulation time .this capability is particularly important in further study of the streaming instability with mm / cm - sized pebbles and its connection to planetesimal formation , as pioneered by , since the wavelength of the fastest growing mode decreases with decreasing particle size and the corresponding growth rate is low when the initial solid abundance is low . for the same reasons , our algorithm may also find its use in investigating the dynamics of pebbles with the influence of turbulence , vortices , or other large - scale structures in the gas disk , and the corresponding observational consequences .we thank james m. stone for motivating this project .we would also like to thank shu - ichiro inutsuka for his discussion on this work .significant improvement in the clarity of the manuscript was attributed to the referee s comments .part of the code development and the benchmarks presented in this work were performed on resources provided by the swedish national infrastructure for computing ( snic ) at lunarc in lund university , sweden .this research was supported by the european research council under erc starting grant agreement 278675-pebble2planet .a. j. is grateful for financial support from the knut and alice wallenberg foundation and from the swedish research council ( grant 2010 - 3710 ) .in this section , we demonstrate by a simple example that the mutual drag force between the gas and the particles leads to numerical stresses under the particle - mesh construct . 
consider a system of uniform gas and uniformly distributed particles .suppose that all the velocities of gas and particles align in the -direction and depend on only .the evolution of this system then becomes a one - dimensional problem in the -direction with all the velocities in the transverse direction , so the densities of both the gas and the particles remain uniform over time .we construct a regular grid for the gas with one particle at each cell interface , i.e. , the center of cell is at and the -th particle is at , where is the cell size .the system of equations for this problem reads , from equations and , where is the velocity of gas in cell , is the velocity of the -th particle , is the ( uniform ) solid - to - gas density ratio , is the stopping time , and is the particle - mesh weight function .we now consider the special case of , where is the dimensionless stopping time , i.e. , normalized by the timescale of interest for the system . in this limit ,equation implies that for all .we further adopt the cic weight function such that and for all .equation then becomes for all .this equation is exactly the same as the diffusion equation with a diffusion coefficient of when spatially discretized by second - order accurate , centered finite differences .we note that using a higher - order weight function as tsc does not change this diffusion coefficient , but only increases the accuracy of the spatial derivatives .another special case we can consider is the limit of and . in this limit ,equation implies that for all . with the cic weight function ,equation then becomes for all .once again , this equation is the same as the diffusion equation with a diffusion coefficient of , to second - order spatial accuracy .equations and indicates that using the particle - mesh method to treat the mutual drag force introduces numerical shear stresses to the system .moreover , it can be seen by generalizing the argument outlined above that numerical normal stresses are also induced by the particle - mesh method .fortunately , the diffusion coefficient associated with these stresses diminishes quadratically with increasing resolution .on the other hand , this property implies that the system of equations is indeed globally coupled , and special care needs to be taken to numerically solve this system both accurately and efficiently .in this appendix , we briefly review the concept of operator splitting , and two standard methods of it .suppose that is the full partial differential equation in question , where is the vector for the dynamical variables to be solved for and and are two differential operators .we can _ operator split _equation into two separate differential equations as with and being the respective solutions to equations along with the initial conditions .note that and can be either exact or approximate themselves .then by somehow combining and can the real solution to the full equation be approximated .there are two standard operator - splitting methods to approximate the solution at time step with the initial conditions at time step .the first is called the godunov splitting , and is given by a two - step method : the accuracies for the godunov and the strang splittings are formally first - order and second - order , respectively . for a proof of these properties and more information , the interested reader is referred to , e.g. , chapter 17 of .
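to make the two composition patterns concrete , the toy sketch below advances dy/dt = a y + b by alternating exact solutions of the two sub - problems dy/dt = a y and dy/dt = b ; the particular sub - problems , the ordering of the operators , and the step sizes are assumptions for illustration only and do not represent the drag integrator itself .

import numpy as np

# exact solution operators for the two sub-problems over a step dt
solve_a = lambda y, dt, a=1.0: y * np.exp(a * dt)   # dy/dt = a*y
solve_b = lambda y, dt, b=1.0: y + b * dt           # dy/dt = b

def godunov_step(y, dt):
    # first-order splitting: operator a over a full step, then operator b
    return solve_b(solve_a(y, dt), dt)

def strang_step(y, dt):
    # second-order splitting: half step of a, full step of b, half step of a
    return solve_a(solve_b(solve_a(y, 0.5 * dt), dt), 0.5 * dt)

# comparing both against the exact solution y(t) = (y0 + b/a) * exp(a*t) - b/a
# shows first- versus second-order convergence as the step size is refined
y0, t_end = 1.0, 1.0
for n in (10, 20, 40):
    dt = t_end / n
    yg = yst = y0
    for _ in range(n):
        yg, yst = godunov_step(yg, dt), strang_step(yst, dt)
    exact = (y0 + 1.0) * np.exp(t_end) - 1.0
    print(n, abs(yg - exact), abs(yst - exact))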
|
numerical simulation of numerous mm / cm - sized particles embedded in a gaseous disk has become an important tool in the study of planet formation and in understanding the dust distribution in observed protoplanetary disks . however , the mutual drag force between the gas and the particles can become so stiff , particularly because of small particles and/or strong local solid concentration , that an explicit integration of this system is computationally formidable . in this work , we consider the integration of the mutual drag force in a system of eulerian gas and lagrangian solid particles . despite the entanglement between the gas and the particles under the particle - mesh construct , we are able to devise a numerical algorithm that effectively decomposes the globally coupled system of equations for the mutual drag force and makes it possible to integrate this system on a cell - by - cell basis , which considerably reduces the computational task required . we use an analytical solution for the temporal evolution of each cell to relieve the time - step constraint posed by the mutual drag force as well as to achieve the highest degree of accuracy . to validate our algorithm , we use an extensive suite of benchmarks with known solutions in one , two , and three dimensions , including the linear growth and the nonlinear saturation of the streaming instability . we demonstrate numerical convergence and satisfactory consistency in all cases . our algorithm can for example be applied to model the evolution of the streaming instability with mm / cm - sized pebbles at high mass loading , which has important consequences for the formation scenarios of planetesimals .
|
in many applications it is common to separate observed data into groups ( populations ) indexed by some covariate . a particularly fruitful characterization of grouped data is the use of mixture distributions to describe the populations in terms of clusters of similar behaviors .viewing observations associated with a group as local data , and the clusters associated with a group as local clusters , it is often of interest to assess how the local heterogeneity is described by the changing values of covariate .moreover , in some applications the primary interest is to extract some sort of global clustering patterns that arise out of the aggregated observations .consider , for instance , a problem of tracking multiple objects moving in a geographical area .using covariate to index the time point , at a given time point we are provided with a snapshot of the locations of the objects , which tend to be grouped into local clusters . over time , the objects may switch their local clusters .we are not really interested in the movement of each individual object .it is the paths over which the local clusters evolve that are our primary interest .such paths are the global clusters .note that the number of global and local clusters are unknown , and are to be inferred directly from the locally observed groups of data .the problem of estimating global clustering patterns out of locally observed groups of data also arises in the context of functional data analysis where the functional identity information is not available . by the absence of functional identity information ,we mean the data are not actually given as a collection of sampled functional curves ( even if such functional curves exist in reality or conceptually ) , due to confidentiality constraints or the impracticality of matching the identity of individual functional curves .as another example , the progesterone hormone behaviors recorded by a number of women on a given day in their monthly menstrual cycle is associated with a local group , which are clustered into typical behaviors .such local clusters and the number of clusters may evolve throughout the monthly cycle . moreover , aggregating the data over days in the cycle , there might exist one or more typical monthly ( `` global '' trend ) hormone behaviors due to contraception or medical treatments .these are the global clusters . due to privacy concern , the subject identity of the hormone levelsare neither known nor matched across the time points . in other words ,the data are given not as a collection of hormone curves , but as a collection of hormone levels observed over time . in the foregoing examples ,the covariate indexes the time .in other applications , the covariate might index geographical locations where the observations are collected .more generally , observations associated with different groups may also be of different data types .for instance , consider the assets of a number of individuals ( or countries ) , where the observed data can be subdivided into holdings according to different currency types ( e.g. , usd , gold , bonds ) . here , each is associated with a currency type , and a global cluster may be taken to represent a typical portforlio of currency holdings by a given individual . 
in view of a substantial existing body of work drawing from the spatial statistics literature that we shall describe in the sequel , throughout this paper a covariate value sometimes referred to as a spatial location unless specified otherwise .therefore , the dependence on varying covariate values of the local heterogeneity of data is also sometimes referred to as the spatial dependence among groups of data collected at varying local sites .we propose in this paper a model - based approach to learning global clusters from locally distributed data . because the number of both global and local clusters are assumed to be unknown , andbecause the local clusters may vary with the covariate , a natural approach to handling this uncertainty is based on dirichlet process mixtures and their variants .a dirichlet process defines a distribution on ( random ) probability measures , where is called the concentration parameter , and parameter denotes the base probability measure or centering distribution .a random draw from the dirichlet process ( dp ) is a discrete measure ( with probability 1 ) , which admits the well - known `` stick - breaking '' representation : where the s are independent random variables distributed according to , denotes an atomic distribution concentrated at , and the stick breaking weights are random and depend only on parameter . due to the discrete nature of the dp realizations , dirichlet processes and their variantshave become an effective tool in mixture modeling and learning of clustered data .the basic idea is to use the dp as a prior on the mixture components in a mixture model , where each mixture component is associated with an atom in .the posterior distribution of the atoms provides the probability distribution on mixture components , and also yields a probability distribution of partitions of the data .the resultant mixture model , generally known as the dirichlet process mixture , was pioneered by the work of and subsequentially developed by many others ( e.g. , ) . a dirichlet process ( dp ) mixture can be utilized to model each group of observations , so a key issue is how to model and assess the local heterogeneity among a collection of dp mixtures . in fact , there is an extensive literature in bayesian nonparametrics that focuses on coupling multiple dirichlet process mixture distributions ( e.g. , ) .a common theme has been to utilize the bayesian hierarchical modeling framework , where the parameters are conditionally independent draws from a probability distribution . in particular , suppose that the -indexed group is modeled using a mixing distribution .we highlight the hierarchical dirichlet process ( hdp ) introduced by , a framework that we shall subsequentially generalize , which posits that for some base measure and concentration parameter .moreover , is also random , and is distributed according to another dp : .the hdp model and other aforementioned work are inadequate for our problem , because we are interested in modeling the linkage among the groups _ not _ through the exchangeability assumption among the groups , but through the more explicit dependence on changing values of a covariate .coupling multiple dp - distributed mixture distributions can be described under a general framework outlined by . in this framework ,a dp - distributed random measure can be represented by the random `` stick '' and `` atom '' random variables ( see eq . 
) , which are general stochastic processes indexed by .starting from this representation , there are a number of proposals for co - varying infinite mixture models .these proposals were designed for functional data only , i.e. , where the data are given as a collection of sampled functions of , and thus not suitable for our problem , because functional identity information is assumed unknown in our setting . in this regard ,the work of are somewhat closer to our setting .these authors introduced spatial dependency of the local dp mixtures through the stick variables in a number of interesting ways , while additionally considered spatially varying atom variables , resulting in a flexible model .these work focused mostly on the problem of interpolation and prediction , not clustering .in particular , they did not consider the problem of inferring global clusters from locally observed data groups , which is our primary goal . to draw inferences about global clustering patterns from locally grouped data , in this paperwe will introduce an explicit notion of and model for global clusters , through which the dependence among locally distributed groups of data can be described .this allows us to not only assess the dependence of local clusters associated with multiple groups of data indexed by , but also to extract the global clusters that arise from the aggregated observations . from the outset, we use a spatial stochastic process , and more generally a graphical model indexed over to characterize the centering distribution of global clusters . spatial stochastic process and graphical models are versatile and customary choice for modeling of multivariate data . to `` link '' global clusters to local clusters , we appeal to a hierarchical and nonparametric bayesian formalism : the distribution of global clusters is random and distributed according to a dp : . for each , the distribution of local clustersis assumed random , and is distributed according to a dp : , where denotes the marginal distribution at induced by the stochastic process . in other words , in the first stage , the dirichlet process provides support for _ global atoms _ , which in turn provide support for the _ local atoms _ of lower dimensions for multiple groups in the second stage . due to the use of hierarchy and the discreteness property of the dp realizations , there is sharing of global atoms across the groups . because different groups may share only _ disjoint_ components of the global atoms , the spatial dependency among the groups is induced by the spatial distribution of the global atoms .we shall refer to the described hierarchical specification as the nested hierarchical dirichlet process ( nhdp ) model .the idea of incorporating spatial dependence in the base measure of dirichlet processes goes back to , although not in a fully nonparametric hierarchical framework as is considered here .the proposed nhdp is an instantiation of the nonparametric and hierarchical modeling philosophy eloquently advocated in , but there is a crucial distinction : whereas teh and jordan generally advocated for a _ recursive _ construction of bayesian hierarchy , as exemplified by the popular hdp , the nhdp features a richer _ nested _ hierarchy : instead of taking a joint distribution , one can take marginal distributions of a random distribution to be the base measure to a dp in the next stage of the hierarchy . 
this feature is essential to bring about the relationship between global clusters and local clusters in our model .in fact , the nhdp generalizes the hdp model in the following sense : if places a prior with probability one on constant functions ( i.e. , if then ) then the nhdp is reduced to the hdp .most closely related to our work is the hybrid dp of , which also considers global and local clustering , and which in fact serves as an inspiration for this work . because the hybrid dp is designed for functional data , it can not be applied to situations where functional ( curve ) identity information is not available , i.e. , when the data are not given as a collection of curves .when such functional i d information is indeed available , it makes sense to model the behavior of individual curves directly , and this ability may provide an advantage over the nhdp . on the other hand , the hybrid dp is a rather complex model , and in our experiment ( see section [ sec - examples ] ) , it tends to overfit the data due to the model complexity .in fact , we show that the nhdp provides a more satisfactory clustering performance for the global clusters despite not using any functional i d information , while the hybrid dp requires not only such information , it also requires the number of global clusters ( `` pure species '' ) to be pre - specified .it is worth noting that in the proposed nhdp , by not directly modeling the local cluster switching behavior , our model is significantly simpler from both viewpoints of model complexity and computational efficiency of statistical inference .the paper outline is as follows .section [ sec - gdp ] provides a brief background of dirichlet processes , the hdp , and we then proceed to define the nhdp mixture model .section [ sec - properties ] explores the model properties , including a stick - breaking characterization , an analysis of the underlying graphical and spatial dependency , a plya - urn sampling characterization .we also offer a discussion of a rather interesting issue intrinsic to our problem and the solution , namely , the conditions under which global clusters can be identified based on only locally grouped data . as with most nonparametric bayesian methods , inference is an important issue .we demonstrate in section [ sec - inference ] that the confluence of graphical / spatial with hierarchical modeling allows for efficient computations of the relevant posterior distributions .section [ sec - examples ] presents several experimental results , including a comparison to a recent approach in the literature .section [ sec - discussions ] concludes the paper .we start with a brief background on dirichlet processes , and then proceed to hierarchical dirichlet processes . let be a probability space , and . a dirichlet process is defined to be the distribution of a random probability measure over such that , for any finite measurable partition of , the random vector is distributed as a finite dimensional dirichlet distribution with parameters . 
is referred to as the concentration parameter , which governs the amount of variability of around the centering distribution .a dp - distributed probability measure is discrete with probability one .moreover , it has a constructive representation due to : , where are iid draws from , and denotes an atomic probability measure concentrated at atom .the elements of the sequence are referred to as `` stick - breaking '' weights , and can be expressed in terms of independent beta variables : , where are iid draws from .note that satisfies with probability one , and can be viewed as a random probabity measure on the positive integers .for notational convenience , we write , following .a useful viewpoint for the dirichlet process is given by the plya urn scheme , which shows that draws from the dirichlet process are both discrete and exhibit a clustering property . from a computational perspective , the plya urn scheme provides a method for sampling from the random distribution , by integrating out . more concretely ,let atoms are iid random variables distributed according to . because is random , are exchangeable . showed that the conditional distribution of given has the following form : \sim \sum_{l=1}^{i-1}\frac{1}{i-1+\alpha_0}\delta_{\theta_l } + \frac{\alpha_0}{i-1+\alpha_0}g_0.\ ] ] this expression shows that has a positive probability of being equal to one of the previous draws .moreover , the more often an atom is drawn , the more likely it is to be drawn in the future , suggesting a clustering property induced by the random measure . the induced distribution over random partitions of is also known as the chinese restaurant process .a dirichlet process mixture model utilizes as the prior on the mixture component . combining with a likelihood function , the dp mixture model is given as : ; .such mixture models have been studied in the pioneering work of and subsequentially by a number of authors , for more recent and elegant accounts on the theories and wide - ranging applications of dp mixture modeling , see . [ [ hierarchical - dirichlet - processes . ] ] hierarchical dirichlet processes .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + next , we proceed giving a brief background on the hdp formalism of , which is typically motivated from the setting of grouped data . under this setting ,the observations are organized into groups indexed by a covariate , where is the index set .let , be the observations associated with group . for each , the assumed to be exchangeable .this suggests the use of mixture modeling : the are assumed identically and independently drawn from a mixture distribution .specifically , let denote the parameter specifying the mixture component associated with . under the hdp formalism, is the same space for all , i.e. , for all , and is endowed with the borel -algebra of subsets of . is referred to as _ local factors _ indexed by covariate .let denote the distribution of observation given the local factor .let denote a prior distribution for the local factors .we assume that the local factors s are conditionally independent given . 
as a resultwe have the following specification : under the hdp formalism , to statistically couple the collection of mixing distributions , we posit that random probability measures are conditionally independent , with distributions given by a dirichlet process with base probability measure : moreover , the hdp framework takes a fully nonparametric and hierarchical specification , by positing that is also a random probability measure , which is distributed according to another dirichlet process with concentration parameter and base probability measure : an interesting property of the hdp is that because s are discrete random probability measures ( with probability one ) whose support are given by the support of .moreover , is also a discrete measure , thus the collection of are random discrete measures sharing the same countable support .in addition , because the random partitions induced by the collection of within each group are distributed according to a chinese restaurant process , the collection of these chinese restaurant processes are statistically coupled .in fact , they are exchangeable , and the distribution for the collection of such stoschastic processes is known as the chinese restaurant franchise .* setting and notations . * in this paper we are interested in the same setting of grouped data as that of the hdp that is described by eq . .specifically , the observations within each group are iid draws from a mixture distribution .the local factor denotes the parameter specifying the mixture component associated with .the are iid draws from the mixing distribution .implicit in the hdp model is the assumptions that the spaces all coincide , and that random distributions are exchangeble .both assumptions will be relaxed . moreover ,our goal here is the inference of global clusters , which are associated with global factors that lie in the product space . to this end, is endowed with a -algebra to yield a measurable space . within this paper and in the data illustrations , , and corresponds to the borel -algebra of subsets of , formally , a _global factor _ , which are denoted by or in the sequel , is a high dimensional vector ( or function ) in whose components are indexed by covariate .that is , , and . as a matter of notations, we always use to denote the numbering index for ( so we have ) .we always use and to denote the number index for instances of s and s , respectively ( e.g. , and ) .the components of a vector ( ) are denoted by ( ) .we may also use letters and beside to denote the group indices .[ [ model - description . ] ] model description .+ + + + + + + + + + + + + + + + + + our modeling goal is to specify a distribution on the global factors , and to relate to the collection of mixing distributions associated with the groups of data .such resultant model shall enable us to infer about the _clusters associated with a global factor on the basis of data collected _ locally _ by the collection of groups indexed by . 
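as a brief computational aside on the dirichlet process background above , the sketch below draws a truncated stick - breaking approximation of a dp realization and generates a sample by the polya - urn predictive rule ; the truncation level and the standard normal base measure are illustrative assumptions .

import numpy as np

def dp_stick_breaking(alpha0, base_sampler, truncation=200, rng=None):
    # truncated sethuraman construction of g ~ dp(alpha0, g0):
    # beta(1, alpha0) stick fractions give the weights; atoms are iid draws from g0
    rng = np.random.default_rng(rng)
    v = rng.beta(1.0, alpha0, size=truncation)
    leftover = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return v * leftover, np.array([base_sampler(rng) for _ in range(truncation)])

def polya_urn(n, alpha0, base_sampler, rng=None):
    # sequential draws theta_1, ..., theta_n from the predictive rule: a new atom
    # from g0 with probability alpha0/(i-1+alpha0), otherwise one of the old atoms
    rng = np.random.default_rng(rng)
    draws = []
    for i in range(n):
        if rng.uniform() < alpha0 / (i + alpha0):
            draws.append(base_sampler(rng))
        else:
            draws.append(draws[rng.integers(i)])
    return np.array(draws)

# example with a standard normal base measure
weights, atoms = dp_stick_breaking(2.0, lambda rng: rng.normal())
theta = polya_urn(100, 2.0, lambda rng: rng.normal())

the repeated values in theta illustrate the clustering property noted above . with these standard constructions in mind , we now turn to the model itself .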
at a high level , the random probability measures and the s are `` glued '' together under the nonparametric and hierarchical framework , while the probabilistic linkage among the groups are governed by a stochastic process indexed by and distributed according to .customary choices of such stochastic processes include either a spatial process , or a graphical model .specifically , let denote the induced marginal distribution of .our model posits that for each , is a random measure distributed as a dp with concentration parameter , and base probability measure : .conditioning on , the distributions are independent , and varies around the centering distribution , with the amount of variability given by .the probability measure is random , and distributed as a dp with concentration parameter and base probability measure : , where is taken to be a spatial process indexed by , or more generally a graphical model defined on the collection of variables indexed by . in summary , collecting the described specifications gives the _ nested hierarchical dirichlet process _( nhdp ) mixture model : as we shall see in the next section , the s , which are draws from , provide the support for global factors , which in turn provide the support for the local factors .the global and local factors provide distinct representations for both global clusters and local clusters that we envision being present in data .local factors s provide the support for local cluster centers at each .the global factors in turn provide the support for the local clusters , but they also provide the support for global cluster centers in the data , when observations are aggregated across different groups . [ [ relations - to - the - hdp . ] ] relations to the hdp .+ + + + + + + + + + + + + + + + + + + + + both the hdp and nhdp are instances of the nonparametric and hierarchical modeling framework involving hierarchy of dirichlet processes . at a high - level ,the distinction here is that while the hdp is a recursive hierarchy of random probability measures generally operating on the same probability space , the nhdp features a nested hierarchy , in which the probability spaces associated with different levels in the hierarchy are distinct but related in the following way : the probability distribution associated with a particular level , say , has support in the support of the marginal distribution of a probability distribution ( i.e. , ) in the upper level in the hierarchy .accordingly , for , and have support in distinct components of vectors . for a more explicit comparison ,it is simple to see that if places distribution for _ constant _ global factors with probability one ( e.g. , for any there holds ) , then we obtain the hdp of .given that the multivariate base measure is distributed as a dirichlet process , it can be expressed using sethuraman s stick - breaking representation : .each atom is multivariate and denoted by .the s are independent draws from , and .the s and are mutually independent .the marginal induced by at each location is : . since each has support at the points , each necessarily has support at these points as well , andcan be written as : let .since s are independent given , the weights s are independent given . moreover , because it is possible to derive the relationship between weights s and .following , if is non - atomic , it is necessary and sufficient for defined by eq . 
to satisfy that the following holds : , where and are interpreted as probability measures on the set of positive integers .the connection between the nhdp and the hdp of can be observed clearly here : the stick - breaking weights of the nhdp - distributed have the same distributions as those of the hdp , while the atoms are linked by a graphical model distribution , or more generally a stochastic process indexed by . the spatial / graphical dependency given by base measure the dependency between the dp - distributed s .we shall explore this in details by considering specific examples of . [ cols="^,^,^ " , ][ [ simulation - studies . ] ] simulation studies .+ + + + + + + + + + + + + + + + + + + we generate two data sets of spatially varying clustered populations ( see fig .[ fig - data - ab ] for illustrations ) . in both data sets ,we set . for data set a , global factors are generated from a gaussian process ( gp ) .these global factors provide support for 15 spatially varying mixtures of normal distributions , each of which has 5 mixture components . the likelihood is given by . for each generated independently 100 samples from the corresponding mixture ( 20 samples from each mixture components ) .note that each circle in the figures denote a data sample .this kind of data can be encountered in tracking problems , where the samples associating with each covariate can be viewed as a snapshot of the locations of moving particles at time point .the particles move in clusters .they may switch clusters at any time , but the identification of each particle is _ not _ known as they move from one time step to the next .the clusters themselves move in relatively smoother paths .moreover , the number of clusters is not known .it is of interest to estimate the cluster centers , as well as their moving paths . for data set b , to illustrate the variation in the number of local clusters at different locations , we generate a number of global factors that simulate the bifurcation behavior in a collection of longitudinal trajectories . herea trajectory corresponds to a global factor .specifically , we set . starting at there is one global factor , which is a random draw from a relatively smooth gp with mean function , where and the exponential covariance function parameterised by , . at ,the global factor splits into two , with the second one also an independent draw from the same gp , which is re - centered so that its value at is the same as the value of the previous global factor at . at , the second global factor splits once more in the same manner .these three global factors provide support for the local clusters at each . the likelihood is given by a normal distribution with . at each we generated 30 independent observations .although it is possible to perform clustering analysis for data at each location , it is not clear how to link these clusters across the locations , especially given that the number of clusters might be different for different s .the nhdp mixture model provides a natural solution to this problem .it is fit for both data sets using essentially the same prior specifications .the concentration parameters are given by and . is taken to be a mean-0 gp using for data set a , and for data set b. the variance is endowed with prior .the results of posterior inference ( via mcmc sampling ) for both data sets are illustrated by fig .[ fig - global - a ] and fig .[ fig - global - b ] . 
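as an aside , the truncated simulation sketch below indicates how spatially varying clustered data of the kind described above can be generated from the model : global factors are gp paths over the covariate grid , the top - level dp weights come from a truncated stick - breaking construction , and each location draws its own mixing weights centered on the global ones . the squared - exponential kernel , the finite dirichlet surrogate for the location - level dp , and all numeric settings are assumptions made for illustration and do not reproduce the exact settings used in our experiments .

import numpy as np

def gem(gamma, K, rng):
    # truncated stick-breaking weights
    v = rng.beta(1.0, gamma, size=K)
    return v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))

def simulate_nhdp(x, n_per_group=30, gamma=1.0, alpha=1.0, K=20,
                  length_scale=0.3, sigma_obs=0.1, rng=None):
    rng = np.random.default_rng(rng)
    T = len(x)
    # global atoms: K mean-zero gp paths over the covariate grid (assumed kernel)
    cov = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / length_scale**2)
    cov += 1e-8 * np.eye(T)
    phi = rng.multivariate_normal(np.zeros(T), cov, size=K)   # K x T global factors
    beta = gem(gamma, K, rng)                                 # top-level dp weights
    data, labels = [], []
    for t in range(T):
        # location-level weights: finite dirichlet surrogate for the dp at this x
        pi_t = rng.dirichlet(alpha * beta + 1e-6)
        z = rng.choice(K, size=n_per_group, p=pi_t)           # local cluster labels
        data.append(phi[z, t] + sigma_obs * rng.normal(size=n_per_group))
        labels.append(z)
    return phi, data, labels

phi, data, labels = simulate_nhdp(np.linspace(0.0, 1.0, 15))

observations aggregated across locations then exhibit both the local clusters at each covariate value and the global gp - shaped clusters . we now turn to the inference results .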
with both data sets ,the number global clusters are estimated almost exactly ( 5 and 3 , respectively , with probability ) .the evolution of the posterior distributions on the number of local clusters for data set b is given in fig .[ fig - local - b ] . in both data sets ,the local factors are accurately estimated ( see figs .[ fig - global - a ] and [ fig - global - b ] ) . for data set b , due to the varying number of local clusters ,there are regions for , specifically the interval ] . for , for instance , which implies that are weakly dependent across s , we are not able to identify the desired global factors ( see fig . [ fig - identify ] ) , despite the fact that local factors are still estimated reasonably well .the effects of prior specification for on the inference of global factors are somewhat similar to the hybrid dp model : a smaller encourages higher numbers of and less smooth global curves to expand the coverage of the function space ( see sec .7.3 of ) . within our context , the prior for is relatively more robust than that of as discussed above .the prior for concentration parameter is extremely robust while the priors for s are somewhat less .we believe the reason for this robustness is due to the modeling of the global factors in the second stage of the nested hierarchy of dps , and the inference about these factors has the effect of pooling data from across the groups in the first stage . in practice , we take all s to be equal to increase the robustness of the associated prior . [[ progesterone - hormone - clustering . ] ] progesterone hormone clustering .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we turn to a clustering analysis of progesterone hormone data .this data set records the natural logarithm of the progesterone metabolite , measured by urinary hormone assay , during a monthly cycle for 51 female subjects .each cycle ranges from -8 to 15 ( 8 days pre - ovulation to 15 days post - ovulation ) .we are interested in clustering the hormone levels per day , and assessing the evolution over time .we are also interested in global clusters , i.e. , identifying global hormone pattern for the entire monthly cycle and analyzing the effects on contraception on the clustering patterns .see fig .[ fig - data - pgd ] for the illustration and for more details on the data set . for prior specifications , we set , and for all .let . for , we set , and . it is found that the there are 2 global clusters with probability close to 1 .in addition , the mean estimate of global clusters match very well with the sample means from the two groups of women , a group of those using contraceptives and a group that do not ( see fig .[ fig - data - pgd - dlp ] ) . examining the variations of local clusters ,there is a significant probability of having only one local cluster during the first 20 days . between day 21 and 24the number of local clusters is 2 with probability close to 1 . 
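the pairwise comparison analysis reported next relies on posterior co - clustering probabilities computed from the mcmc output ; a minimal sketch of this computation is given below , where the array name and shape are illustrative assumptions .

import numpy as np

def coclustering_probability(label_samples):
    # label_samples: integer array of shape (n_mcmc, n_subjects, n_days) holding
    # local cluster assignments; returns, for each pair of subjects, the posterior
    # probability of sharing a local cluster, averaged over days
    S, n, T = label_samples.shape
    prob = np.zeros((n, n))
    for s in range(S):
        for t in range(T):
            z = label_samples[s, :, t]
            prob += (z[:, None] == z[None, :])
    return prob / (S * T)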
to elaborate the effects of contraception on the hormone behavior (the last 17 female subjects are known to use contraception ) , a pairwise comparison analysis is performed .for every two hormone curves , we estimate the posterior probability that they share the same local cluster on a given day , which is then averaged over days in a given interval .it is found that the hormone levels among these women are almost indistinguishable in the first 20 days ( with the clustering - sharing probabilities in the range of ) , but in the last 4 days , they are sharply separated into two distinct regimes ( with the clustering- sharing probability between the two groups are dropped to ) .we compare our approach to the hybrid dirichlet process ( hybrid - dp ) approach , perhaps the only existing approach in the literature for joint modeling of global and local clusters .the data are given to the hybrid - dp as the replicates of a random functional curve , whereas in our approach , such functional identity information is not used .in other words , for us only a collection of hormone levels across different time points are given ( i.e. , the subject i d of hormone levels are neither revealed nor matched with one another across time points ) .for a sensible comparison , the same prior specification for base measure of the global clusters were used for both approaches .the inference results are illustrated in fig .[ fig - data - pgd - dlp ] .a close look reveals that the global clusters obtained by the hybrid - dp approach is less faithful to the contraceptive / no contraceptive grouping than ours . this can be explained by the fact that hybrid - dp is a more complex model that directly specifies the local cluster switching behavior for functional curvesit is observed in this example that an individual hormone curve tends to over - switch the local cluster assignments for , resulting in significantly less contrasts between the two group of women ( see fig .[ fig - heat ] and [ fig - heat - dlp ] ) .this is probably due the complexity of the hybrid - dp , which can only be overcome with more data ( see propositions 7 and 8 of for a theoretical analysis of this model s complexity and posterior consistency ) .finally , it is also worth noting that the hybrid - dp approach practically requires the number of clusters to be specified a priori ( as in the so - called -hybrid - dp in ) , while such information is directly infered from data using the nhdp mixture .we have described a nonparametric approach to the inference of global clusters from locally distributed data .we proposed a nonparametric bayesian solution to this problem , by introducing the nested hierarchical dirichlet process mixture model .this model has the virtue of simultaneous modeling of both local clusters and global clusters present in the data .the global clusters are supported by a dirichlet process , using a stochastic process as its base measure ( centering distribution ) .the local clusters are supported by the global clusters . moreover, the local clusters are randomly selected using another hierarchy of dirichlet processes . 
as a result , we obtain a collection of local clusters which are spatially varying , whose spatial dependency is regulated by an underlying spatial or graphical model . the canonical aspects of the nhdp ( because of its use of the dirichlet processes ) suggest straightforward extensions to accommodate richer behaviors using poisson - dirichlet processes ( also known as the pitman - yor processes ) , which have been found to be particularly suitable for certain applications , and to which our analysis and inference methods can be easily adapted . it would also be interesting to consider a multivariate version of the nhdp model . finally , the manner in which global and local clusters are combined in the nhdp mixture model is suggestive of ways of direct and simultaneous global and local clustering for various structured data types . the pólya - urn characterization suggests a gibbs sampling algorithm to obtain posterior distributions of the local factors s and the global factors s , by integrating out the random measures and s . rather than dealing with the s and directly , we shall sample index variables and instead , because the s and s can be reconstructed from the index variables and the s . this representation is generally thought to make the mcmc sampling more efficient . thus , we construct a markov chain on the space of . although the number of variables is in principle unbounded , only finitely many are actually associated with data and represented explicitly . a quantity that plays an important role in the computation of conditional probabilities in this approach is the conditional density of a selected collection of data items , given the remaining data . for a single observation , the -th at location , define the conditional probability of under a mixture component , given and all data items except : similarly , for a collection of observations of all data such that for a chosen , which we denote by the vector , let be the conditional probability of under the mixture component , given and all data items except . * sampling . * exploiting the exchangeability of the s within the group of observations indexed by , we treat as the last variable being sampled in the group . to obtain the conditional posterior for , we combine the conditional prior distribution for with the likelihood of generating data . specifically , the prior probability that takes on a particular previously used value is proportional to , while the probability that it takes on a new value is proportional to . the likelihood due to given for some previously used is . here , . the likelihood for is calculated by integrating out the possible values of : where is the prior density of . as a result , the conditional distribution of takes the form if the sampled value of is , we need to obtain a sample of by sampling from eq . : * sampling . * as with the local factors within each group , the global factors s are also exchangeable . thus we can treat for a chosen as the last variable sampled in the collection of global factors . note that changing the index variable actually changes the mixture component membership for the relevant data items ( across all groups ) that are associated with . the likelihood obtained by setting is given by , where denotes the vector of all data such that . so , the conditional probability for is : where . * sampling of and . * we follow the method of auxiliary variables developed by and . endow with a prior . at each sampling step , we draw .
then the posterior of can be obtained as a gamma mixture , which can be expressed as , where . the procedure is the same for each , with and playing the roles of and , respectively . alternatively , one can force all to be equal and endow the common value with a gamma prior , as in .
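the auxiliary - variable move described above is the standard style of update for a dirichlet process concentration parameter , and it can be written in a few lines . the sketch below is only an illustration of that move for a single concentration parameter ; the function and variable names ( `update_concentration` , `n_obs` , `n_clusters` , `a` , `b` ) are ours and are not taken from the paper , and in the nhdp sampler the same move would be applied to the top - level and each group - level concentration parameter with the appropriate counts .

```python
import numpy as np

def update_concentration(alpha, n_obs, n_clusters, a, b, rng):
    """One auxiliary-variable update for a DP concentration parameter.

    alpha      : current value of the concentration parameter
    n_obs      : number of observations governed by this DP
    n_clusters : number of occupied clusters (mixture components in use)
    a, b       : shape and rate of the gamma prior on alpha
    """
    # draw the auxiliary variable eta ~ Beta(alpha + 1, n_obs)
    eta = rng.beta(alpha + 1.0, n_obs)
    # the posterior is a two-component gamma mixture; compute the mixing weight
    odds = (a + n_clusters - 1.0) / (n_obs * (b - np.log(eta)))
    pi = odds / (1.0 + odds)
    shape = a + n_clusters if rng.random() < pi else a + n_clusters - 1.0
    rate = b - np.log(eta)
    return rng.gamma(shape, 1.0 / rate)

# toy usage: resample alpha given 200 observations spread over 7 clusters
rng = np.random.default_rng(0)
alpha = 1.0
for _ in range(10):
    alpha = update_concentration(alpha, n_obs=200, n_clusters=7, a=1.0, b=1.0, rng=rng)
print(alpha)
```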
|
we consider the problem of analyzing the heterogeneity of clustering distributions for multiple groups of observed data , each of which is indexed by a covariate value , and inferring global clusters arising from observations aggregated over the covariate domain . we propose a novel bayesian nonparametric method reposing on the formalism of spatial modeling and a nested hierarchy of dirichlet processes . we provide an analysis of the model properties , relating and contrasting the notions of local and global clusters . we also provide an efficient inference algorithm , and demonstrate the utility of our method in several data examples , including the problem of object tracking and a global clustering analysis of functional data where the functional identity information is not available . inference of global clusters from locally distributed data + xuanlong nguyen + department of statistics + university of michigan * keywords : * global clustering , local clustering , nonparametric bayes , hierarchical dirichlet process , gaussian process , graphical model , spatial dependence , markov chain monte carlo , model identifiability
|
the circumstellar habitable zone ( hz ) is traditionally defined as the region around a star in which liquid water can remain stable on the surface of a rocky planet . according to standard theory , the inner edge of the habitable zone is set either by the onset of a runaway greenhouse , defined as complete evaporation of the oceans , or by the slightly earlier onset of a moist greenhouse , in which the stratosphere becomes wet and water is lost by photodissociation followed by hydrogen escape . this theory successfully explains the lack of water on our neighboring planet venus , which formed somewhat inside the inner edge of the hz . in a recent paper , used a 3-dimensional climate model to show that the runaway greenhouse threshold is pushed inward compared to 1-d calculations as a result of escape of longwave radiation through the undersaturated descending branches of the tropical hadley cells . this result was welcome , as the most recent 1-d calculation had placed this threshold at 0.99 astronomical units ( au ) , uncomfortably ( and unrealistically ) close to earth's present orbit . the leconte et al . paper moved it back to 0.95 au , which is where it had been thought to lie for most of the past 37 years . reached another conclusion , though , that challenges conventional thinking about how water might be lost from a venus - like planet . as surface temperatures warmed from 280 k to 330 k in their model , stratospheric temperatures cooled from 140 k to below 120 k. ( following leconte et al . , we loosely refer to the atmospheric region above the troposphere as the stratosphere , although it could also be termed the mesosphere , as the ozone - free atmospheres being discussed lack the temperature inversion that is present in earth's atmosphere . ) this result was arguably not a numerical artifact , as the correlated- absorption coefficients in their model were derived for pressures as low as bar . the low stratospheric temperatures , by themselves , are understandable , as the authors argued convincingly that such a result is to be expected if the atmosphere is distinctly non - gray ( see also ) . the atmosphere modeled by leconte et al . was highly non - gray because , along with 1 bar of n , it contained only 376 ppmv of co . h , while abundant near the surface , was almost completely absent from their model upper atmospheres . the stratosphere in their model is warmed by absorption of upwelling radiation in co line centers . because these line centers are optically thick , they are shielded from the warm surface by co in lower atmospheric layers . radiation to space can occur throughout the co absorption lines , though , and so the stratospheric temperature equilibrates at an extremely cold value . the cold stratosphere in the leconte et al . model appears to preclude the loss of water from a moist greenhouse planet , that is , one on which surface liquid water is still present . indeed , the authors make this point explicitly in their paper . this result may not pose a problem in understanding water loss from venus , as one recent study suggests that venus never had liquid water on its surface . instead , venus developed a true runaway greenhouse during accretion , and the steam atmosphere never condensed . leconte et al . did not study runaway greenhouse atmospheres directly , but presumably such h - rich atmospheres can always lose water by photodissociation and hydrogen escape . this question may be relevant , though , to exoplanets near the inner edge of the circumstellar habitable zone .
in older 1-d climate models ( e.g. ) , the moist greenhouse occurs at a substantially lower stellar flux than a true runaway greenhouse . the more recent kopparapu et al . ( 2013a , b ) 1-d model does not show this large difference . all three of these studies employed inverse climate calculations in which the vertical temperature profile was specified , and radiative fluxes were back - calculated to determine the equivalent planet - star distance . these studies also all assumed an isothermal , 200 k stratosphere . this assumption was justified by comparison to a gray atmosphere model . in a gray atmosphere , the temperature at optical depth zero , ( also called the skin temperature ) can be shown to be equal to the effective radiating temperature , , divided by . for modern earth , k , so k. for venus , k , so k. in the model of , as the surface temperature warmed , the stratosphere became increasingly wet , allowing h to be efficiently photodissociated and hydrogen to escape to space , even though liquid water was still present on venus' surface . a new simulation of this problem of warm , moist planets performed with a different 3-d climate model ( the ncar cam4 model ) did find stable , moist greenhouse solutions . moist greenhouse solutions should be easier to achieve in 3-d models precisely because their tropospheres are undersaturated in some regions and because they include other climate feedbacks , such as clouds , that may help to stabilize a planet's climate . leconte et al . ( 2013 ) did not find such solutions , but that is evidently not a general result . found comparable stratospheric temperatures ( always k ) to leconte et al . at low surface temperatures , but at high surface temperatures they found much warmer ( up to k ) stratospheric temperatures and correspondingly higher stratospheric h mixing ratios . their absorption coefficients were derived for pressures down to bar , so they should also be reliable in the upper stratosphere . ( their earlier paper says that the lower pressure limit was only 0.01 bar , but this was evidently a typo , as evidenced by their accompanying discussion , and as confirmed by e. wolf ( priv . comm . ) . ) if the wolf & toon result is correct , then water would eventually be lost as a planet's surface temperature warms . to answer this question , we used our own 1-d radiative - convective climate model , which has recently been updated to better handle runaway greenhouse atmospheres . admittedly , our test is not definitive , because our 1-d model cannot simulate all of the processes included in a 3-d model . ( in particular , although our model is non - gray , it cannot simulate the cold , high tropical tropopause which dries the stratosphere of modern earth . ) however , our model can predict vertical profiles of temperature and water vapor , and so it can be used as a sanity check on the 3-d results . the details of the model have been described in the reference just given , and so we will not repeat them here . one point deserves mention , though : the model uses correlated- absorption coefficients derived from the hitran and hitemp databases for pressures of bar and for temperatures of 100 - 600 k.
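as a brief worked example of the gray - atmosphere skin temperature quoted above ( the specific effective temperatures below , roughly 255 k for earth and 230 k for cloud - covered venus , are commonly quoted values inserted here only for illustration and are not taken from the text ) :

```latex
T_{\rm skin} = \frac{T_{\rm eff}}{2^{1/4}} \simeq 0.84\,T_{\rm eff},
\qquad T_{\rm skin}({\rm Earth}) \simeq \frac{255~{\rm K}}{1.19} \simeq 214~{\rm K},
\qquad T_{\rm skin}({\rm Venus}) \simeq \frac{230~{\rm K}}{1.19} \simeq 193~{\rm K}.
```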
a pressure of bar corresponds to an altitude of km in the modern atmosphere . all calculations shown here use bar as the pressure at the top of the model atmosphere . all calculations assume a noncondensable surface pressure of 1 bar of n , and most assume a co mixing ratio of 355 ppmv . surface pressure increases as the temperature increases and h becomes more abundant . o and o are excluded from the model . our 1-d model uses a time - stepping algorithm to reach steady - state solutions . normally , we fix the solar flux and allow the model to compute self - consistent vertical temperature / h profiles . we term that the forward mode of calculation . alternatively , the model can be run in inverse mode . in this case , we fix the surface temperature ( ) , assume an isothermal stratosphere , and connect these to each other with a moist adiabat ; then , we calculate the solar flux needed to sustain this surface temperature . most , or all , of the runaway greenhouse calculations performed by kasting ( 1988 ) and the more general hz calculations performed by and were done in this manner . the reason is that runaway greenhouse atmospheres are , as their name implies , unstable . as the solar flux is increased above its present value , increases . this causes water vapor to increase , which causes to increase further , until eventually the model " runs away " to very high surface temperatures . indeed , with our current set of h absorption coefficients , which are derived from the hitemp database , our model runs away at the earth's current solar flux if the troposphere is assumed to be fully saturated . a saturated troposphere is not realistic for the modern earth , but it becomes a better and better approximation as the atmosphere becomes warmer and more water - rich . treating relative humidity self - consistently requires a 3-d model like the ones developed by and . inverse calculations are stable and easy to perform with our 1-d model ; however , they require that the stratosphere be isothermal , which is precisely the assumption that has been challenged by leconte et al . so , we modified our 1-d model to do a type of calculation that is somewhere in between the forward and inverse modes . we used a time - stepping procedure , as in the forward model ; however , after each time step we reset the surface temperature to a specified value . we then determined where the atmospheric cold trap is located , typically somewhere in the lower stratosphere , and we reset temperatures below that level ( but not including the surface ) to the cold trap temperature . the cold trap is the altitude at which the saturation mixing ratio of h is at a minimum , so it determines the stratospheric h concentration . ( note that this is not necessarily the altitude at which the stratospheric temperature is lowest . if a low temperature occurs at a correspondingly low pressure , p , then the saturation mixing ratio of water vapor , p/p , may still be relatively high . ) we next drew a moist adiabat up from the surface until it intersected the temperature profile that had been formed in that way . we then recomputed fluxes and repeated the entire procedure until the temperature profile reached steady state . this methodology allowed upper stratospheric temperatures to achieve radiative equilibrium while preventing the surface temperature from running away . calculations were performed for various surface temperatures ranging from 288 k ( the present value for earth ) up to 370 k. results are shown in fig . 1 ; fig . 1a shows temperature profiles .
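before turning to the results , the modified procedure described above can be summarized schematically . the sketch below is only our reading of the control flow ; the helper routines ( `radiative_step` , `find_cold_trap` , `moist_adiabat` ) are placeholders for the actual model physics and are not part of the published code .

```python
import numpy as np

def fixed_surface_iteration(T, p, T_surf, radiative_step, find_cold_trap,
                            moist_adiabat, n_steps=10000, tol=1e-4):
    """Schematic of the mixed forward/inverse iteration described in the text.

    T      : layer temperatures (index 0 = top of atmosphere, index -1 = surface)
    p      : layer pressures
    T_surf : prescribed surface temperature, reset after every time step
    """
    for _ in range(n_steps):
        T_old = T.copy()
        T = radiative_step(T, p)       # one forward radiative time step
        T[-1] = T_surf                 # hold the surface at the target value
        k = find_cold_trap(T, p)       # level of minimum saturation mixing ratio
        T[k:-1] = T[k]                 # layers below the cold trap (not the surface)
        adiabat = moist_adiabat(T_surf, p)
        for i in range(len(p) - 1, -1, -1):   # follow the moist adiabat upward
            if adiabat[i] <= T[i]:            # intersection with the stratosphere
                break
            T[i] = adiabat[i]
        if np.max(np.abs(T - T_old)) < tol:   # steady state reached
            break
    return T
```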
at low , our model predicts upper stratospheric temperatures of 100 k or even lower , below the temperatures predicted by either of the 3-d climate models discussed earlier . but at higher surface temperatures , the stratospheric temperature rises , as it does in the model . for , the temperature at the top of the convective troposphere is k , right where it was assumed to be in the model . admittedly , the stratosphere is not well resolved in this particular calculation , as nearly the entire atmosphere is convective by this point . we did not attempt to extend the model higher , though , because our absorption coefficients are only good down to bar . qualitatively , these temperature profiles look much like those in kasting ( 1988 , fig . 5a ) except that the stratosphere is no longer isothermal . at low temperatures the convective layer extends up to only km , but at high temperatures it extends well above 100 km . this dramatic difference is caused by the increased importance of latent heat release , which causes the lapse rate to become shallower at high surface temperatures . [ figure 1 : vertical profiles of temperature ( panel a ) and water vapor ( panel b ) calculated using our 1-d radiative - convective climate model . the assumed co concentration is 355 ppmv . ] [ figure 2 : vertical profiles of temperature ( panel a ) and water vapor ( panel b ) for different atmospheric co concentrations . the assumed surface temperature is 320 k. note that the amount of water vapor does not experience significant change in response to the rise in co . ] corresponding water vapor profiles are shown in fig . 1b . at first glance , these profiles again look much like those in kasting ( 1988 , fig . 5b ) . at low surface temperatures , water vapor is a minor constituent of the stratosphere , as it is in earth's atmosphere today . at high surface temperatures , water vapor becomes a major atmospheric constituent at all altitudes . because the hydrogen escape rate is proportional to the total hydrogen mixing ratio in the upper atmosphere , this means that water could readily be lost from our high- atmospheres . if one looks more closely , however , significant differences from the calculations can be seen at intermediate values of surface temperature . at , the older model predicted a stratospheric h mixing ratio of nearly , whereas the current model predicts a value closer to .
and , at k , the discrepancy is even larger : the older model predicted a stratospheric h mixing ratio of , whereas the new model predicts a value of . these differences are caused by the much colder stratospheric temperatures in the present model . but , as increases further and stratospheric h becomes more abundant , stratospheric temperatures increase as well . this behavior can be physically explained : h absorbs well across much of the thermal - infrared spectrum ; so , as h becomes more abundant , the atmosphere becomes more and more like a gray atmosphere . we have already seen that , given earth - like insolation , the skin temperature of a gray atmosphere should be in the neighborhood of 200 k. our calculated stratospheric temperatures tend towards that value as h becomes abundant . it is easy to see why the leconte et al . model does not exhibit this behavior . the highest surface temperature reached in their calculation is only 330 k. at this point in our own calculations , the stratosphere is still cold and dry . but when reaches 350 k , water vapor begins to break through into the stratosphere , and the stratosphere begins to warm . we are able to explore this temperature regime because of the ease with which one can manipulate a 1-d model . but it is more difficult to do this in 3-d climate models because such models must always be run in forward , time - stepping mode and because the included physical parameterizations ( e.g. , moist convective fluxes ) are often more complex . as mentioned previously , do obtain solutions up to surface temperatures of k , well above the 330 k reached by leconte et al . their model exhibits negative cloud feedback at high surface temperatures , which helps stabilize the climate in this regime . their model , like ours , predicts that stratospheric water vapor increases smoothly as increases . at low , their calculated stratospheric temperatures are also consistently warmer than either ours or those of , for reasons that are unclear . at = 370 k , their stratospheric temperature is k , just like ours . wolf & toon computed absorption coefficients at 56 different pressure levels , as compared to 8 levels in our model and 9 in the leconte et al . model , so it is possible that their finer pressure resolution results in increased accuracy . but their model also develops a temperature inversion near the surface , which may be unphysical . ( how does the surface remain in thermal balance when convection is absent and the temperature is higher both above and below the surface ? ) so , it is still worth investigating this question with an independent model . we performed one further set of calculations to explore the dependence of these results on the atmospheric co concentration . with the surface temperature held at 320 k , we increased the co mixing ratio to 0.3% and 10% . results are shown in fig . 2 . surprisingly ( or perhaps not ) , the stratospheric temperature warms as the co concentration is increased . this result may seem surprising at first , as co is regarded as a coolant in earth's modern stratosphere . but the modern stratosphere is relatively warm because of absorption of solar uv radiation by ozone . in the extremely cold stratospheres modeled here , the only significant heating comes from absorption of upwelling thermal - ir radiation by co ; hence , adding more co has a warming effect . conversely , increasing co had little effect on stratospheric h concentrations ( fig .
2b ) .one further observation can be made based on these results .one can still calculate a moist greenhouse limit using a 1-d climate model , but one needs to be careful in doing so , as the assumption of an isothermal 200 k stratosphere is clearly invalid . if one has to pick a stratospheric temperature , 150 k would be a better estimate for a low - co atmosphere .the moist greenhouse limit , like the runaway greenhouse limit , should ideally be calculated using 3-d climate models .our calculations support the results of in that we , too , find very low stratospheric temperatures for moderately warm , low - co atmospheres that lack o and o .and we , too , calculate low stratospheric h concentrations for surface temperatures up to k. at still higher surface temperatures , however , the stratosphere warms and h becomes a major upper atmospheric constituent , as in the earlier model of and the more recent model of .thus , contrary to the claim of , water loss does appear to be possible from a moist greenhouse planet .finally , our calculations suggest that the moist greenhouse limit for the inner edge of the habitable zone can be estimated by doing 1-d inverse calculations , provided that one uses a stratospheric temperature of 150 k , instead of the canonical value of 200 k used in earlier studies . but a fully saturated 1-d climate model will likely underestimate the solar flux needed to trigger a moist greenhouse and will thus produce a habitable zone inner edge that is too far away from the star .more accurate estimates of the inner edge boundary require the use of 3-d climate models .the authors thank the anonymous referee for insightful comments .acknowledges the undergraduate research opportunities program ( urop ) at boston university for primarily funding the research while in residence at penn state university in the summer of 2015 .j.f.k . and r.k .thank nasa s emerging worlds and exobiology programs for their financial support .
|
a radiative - convective climate model is used to calculate stratospheric temperatures and water vapor concentrations for ozone - free atmospheres warmer than that of modern earth . cold , dry stratospheres are predicted at low surface temperatures , in agreement with recent 3-d calculations . however , at surface temperatures above 350 k , the stratosphere warms and water vapor becomes a major upper atmospheric constituent , allowing water to be lost by photodissociation and hydrogen escape . hence , a moist greenhouse explanation for loss of water from venus , or some exoplanet receiving a comparable amount of stellar radiation , remains a viable hypothesis . temperatures in the upper parts of such atmospheres are well below those estimated for a gray atmosphere , and this factor should be taken into account when performing inverse climate calculations to determine habitable zone boundaries using 1-d models .
|
the last 15 years have seen considerable progress in the conception and development of ideas for multi - spacecraft optical interferometers . stachnik , melroy , and arnold ( 1984 ) first laid out the conceptual framework for an orbiting michelson interferometer , and the following year the european space agency devoted a colloquium to spacecraft arrays of this type . particular emphasis in a number of studies ( johnston & nock 1990 ; decou 1991 ) was placed on the choice of orbits to minimize fuel usage and provide maximal uv coverage . decou ( 1991 ) described a family of orbits near geostationary which were particularly efficient in this respect . in the early 1990s a coherent effort began at the nasa / caltech jet propulsion laboratory to develop a consistent and detailed design for a three - spacecraft michelson interferometer known initially as the separated spacecraft interferometer ( ssi ; kulkarni 1994 ) , and then as the new millennium interferometer ( nmi ; mcguire & colavita 1996 ; blackwood et al . 1998 ) because of its alignment with nasa's new millennium technology program , in which it was scheduled as deep space 3 ( ds3 ) . however , funding constraints eliminated the original three - spacecraft baseline design in late 1998 , and a de - scoped version involving somewhat reduced capability and requiring only two spacecraft was adopted . the new mission , known as space technology 3 ( st3 ) due to re - alignment of the parent nasa program , has moved into a prototype construction phase , with launch now set for 2005 . in this report we describe the primary enabling concepts for the dual spacecraft system , which combine specific choices of array geometry with a novel fixed optical delay line capable of supporting a continuously variable interferometer baseline from 40 to 200 m. the next section describes the choice of geometry , followed by a section describing the overall optical layout and the fixed delay line . a related paper in this volume ( lay et al . ) describes in detail the operation of the interferometer system . before the initial three - spacecraft configuration for ds3 was adopted as the working design , a variety of different configurations were considered which gave various levels of technology demonstration with respect to a formation - flying multiple - spacecraft interferometer . one of these proposed early configurations was in fact a dual spacecraft system ( folkner 1996 ) . the basic geometry of this configuration is shown in figure 1 . here the collector spacecraft ( which acts simply as a moving relay mirror ) travels along a parabolic trajectory with the combiner spacecraft at the focus of the parabola , which we choose as the origin in this plot . the combiner spacecraft then carries a fixed optical delay line which compensates for the additional pathlength that the collector spacecraft produces . this is indicated schematically by showing the fixed delay line as if it were reflecting off another relay mirror at the surface of the reference parabola , thus ensuring equal delay in the two arms of the interferometer . for the pictured geometry , the collector spacecraft position must satisfy equation ( 1 ) , in which the coordinate is defined as the projected baseline , and the total fixed delay carried by the combiner spacecraft is .
in the case of fig . 1 the -position of the collector spacecraft was always negative with respect to the combiner for simplicity in the relay optics . equation ( 1 ) then determines the required collector spacecraft position for a given projected baseline . for the configuration of fig . 1 , the fixed delay is m , and the maximum baseline ( at ) is then also 100 m. the difficulty with this approach is the requirement that the combiner spacecraft must carry a 100 m fixed delay line in a very compact configuration , of order 1 - 2 m in overall length . this amount of delay is not easily achievable in a broad - band system ( 450 - 1000 nm ) as was planned for ds3 . approaches involving reflections between opposing spherical or flat mirrors typically produce too much wavefront distortion , absorption , and scattering losses to be useful for a white light interferometer . alternatives such as the use of optical fiber also do not afford the broadband single - mode operation required for a delay line . figure 2 shows a modified approach to the two spacecraft system in which a much shorter fixed delay line can be utilized . here the spacecraft configuration entails a collector spacecraft position which moves along the reference parabola _ above _ the combiner spacecraft with respect to the source direction . referring to equation ( 2 ) above , when exceeds the fixed delay , the collector spacecraft -value becomes positive . for , the interspacecraft distance thus grows quadratically with baseline . for the de - scoped nasa mission st3 , preliminary design considerations indicated that a fixed delay line of m stored delay was achievable within the constraints of spacecraft size and instrument visibility budget . in a later section we provide details of the fixed delay line design . using m , and the additional constraint of km imposed by formation flying requirements , st3 is able to achieve a maximum interferometer baseline of about 200 m. figure 3 indicates schematically the optical design for st3 in the adopted dual spacecraft configuration . the optical train is almost completely planar throughout the system , and employs an athermalized ultra - stable composite optical bench . in the combiner spacecraft ( which will function as a standalone fixed - baseline interferometer ) a pair of outboard siderostats feed into an afocal gregorian compressor with a 1 arcmin fieldstop at the internal focus . the 12 cm incoming beams are then compressed to 3 cm and fed into the delay lines , one fixed and one movable . after this the beams enter the beam combiner . an outer 0.5 cm annular portion of each beam is stripped off for guiding , and the central 2 cm portion of the beam is used for fringe tracking ( using a single - element avalanche - photodiode detector in one of the combined beams ) . the other combined 2 cm beam is dispersed in a prism and integrated coherently on an 80 channel ccd fringe spectrometer . perspective and schematic views of the fixed delay line are shown in fig . the design employs 3 nested cat's eye retroreflectors , two of which are in a cassegrain configuration and the third a newtonian . as noted in the plot , the optics are very slow , giving large depth of focus and minimal impact on wavefront distortion . three of the 13 reflections occur at foci and have little wavefront effect . however , due to the large magnification of the system , focal plane flats must be sized generously to match field of view requirements .
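although the explicit forms of equations ( 1 ) and ( 2 ) are not reproduced above , the quoted numbers are consistent with the usual equal - path condition for a parabola with the combiner at its focus , sqrt( x^2 + z^2 ) = z + d , i.e. z = ( x^2 - d^2 ) / ( 2d ) , where x is the projected baseline and d the stored delay . the short script below evaluates this reconstructed relation as a plausibility check ; it is our reconstruction , not code or an equation taken from the mission design .

```python
def collector_offset(baseline_m, delay_m):
    """Collector position along the line of sight (our reconstructed eq. 2).

    Positive values place the collector 'above' the combiner with respect
    to the source; negative values correspond to the fig. 1 geometry.
    """
    return (baseline_m**2 - delay_m**2) / (2.0 * delay_m)

# fig. 1 geometry: 100 m stored delay gives a 100 m maximum baseline at zero offset
print(collector_offset(100.0, 100.0))   # 0.0

# st3 geometry: 20 m stored delay, ~1 km separation limit -> ~200 m baseline
print(collector_offset(200.0, 20.0))    # 990.0 m, i.e. about 1 km
print(collector_offset(40.0, 20.0))     # 30.0 m at the minimum 40 m baseline
```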
we thank m. shao and m. colavita for their invaluable suggestions & support of this work . the research described here was carried out by the jet propulsion laboratory , california institute of technology , under a contract with the national aeronautics and space administration .
blackwood , g. b. , dubovitsky , s. , linfield , r. p. , and gorham , p. w. , 1998 , proc . spie , 3350 ( 1 ) , 173 .
decou , a. b. , 1991 , journ . astronaut . , 39 ( 3 ) , 283 .
folkner , w. m. , 1996 , jpl interoffice memo . 335.1 - 96 - 029 .
johnston , m. d. , & nock , k. t. , 1990 , in proc . tech . for opt . interferom . from space ( pasadena : jpl ) .
kulkarni , s. r. , 1994 , prop . for new mission concepts in astroph . , caltech .
lay , o. p. , blackwood , g. h. , dubovitsky , s. , gorham , p. w. , and linfield , r. p. , 1999 , this volume .
mcguire , j. , & colavita , m. , 1996 , nmi prelim . design rev . a , jpl interoffice memorandum .
stachnik , r. , melroy , p. , & arnold , d. , 1984 , proc . spie , 445 , 358 .
|
we present the enabling concept and technology for a dual spacecraft formation - flying optical interferometer , to be launched into a deep space orbit as space technology 3 . the combiner spacecraft makes use of a nested cat s eye delay line configuration that minimizes wavefront distortion and stores 20 m of optical pathlength in a package of m length . a parabolic trajectory for the secondary collector spacecraft enables baselines of up to 200 m for a fixed 20 m stored delay and spacecraft separations of up to 1 km .
|
collective opinion formation in society seems to be an emerging phenomenon that occurs in networks of agents that interact and exchange opinions .the voter model is an individual - based stochastic model for collective opinion formation that has been studied in , above all , statistical physics and probability theory communities for decades . in the voter model ,a node assumes one of the different states ( i.e. , opinions ) that can stochastically change over time .the configuration in which all nodes take the same state , i.e. , consensus , is the only type of absorbing configuration of the voter model .the mean time to consensus , which we denote by , is a fundamental property of the voter model . for a given size of the population , depends on the network structure , which suggests that opinions spread faster on some networks than others .not surprisingly , is small for well - connected networks such as the complete graph for which , where is the number of nodes in the network .more complex networks with small mean path lengths also yield linear or sublinear dependence of on .in contrast , can be large for networks in which communication between nodes is difficult for topological reasons . for example , the one - dimensional chain yields .networks with community structure also yield when intercommunity links are rare because coordination between the different communities is a bottleneck of the entire process . in this study , we explore network structure that maximizes the consensus time . in practice , answering this question may help us understand why consensus is difficult in real society . in theory , we expect that there exist networks for which is larger than for the following reason .the mean consensus time of the voter model is often analyzed through the random walk .in fact , the so - called coalescence random walk is the dual process of the voter model , implying that the mean time before random walkers coalesce into one gives for the voter model .in addition , in the voter model on the chain , positions of active links , i.e. , boundaries between adjacent nodes possessing the opposite states , perform a random walk until different active links meet and annihilate .this relationship helps us to calculate for the chain [ i.e. , ] through the hitting or cover time of the random walk .because the hitting time and cover time of the random walk are known to scale as for some networks , of the voter model may also scale as for these and other networks . in networks in which the degree ( i.e. , number of neighboring nodes for a node ) is heterogeneous ,the consensus time depends on the specific rule with which we update the state of nodes .therefore , we explore the networks that maximize separately for different update rules . for each update rule, we determine such networks by combining exact numerical calculations of for small networks ( sec . [ sec : exact ] ) , coarse analytical arguments to evaluate lower and upper bounds of ( secs . [sub : order lollipop ] and [ sub : order barbell ] ) , analysis of the coalescing random walk ( sec .[ sub : order double - star ] ) , and direct numerical simulations for large networks ( sec .[ sec : direct numerical ] ) .the results are summarized in table [ tab : summary ] .we consider undirected networks possessing nodes and links .each node possesses either of the two states and at any time . 
in each update event , the state of a node is updated , and time is consumed . we repeat this procedure until either the consensus of state or that of state is reached . it should be noted that a node is updated once per unit time on average . we examine three update rules according to refs . . under the link dynamics ( ld ) , we first select a link with probability . denote by and the two endpoints of the selected link . with probability , copies s state . with the remaining probability , copies s state . in this manner , local consensus is obtained in a single update event . if and have the same state beforehand , the state of neither node changes . under the voter model ( vm ) update rule , we first select a node to be updated with probability . then , a neighbor of , denoted by , is selected out of the neighbors of with the equal probability , i.e. , the inverse of s degree . then , copies s state . under the invasion process ( ip ) , we first select a parent node with probability . then , a neighbor of , again denoted by , is selected for updating with the probability equal to the inverse of s degree . then , copies s state . in the following , we collectively refer to the dynamics under ld , vm , and ip as opinion dynamics . the three update rules coincide for regular networks ( i.e. , networks in which all nodes have the same degree ) . in heterogeneous networks , opinion dynamics including depends on the update rule . in the present study , we allow heterogeneous networks . in this section , we consider small networks and exactly calculate for any given network and numerically maximize by gradually morphing network structure with ( i.e. , number of nodes ) and ( i.e. , number of links ) fixed . we refer to the collection of the states of the nodes as the configuration . there are possible configurations because each node takes either state or . two configurations correspond to consensus . the mean consensus time depends on the starting configuration . we denote a configuration by the bracketed sequence of node states , e.g. , [ 0101 ] , and the mean consensus time starting from a given configuration by \left< t\right> with that configuration as a subscript , e.g. , \left< t\right>_{[0101 ] } . in a single update , the configuration may switch to a neighboring configuration , where the neighborhood of a configuration is defined as the configurations that differ from the original configuration in the state of just a single node . each configuration has neighbors , as shown in fig . [ fig : n=4](a ) . to explain the method for exactly calculating , we refer to the network with nodes shown in fig . [ fig : n=4](b ) and consider configuration [ 0101 ] ( i.e. , nodes 1 and 3 possess state , and nodes 2 and 4 possess state ) . under ld , node 1 imitates node 2 in a single update with probability , and node 1 imitates node 4 with probability . if either of these events occurs , the configuration transits from [ 0101 ] to [ 1101 ] . if the link connecting nodes 1 and 3 is selected for the update , the configuration does not change . this event occurs with probability . by taking into account all possible transitions from [ 0101 ] in a similar manner , we obtain \left< t\right>_{[0101 ] } = \frac{1}{4}\left< t\right>_{[0101 ] } + \frac{1}{4}\left< t\right>_{[0001 ] } + \frac{1}{8}\left< t\right>_{[0100 ] } + \frac{1}{8}\left< t\right>_{[0111 ] } + \frac{1}{4}\left< t\right>_{[1101 ] } + \frac{1}{n } . \label{eq : recursive t_[0101 ] } the last term on the right - hand side of this equation results from the fact that an update consumes time by definition . we can similarly derive the recursive equations for the remaining non - consensus starting configurations , i.e. , all configurations except the two consensus ones .
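the full set of such recursions forms a linear system over all non - consensus configurations , which can be solved directly for small networks . the sketch below does this for the link dynamics ; it is our own minimal illustration ( the function name and the example edge list are ours ) , with the edge list chosen to be consistent with the transition probabilities in the example equation above ( nodes relabeled 0 - 3 ) rather than copied from the paper .

```python
import itertools
import numpy as np

def mean_consensus_time_ld(edges, n):
    """Exact mean consensus time under link dynamics (LD), for every configuration."""
    m = len(edges)
    configs = list(itertools.product((0, 1), repeat=n))
    index = {c: i for i, c in enumerate(configs)}
    a = np.eye(len(configs))          # left-hand side, (I - P) on non-consensus rows
    b = np.zeros(len(configs))        # each update consumes time 1/n
    for c in configs:
        i = index[c]
        if sum(c) in (0, n):          # consensus configurations are absorbing, <T> = 0
            continue
        b[i] = 1.0 / n
        for (u, v) in edges:
            for src, dst in ((u, v), (v, u)):   # each directed copy has probability 1/(2m)
                new = list(c)
                new[dst] = c[src]
                a[i, index[tuple(new)]] -= 1.0 / (2.0 * m)
    t = np.linalg.solve(a, b)
    return {c: t[index[c]] for c in configs}

# a 4-node, 4-link example consistent with the recursion for configuration [0101]
edges = [(0, 1), (0, 2), (0, 3), (1, 2)]
times = mean_consensus_time_ld(edges, 4)
print(times[(0, 1, 0, 1)])            # mean consensus time starting from [0101]
```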
by solving the set of the linear equations , we obtain the mean consensus time for all initial configurations . finally , we define as the mean consensus time averaged over the initial configurations that possess the equal number of state and nodes . when is even , there are such configurations ; when is odd , we instead use the configurations in which the number of nodes in state is smaller than that in state just by one . using this exact method for calculating , we numerically explored the network structure with the largest value for a given , , and update rule as follows .
1 . generate an initial network using one of the two following methods . in the first method , we prepare the star with nodes and links . then , we add the remaining links between pairs of leaf ( i.e. , nonhub ) nodes with the uniform density . in the second method , we prepare the chain with nodes and links . then , we similarly add the remaining links between randomly selected nonadjacent pairs of nodes . in either method , we prohibit multiple links when adding the links . networks generated by the two methods represent two extreme initial conditions .
2 . calculate for under the given update rule .
3 . select a link with the uniform probability . denote the endpoints of the selected link by and . if the degrees of and are at least two , delete the link , select a pair of nonadjacent nodes with the uniform probability , and create a link between the two nodes . if the degree of is equal to one and that of is at least two , we disconnect the link from but keep as an endpoint of the new link . we select the other endpoint of the new link with the uniform probability from the nodes that are not adjacent to and connect it to . it should be noted that either the degree of or that of is at least two because the network is in fact connected throughout the procedure . if the generated network is disconnected , we repeat the procedure until a connected network is generated . we refer to the new network as .
4 . calculate for .
5 . if is larger for than , we replace by .
we repeat steps 3 , 4 , and 5 until the local maximum of is reached . in practice , if the rewiring does not occur in more than 5000 steps , we stop repeating steps 3 , 4 , and 5 , calculate , and record the network structure . we run simulations with five initial networks generated by each of the two methods ( i.e. , ten initial conditions in total ) . when the final network depends on the initial network , we select the one yielding the largest value . it should be noted that the obtained network realizes a local , but not global , maximum of . first , we set and apply the optimization procedure described in sec . [ sub : maximize methods ] . the networks maximizing under ld are shown for various values in fig . [ fig : max t with n=10 shape](a ) . the generated networks are close to the lollipop graph , which is defined as a network composed of a clique and a one - dimensional chain grafted to the clique . by definition , the lollipop graph can only be formed for specific values of given a value of . figure [ fig : max t with n=10 shape](a ) shows that when takes these values ( i.e. , , 12 , 15 , 19 ) , the network that maximizes is the lollipop graph . for other values of , the obtained networks are close to the lollipop graph . the maximized value is plotted against by the circles in fig . [ fig : max t with n=10 ] . the peaks are located at the values of that enable the lollipop graph . therefore , the lollipop graph is suggested to maximize under ld . the networks maximizing under vm are shown in fig . [ fig : max t with n=10 shape](b ) .
when 11 , 15 , and 21 , the generated networks are the barbell graph , which is defined as the network possessing two cliques of the equal size connected by a chain . for the other values, is the largest for networks close to the barbell graph .the maximized value is plotted against by the squares in fig .[ fig : max t with n=10 ] .the peaks of are located at , 13 , and 15 .when , the generated network is an asymmetric variant of the barbell graph in which one clique has three nodes and the other clique has four nodes [ fig . [fig : max t with n=10 shape](b ) ] . taken together with the results for and 15 ,the barbell graph is suggested to maximize under vm .the networks maximizing under ip are shown in fig .[ fig : max t with n=10 shape](c ) . when , which is the minimum possible value to keep the network connected , the generated network is the so - called double - star graph ( fig .[ fig : double - star ] ) , in which two stars are connected by an additional link between the two hubs .when is slightly larger than , the generated networks are close to the double - star graph [ fig .[ fig : max t with n=10 shape](c ) ] . for larger values ,the generated networks are not similar to the double - star graph .the maximized value is plotted against by the triangles in fig .[ fig : max t with n=10 ] .the value monotonically decreases with , suggesting that the double - star graph maximizes under ip .next , for ld , we confine ourselves to the lollipop graph and search for the lollipop graph with the largest value of . to this end , we set and , and vary the size of the clique .we are allowed to use larger values of as compared to the numerical simulations described in figs .[ fig : max t with n=10 shape ] and [ fig : max t with n=10 ] , for which , for two reasons .first , in the current set of numerical simulations , we do not have to maximize by gradually changing networks .second , because of the symmetry inherent in the lollipop graph , we can considerably reduce the number of the linear equations to solve [ e.g. , eq . ] . concretely, we have to only maintain the number of nodes in the state in the clique and the configuration of the chain , because all nodes in the clique are structurally equivalent . in fig .[ fig : max t with n=15 n=20](a ) , is plotted as a function of the size of the clique in the lollipop graph when is fixed .the figure indicates that is the largest for the lollipop graph having approximately half the nodes in the clique .the corresponding results for the barbell graph under vm are shown in fig .[ fig : max t with n=15 n=20](b ) .the figure indicates that is the largest for the barbell graph that has approximately nodes in each clique and the chain .in this section , we analytically assess the dependence of on for the lollipop , barbell , and double - star graphs under the three update rules .our analysis is based on the probability and mean time for transitions among typical coarse - grained configurations . because we are interested in the asymptotic dependence of on , we assume in this section that the lollipop graph contains nodes ; the clique and chain in the lollipop graph contain nodes each .for all three update rules , consensus within the clique is reached in time if the nodes in the chain do not affect the opinion dynamics within the clique . 
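for reference , the three graph families discussed above are easy to generate programmatically . the helper functions below build their edge lists following the verbal definitions in the text ; the function names and size conventions are ours and are given only as a sketch .

```python
def lollipop(n_clique, n_chain):
    """Clique of n_clique nodes with a chain of n_chain nodes grafted onto node 0."""
    edges = [(i, j) for i in range(n_clique) for j in range(i + 1, n_clique)]
    prev = 0
    for k in range(n_clique, n_clique + n_chain):
        edges.append((prev, k))
        prev = k
    return edges

def barbell(n_clique, n_chain):
    """Two cliques of n_clique nodes each, joined by a chain of n_chain nodes."""
    edges = [(i, j) for i in range(n_clique) for j in range(i + 1, n_clique)]
    offset = n_clique + n_chain
    edges += [(offset + i, offset + j) for i in range(n_clique) for j in range(i + 1, n_clique)]
    path = [0] + list(range(n_clique, n_clique + n_chain)) + [offset]
    edges += list(zip(path[:-1], path[1:]))
    return edges

def double_star(n_leaves):
    """Two hubs (nodes 0 and 1) joined by a link, each with n_leaves leaf nodes."""
    edges = [(0, 1)]
    edges += [(0, 2 + k) for k in range(n_leaves)]
    edges += [(1, 2 + n_leaves + k) for k in range(n_leaves)]
    return edges

print(len(lollipop(5, 5)))     # 15 links for a 10-node lollipop graph
print(len(barbell(4, 2)))      # 15 links for a 10-node barbell graph
print(len(double_star(4)))     # 9 links for a 10-node double-star graph
```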
in the following , we estimate the mean consensus time by assessing approximate lower and upper bounds of .typical configurations of the opinion dynamics on the lollipop graph are schematically shown in fig .[ fig : lollipop bounds ] .an open and filled circle represents a node in the and states , respectively .the initial configuration is totally random and depicted as configuration i in fig .[ fig : lollipop bounds ] .[ [ sub : lollipop ld ] ] ld ^^ we call a consecutive segment on the chain occupied by the same state ( i.e. , or ) the domain . under ld ,a node in the chain has degree two ( except for the one at the end of the chain , whose degree is one ) and is updated once in steps on average . because an update event consumes time by definition , the mean time before a node on the chain is updated is . in the ordinary voter model on the one - dimensional chain, the domain size grows to an size in time .therefore , under ld , the time needed for domains to grow to an size is equal to . by this time, consensus would be reached within the clique because consensus of the clique if it were isolated occurs in time .to summarize , the transition from configuration i to configuration ii occurs in time .it should be noted that configuration ii shown in fig .[ fig : lollipop bounds](a ) includes configurations in which the chain contains two or more domains of the opposite states with characteristic length .we obtain where and are the mean consensus times starting from configurations i and ii , respectively .likewise we define , , and , where iii , iv , and v correspond to the three configurations shown in fig .[ fig : lollipop bounds](a ) .there are two types of configurations that can be reached from configuration ii . with a probability , which is in fact equal to the fraction of nodes on the chain that takes the same state as that of the clique , configuration ii transits to the consensus of the entire network [ fig .[ fig : lollipop bounds](a ) ] .this event requires time because each node on the chain is updated once per time and consensus of the chain requires update events per node . otherwise , starting from configuration ii , the chain is eventually occupied by the state opposite to that taken by the clique [ configuration iv shown in fig . [ fig : lollipop bounds](a ) ] with probability .this event also requires time .therefore , we obtain .\label{eq : t_ii lollipop ld}\ ] ] there are two possible types of events that occur in configuration iv .first , the state taken by the chain may invade a node in the clique ( configuration iii ) .second , the state taken by the clique may invade the node on the chain adjacent to the clique ( configuration v ) .either event occurs with probability because the link connecting the clique and chain must be taken , and the direction of opinion transmission on the selected link determines the type of the event .the occurrence of either event requires time because the mentioned single link in the chain must be selected for either event to occur .therefore , we obtain + \frac{1}{2 } \left [ o(n ) + \left < t\right>_{\rm v } \right ] .\label{eq : t_iv lollipop ld}\ ] ] from configuration v , the consensus of the entire lollipop graph is reached in time with probability because , under ld , the consensus probability of a state is equal to the number of nodes possessing the state in an arbitrary undirected network ( chain in the present case ) . 
with probability , configuration iv is revisited .this event requires updates per node , which corresponds to time .these probabilities and times can be calculated as the hitting probability and time of the random walk on the chain with two absorbing boundaries .therefore , we obtain .\label{eq : t_v lollipop ld}\ ] ] from configuration iii , the single invader in the clique spreads its opinion in the clique to lead to consensus of the entire network after time with probability approximately equal to .this probability may be an overestimate because the clique and chain in fact interact possibly to prevent consensus from being reached in time .otherwise , configuration iv or a more complicated configuration as represented by configuration i is revisited .the transition to the latter configuration may occur because , starting from configuration iii , the clique may experience a mixture of the and states for some time during which the clique transforms the chain to alternating small domains of the opposite states . however , to assess a lower bound of , we pretend that configuration iv is reached from configuration iii with probability after short time , which is true if the clique were disconnected from the chain .then , we obtain .\label{eq : t_iii lollipop ld lower}\ ] ] because consensus would require longer time if we start from configuration i than iv , we assume that eq .gives a lower bound of . by solving eqs ., , , , and , we obtain , , , , as a rough lower bound of .to estimate an upper bound of , we consider the transitions between typical configurations shown in fig .[ fig : lollipop bounds](b ) , which differs from fig .[ fig : lollipop bounds](a ) only in the transitions from configuration iii .now , with probability , we presume that the configuration returns to a random one ( configuration i ) , which is adversary to consensus . with probability , we safely assume that configuration iv is revisited in time .it should be noted that in time , a link on the chain has rarely been selected for large , such that the chain would not be divided into multiple domains of the opposite states in time . by combining eqs ., , , , and + ( 1-q^{\rm ld})\left[\left < t\right>_{\rm iv}+o(\ln n)\right ] , \label{eq : t_iii lollipop ld upper}\ ] ] we obtain , , , , as a rough upper bound of .it should be noted that the same conclusion holds true as long as .in fact , is exactly upper - bounded by for arbitrary networks on the basis of the following alternative arguments .we denote the number of nodes in the state by . in an update event , increases or decreases by one with the same probability . with the remaining probability , does not change .therefore , performs an unbiased random walk on interval ] as an upper bound of .on the basis of the rough lower and upper bounds , we conclude for the combination of the lollipop graph and ld .[ [ vm ] ] vm ^^ we evaluate approximate lower and upper bounds of under vm and ip in a similar manner . 
it should be noted that a node in the chain is updated once per time under vm and ip , which is in contrast to the case of ld .to derive an approximate lower bound under vm , we consider the same transitions among typical configurations as those under ld [ fig .[ fig : lollipop bounds](a ) ] .we obtain , \label{eq : t_ii lollipop vm}\\ % & \left< t\right>_{\rm iii } = \frac{1}{n } o(n ) + \frac{n-1}{n } \left [ o(\ln n ) + \left < t\right>_{\rm iv } \right ] , \label{eq : t_iii lollipop vm lower}\\ % & \left< t\right>_{\rm iv } = \frac{2}{n+2 } \left [ o(n ) + \left < t\right>_{\rm iii } \right ] + \frac{n}{n+2 } \left [ o(1 ) + \left < t\right>_{\rm v } \right ] , \label{eq : t_iv lollipop vm}\\ % & \left <t\right>_{\rm v } = \frac{1}{2(n-1 ) } o(n^2 ) + \frac{2n-3}{2(n-1 ) } \left [ o(n ) + \left < t\right>_{\rm iv } \right ] , \label{eq : t_v lollipop vm}\end{aligned}\ ] ] where , . to derive eq ., we used the fact that , in a single update event , the transition from configuration iv to iii occurs with probability and that from configuration iv to v occurs with probability . to derive eq ., we used the fact that the fixation probability for a node ( i.e. , probability that the consensus of the state taken by is reached when all the other nodes possess the opposite state ) under vm is proportional to s degree . consider an isolated chain of length in which the leftmost node is in state and all the other nodes are in state , as schematically shown in configuration v in fig .[ fig : lollipop bounds](a ) .then , and fixate with probability ] , respectively ; is equal to the sum of the degree of all nodes .equations , , , , and yield , , , , . to estimate an upper bound of , we consider transitions among typical configurations shown in fig .[ fig : lollipop bounds](c ) .it is different from the diagram for ld [ fig .[ fig : lollipop bounds](b ) ] in that the transition from configuration iii to configuration iv is not present .we modified the diagram because , under vm , the node that belongs to the chain and is adjacent to the clique imitates the state of a node in the clique every time . therefore, even within time , nodes in the chain may flip the state many times such that configuration iv is rarely visited directly from configuration iii .given this situation , we replace eq . by in other words ,once configuration iii is reached , we assume that configuration i is always reached after time . because configuration i is considered to be adversary to consensus as compared to configuration iv , we assume that eq .gives an upper bound of .the time on the right - hand side of eq . comes from the fact that the opinion dynamics in an isolated clique of size ends up with consensus in time . using eqs ., , , , and , we obtain , , , , . therefore , we conclude for the combination of the lollipop graph and vm . [ [ ip ] ] ip ^^ for an approximate lower bound under ip , we obtain , \label{eq : t_ii lollipop ip}\\ % & \left < t\right>_{\rm iii } = \frac{1}{n } o(n ) + \frac{n-1}{n } \left [ o(\ln n ) + \left < t\right>_{\rm iv } \right ] , \label{eq : t_iii lollipop ip lower}\\ % & \left< t\right>_{\rm iv } = \frac{n}{n+2 } \left [ o(1 ) + \left < t\right>_{\rm iii } \right ] + \frac{2}{n+2 } \left [ o(n ) + \left < t\right>_{\rm v } \right ] , \label{eq : t_iv lollipop ip}\\ % & \left< t\right>_{\rm v } = \frac{2}{n+2 } o(n^2 ) + \frac{n}{n+2 } \left [ o(n ) + \left < t\right>_{\rm iv } \right ] , \label{eq : t_v lollipop ip}\end{aligned}\ ] ] where , . to derive eq . 
, we used the fact that , in a single update event , configuration iv transits to iii with probability and v with probability . to derive eq . , we used the fact that the fixation probability under ip is inversely proportional to the degree of the node .similar to the case of vm , consider an isolated chain of length in which the leftmost node is in state and all the other nodes are in state .then , and fixate with probability and , respectively .equations , , , , and yield , and , , .because a random initial condition yields , we regard as a rough lower bound of . to estimate an upper bound of , we consider the same transitions among typical configurations as those for ld [ fig .[ fig : lollipop bounds](b ) ] and replace eq .by + ( 1-q^{\rm ip})\left[\left < t\right>_{\rm iv}+o(\ln n)\right ] , \label{eq : t_iii lollipop ip upper}\ ] ] where . to derive eq . , we assumed that configuration iv is reached in time with probability . within this time, the state monopolizing the clique would not invade the chain because such an event requires time to occur . by combining eqs ., , , , and , we obtain , , , , as a rough upper bound of . therefore , we conclude for the combination of the lollipop graph and ip . in the barbell graph , the two cliques may first reach the unanimity of the opposite states .this phenomenon may delay consensus , which is the case for the two - clique graph . for simplicity , we assume in this section that each clique and the chain have nodes each such that the barbell graph is composed of nodes .we evaluate approximate lower and upper bounds for each update rule in the manner similar to that for the lollipop graph .typical configurations and the transitions among them for the barbell graph are schematically shown in fig .[ fig : barbell bounds ] .[ [ ld ] ] ld ^^ under ld , a node belonging to the chain is updated once per time .therefore , the consensus of the chain would require time if it were isolated .when the consensus of the chain is reached , each clique would have also reached consensus because the consensus within a clique occurs in time .when two cliques end up with the same state after time , the consensus of the entire barbell graph is realized .otherwise , configuration iii shown in fig .[ fig : barbell bounds](a ) is reached .either event occurs with probability .therefore , we obtain .\label{eq : t_i barbell ld}\ ] ] starting from configuration iii , the next change in the state occurs at the boundary between the chain and one of the two cliques whose state is opposite to that of the chain .similar to the transition from configuration iv in the case of the lollipop graph ( sec .[ sub : lollipop ld ] ) , we obtain + \frac{1}{2 } \left [ o(n ) + \left < t\right>_{\rm iv } \right ] .\label{eq : t_iii barbell ld}\ ] ] starting from configuration iv [ fig .[ fig : barbell bounds](a ) ] , the state taken by just one node in the chain , which is adjacent to a clique , fixates in the chain in time with probability .otherwise , with probability , the state taken by nodes in the chain fixates in the chain in time . in both cases ,configuration iii is recovered because of the symmetry .therefore , we obtain we proceed similarly to the case of the lollipop graph to evaluate an approximate lower bound of . 
in other words , as shown in fig .[ fig : barbell bounds](a ) , starting from configuration ii , we suppose that the consensus of the entire network is attained with probability in time and that configuration iii is revisited in time with the remaining probability .then , we obtain .\label{eq : t_ii barbell ld}\ ] ] by combining eqs . , , , and, we obtain . to evaluate an approximate upper bound, we only modify the transitions from configuration ii , similar to the case of the lollipop graph . without loss of generality , we assume that the chain and the first clique is occupied by the state and that the second clique has a single node in state and nodes in state , as illustrated by configuration ii in fig .[ fig : barbell bounds](a ) . with a small probability , the state proliferates in the second clique to possibly occupy it in time . because a node in the chain is updated once per time, the configuration of the chain may turn into a mixture of states and in the time . therefore , with probability , we assume that configuration ii transits to configuration i. with probability , the single node in state is extinguished in the second clique in time such that configuration iii is revisited . it should be noted that a node in the chain is rarely updated in time such that the unanimity on the chain is not perturbed in the time . by collecting these contributions , we obtain + \left(1-r^{\rm ld}\right ) \left [ o(\ln n ) + \left < t\right>_{\rm iii } \right ] .\label{eq : t_ii barbell ld upper}\ ] ] by combining eqs . , , , and, we obtain .it should be noted that the arguments based on the unbiased random walk ( sec .[ sub : lollipop ld ] ) also lead to same upper bound of .therefore , we conclude for the combination of the barbell graph and ld .[ [ vm-1 ] ] vm ^^ for an approximate lower bound of under vm , we assume the same types of transitions among configurations as those for ld [ fig . [fig : barbell bounds](a ) ] to obtain , \label{eq : t_i barbell vm}\\ % & \left < t\right>_{\rm ii } = \frac{1}{n } o(n ) + \frac{n-1}{n } \left [ o(\ln n ) + \left < t\right>_{\rm iii } \right ] , \label{eq : t_ii barbell vm lower}\\ & \left < t\right>_{\rm iii } = \frac{2}{n+2 } \left [ o(n ) + \left < t\right>_{\rm ii } \right]+ \frac{n}{n+2 } \left [ o(1 ) + \left < t\right>_{\rm iv } \right ] , \label{eq : t_iii barbell vm}\\ % & \left <t\right>_{\rm iv } = o(n ) + \left < t\right>_{\rm iii } , \label{eq : t_iv barbell vm}\end{aligned}\ ] ] leading to . to estimate an upper bound , we assume the transitions among configurations shown in fig .[ fig : barbell bounds](c ) .it is different from those for ld [ fig .[ fig : barbell bounds](b ) ] in the transitions starting from configuration ii . similar to the case of the ld, we assume without loss of generality that the chain , first clique , and a single node in the second clique are in state and the other nodes in the second clique are in state .the state in the second clique fixates there with probability in time and is eradicated with probability in time . because each node in the chain is updated once per unit time, domains of characteristic length are formed within the time in the latter situation .the resulting configuration is shown as configuration v in fig .[ fig : barbell bounds](c ) . given configuration v ,the consensus of the chain occurs in time , such that configuration iii is revisited . 
by combining eqs ., , , and + ( 1-r^{\rm vm } ) \left [ o(\ln n ) + o(n^2 ) + \left < t\right>_{\rm iii } \right ] , \label{eq : t_ii barbell vm upper}\ ] ] we obtain .therefore , we conclude for the combination of the barbell graph and vm .[ [ ip-1 ] ] ip ^^ for an approximate lower bound of under ip , we consider fig .[ fig : barbell bounds](a ) to obtain , \label{eq : t_i barbell ip}\\ % & \left < t\right>_{\rm ii } = \frac{1}{n } o(n ) + \frac{n-1}{n } \left [ o(\ln n ) + \left < t\right>_{\rm iii } \right ] , \label{eq : t_ii barbell ip lower}\\ & \left < t\right>_{\rm iii } = \frac{n}{n+2 } \left [ o(1 ) + \left < t\right>_{\rm ii } \right]+ \frac{2}{n+2 } \left [ o(n ) + \left < t\right>_{\rm iv } \right ] , \label{eq : t_iii barbell ip}\\ % & \left < t\right>_{\rm iv } = o(n ) + \left < t\right>_{\rm iii } , \label{eq : t_iv barbell ip}\end{aligned}\ ] ] leading to and . because a random initial condition yields , we regard as a rough lower bound of . for an approximate upper bound ,we consider the same diagram as that for ld [ fig .[ fig : barbell bounds](b ) ] .the rationale behind this choice is that the unanimity in the chain in configuration ii is not disturbed in time .this holds true because it takes time before the state taken by a clique may invade a node that belongs to the chain and is adjacent to the clique .therefore , we replace eq . by + \left(1-r^{\rm ip}\right ) \left[ o(\ln n ) + \left< t\right>_{\rm iii } \right ] , \label{eq : t_ii barbell ip upper}\ ] ] where . by combining eqs ., , , and , we obtain . therefore , we conclude for the combination of the barbell graph and ip .we evaluate the mean consensus time for the double - star graph under the three update rules using a different method from that for the lollipop and barbell graphs .it is mathematically established that the so - called dual process of the opinion dynamics is the so - called coalescing random walk , which is defined as follows .consider simple random walkers , with one walker located at each node initially .walkers that have arrived at the same node are assumed to coalesce into one .then , all walkers eventually coalesce into one in a finite network .the dependence on the update rule only appears in the rule with which we move the walkers .the mathematical duality between the opinion dynamics and coalescing random walk guarantees that the time at which the last two walkers coalesce is equal to the consensus time of the opinion dynamics .the time to the coalescence of the last two walkers is considered to dominate the entire coalescing random walk process starting from the walkers and ending when the last two walkers have coalesced . therefore , in this section , we assess by measuring the mean time at which two walkers starting from different nodes in the double - star graph meet .we used the same technique for a different network in a previous study . consider the double - star graph with nodes as shown in fig .[ fig : double - star ] .we call two symmetric parts composed of nodes the classes 1 and 2 .each class contains one hub node with degree and leaf nodes with degree 1 .we define where denotes the time . in eq . , is the probability that the two walkers are located at different leaves in a single class ( i.e. , class 1 or 2 ) at time ( configuration 1 shown in fig .[ fig:5 configs double - star ] ) . 
is the probability that one walker stays in a class 1 leaf and the other walker stays in the class 1 hub , or one walker stays in a class 2 leaf and the other walker stays in the class 2 hub ( configuration 2 ) . is the probability that one walker stays in a class 1 leaf and the other walker stays in the class 2 hub , or one walker stays in a class 2 leaf and the other walker stays in the class 1 hub ( configuration 3 ) . is the probability that a walker stays in a class 1 leaf and the other walker stays in a class 2 leaf ( configuration 4 ) .finally , is the probability that a walker stays in the class 1 hub and the other walker stays in the class 2 hub ( configuration 5 ) .we denote by the probability that the two walkers meet at time . [[ sub : double ld ] ] ld ^^ under ld , a link with one of the two directions is selected with probability $ ] in an update event , which consumes time .therefore , we obtain where equation leads to and by using eqs . and , we obtain because we conclude under ld . [ [ sub : double vm ] ] vm ^^ as shown in appendix [ sec : double - star vm ] , the derivation of for the double - star graph under vm is similar to the case under ld .for this case , we obtain we conclude because holds true for generic initial conditions corresponding to , , and . [ [ sub : double ip ] ] ip ^^ as shown in appendix [ sec : double - star ip ] , for the double - star graph under ip is given by we conclude under ip because holds true for generic initial conditions corresponding to , , and .to check the validity of the scaling between and derived in sec .[ sec : order ] , we carry out direct numerical simulations of the opinion dynamics for larger networks than those considered in sec .[ sec : exact ] . as the lollipop graph, we consider those having nodes in the chain and clique . as the barbell graph, we consider those having nodes in the chain and each clique . in each run , randomly selected nodes initially possess state and the other nodes state .we calculate as an average over runs for each network and update rule .the relationship between the numerically obtained and is shown in fig .[ fig : large n ] for each combination of the network ( i.e. , lollipop , barbell , or double star ) and update rule ( i.e. , ld , vm , or ip ) .the numerical results ( symbols ) are largely consistent with the scaling law derived in sec .[ sec : order ] ( lines ) in most cases . to be more quantitative , we fitted the relationship to each plot shown in fig .[ fig : large n ] using the least - square error method . for the double - star graph under vm, we added three data points ( for , for , and for ) to those shown in fig .[ fig : large n ] before carrying out the least - square error method .the numerically obtained values , shown in table [ tab : alpha numerical ] , are close to the theoretical results summarized in table [ tab : summary ] except for notable differences in some cases .for example , for the barbell graph under ip , the theory predicts , whereas the numerical results yield .although the precise reason for the discrepancy is unclear , it seems to be due to the finite size effect .for example , if we only use the data up to for the double - star graph under vm , we would obtain . 
by extending the numerical simulations up to ,we have obtained ; the theory predicts .in other combinations of network and update rule , we could not carry out numerical simulations for larger populations due to the computational cost .we explored the networks that maximized the mean consensus time , , of the three variants of the voter model .the lollipop graph , barbell graph , and double - star graph were suggested to maximize under the ld , vm , and ip update rules , respectively .in addition , we evaluated for the three types of networks under each of the three update rules .the results are summarized in table [ tab : summary ] .although the dual process of the opinion dynamics is the coalescing random walk , we expect that the characteristic time of the coalescing random walk , such as the time to the final coalesence , and that of usual random walks , such as the hitting time , are qualitatively the same .if we accept this contention , our results are consistent with the previous results for random walks .the hitting and cover time for the random walk on the lollipop graph and the barbell graph both scale as . for the lollipop graph , the theoretical results are consistent with ours for ld . for the barbell graph , the theoretical results are consistent with ours for ld and vm .however , different update rules yield for the lollipop and barbell graphs .the scaling between and depends on the update rule because a choice of the update rule corresponds to weighting of the links in the network .the link weight biases the probability that a particular link is used for state updating in opinion dynamics and the probability with which the random walk transits from one node to another .it should be noted that more exact estimation of for the lollipop and barbell graphs on the basis of the random walk , as we did for the double - star graph , warrants future work .the maximum hitting time of the random walk with respect to the network structure scales as .therefore , we expect that the consensus time attained for some combinations of the network and update rule in the present study is the maximum possible except for the constant factor and nonleading terms .the double - star graph , which maximizes under ip , is far from the lollipop and barbell graphs in two aspects .first , it has a small diameter , i.e. , three .second , it has not been recognised as a network that slows down dynamics of the random walk .previous theoretical results for opinion dynamics in heterogeneous random networks yielded for ip , where and are the mean degree and the mean of the inverse degree , respectively .for the double star with nodes , where is even , this theory predicts because and .this estimate deviates from our results , i.e. , . in the theory developed in refs . , heterogeneous random networks are assumed such that the network does not have structure other than the degree distribution .in contrast , in the double - star graph , leaf nodes are never adjacent to each other , and the two hubs are always adjacent .we consider this is the reason for the deviation .similarly , the theory for heterogeneous random networks adapted to the degree distribution of the double - star graph suggests under vm , where is the second moment of the degree .this estimate is also different from ours , i.e. 
, , presumably for the same reason .we thank yusuke kobayashi for suggesting a succinct proof for the upper bound of the mean consensus time under the ld update rule , sidney redner for valuable discussion , and ryosuke nishi for careful reading of the manuscript .n.m . acknowledges the support provided through jst , crest , and jst , erato , kawarabayashi large graph project .on the double - star graph , consider a single update event under vm .there are two types of events .first , a leaf node imitates the state of the hub node in the same class with probability . in the random walk interpretation, this event corresponds to the movement of a walker located at this leaf ( if so ) to the hub .second , a hub imitates the opinion of a neighbor , which is either a leaf in the same class or the hub in the opposite class , with probability . in the random walk , this event corresponds to the movement of a walker located at the hub to a neighbor .neither walker moves if a walker is not located at the selected starting node .therefore , we obtain where equation leads to 4 & 4 & 2 & 2 & \frac{2(n-1)}{n } \\[1.5ex ] 2 & 2 & 2(n+2 ) & 2(n+2 ) & \frac{2(n-1)(n+2)}{n } \\[1.5ex ] \frac{n-1}{n } & \frac{n-1}{n } & \frac{(n-1)(n+2)}{n } & \frac{n^2 + 3n+1}{n } & \frac{(n-1)^2(n+2)}{n^2 } \\[1.5ex ] 1 & 1 & n+2 & n+2 & \frac{n^2 + 3n+1}{n } \end{pmatrix } \label{iavm}\end{aligned}\ ] ] and by using eqs . and , we obtain where therefore , we obtain eq . .under ip , a walker at a leaf of the double - star graph moves to the hub in the same class with probability in an update event .a walker at a hub moves to a leaf in the same class with probability and to the hub in the opposite class with probability . with the remaining probability , neither walker moves .therefore , we obtain where equation leads to c + 1 & c + 1 & c & c & c-1 \\[1.5ex ] c & c & c ( n+2 ) & c ( n+2 ) & n(n-1)(n+2 ) \\[1.5ex ] \frac{c n(n-1)}{2 } & \frac{c n(n-1)}{2 } & \frac{c n(n-1)(n+2)}{2 } & \frac{n^5-n^3 + 3n^2-n+3}{2 } & \frac{n^2(n-1)^2(n+2)}{2 } \\[1.5ex ] \frac{1}{2 } & \frac{1}{2 } & \frac{n+2}{2 } & \frac{n+2}{2 } & \frac{2n+3}{2 } \end{pmatrix } \label{iaip}\end{aligned}\ ] ] and where . by using eqs . and , we obtain where therefore , we obtain eq . ., for the lollipop graph with various values under ld .( b ) for the barbell graph with various values under vm . in ( a ) and ( b ), the horizontal axis represents the size of the clique in the lollipop or barbell graph .we set and .,title="fig:",width=302 ] , for the lollipop graph with various values under ld .( b ) for the barbell graph with various values under vm . in ( a ) and ( b ), the horizontal axis represents the size of the clique in the lollipop or barbell graph .we set and .,title="fig:",width=302 ] for the lollipop graph .( a ) schematic for evaluating a lower bound .( b ) schematic for evaluating an upper bound under ld and ip .( c ) schematic for evaluating an upper bound under vm.,title="fig:",width=377 ] for the lollipop graph .( a ) schematic for evaluating a lower bound .( b ) schematic for evaluating an upper bound under ld and ip . ( c )schematic for evaluating an upper bound under vm.,title="fig:",width=377 ] for the lollipop graph .( a ) schematic for evaluating a lower bound .( b ) schematic for evaluating an upper bound under ld and ip . ( c )schematic for evaluating an upper bound under vm.,title="fig:",width=377 ] for the barbell graph .( a ) schematic for evaluating a lower bound .( b ) schematic for evaluating an upper bound under ld and ip . 
( c ) schematic for evaluating an upper bound under vm .
relation between the mean consensus time and the number of nodes under different networks and update rules . ( a ) lollipop graph . ( b ) barbell graph . ( c ) double - star graph .
summary of the results . the dependence of the analytical estimations of mean consensus times on the number of nodes is shown for each combination of the network and update rule .
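as a concrete illustration of the numerical procedure used in this paper , the following minimal monte carlo sketch builds small lollipop , barbell , and double - star graphs , runs the ld , vm , and ip update rules , and averages the consensus time over independent runs . it is written in python and assumes the networkx library ; the graph sizes , the number of runs , the i.i.d. random initial states , and the convention that one update event advances time by 1/n are illustrative assumptions chosen for speed , not the settings used in the paper .

import random
import networkx as nx

def double_star(n_leaves):
    """two hubs joined by an edge, each attached to n_leaves leaf nodes."""
    g = nx.Graph()
    g.add_edge("h1", "h2")
    for i in range(n_leaves):
        g.add_edge("h1", ("leaf1", i))
        g.add_edge("h2", ("leaf2", i))
    return g

def consensus_time(g, rule, rng):
    """simulate one run of the opinion dynamics until consensus; return the elapsed time."""
    nodes = list(g.nodes())
    edges = list(g.edges())
    state = {v: rng.randint(0, 1) for v in nodes}     # i.i.d. random initial opinions
    t, dt = 0.0, 1.0 / g.number_of_nodes()            # one update event ~ 1/n time units (a convention)
    while len(set(state.values())) > 1:
        if rule == "LD":                              # link dynamics: random link, random direction
            u, v = rng.choice(edges)
            if rng.random() < 0.5:
                u, v = v, u
            state[v] = state[u]
        elif rule == "VM":                            # voter model: a random node copies a random neighbour
            v = rng.choice(nodes)
            u = rng.choice(list(g.neighbors(v)))
            state[v] = state[u]
        else:                                         # invasion process: a random node exports its state
            u = rng.choice(nodes)
            v = rng.choice(list(g.neighbors(u)))
            state[v] = state[u]
        t += dt
    return t

rng = random.Random(1)
graphs = {"lollipop": nx.lollipop_graph(6, 6),        # clique of 6 plus a chain of 6
          "barbell": nx.barbell_graph(5, 4),          # two cliques of 5 joined by a chain of 4
          "double-star": double_star(6)}              # two hubs with 6 leaves each
for name, g in graphs.items():
    for rule in ("LD", "VM", "IP"):
        runs = [consensus_time(g, rule, rng) for _ in range(100)]
        print(name, rule, sum(runs) / len(runs))

fitting the logarithm of the measured mean consensus time against the logarithm of the number of nodes for several sizes , as done above , would then give an estimate of the exponent alpha .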
|
we explore the networks that yield the largest mean consensus time of voter models under different update rules . by analytical and numerical means , we show that the so - called lollipop graph , barbell graph , and double - star graph maximize the mean consensus time under the update rules called the link dynamics , voter model , and invasion process , respectively . for each update rule , the largest mean consensus time scales as n^3 , where n is the number of nodes in the network .
|
the interaction - free measurements proposed by elitzur and vaidman ( ev ifm ) led to numerous investigations and several experiments have been performed .one of the possible applications of the interaction - free measurements for quantum communication is that it opens up the way to novel quantum non - demolition techniques .other applications are using the idea of interaction - free measurements for `` interaction - free '' computation and for improving cryptographic schemes .however , there have been several objections to the name `` interaction - free '' .some authors in trying to avoid it , made modifications such as `` interaction ( energy exchange ) free measurements '' , `` indirect measurements '' , `` seemingly interaction - free measurements '' , `` interaction - free '' interrogation , `` exposure - free imaging '' , `` interaction - free interaction '' , `` absorption - free measurements '' , etc .moreover , simon and platzman claimed that there is a `` fundamental limit on ` interaction - free ' measurements '' . in many works on the implementation and the analysis of the ev ifmthere is a considerable confusion about the meaning of the term `` interaction - free '' .for example , a very recent paper stated that `` energy exchange free '' is now well established as a more precise way to characterize ifm in the case of classical objects .on the other hand , ryff and ribeiro used the name `` interaction - free '' for a very different experiment . in this paperi want to clarify in which sense the interaction - free measurements are interaction free .i will also make a comparison with procedures termed `` interaction - free measurements '' in the past and will analyze conceptual advantages and disadvantages of various modern schemes for the ifm . the plan of this paper is as follows : in section ii i will describe the original proposal of elitzur and vaidman .section iii is devoted to a particular aspect of the ifm according to which the measurement is performed without any particle being at the vicinity of the measured object .the discussion relies on the analogy with the `` delayed choice experiment '' proposed by wheeler . in sectioniv i make a comparative analysis of the `` interaction - free measurements '' by renninger and dicke . in section v i analyze interaction - free measurements of quantum objects . section vi devoted to the controversy related to the momentum and energy transfer in the process of the ifm . in section viii discuss modifications of the original ev proposal , in particular , the application of the quantum zeno effect for obtaining a more efficient ifm .i end the paper with a few concluding remarks in section viii .in the ev ifm paper the following question has been considered : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ suppose there is an object such that _ any _ interaction with it leads to an explosion .can we locate the object without exploding it ? 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the ev method is based on the mach - zehnder interferometer .a photon ( from a source of single photons ) reaches the first beam splitter which has a transmission coefficient .the transmitted and reflected parts of the photon wave are then reflected by the mirrors and finally reunite at another , similar beam splitter , see fig .1_a_. two detectors are positioned to detect the photon after it passes through the second beam splitter .the positions of the beam splitters and the mirrors are arranged in such a way that ( because of destructive interference ) the photon is never detected by one of the detectors , say , and is always detected by .this interferometer is placed in such a way that one of the routes of the photon passes through the place where the object ( an ultra - sensitive bomb ) might be present ( fig .a single photon passes through the system .there are three possible outcomes of this measurement : i ) explosion , ii ) detector clicks , iii ) detector clicks . if detector clicks ( the probability for that is ) , the goal is achieved : we know that the object is inside the interferometer and it did not explode .the ev method solves the problem which was stated above .it allows finding with certainty an infinitely sensitive bomb without exploding it .the bomb might explode in the process , but there is at least a probability of 25% to find the bomb without the explosion .`` certainty '' means that when the process is successful ( clicks ) , we know for sure that there is something inside the interferometer .the formal scheme of the ev method is as follows .the first stage of the process ( the first beam splitter ) splits the wave packet of the test particle into superposition of two wave - packets .let us signify is the wave packet which goes through the interaction region and is the wave packet which does not enter the interaction region . in the basic ev procedurethe first stage is the next stage is the interaction between the object ( the bomb ) and the test particle .if the test particle enters the interaction region when the bomb is present , it causes an explosion : if the test particle does not enter the interaction region or if the bomb is not present , then nothing happens at this stage : .4 cm fig . 1 . ( a )when the interferometer is properly tuned , all photons are detected by and none reach .( b ) if the bomb is present , detector has the probability 25% to detect the photon sent through the interferometer , and in this case we know that the bomb is inside the interferometer without exploding it . the next stage is the observation of the interference between the two wave packets of the test particle ; it takes place at the second beam - splitter and detectors .it is achieved by splitting the noninteracting wave packet and splitting the wave packet which passed through the interaction region ( if it did ) the observation of the test particle is described by corresponds to the success of the experiment , when we know that the bomb is present in the interaction region ; we signify it by the state .if the bomb is not present then the the ev measurement is described by , explosion with the probability of , and no information but no explosion with the probability of . 
in the latter casewe can repeat the procedure and in this way ( by repeating again and again ) we can find one third of bombs without exploding them .it was found that changing the reflectivity of the beam splitters can improve the method such that the fraction of the bombs remaining intact almost reaches one half .the name `` interaction - free '' seems very appropriate for a procedure which allows finding objects without exploding them , in spite of the fact that these objects explode due to _ any _ interaction .simple logic tells us : given that any interaction leads to an explosion and given that there has been no explosion , it follows that there has been no interaction .this argument which sounds unambiguous in the framework of classical physics requires careful definition of the meaning of `` any interaction '' in the domain of quantum mechanics .the weakness of the definition : `` the ifm is a procedure which allows finding an object exploding due to _ any _ interaction without exploding it , '' is that quantum mechanics precludes existence of such objects .indeed , a good model for an `` explosion '' is an inelastic scattering .the optical theorem tells us that there can not be an inelastic scattering without some elastic scattering .the latter does not change the internal state of the object , i.e. , the object does not explode . in order to avoid non - existing concepts in the definition of the ifm, we should modify the definition in the following way : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the ifm is a procedure which allows finding ( at least sometimes ) bombs of any sensitivity without exploding them . 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the method presented in the ev ifm paper have certain additional features which further justify the name `` interaction - free '' .the method is applicable for finding the location of objects which do not necessarily explode .even for such an object we can claim that , in some sense , finding its location is `` interaction - free '' .the discussion about the justification of the term `` interaction - free '' for the ev procedure has started in the original ev ifm paper : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the argument which claims that this is an interaction - free measurement sounds very persuasive but is , in fact , an artifact of a certain interpretation of quantum mechanics ( the interpretation that is usually adopted in discussions of wheeler s delayed - choice experiment ) .the paradox of obtaining information without interaction appears due to the assumption that only one `` branch '' of a quantum state exists .( p. 991 ) __ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ one of the `` choices '' of wheeler s delayed - choice experiment is an experiment with a mach - zehnder interferometer in which the second beam splitter is missing ( see fig . 2 ) .in the run of the experiment with a single photon detected by , it is usually accepted that the photon had a well defined trajectory : the upper arm of the interferometer .in contrast , according to the von neumann approach , the photon was in a superposition inside the interferometer until the time when one part of the superposition reached the detector ( or until the time the other part reached the detector if that event was earlier ) . 
at that moment the wave function of the photon collapses to the vicinity of .the justification of wheeler s claim that the photon detected by never was in the lower arm of the interferometer is that , according to the quantum mechanical laws , we can not see any physical trace from the photon in the lower arm of the interferometer .this is true if ( as it happened to be in this experiment ) the photon from the lower arm of the interferometer can not reach the detector .the fact that there can not be a physical trace of the photon in the lower arm of the interferometer can be explained in the framework of the two - state vector formulation of quantum mechanics .this formalism is particularly suitable for this case because we have pre- and post - selected situation : the photon was post - selected at . while the wave function of the photon evolving forward in time does not vanish in the lower arm of the interferometer , the backward - evolving wave function does .vanishing one of the waves ( forward or backward ) at a particular location is enough to ensure that the photon can not cause any change in the local variables of the lower arm of the interferometer .2 . ( a )the `` trajectory '' of the photon in the wheeler experiment given that detected the photon , as it is usually described .the photon can not leave any physical trace outside its `` trajectory '' .( b ) the `` trajectory '' of the quantum wave of the photon in the wheeler experiment according to the von neumann approach .the photon remains in a superposition until the collapse which takes place when one of the wave packets reaches a detector ..4 cm in our experiment ( fig . 1 . )we have the same situation .if there is an object in the lower arm of the interferometer , the photon can not go through this arm to the detector .this is correct if the object is such that it explodes whenever the photon reaches its location , but moreover , this is also correct in the case in which the object is completely nontransparent and it blocks the photon in the lower arm eliminating any possibility of reaching .even in this case we can claim that we locate the object `` without touching '' .this claim is identical to the argument according to which the photon in wheeler s experiment went solely through the upper arm . in the framework of the two - state vector approach we can say that the forward - evolving quantum state is nonzero in the lower arm of the interferometer only up to the location of the object , while the backward - evolving wave function is nonzero only from the location of the object .thus , at every point of the lower arm of the interferometer one of the quantum states vanishes .the two - state vector formalism does not suggest that the photon is not present at the lower arm of the interferometer ; it only helps to establish that the photon does not leave a trace there .the latter is the basis for the claim that , in some sense , the photon was not there .in many papers describing experiments and modifications of the ev ifm the first cited papers are one by renninger and another by dicke . 
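to make the outcome statistics of the scheme quantitative , the short python sketch below evaluates the standard single - photon probabilities for the mach - zehnder setup discussed above , assuming identical beam splitters that send the photon into the bomb arm with probability t ( t = 1/2 is the symmetric case ) . the closed - form expressions are the usual textbook ones and are stated here only as an illustration .

from fractions import Fraction

def single_run(t):
    """probabilities (explosion, dark-port click, bright-port click) for one photon
    sent through the interferometer with a bomb in one arm."""
    explode = t                    # photon enters the bomb arm
    dark = (1 - t) * t             # free arm, then exit at the normally dark port: bomb located
    bright = (1 - t) * (1 - t)     # inconclusive: same outcome as with no bomb present
    return explode, dark, bright

def fraction_found(t):
    """fraction of bombs located without explosion when inconclusive runs are repeated."""
    explode, dark, _ = single_run(t)
    return dark / (dark + explode)

for t in (Fraction(1, 2), Fraction(1, 10), Fraction(1, 100)):
    print(t, single_run(t), fraction_found(t))

for the symmetric interferometer this reproduces the fraction 1/3 of bombs found without explosion quoted above , and for small t the fraction approaches 1/2 , in line with the improvement obtained by changing the reflectivity of the beam splitters .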
it is frequently claimed that elitzur and vaidman `` extended ideas of renninger and dicke '' or just `` amplified the argument by inventing an efficient interferometric set '' .in fact , there is little in common between renninger - dicke ifm and the ev ifm .dicke s paper is cited in the ev ifm paper , but the citation is given only for the justification of the name : `` interaction - free measurements '' .renninger s and dicke s papers do not have the method , and , more importantly , they do not address the question which the ev ifm paper have solved . .1 cm fig .3 . renninger s experiment .the photon spherical wave is modified by the scintillation detector in spite of the fact that it detects nothing ..3 cm fig .4 . dicke s experiment .the ground state of a particle in the potential well ( solid line ) is changed to a more energetic state ( dashed line ) due to short radiation pulse , while the quantum state of the photons in the pulse remains unchanged ..3 cm renninger discussed a _ negative result experiment _ : a situation in which the detector does not detect anything . in spite of the fact that nothing happened to the detector , there is a change in the measured system .he considered a spherical wave of a photon after it extended beyond the radius at which a scintillation detector was located in part of the solid angle , see fig .the state of the detector remained unchanged but , nevertheless , the wave - function of the photon is modified .the name `` interaction - free '' for renninger s setup might be justified because there is not _ any _ , not even an infinitesimally small , change in the state of the detector in the described process .this is contrary to the classical physics in which interaction in a measurement process can be made arbitrary small , but it can not be exactly zero .dicke considered the paradox of the apparent non - conservation of energy in a renninger - type experiment .he considered an atom in a ground state inside a potential well .part of the well was illuminated by a beam of photons .a negative result experiment was considered in which no scattered photons were observed , see fig .the atom changed its state from the ground state to some superposition of energy eigenstates ( with a larger expectation value of energy ) in which the atom does not occupy the part of the well illuminated by the photons .the photons , however , apparently have not changed their state at all .then , dicke asked : `` what is the source of the additional energy of the atom ? ! '' careful analysis ( in part , made by dicke himself ) shows that there is no real paradox with the conservation of energy , although there are many interesting aspects in the process of an ideal measurement .one of the key arguments is that the photon pulse has to be well localized in time and , therefore , it must have a large uncertainty in energy .the word `` measurement '' in quantum theory have many very different meanings .the purpose of the renninger and dicke measurements is _ preparation _ of a quantum state .in contrast , the purpose of the ev interaction - free measurement is to obtain _ information _ about the object . in renninger and dicke measurements the _ measuring device _is undisturbed ( these are negative result experiments ) while in the ev measurement the _ observed object _ is , in some sense , undisturbed . 
in fact , in general ev ifm the quantum state of the observed object _ is _ disturbed : the wave function becomes localized at the vicinity of the lower arm of the interferometer ( see sec .3 of the ev paper ) .the reasons for using the term `` interaction - free measurements '' are that the object does not explode ( if it is a bomb ) , it does not absorb any photon ( if it is an opaque object ) and that we can claim that , in some sense , the photon does not reach the vicinity of the object .a variation of dicke s measurement which can serve as a measurement of the location of an object was considered in the ev ifm paper for justifying the name `` interaction - free measurements '' of the ev procedure .an object in a superposition of being in two far away places was considered .a beam of light passed through one of the locations and no scattered photons were observed .this yields the information that the object is located in the other place .the described experiment is interaction - free because the object ( if it is a bomb ) would not explode : the object is found in the place where there were no photons . in such an experiment , however , it is more difficult to claim that the photon was not at the vicinity of the object : the photon was not at the vicinity of the _ future _ location of the object .but the main weakness of this experiment relative to the ev scheme is that we get information about the location of the object only if we have _ prior information _ about the state of the object . if it is known in advance that the object can be found in one of two boxes and it was not found in one , then obviously , we know that it is in the second box .the whole strength of the ev method is that we get information that an object is inside the box _ without any prior information ! _ the latter , contrary to the former task can not be done without help of a quantum theory . in order to see the differencemore vividly let us consider an application of the ev method to dicke s experimental setup . instead of the light pulse we send a `` half photon '' : we arrange the ev device such that one arm of the mach - zehnder interferometer passes through the location of the particle , see fig ., if detector clicks , the particle is localized in the interaction region . in both cases ( the renninger - dicke ifm and this ev ifm )there is a change in the quantum state of the particle without , in some sense , interaction with the photon .however , the situations are quite different . in the original dicke sexperiment we can claim that the dashed line of fig . 4 . is the state of the particle after the experiment only if we have prior information about the state of the particle before the experiment ( solid line of fig . 4 . ) in contrast , in the ev modification of the experiment , we can claim that a particle is localized in the vicinity of the interaction region ( dashed line of fig . 5 .) even if we had no prior information about the state of the particle .it seems that dicke named his experiment `` interaction - free '' mainly because the photons did not scatter : this is a `` negative result experiment '' . 
in the ev experimentthe photon clearly changes its state and it is essential that it was detected : this is not a `` negative result experiment '' in this sense .paul noted that there is an earlier paper by renninger in which an experimental setup almost identical to that of the ev ifm was considered : a mach - zehnder interferometer tuned to have a dark output towards one of the detectors .however , renninger never regarded his experiment as a measurement on an object which was inside the interferometer : renninger s argument , as in the experiment described in fig .3 , was about `` interaction - free '' changing the state of the photon .renninger has not asked the key question of the ev ifm : how to get information in an interaction - free manner ?i can see something in common between the renninger - dicke ifm and the ev ifm in the framework of the many - worlds interpretation . in both casesthere is an `` interaction '' : radiation of the scintillator in the renninger experiment or explosion of the bomb in the ev experiment , but these interactions take place in the `` other '' branch , not in the branch we end up discussing the experiment . in an attempt to avoid adopting the many - worlds interpretation such interactionswere considered as _counterfactual _ .the ev modification of dicke s experiment .the ground state of a particle in the potential well ( solid line ) is changed to a well localized state ( dashed line ) when the photon is detected by the detector .we name the experiment described in fig . 5 . `` interaction - free '' measurement ( cf. `` interaction - free collapse '' of the ev ifm paper ) in spite of the fact that both the particle and the photon change their states .the main motivation for the name is that the interaction between the particle and the photon is such that there is an `` explosion '' if they `` touch '' each other , but the experiment ( when clicks ) ends up without explosion .the second aspect of the ev ifm , when applied to quantum objects , encounters a subtle difficulty . after performing the procedure of the ifm and obtainingthe photon click at , we can not claim that the photon was not present at the region of interaction ; moreover , it might be the case that , in some sense , the photon was there _ with certainty_. first , let us repeat the argument which led us to think that the photon was not there .consider again the experiment described on fig . 1 ., but now the `` bomb '' is replaced by a quantum object in a superposition of being in the `` interaction region '' and somewhere else outside the interferometer .if clicks , we can argue that the object had to be on the way of the photon in the lower arm of the interferometer , otherwise , it seems that we can not explain the arrival of the photon to the `` dark '' detector . if the object was on the way of the photon , we can argue that the photon was not there , otherwise we had to see the explosion .therefore , the photon went through the upper arm of the interferometer and it was not present in the interaction region .the persuasive argument of the previous paragraph is incorrect ! not just the semantic point discussed above , i.e. , that according to the standard approach the quantum wave of the photon in the lower arm of the interferometer was not zero until it reached the interaction region .it is wrong to say that the photon was not in the lower arm even in the part _ beyond _ the interaction region . 
in the experimentin which clicks , the photon _ can _ be found in any point of the lower arm of the interferometer !this claim can be seen most clearly by considering `` nested interaction - free measurements '' .the object is in a superposition of two wave packets inside its own mach - zehnder interferometer ( see fig . 6 . )if ( for the photon ) clicks , the object is localized inside the interaction region . however, the object itself is the test particle of another ifm ( we can consider a gedanken situation in which the object which explodes when the photon reaches its location can , nevertheless , be manipulated by other means ) . if this other ifm is successful ( i.e. `` '' for the object clicks ) then the other observer can claim that she localized the photon of the first experiment at , i.e. that the photon passed through the lower arm of the interferometer on its way to .hardy s paradox .two interferometers are tuned in such a way that , if they operate separately , there is a complete destructive interference towards detectors .the lower arm of the photon interferometer intersects the upper arm of the object interferometer in such that the object and the photon can not cross each other .when the photon and the object are sent together ( they reach at the same time ) then there is a nonzero probability for clicks of both detectors . in this caseone can infer that the object was localized at and also that the photon was localized at .however , the photon and the object were not present in together .this apparently paradoxical situation does not lead to a real contradiction because all these claims are valid only if tested separately ..3 cm paradoxically , all these claims are true ( in the operational sense ) : if we look for the photon in , we find it with certainty ; if we look , instead , for the object in , we find it with certainty too .both claims are true separately , but not together : if we look for the pair , the photon and the object together , in , we fail with certainty .such peculiarities take place because we consider a pre- and post - selected situation ( the post - selection is that in both experiments detectors click ) .an interesting insight about this peculiar situation can be learned through the analysis of the _ weak measurements _ performed on the object and the photon inside their interferometers . in spite of this peculiar feature ,the experiment is still interaction - free in the following sense .if somebody would test the success of our experiment for localization of the object , i.e. would measure the location of the object shortly after the `` meeting time '' between the object and the photon , then we know with certainty that she would find the object in and , therefore , the photon can not be there . discussing the issue of the presence of the object with her , we can correctly claim that in our experiment the photon was not in the vicinity of the object .indeed , given the assumption that she found the object , we know that she has not seen the photon in the lower arm of the interferometer , even if she looked for it there . 
however ,if , instead of measuring the position of the object after the meeting time , she finds the object in a particular superposition ( the superposition which with certainty reaches ) , she can claim with certainty that the photon was in .( compare this with _deterministic quantum interference experiments _probably , the largest misconception about the ifm is defining them as momentum and energy exchange - free measurements .the ev ifm can localize a bomb in an arbitrary small region without exploding it even if the quantum state of the bomb was spread out initially .localization of an object without uncertain change in its momentum leads to immediate contradiction with the heisenberg uncertainty principle . identifying the interaction - free measurements as momentum - exchange free measurements , simon and platzman derived `` fundamental limits '' on the ifm .they argued that the ifm can be performed only on infinitely sensitive bomb and that a bomb which is infinitely sensitive to any momentum transfer could not be placed in the vicinity of the ifm device from the beginning .these arguments fail because the ev ifm are not defined as momentum - exchange free measurements .( probably , the misconception came because of frequent mentioning of dicke s paper which concentrated on the issue of the energy exchange in his ifm . ) the arguments , similar to those of simon and platzman might be relevant for performing a modification of the ev ifm proposed by penrose .he proposed a method for testing some property of an object without interaction .the object is again a bomb which explodes when anything , even a single photon , `` touches '' its trigger device .some of the bombs are `` duds '' : their trigger device is locked to a body of the bomb and no explosion and no relative motion of the trigger device would happen when it is `` touched '' . again , the paradox is that any touching of a trigger of a good bomb leads to an explosion , but , nevertheless , good bombs can be found ( at least sometimes ) without the explosion . in the penrose version of ifm, the bomb plays the role of one mirror of the interferometer , see fig .it has to be placed in the correct position .we are allowed to do so by holding the body of the bomb. however , the uncertainty principle puts limits on placing the bomb in its place before the experiment . only if the position of the bomb ( in fact , what matters is the position of the dud ) is known exactly , the limitations are not present . in contrast ,in the ev ifm the bomb need not be localized prior to the measurement : the ifm localizes it by itself . fig . 7 .the penrose bomb - testing device .the mirror of the good bomb can not reflect the photon , since the incoming photon causes an explosion .therefore , sometimes clicks .the mirror of a dud is connected to the massive body , and therefore the interferometer `` works '' , i.e. 
never clicks when the mirror is a dud ..3 cm the zero change in the momentum of the object , location of which is found in the ifm , is not a necessary condition for the measurement to be ifm , but there are ifm in which there is no change of the momentum of the object .indeed , if the object has been localized before the ifm procedure , then its state and , therefore , its momentum distribution do not change during the process .the relevant issue seems to be the change in the momentum of the observed object , but it is interesting to consider also the change in the momentum of the measuring device , thus analyzing the question of the _ exchange _ of the momentum . if the object is localized from the beginning then its state does not change , but the state of the photon does change : from the superposition of being in two arms of the interferometer it collapses into a localized wave packet in one arm of the interferometer .it can be arranged that the two separate wave packets of the photon have the same distribution of momentum .then , the collapse to one wave packet will not change expectation value of any power of momentum of the photon .aharonov has pointed out that although in this process there is no exchange of momentum in the above sense , still there is an exchange of certain physical variable . in the ev procedurethere is an exchange of _ modular momentum_. the collapse of the quantum wave of the photon from the superposition of the two wave packets separated by a distance to a single wave packet is accompanied by the change in the modular momentum .the modular momentum of the object localized at the lower arm of the interferometer from the beginning , , does not change ( there is no _ any _ change in the quantum state of the object ) .one can , nevertheless , consider an exchange of modular momentum in this process : since is completely uncertain , there is no contradiction with the conservation law for the total modular momentum .note that the situation in which the expectation values of any power of momentum remains unchanged , while expectation values of powers of modular momentum change , is also a feature of aharonov - bohm type effects in which the quantum state changes even though no local forces are acting .the method of the ev ifm can be applied for performing various non - demolition measurements . indeed ,even if the measurement interaction can destroy the object , the method allows measurement without disturbing the object .however , not _ any _ non - demolition measurement is an ifm in the sense i discussed it here . in some nondemolition experiments the test particle of the measuring device explicitly passes through the location of the measured object . in other experimentsthe state of the object changes , but these changes are compensated at the end of the process .i suggest that such measurements should not be considered as interaction - free .the optimal scheme presented in the ifm paper allows detection of almost 50% of the bombs without explosion ( the rest explode in the process ) . 
applied quantum zeno effect for constructing the ifm scheme which , in principle , can be made arbitrary close to the 100% efficiency .the experiment with theoretical efficiency higher than 50% has been performed .the almost 100% efficient scheme of kwiat _ et al ._ can be explained as follows .the experimental setup consists of two identical optical cavities coupled through a highly reflective mirror , see fig .a single photon initially placed in the left cavity .if the right cavity is empty , then , after a particular number of reflections , the photon with certainty will be in the right cavity . if , however , there is a bomb in the right cavity , the photon , with the probability close to 1 for large , will be found in the left cavity .testing at the appropriate time for the photon in the left cavity , will tell us if there is a bomb in the right cavity .this method keeps all conceptual features of the ev ifm .if the photon is found in the left cavity , we are certain that there is an object in the right cavity .if the object is an ultra - sensitive bomb or if it is completely non - transparent object which does not reflect light backwards ( e.g. , it is a mirror rotated by degrees relative to the optical axes of the cavity as in the kwiat _experiment ) then , when we detect the photon in the left cavity we can claim that it never `` touched '' the object in the same sense as it is true in the original ev method .fig . 8 .the almost 100% efficient scheme of the ifm .if there is a `` bomb '' or a nontransparent object in the right cavity , then the photon stays in the left cavity , with a probability to go to the right cavity which can be made arbitrary small by increasing the reflectivity of the mirror between the cavities .if , however , the right cavity is empty , then after some time the photon will move there with certainty ..4 cm another modification of the ev ifm which leads to the efficiency of almost 100% has been proposed by paul and pavii and implemented in a laboratory by tsegaye _the basic ingredient of this method is an optical resonance cavity which is almost transparent when empty , and is an almost perfect mirror when there is an object inside .the advantage of the proposal of paul and pavii is that it has just one cavity , and is easier to perform .in fact , this method has been recently applied for `` exposure - free imaging '' of a two - dimensional object . however , one cavity method has a conceptual drawback . in this experimentthere is always a nonzero probability to reflect the photon even if the cavity is empty .thus , detecting reflected photon can not ensure presence of the object with 100% certainty .essentially , this drawback has only an academic significance . in any real experimentthere will be uncertainty anyway , and the uncertainty which i mentioned can be always reduced below the level of the experimental noise .other modifications of the ifm are related to interaction - free `` imaging'' and interaction - free measurements of semi - transparent objects .these experiments hardly pass the strict definition of the ifm in the sense that the photons do not pass in the vicinity of the object .however , they all achieve a very important practical goal , since we `` see '' the object reducing very significantly the irradiation of the object : this can allow measurements on fragile objects . 
indeed , in spite of the fact that for distinguishing small differences in the transparency of an object the method is not very effective , it still can be useful for reduced irradiation pattern recognition .reasoning in the framework of the many - worlds interpretation ( mwi ) leads to the statement that while we can find an object in the interaction - free manner , we can not find out that a certain place is empty in the interaction - free way .here , i mean `` interaction - free '' in the sense that no photons ( or other particles ) pass through the place in question .getting information about some location in space without any particle being there is paradoxical because physical laws include only local interactions . in the case of finding the bomb, the mwi solves the paradox .indeed , the laws apply to the whole physical universe which includes all the worlds and , therefore , the reasoning must be true only when we consider all the worlds . since there are worlds with the explosion we can not say on the level of the physical universe that no photons were at the location of the bomb .in contrast , when there is no bomb , there are no other worlds .the paradox in our world becomes the paradox for the whole universe which is a real paradox .thus , it is impossible to find a procedure which tests the presence of an object in a particular place such that no particles visit the place both in the case the object is there and in the case the object is not there . quantitative analysis of the limitations due to this effect were recently performed by reif who called the task `` interaction - free sensing '' .this effect also leads to limitations on the efficiency of `` interaction - free computation '' when all possible outcomes are considered .i have reviewed various analyses , proposals , and experiments of ifm and measurements based on the ev ifm method .the common feature of these proposals is that we obtain information about an object while significantly reducing its irradiation .the meaning of the ev ifm is that if an object changes its internal state ( not the quantum state of its center of mass ) due to the radiation , then the method allows detection of the location of the object without _ any _ change in its internal state .there is no any fundamental limit on such ifm .the ifm allow measurements of position of infinitely fragile objects .in some sense it locates objects without `` touching '' , i.e. without particles of any kind passing through its vicinity .i have clarified the limited validity of this feature for ifm performed on quantum objects .numerous papers on the ifm interpreted the concept of `` interaction - free '' in many different ways .i hope that in this work i clarified the differences and stated unambiguously the meaning of the original proposal .it is a pleasure to thank yakir aharonov , berge englert , and philip pearle for helpful discussions .this research was supported in part by grant 471/98 of the basic research foundation ( administered by the israel academy of sciences and humanities ) and the epsrc grant gr / n33058 .
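the efficiency scaling of the quantum - zeno - type scheme discussed above can be illustrated with the standard idealized analysis , in which the photon amplitude is rotated toward the second cavity by an angle pi/(2n) per cycle and an opaque object in that cavity projects the photon back at every cycle . the python sketch below uses that textbook idealization with an illustrative choice of cycle numbers ; it does not describe the actual parameters of the kwiat et al . experiment .

import math

def p_locate_without_absorption(n_cycles):
    """probability that the photon survives all n_cycles projections and thus
    reveals the object without ever being absorbed (idealized zeno scheme)."""
    theta = math.pi / (2 * n_cycles)
    return math.cos(theta) ** (2 * n_cycles)

for n in (2, 5, 25, 125, 625):
    print(n, p_locate_without_absorption(n))

the printed probability tends to one as the number of cycles grows , which is the sense in which the efficiency of this scheme can , in principle , be made arbitrarily close to 100% .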
|
interaction - free measurements introduced by elitzur and vaidman [ found . phys . * 23 * , 987 ( 1993 ) ] allow finding infinitely fragile objects without destroying them . many experiments have been successfully performed showing that indeed , the original scheme and its modifications lead to reduction of the disturbance of the observed systems . however , there is a controversy about the validity of the term `` interaction - free '' for these experiments . broad variety of such experiments are reviewed and the meaning of the interaction - free measurements is clarified . epsf school of physics and astronomy raymond and beverly sackler faculty of exact sciences tel - aviv university , tel - aviv 69978 , israel 2
|
an important class of non - equilibrium systems are excitable systems , in which small perturbations can lead to large excursions .examples include lasers , chemical reactions , or neurons , in which the excitation corresponds to the emission of an action potential .both the spontaneous fluctuations of such a system ( characterized , for instance , by its power spectrum ) as well as its response to a time - dependent external driving ( quantified for weak signals by the susceptibility or transfer function ) are of great interest .often , a realistic description requires incorporating also the correlation structure of the noise that the system is subject to ; this means that the popular assumption of a gaussian white noise does not always hold and one has to deal with a colored , potentially non - gaussian noise . in the stochastic description of neurons , power spectrum and susceptibilityare of particular interest because they are closely linked to measures of information transmission . an important model classare stochastic integrate - and - fire ( if ) neurons , the response properties of which have received considerable attention over the last decades .exact results for the susceptibility have been derived for leaky if neurons ( lif ) driven by gaussian white noise or white shot - noise with exponentially distributed weights . for if neurons driven by exponentially correlated gaussian noise , only approximate results in the limit of high frequencies and short or long noise correlation time exist .the power spectrum is exactly known for perfect if ( pif ) neurons and lif neurons driven by white gaussian noise . for colored noise ,approximate results for the auto - correlation function ( the fourier transform of the power spectrum ) exist for lif neurons in the limit of long noise correlation time and for pif neurons driven by weak , arbitrarily colored noise .the dichotomous markov process ( dmp ) , a two - state noise with exponential correlation function , is the rare example of a driving colored noise that can lead to tractable problems . for this reason , it has been extensively used in the statistical physics literature for a long time ; recently , its use as a model of neural input has been growing .known exact results for if neurons driven by a dmp include the firing rate and coefficient of variation ( cv ) of pif and lif or arbitrary if neurons , the interspike - interval ( isi ) density and serial correlation coefficients ( scc ) of isis for pif and lif neurons , the stationary voltage distribution for arbitrary if neurons , or the power spectrum for pif neurons . in this work ,we consider an lif neuron driven by asymmetric dichotomous noise and calculate exact expressions for the spontaneous power spectrum and the susceptibility , i.e. the rate response to a signal that is modulating the additive drive to the neuron .the outline is as follows .we briefly present the model and describe the associated master equation in sec .[ sec : model ] . in sec .[ sec : powspec ] , we derive an expression for the power spectrum and discuss its peculiar structure . here , our approach was inspired by a numerical scheme for white - noise - driven if neurons . reusing results for the power spectrum ,we calculate the susceptibility in sec .[ sec : suscep ] , employing a perturbation ansatz similar to approaches previously used for gaussian noise . 
in sec .[ sec : broadband ] , we study numerically how robust our results are when using broadband signals .we close with a short summary and some concluding remarks in sec .[ sec : summary ] .the evolution of the membrane potential of an lif neuron is governed by spiking is implemented through an explicit fire - and - reset rule : when the voltage hits a threshold , it is reset to , where it remains clamped for a refractory period . in eq .( [ eq : dynamics_lif ] ) , sets the equilibrium potential , is a weak stimulus and is a potentially asymmetric markovian dichotomous noise .time is measured in units of the membrane time constant . the dichotomous noise jumps between the two values and at the constant rates ( jumping from the `` plus state '' to the `` minus state '' ) and ( jumping from to ; see fig .[ fig : scheme]a ) . note that this can always be transformed to a noise with asymmetric noise values and an additional offset .the properties of such a process are rather straightforward to calculate and have been known for a long time . in the following , we will need the transition probabilities , i.e. the probability to find the noise in state given that it was in state a certain time before , where , .we will only need the transition probabilities conditioned on starting in the plus state , which read in this paper , we limit ourselves to the case .this means that the neuron can only cross the threshold when the noise is in the plus state . a general treatment for different parameter regimes , as has been carried out for stationary density and first - passage - time moments in ref . , involves much book - keeping and is beyond the scope of this work .note that this choice of parameters does not constrain the neuron to a fluctuation - driven ( sub - threshold ) or mean - driven ( supra - threshold ) regime , both and are still possible .our choice , however , implies that the generated spike train in the absence of a signal ( ) is a renewal process : because firing occurs only in the plus state and the noise has no memory about the past ( markov property ) , the interspike intervals are statistically independent .a common approach to the description of systems driven by dichotomous noise is to consider two probabilities : , the probability to find the noise in the plus state and the voltage in the interval at time , and , analogously , ( see the scheme in fig .[ fig : scheme]b ) .the system is then described by the following master equation , \\ & \quad - k_+ p_+(v , t ) + k_- p_-(v , t ) \\ &\quad + r(t-{\tau_{\rm ref } } ) p_{+|+}({\tau_{\rm ref } } ) \delta(v - v_r ) - r(t ) \delta(v - v_t ) , \end{split } \label{eq : modulated_mastereq_pp } \\\begin{split } \partial_t p_-(v , t ) & = - \partial_v \left ( ( \mu - v+{\varepsilon}s(t ) -{\sigma } ) p_-(v , t ) \right ) \\ & \quad+ k_+ p_+(v , t ) - k_- p_-(v , t ) \\ & \quad+ r(t-{\tau_{\rm ref } } ) p_{-|+}({\tau_{\rm ref } } ) \delta(v - v_r ) .\label{eq : modulated_mastereq_pm } \end{split}\end{aligned}\ ] ] the boundary conditions are and . if ( the minus dynamics has a stable fixed point between and ) , one needs additionally , ( see for a detailed treatment of fixed points in dmp - driven if neurons ) . here , ( ) refers to a voltage infinitesimally above ( below ). 
the respective first two lines of eq .( [ eq : modulated_mastereq_pp ] ) and eq .( [ eq : modulated_mastereq_pm ] ) are similar to what one would have for other systems driven by dichotomous noise ; they describe the deterministic drift within each state and the switching between states .the third line is more particular to this neuronal setup and incorporates the fire - and - reset rule : trajectories are taken out at the threshold and the outflux corresponds to the instantaneous firing rate ; after the refractory period has passed , they are reinserted at .as we assume , trajectories can only leave the system in the plus state ; however , they can get reinserted in both states because the noise may have changed its state during the refractory period .this is captured by the transition probabilities .note that one can describe the same dynamics by omitting these source and sink terms ( the function inhomogeneities ) and instead using more complicated boundary conditions , , and jump conditions at : {v_r } : = \lim_{\delta \to 0 } p_+(v_r + \delta ) - p_+(v_r - \delta ) = r(t-{\tau_{\rm ref } } ) p_{+|+}({\tau_{\rm ref } } ) / ( \mu - v_r+{\varepsilon}s(t ) + { \sigma}) ] . if , one additionally needs and .for the calculation of the spontaneous power spectrum , we set in eq .( [ eq : dynamics_lif ] ) and eqs .( [ eq : modulated_mastereq_pp ] , [ eq : modulated_mastereq_pm ] ) . according to the wiener - khinchin theorem ,the power spectrum is the fourier transform of the auto - correlation function of the spike train , the auto - correlation function can be expressed in terms of the stationary rate and the spike - triggered rate , for the case considered here , the stationary rate reads ^{-1}. \end{split}\ ] ] the spike - triggered rate is the rate at which spikes occur at time given that there was a ( different ) spike at .the power spectrum can be expressed using the fourier transform of the spike - triggered rate , , \right).\ ] ] to calculate , we modify the master equation eqs .( [ eq : modulated_mastereq_pp ] , [ eq : modulated_mastereq_pm ] ) , with the boundary conditions and , if , also .further , the initial condition is . in eq .( [ eq : strate_master1 ] ) , the source and sink terms implement the fire - and - reset rule ( trajectories cross the threshold at a rate and get inserted at after the refractory period has passed ) . the term , together with the initial condition , accounts for the fact that the neuron has fired at , so that after the refractory period , all probability starts at ( a fraction in the plus state ) .equivalent considerations apply to eq .( [ eq : strate_master2 ] ) .the system of two first - order partial differential equations for the probability density can be transformed to a ordinary , second - order differential equation for the fourier transform of the probability flux , , where .after some simplifying steps , it reads : , \end{split}\end{aligned}\ ] ] with the boundary conditions ) = 0 , j'(\min[z_r^-,0 ] ) = 0 ] . here , and given in terms of hypergeometric functions , eq .( [ eq : jzerosol ] ) can then be solved for the spike - triggered rate , which is contained in .owing to the delta functions in the inhomogeneities , the integration in eq .( [ eq : jzerosol ] ) is straightforward to carry out . for the power spectrum ,one obtains , via eq .( [ eq : powspec_via_strate ] ) , this is the first central result of this work . 
for the special case of a vanishing refractory period , , eq .( [ eq : powspec ] ) takes a particularly compact form which resembles the form of the expression for the power spectrum of lif neurons driven by gaussian white noise . in fig .[ fig : powspec ] , we plot the power spectrum and compare it to simulations .it is apparent that the theory is in excellent agreement with simulation results .the most striking feature of the power spectrum , especially for slow switching of the noise , is an undamped oscillation .this is in stark contrast to what one usually expects from spike train power spectra , which saturate at the firing rate .this periodicity in the spectrum , which has been previously observed in the pif model , can also be seen explicitly in the analytics by taking eq .( [ eq : powspec ] ) to its high - frequency limit , where is the ( deterministic ) time from reset to threshold in the plus state , taking the gaussian - white - noise limit ( where is the noise intensity of the resulting process ) in eq . ([ eq : powspec_highfreq ] ) yields , which is the known high frequency behavior for the gaussian white noise case .introducing a modified switching rate , }{t_d^+},\ ] ] eq .( [ eq : powspec_highfreq ] ) can be written compactly as the high - frequency limit is also shown in fig .[ fig : powspec ] . for slow switching, it is indistinguishable ( within line thickness ) from the exact theory over most of the shown frequency range and only deviates from it for small frequencies . to understand in a more quantitative way how the ongoing oscillation in the power spectrum arises ,consider the structure of the spike - triggered rate .there is a certain probability that after a neuron has fired , the noise does not switch but remains in the plus state long enough for the neuron to cross the threshold again . due to the absence of further stochasticity within the two states , this means that a non - vanishing fraction of trajectories that have been reset at crosses the threshold again _ exactly _ at ( see fig .[ fig : discspec]b ) . in the spike triggered rate , these trajectories become manifest as a peak at the deterministic time from reset to threshold , ( fig . [fig : discspec]a ) . of course ,albeit smaller , there is also a non - vanishing probability that the noise stays in the plus state until these trajectories hit the threshold a second time at , and so on .the spike - triggered rate can thus be split into a part containing functions and a continuous part , the fraction of trajectories contributing to the first peak in is determined as follows : after the reference spike , all trajectories are clamped at during the refractory period . during this time , the noise may switch , provided it switches often enough to end up in the plus state after .the fraction of trajectories for which the noise then remains in the plus state between and is given by ] , \else \ifnum\pdfstrcmp{eps_mu}{vhist_n}=0 100 \else \packageerror{paramvalue}{unknown param name : eps_mu } { } \fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi} ] . like the power spectrum, the susceptibility displays an undamped periodic structure that is more prominent for low switching rates ( fig .[ fig : suscep]a , c ) . for , the period of this peaked structureis again given by the inverse time from reset to threshold in the plus state ( fig .[ fig : suscep]a ) .interestingly , with a non - vanishing refractory period , these peaks become modulated by a second oscillation with period ( fig . 
[fig : suscep]c ) .this is also apparent by looking at the high - frequency limit of eq .( [ eq : suscep ] ) , which is also shown in fig .[ fig : suscep ] as the red dashed line . for a non - vanishing refractory period , the two oscillatory terms in eq .( [ eq : suscep_highf ] ) , and , differ slightly in their frequencies , leading to a beating at frequency .the periodic structure , along with the beating , is also apparent for higher switching rates , although less pronounced ( insets in fig .[ fig : suscep]b , d ) . here, it is particularly noticeable that the susceptibility does not decay to zero ( with a non - vanishing phase ) in the limit of high frequencies , as it would for integrate - and - fire neurons driven by gaussian white noise , but instead oscillates weakly around a finite real value .this means that the neuron can respond to signals of arbitrarily high frequency , a result that has been also attained for lif neurons driven by a different kind of colored noise ( an ornstein - uhlenbeck process ) .* power spectrum and susceptibility with a broadband stimulus .* shown are simulations for the power spectrum ( orange lines ) and susceptibility ( blue lines ) , compared to the theory without broadband stimulus , eq .( [ eq : powspec ] ) and eq .( [ eq : suscep ] ) ( black lines ) . because the focus here is on the simulations with a broadband signal , we keep the lines depicting the theory in the background .we plot these quantities for two combinations of switching rates and three different values of the stimulus intensity , where is the cutoff frequency .remaining parameters : . ] the linear response ansatz , eq .( [ eq : linear_response_ansatz ] ) , is in principle valid for arbitrary stimuli , as long as they are weak . in theoretical and experimental studies , broadband stimuli , such as band - limited gaussian noise with a flat spectrum ,have often been used , as they allow to probe the susceptibility at different frequencies simultaneously . as we have argued ,the features of power spectrum and susceptibility of neurons driven by a slowly switching dichotomous noise arise mainly because of the absence of further stochasticity within the two noise states . a broadband stimulus acts as an additional noise source , and one can thus expect it to have a qualitative effect on these features . in fig[ fig : broadband ] , we plot the power spectrum and the absolute value of the susceptibility for two switching - rate combinations and three different intensities of a gaussian stimulus with a flat spectrum of height , where is the cutoff frequency . the intensity of the signal is then given by . in simulations ,we use a time step , which means that the stimulus is effectively white ( the cutoff frequency corresponds to the nyquist frequency ) . in line with our reasoning above and with previous results for the power spectrum of dmp - driven pif neurons , additional noise ( here the broadband signal )can be seen to abolish the undamped periodicity in spectrum and susceptibility . 
for small signal strength , this is hardly noticeable up to frequencies that would be considered high in neurophysiology ( note that , assuming a membrane time constant {ms} ] .given the two linearly independent solutions to the homogeneous ode , a particular solution to eq .( [ eq : inhom ] ) is known : where is the wronskian , and the upper integration limit can still be freely chosen .the general solution is then given by in order to fix the integration constants in eq .( [ eq : gensol ] ) , one needs to distinguish two possible parameter regimes : if , corresponding to , the fixed point in the minus dynamics , ( corresponding to the singular point at in the hypergeometric differential equation ) lies on the lower boundary ( no trajectories can move to more negative values ) .in contrast , for , corresponding to , it lies within the interval of interest . in the first case , the integration constants in eq .( [ eq : gensol ] ) can be fixed over the whole interval ] to ] and ] and reinserting and , it is straightforward to show that this yields i.e. the compact expression for the high - frequency limit , eq .( [ eq : powspec_highfreq_compact ] ) .it is convenient to transform the dynamics eq .( [ eq : dynamics_lif ] ) by using this yields a system without additive signal , at the cost of introducing time - dependent reset and threshold values , the new master equations read - r(t ) \delta[x - x_t(t ) ] , \end{split } \\\label{eq : dichotheo : suscep_master_x_2 } \begin{split } \partial_t p_-(x , t ) & = - \partial_x \left ( ( \mu - x-{\sigma } ) p_-(x , t ) \right ) + k_+ p_+(x , t ) -k_- p_-(x , t ) \\ &\quad + r(t-{\tau_{\rm ref } } ) p_{-|+}({\tau_{\rm ref } } ) \delta[x - x_r(t ) ] , \end{split}\end{aligned}\ ] ] with the same trivial boundary conditions as above .plugging in eq .( [ eq : r_ansatz ] ) and eq .( [ eq : p_ansatz ] ) , taylor - expanding the functions for small , and keeping only the linear order in yields \\ & \quad - \frac{r_0}{2 \pi i f - 1 } \left [ p_{+|+}({\tau_{\rm ref } } ) \delta'(x - v_r ) - \delta'(x - v_t ) \right ] , \end{split } \\ \begin{split } -2 \pi i f p_{-,1}(x ) & = - \partial_x \left ( ( \mu - x-{\sigma } ) p_{-,1}(x ) \right ) + k_+ p_{+,1}(x ) -k_- p_{-,1}(x ) \\ & \quad + \chi(f ) e^{2 \pi i f{\tau_{\rm ref } } } p_{-|+}({\tau_{\rm ref } } ) \delta(x - v_r)\\ & \quad - \frac{r_0}{2 \pii f - 1 } p_{-|+}({\tau_{\rm ref } } ) \delta'(x - v_r ) .\end{split}\end{aligned}\ ] ] this has the same structure as the fourier - transformed version of eqs .( [ eq : strate_master1 ] , [ eq : strate_master2 ] ) .the correction to the flux , , with then follows the ode eq .( [ eq : j_ode ] ) , if the inhomogeneities are appropriately chosen , and eq .( [ eq : jzerosol ] ) can be used to extract .the derivatives of and , which appear when integrating by parts , are given by
|
the response properties of excitable systems driven by colored noise are of great interest , but are usually mathematically only accessible via approximations . for this reason , dichotomous noise , a rare example of a colored noise leading often to analytically tractable problems , has been extensively used in the study of stochastic systems . here , we calculate exact expressions for the power spectrum and the susceptibility of a leaky integrate - and - fire neuron driven by asymmetric dichotomous noise . while our results are in excellent agreement with simulations , they also highlight a limitation of using dichotomous noise as a simple model for more complex fluctuations : both power spectrum and susceptibility exhibit an undamped periodic structure , the origin of which we discuss in detail .
|
the bi - annual spie conference on astronomical telescopes and instrumentation is a popular event in the instrumentation and technology development fields of astronomy .a dedicated conference on software and cyberinfrastructure is a regular part of the programme .for the first time , the 2014 conference hosted a software hack day under this header . because the community of software engineers and technology managers who attend this conference have regularly discussed and analyzed ways to stimulate and increase collaboration across observatory boundaries , introduction of a hack day seemed to be a reasonable next step to bring these potentials into being .hack days , also known as hackathons or hackfests , are intense immersive events where multi - skilled teams work on collaborative projects .they are often organised around a certain topic or programming language , or focused on a particular subject , problem or challenge ( e.g. healthcare , civic activism ) .they are limited in time , e.g. one day or a weekend , and sometimes have a competitive component , with prizes given to the best hacks .common technological themes are innovative visualization of large datasets , building applications with apis to web services , or hardware - based projects .hackathons have become important networking events in the software and technology industries , as they provide an opportunity for designers , developers or managers to showcase their skills to a wider group they might not otherwise come in contact with . in astronomy , the first hack days were organised as part of the .astronomy conference series .the .astronomy conferences , which started in cardiff ( uk ) in 2008 , aim to bring together scientists , educators and outreach professionals , to discuss novel ways of exploiting web - based technologies for astronomy research or public engagement .since 2009 .astronomy has included a hack day , and in recent years astronomy - themed hack days have taken place at the annual conferences of the american astronomical society and the uk national astronomy meeting . with an increasing number of robotic telescopes coming online and the advent of mega - facilities such as gaia , the large synoptic survey telescope ( lsst ) , and the square kilometer array , control strategies , software and intelligent data processing form an increasingly important part of astronomical observatories . with the spie hack day ,our aim was to provide an opportunity for instrumentation professionals to share skills and collaborate outside of the normal boundaries of an instrumentation project .the hack day took place on thursday 26 june over an entire day . whilst we gathered contact details in the weeks beforehand via an online sign - up form, there was no formal registration procedure , the event was open and free of charge to all conference participants .lunch and refreshments were provided free of charge .the format was unscheduled , with no formal presentations , as is traditional for such events . at the start of the day, we gave participants the opportunity to introduce themselves and present what ideas or problems they were interested in working on during the event .likewise , the day was ended with the participants describing and showcasing the hacks they worked on during the day . 
in the following section , we highlight some selected hack day projects .the names listed alongside the hacks are the participants who proposed and led the project , however many hacks were team - based efforts .a frequent theme at astronomy hack days is to discover innovative ways of representing astronomical datasets to aid data discovery .one non - traditional avenue is to convert data to sound .this hack aimed to convert astronomical spectra into music ; the ability to convert data from a visual into an auditory medium , as well as presenting a creative challenge , can be useful for communicating astronomy to visually impaired people .the main challenge was to discover how to link the typical characteristics of spectra ( e.g. absolute wavelength range , important lines , continuum level vs transients etc . ) to the main aspects of a musical piece ( time signature , tempo , key , major / minor , pitch ) ; and to do so in a robust way so that spectra which look similar will also _ sound _ similar .james wrote a number of routines to quantify these characteristics and link them directly to the musical parameters ; then generated a short , loopable tune .an important goal is to make sure we can definitely ` hear ' the emission / absorption lines of the object , and for the sound to be attractive .rts2 , or remote telescope system , 2nd version , is an open source observatory control software .it allows astronomers around the world to let the machines control their observatories .so they can sleep during the night and work during the day .rts2 takes care of all duties of a ( night ) observer - it opens and closes observatory protected cover at evening , morning and in case of bad weather , command telescope to point , filter wheel to turn and camera to expose .it select the observations from the database , and logs the observations to the database .it is used on more than 20 observatories around the planet , making it one of the most popular observatory control programmes . during the hack day , rts2 creator petr kubnek worked on developing an android application for the software .a growing theme in modern astrophysics is the growth of the volume , variety , and velocity of astronomical data .particularly , the increasing multi - dimensionality of large astronomical datasets is giving rise to new data visualization techniques .michael gully - santiago s hack was aimed at transforming static figures to interactive figures for web browsers .the starting static figure was a single panel of figure 10.21 of the textbook `` statistics , data mining , and machine learning in astronomy : a practical python guide '' ( * ? ? ? 
* hereafter icvg ) .the enhanced , interactive figure had similar axes and data , but with interactive tooltips so that viewers could hover over individual data points to see all available information about individual sources .the figure was made interactive by modifying the original source code with the python module mpld3 , which converts matplotlib graphics directives into svg javascript commands in d3.js .the outcome is hosted on the spie hack day github repository webpage .the hack has catalyzed a continued effort to enhance a selection of book figures from icvg2014 , which is now hosted at gully.github.io/astromlfigs .the enhancements go beyond adding d3.js interactivity , and include ipython notebooks with step - by - step guides to how the authors generated the complex figures in the textbook .the enhanced figures could become a community resource for textbook readers .the world has numerous public and private astronomical observatories , each with their own suite of instrumentation .the best known publicly operated telescopes , such as hubble or vlt , typically have significant oversubscription rates , whilst many smaller observatories remain relatively under - used .the idea behind this hack was to create a database of telescopes and instruments , with a web - based front - end for observers to discover what instruments are available for their science ; in essence a match - making service for observers and instruments .the work for the project was divided amongst the team into several parts and used several different technologies for displaying the data .contributions were coordinated through a github.com repository .for the database of observatory sites , we used _google spreadsheet_. this allowed for rapid development of an online form for entering an observing sites information using _ google forms _ , development of a python code for automated entry , and direct access to the database for manual editing .sites were automatically uploaded from the _ iraf _ database of observatory sites . with the observatory site data ,pages were developed in javascript using the d3.js library for data visualizations .an example of one of these visualizations is presented in figure [ fig : sites ] , where all of the observatory sites are plotted on a world map .the one - day hack allowed for the creation of the database , uploading the site information , and creating some basic visualizations of the data .to make the site further useful will require more extensive work , but it does highlight the usefulness of participating in a one day hack .many of the volunteers on this project were relatively new to using some of the software and were able to become familiar with new resources for building collaborations , data collection , and visualization .[ cols="^ " , ]we hosted the first spie hack day at the 2014 astronomical telescopes and instrumentation conference in montral , with the aim of giving developers and designers to collaborate on innovative projects , perhaps outside of their ` regular ' professional activities .approximately 25 conference attendees participated in the event , and we received excellent feedback from a number of these on the organisation and spirit of the hack day ; the hacks presented covered a wide range of topics , from the development of web - based resources for observational astronomy to more creative sound - based projects . 
the hack day participants were drawn mainly from authors presenting work in the software and cyberinfrastructure conference , however in future the event will be advertised more widely to the entire conference .experience with previous events has shown that word - of - mouth advertising is an important way of increasing the participant numbers .an exciting component for future spie hack days will be to enable more hardware - based hacks , perhaps with participation of the exhibitors .other ideas such as a competitive component with prizes for the best projects may be discussed in this context as well .chiozzi , g. , gillies , k. , goodrich , b. , johnson , j. , mccann , k. , schumacher , g. , silva , d. , wallander , a. & wampler , s. , `` trends in software for large astronomy projects '' , proc . of icalepcs , knoxville , tn , usa ( 2007 ) chiozzi g. , bridger a. , gillies k. , goodrich b. , johnson j. , mccann k. , schumacher g. and wampler s. , `` enabling technologies and constraints for software sharing in large astronomy projects '' , in proc .7019 , 7019 - 0y ( 2008 )
|
we report here on the software hack day organised at the 2014 spie conference on astronomical telescopes and instrumentation in montral . the first ever hack day to take place at an spie event , the aim of the day was to bring together developers to collaborate on innovative solutions to problems of their choice . such events have proliferated in the technology community , providing opportunities to showcase , share and learn skills . in academic environments , these events are often also instrumental in building community beyond the limits of national borders , institutions and projects . we show examples of projects the participants worked on , and provide some lessons learned for future events .
|
providing a repeatable movement is essential for a wide range of wsn s applications , from automated testing to optimal sensor placements . to this end ,several wsn mobile infrastructures couple wireless sensors with wheeled small - scale robots that are cheap and easily available on the market .unfortunately , wheeled robots present several drawbacks that practically limit the range of mobile experiments a researcher can run .firstly , in order to navigate , affordable wheeled robots often require a localization infrastructure that accurately estimate the robot s position and heading : from simple black lines on the ground to complex tracking systems based on a camera .secondly , these mobile robots rely on batteries as source of power , limiting the maximum duration of an experiment and imposing a periodic recharging task .finally , and most importantly , wheeled robots can only move on a horizontal 2-dimensional plane ( possibly free of obstacles such as furniture and stairs ) , heavily limiting the movement space of the experiment . to avoid the aforementioned problems , we present the design of gondola , a robotic infrastructure that moves through cables , rather than wheels .inspired by plotters based on polar coordinates our robotic system embeds the mobile wireless sensor in a carriage , which is connected trough thin wires to one or more spooling motor , depending on the required degree - of - freedom of the movement ( see fig .[ fig:3d ] ) .because the design of gondola is completely parametric ( both the location and the number of spooling motors ) , it can be easily adapted to different environments ( small rooms , halls , outdoor ) and needs ( linear motion , volumetric scannings ) . moreover , because its movement is not bounded to the ground plane , gondola is less affected by obstacles than traditional wheeled robots .preliminary experiments show that , in a 6.5.9.1 meter room , gondola repeatedly achieves a positioning error of less than 2 cm .this error can be further reduced with a proper design of the spooling mechanism , a topic we briefly discuss in section [ sec : discussion ] .finally , gondola parametric infrastructure is completely open - source .the design files of both hardware and software are available at http://github.com/iprotonotarios/gondola .the architecture of gondola , shown in fig . [ fig : architecture ] , is composed of several modules .( i ) the _ system controller _ , which gets a sequence of 3-dimensional positions ( carriage s trace ) and translate them into a sequence of 1-dimensional spooling movements .one for each spooling motor .( ii ) the _ motor controller _ , which is in charge of receiving a spooling sequence and properly actuate the motors such that the resulting movement in the 3-dimensional space is smooth and the speed is constant .( iii ) the _ carriage _ ( c ) , which is connected to each motor via thin wires and carries devices such as a wireless sensors ( ws1 ) . once the carriage ( c ) reaches the intended position , the system controller logs the experiment output running on another wireless device ( ws2 ) until an event occurs .then , the carriage is moved to the next scripted location .we now analyze in detail the implementation details of our system .gondola s system controller comprise of a computer running a software that interfaces with the user ( who runs an experiment ) , a wireless sensor node ( ws2 ) and the motor controller .its interface is written in processing and can run on several platforms . 
through this interfacea user can input the 3d coordinates of the location of each spooling motor , together with the carriage s starting position .this calibration step is essential to convert a desired movements in 3d space to 1d spolling distances . in particular , given a 3d movement from point a to point b , the 1d spolling distance of a motor m is computed as where computes the euclidean distance between two points in the 3d space .note that , theoretically , gondola works with any motor configuration e.g.,number of motors , position of motors .nevertheless , because the pulling wires are tensioned only by gravity , to achieve a good range of movement , gondola needs at least three properly positioned motors .once the required spooling distance is computed for each motor , the system controller sends a command to the motor controller and waits for an acknowledgment ( ack ) , indicating that gondola reached the required position .then , the system controller starts logging the experiment output until a predefined event occurs e.g.,a timeout or a specific output from the sensor node , and the system controller can proceed processing the next planned position ( if any ) .the motor controller is a combination of hardware and software that drives a series of motors , each one precisely spooling / unspooling a pulling wire according to the instructions received by the system controller . for simplicity and cost effectiveness , we designed the motor controller as a shield for arduino mega that can interface with up to four stepper motors via ethernet cables .note that the current version of gondola relies only on the stepper motors to control the spooled length of the pulling wires , with no feedback control .thus , it requires a constant calibration to keep an unbiased information of length actually spooled by each motor . to overcome this problem, it is possible in the future to add a rotary encoder to each one of the spooling motors and provide feedback to our system .the motor controller allows such modification , interfacing four of the eight ethernet wires with the arduino s gpio ( the other four wires are used to control the stepper motor ) .our robotic infrastructure moves the carriage ( and the wireless sensor ) by changing the length of wire , spooled by each motor .the characteristics of these motors are therefore very important for the overall performance of the system . while it is obvious that movement precision is important , it is not obvious that other characteristics , such as the holding torque ( the capacity to maintain a position , when the carriage is loaded ) , are equally important ( see fig . [fig : gondola ] ) . moreover , to improve usability , motors must spool fast and smoothly . for our implementation , we choose four 42byghw811 wantai stepper motors , with 1.8 movement precision and a holding torque of 4800 g - cm . because each motor drive a wheel of radius 2 cm ( no gears ) ,the resulting movement precision is ( 4)1.8/360 = 0.062 cm , while the holding force is 2400 g .note that , because of the limited diameter chosen in our implementation , the pulling wire will spool several times around the spooling wheel , changing its radius and , thus , the spooled length , speed and force . as we will see in the evaluation section , this will affect the precision of our system . 
to reduce the aforementioned problem ,each spool used a special 0.01cm - thin fishing line with minimal elasticity and capable of holding up to 7000 g .+ & x & y & z & x & y & z + + 1 & 50 & - & - & 0.07 & - & - + 2 & 100 & - & - & 0.14 & - & - + 3 & 150 & - & - & 0.21 & - & - + 4 & 200 & - & - & 0.28 & - & - + 5 & 250 & - & - & 0.35 & - & - + 6 & 300 & - & - & 0.42 & - & - + 7 & 350 & - & - & 0.49 & - & - + 8 & 400 & - & - & 0.56 & - & - + 9 & 450 & - & - & 0.63 & - & - + 10 & 500 & - & - & 0.70 & - & - + 11 & 550 & - & - & 0.77 & - & - + 12 & 600 & - & - & 0.84 & - & - + 13 & 650 & - & - & 0.91 & - & - + + 1 & 355 & 196 & 310 & 0.54 & 0.50 & 0.99 + 2 & 405 & 86 & 240 & 0.61 & 0.22 & 0.77 + 3 & 495 & 196 & 240 & 0.75 & 0.50 & 0.77 + in order to evaluate the characteristics of gondola , we measured the accuracy of each individual motor ( 1d linear movement ) and later , the overall system ( 3d spatial movement ) . in particular , we set the position of gondola to a set of starting coordinates ( summarized in table [ tab : positions ] ) and measured the relative error for a fixed - length movement ( 25 and 30 cm for the linear and spatial movements , respectively ) . in the case of the linear spooling distance ,the results in figure [ fig : linear_error ] shows that the more wire is spooled , the smaller the error .this is due to the fact that , when lots of wire is spooled , the diameter of the spooling wheel increases , enlarging the spooled lengths .this affects in order gondola s movement in 3-dimensional space .as soon as gondola is positioned in the center of the experimental room , all the motors spooled distance are long ( position 1 , fig .[ fig : spatial_error ] ) and gondola s position error in space is very low .as soon as gondola moves towards one angle of the room ( position 2 and 3 , fig .[ fig : spatial_error ] ) , few motors spooled distance reduce drastically , increasing the linear positioning error and , thus , the spatial error in 3-dimensional space .in this paper we presented gondola , a parametric robotic system that provides an accurate and repeatable movements for wireless sensor networks .thanks to its flexibility , gondola can be easily adapted to different environment and testing scenarios , from linear movements ( using only 1 motor ) to 3-dimensional movements ( using 3 or more motors ) .nevertheless , accurately spool the desired wire length has proven to be one of the main challenges of gondola . in the future , we plan to explore different solution to overcome this problem . from adding a feedback loop , based on rotary encoders , to substitute the actual simple wires ( fishing lines ) with ball - chain wires .we argue that precisely spooling the desired length of wire , together with an accurate measurement of the motors position are the keys to improve even more the positioning accuracy of gondola .
|
when deploying a testbed infrastructure for wireless sensor networks ( wsns ) , one of the most challenging feature is to provide repeatable mobility . wheeled robots , usually employed for such tasks , strive to adapt to the wide range of environments where wsns are deployed , from chaotic office spaces to potato fields in the farmland . for this reson , these robot systems often require expensive customization steps that , for example , adapt their localization and navigation system . to avoid these issues , in this paper we present the design of gondola , a parametric robot infrastructure based on pulling wires , rather than wheels , that avoids the most common problems of wheeled robot and easily adapts to many wsn s scenarios . different from wheeled robots , wich movements are constrained on a 2-dimensional plane , gondola can easily move in 3-dimensional spaces with no need of a complex localization system and an accuracy that is comparable with off - the - shelf wheeled robots .
|
in quantum computing , elementary operations are operations that act on only a few ( usually one or two ) qubits .for example , cnots and one - qubit rotations are elementary operations .a quantum compiling algorithm is an algorithm for decomposing ( compiling " ) an arbitrary unitary matrix into a sequence of elementary operations ( seo ) .a quantum compiler is a software program that implements a quantum compiling algorithm .henceforth , we will refer to ref. as tuc99 .tuc99 gives a quantum compiling algorithm , implemented in a software program called qubiter .the tuc99 algorithm uses a matrix decomposition called the cosine - sine decomposition ( csd ) that is well known in the field of computational linear algebra .tuc99 uses csd in a recursive manner .it decomposes any unitary matrix into a sequence of diagonal unitary matrices and something called uniformly controlled u(2 ) gates .tuc99 then expresses these diagonal unitary matrices and uniformly controlled u(2 ) gates as seos of short length .more recently , two other groups have proposed quantum compiling algorithms based on csd . one group , based at the univ . of michigan and nist ,has published ref. , henceforth referred to as mich04 .another group based at helsinki univ . of tech.(hut ) , has published refs. . and , henceforth referred to as hut04a and hut04b , respectively .one way of measuring the efficiency of a quantum compiler is to measure the number of cnots it uses to express an unstructured unitary matrix ( a unitary matrix with no special symmetries ) .we will henceforth refer to this number as .although good quantum compilers will also require optimizations that deal with structured matrices , unstructured matrices are certainly an important case worthy of attention .minimizing the number of cnots is a reasonable goal , since a cnot operation ( or any 2-qubit interaction used as a cnot surrogate ) is expected to take more time to perform and to introduce more environmental noise into the quantum computer than a one - qubit rotation .ref. proved that for unitary matrices of dimension ( number of bits ) , .this lower bound is achieved for by the 3 cnot circuits first proposed in ref. .it is not known whether this bound can always be achieved for .the mich04 and hut04b algorithms try to minimize . in this paper , we propose a modification of the tuc99 algorithm which will henceforth be referred to as tuc04 .tuc04 comes in two flavors , tuc04(nr ) without relaxation process , and tuc04(r ) with relaxation process .as the next table shows , the most efficient algorithm known at present is mich04 .hut04b performs worse than mich04 .tuc04(r ) and mich04 are equally efficient . [ cols="<,<",options="header " , ] caveat : strictly speaking , the efficiency of tuc04(r ) as listed in this table is only a conjecture .the problem is that tuc04(r ) uses a relaxation process .this paper argues , based on intuition , that the relaxation process converges , but it does not prove this rigorously .a rigorous proof of the efficiency of tuc04(r ) will require theoretical and numerical proof that its relaxation process converges as expected .this paper is based heavily on tuc99 and assumes that the reader is familiar with the main ideas of tuc99 .furthermore , this paper uses the notational conventions of tuc99 .so if the reader ca nt follow the notation of this paper , he / she is advised to consult tuc99 . the section on notation in ref . 
is also recommended .contrary to tuc99 , in this paper we will normalize hadamard matrices so that their square equals one . as in tuc99 , for a single qubit with number operator , we define and .if labels distinct qubits and , then we define .when we say ( ditto , ) is ( ditto , ) , we mean is and is . for any complex number ,we will write . thus , and are the magnitude and phase angle of , respectively . will denote the unit vectors along the x , y , z axes , respectively .for any 3d real unit vector , , where is the vector of pauli matrices .we define a * -subset * to be an ordered set of dimensional unitary matrices .let the index take values in a set with elements . in this paper ,we are mostly concerned with the case that , and is represented by .suppose a qubit array with qubits is partitioned into target qubits and control qubits .thus , are positive integers such that .let denote the control qubits and the target qubits .thus , if and are considered as sets , they are disjoint and their union is .let be an ordered set of operators all of which act on the hilbert space of the target qubits .we will refer to any operator of the following form as a * uniformly controlled -subset * , or , more succinctly , as a * -multiplexor * : x = _ bool^ p _ ( ) u _ ( ) = _bool^ u_()^p _ ( ) .( multiplexor " means multi - fold " in latin . a special type of electronic device is commonly called a multiplexor or multiplexer ) .note that is a function of : a set of control bits , a set of target bits , and a -subset .fig.[fig - multiplexor ] shows two possible diagrammatic representations of a multiplexor , one more explicit than the other .the diagrammatic representation with the half moon " nodes was introduced in ref. . for a given -subset ( and for any multiplexor with that -subset ) , it is useful to define as follows what we shall call the optimal axis of the -subset .suppose that we express each in the form u_b = e^i_b e^i_b e^i(_b + _ b ) ( i)^f(b ) , [ eq - parametri - left - diag ] where are real parameters , where the vectors , and are orthonormal , and where is an indicator function which maps the set of all possible into .of course , .appendix [ app - param ] shows how to find the parameters for a given .appendix [ app - mini ] solves the following minimization problem . if the value of the parameters and the vectors are allowed to vary , while keeping the vectors orthonormal and keeping all fixed , find vectors that are optimal , in the sense that they minimize a cost function .the cost function penalizes deviations of the diagonal matrices away from the 2d identity matrix .any choice of orthonormal vectors will be called * strong directions * and will be called a * weak direction * , or an * axis of the -subset*. an axis that minimizes the cost function will be called the * optimum axis of the -subset*. ( an axis of goodness ) .it is also possible to define an optimum axis of a -subset in the same way as just discussed , except replacing eq.([eq - parametri - left - diag ] ) by u_b = e^i_b ( i)^f(b ) e^i(_b + _ b)e^i_b .[ eq - parametri - right - diag ] in eq.([eq - parametri - left - diag ] ) , the diagonal matrix is on the left hand side , so we will call this the * diagonal - on - left ( dol ) parameterization*. 
in eq.([eq - parametri - right - diag ] ) , the diagonal matrix is on the right hand side , and we will call this the * diagonal - on - right ( dor ) parameterization*.the cosine sine decomposition ( csd ) expresses an dimensional unitary matrix as a product , where , , , where are unitary matrices of dimension , and is a diagonal real matrix whose entries can be interpreted as angles between subspaces .note that the matrices and are all multiplexors .fig.[fig - csd ] depicts the csd graphically , using the multiplexor symbol of fig.[fig - multiplexor ] . in fig.[fig - csd ] , a -multiplexor whose -subset consists solely of rotations around the y axis , is indicated by putting the symbol in its target box .we will call this type of multiplexor an * -multiplexor*. lets review the tuc99 algorithm .it decomposes an arbitrary unitary matrix into a seo by applying the csd in a recursive manner .the beginning of the tuc99 algorithm for is illustrated in fig.[fig - qubiter-4bits ] .an initial unitary matrix is decomposed via csd into a product of 3 multiplexors .the and multiplexors on each side of are in turn decomposed via csd .the and multiplexors generated via any application of csd are in turn decomposed via csd . in fig.[fig - qubiter-4bits ] , we have stopped recursing once we reached multiplexors whose target box acts on a single qubit . note that at this stage , is decomposed into a product of -multiplexors .there are of these -multiplexors ( 15 for ) .half of these -multiplexors have in their target boxes and the other half do nt .furthermore the type multiplexors and non- ones alternate .furthermore , the non- -multiplexors have their target box at qubit 0 , so , according to the conventions of tuc99 , they are direct sums of matrices . the tuc99 algorithm deals with these direct sums of matrices by applying csd to each matrix in the direct sum .this converts each direct sum of matrices into a product , where and are diagonal unitary matrices and is an -multiplexor .thus , tuc99 turns the last operator sequence shown in fig.[fig - qubiter-4bits ] into a sequence of alternating diagonal unitary matrices and -multiplexors .then tuc99 gives a prescription for decomposing any diagonal unitary matrix into a seo with cnots and any -multiplexor into a seo with cnots .tuc99 considers what it calls a -matrix : [ eq - gen - d - def ] d = ( _ bool^-1 i _ p _ ) = _bool^-1 u_p _ , with u_= ( i _ ) , where _ = _ . here is a real parameter . in the nomenclature of this paper, is an -multiplexor with a single target qubit at and control qubits at .tuc99 shows how to decompose into a seo with cnots .tuc99 also discusses how , by permuting qubits via the qubit exchange operator , one can move the target qubit to any position to get what tuc99 calls a direct sum of matrices . in the nomenclature of this paper ,a direct sum of matrices " is just an -multiplexor with a single target qubit at any position out of . in conclusion, tuc99 gives a complete discussion of -multiplexors and how to decompose them into a seo with cnots .next , let us consider how to generalize tuc99 .we begin by proving certain facts about -multiplexors that are generalizations of similar facts obtained in tuc99 for -multiplexors .suppose and are orthonormal vectors .suppose we generalize the matrices of tuc99 by using eqs.([eq - gen - d - def ] ) with : _ = _ , 1 + _ , 2 .[ eq - phi - def - sw ] here and are real parameters . 
in tuc99 , we define to be a column vector whose components are the numbers lined up in order of increasing . here, we use the same rule to define vectors and from and , respectively . in analogy with tuc99 , we then define and via a hadamard transform : _ j = h_-1 _ j for .( has been normalized so its square equals one ) .= _ , 1 + _ , 2 . as in tuc99 , can be expressed as d = _bool^-1 a _ , where the operators mutually commute , and can be expressed as a_= ( i _ ( -1 ) _j=0^r-1(_j ) ) .[ eq - a - in - exp - form ] next we will use the following cnot identities . for any two distinct bits , ( ) ^n ( ) ( )= ( ) ( ) , and ( ) ^n ( ) ( ) = ( ) ( ) .these cnot identities are easily proven by checking them separately for the two cases and . by virtue of these cnot identities ,eq.([eq - a - in - exp - form ] ) can be re - written as a_= [ ( -1)^n(_r-1 ) ( -1)^n(_1 ) ( -1)^n(_0 ) ] .[ eq - ab - def - basis ] as shown in tuc99 , if we multiply the matrices ( given by eq.([eq - ab - def - basis ] ) ) in a gray order in , many cancel .we end up expressing as a seo wherein one - qubit rotations ( of bit ) and type operators alternate , and there is the same number ( ) of each . at this point, the operators may be converted to cnots using : ( -1)^n()= e^i(-1)_wx ( -1)^n ( ) , where is a one - qubit rotation that takes direction to direction .even for the generalized discussed here ( i.e. , for the with defined by eq.([eq - phi - def - sw ] ) ) , it is still true that , by permuting qubits via the qubit exchange operator , one can move the target qubit to any position .as we have shown , our generalized matrix can be decomposed into an alternating product of one - qubit rotations and cnots .the product contains ( one factor of 2 for each control qubit ) cnots and the same number of one - qubit rotations . this product expression for will contain a cnot at the beginning and a one - qubit rotation at the end , or vice versa , whichever we choose .suppose we choose to have a cnot at the beginning of the product , and that this cnot is , for some .then the matrix ^{n(\mu)} ] is a -multiplexor just as much as is . indeed , ^n()&= & i(-1)n ( ) + ( ) + & = & i(-1)p_0 ( ) + p_1 ( ) , so d[i(-1)]^n()&= & [ _ e^i_p_][i(-1)]^n ( ) + & = & _ s_0 ( ) ( e^i _ i)p_+ _ s_1 ( ) e^i_p _ + & = & _ p _ , where and is the complement of .thus , the -subset of ^{n(\mu)} ] , where are complex numbers such that .thus , we want to express in terms of , where : = e^i e^i ( + ) .[ eq - left - diag - ort ] let . using eq.([eq - funda - id ] ) , it is easy to show that [ eq - left - diag - ort - xy ] x = e^i , and y = e^i . if we assume that , then eqs.([eq - left - diag - ort - xy ] ) can be easily inverted .one finds [ eq - left - diag - ort - abc ] = ( x ) , = |x| , and + i = .next , we consider the general case when the triad is oblique .one has = e^i e^i ( + ) .define by = _ 1 + _ 2 = _ x _ x + _ y _ y + _ z _ z .thus , = = . 
using eq.([eq - funda - id ] ) , it is easy to show that [ eq - left - diag - obl ] x = e^i ( + i ) , and y = e^i ( ) .we want to express in terms of .unlike when the triad was orthogonal , now expressing in terms of is non - trivial ; as we shall see below , it requires solving numerically for the root a non - linear equation .the good news is that if we know , then and follow in a straightforward manner from : = ( x e^-i ) , and _y+i_x = ( y e^-i ) .given , one can find using eq.([eq - ab - fun - theta ] ) .since , eqs.([eq - left - diag - obl ] ) are equivalent to the following 3 equations : [ eq - abc - theta - xyz - contraints ] |x|^2 = ^2 + ( ) ^2 ^2 , ( x)= + ( ) , and ( y)= + ( ) . as stated previously , = _ 1 + _ 2 . [ eq - def - vec - theta ] next , we will solve the 6 equations given by eqs.([eq - abc - theta - xyz - contraints ] ) for the 6 unknowns . from eq.([eq - def - vec - theta ] ) , it follows that = .thus , = .[ eq - ab - fun - theta ] the determinant is given by = s_1x s_2y - s_1ys_2x= _ 1_2 _ z = w_z . substituting the expressions for given by eq.([eq - ab - fun - theta ] ) into the z component of eq.([eq - def - vec - theta ] ) now yields _ z & = & s_1z + s_2z + & = & ( ) s_1z + ( ) s_2z + & = & -k_x _ x - k_y _ y , where k_= for . at this point , we have reduced our problem to the following 4 equations for the 4 unknowns : [ eq - gamma - theta - xyz - eqs ] |x|^2 = ^2 + ( ) ^2 ^2 , [ eq - gamma - theta - xyz - eq - a ] ( ( x)- ) = , [ eq - gamma - theta - xyz - eq - b ] ( ( y)- ) = , [ eq - gamma - theta - xyz - eq - c ] and _ z = -k_x _ x - k_y _ y .[ eq - gamma - theta - xyz - eq - d ] define the following two shorthand symbols t_x = ( ( x)- ) , t_y = ( ( y)- ) .eqs.([eq - gamma - theta - xyz - eq - c ] ) and ( [ eq - gamma - theta - xyz - eq - d ] ) yield = . thus , = .[ eq - theta - xy - in - z ] substituting the values for and given by eq.([eq - theta - xy - in - z ] ) into the definition of yields : = .[ eq - theta - z - in - kt ] eqs.([eq - gamma - theta - xyz - eq - a ] ) and ( [ eq - gamma - theta - xyz - eq - b ] ) yield = .thus , = .consider the two components of the vector on the right hand side of the last equation .they must sum to one : = 1 .[ eq - pre - final - non - lin - gamma ] substituting the value for given by eq.([eq - theta - z - in - kt ] ) into eq.([eq - pre - final - non - lin - gamma ] ) finally yields ( k_y + k_x t_y)^2 ( 1 + t_x^2 ) |y|^2= ( 1+t_y^2 ) t_x^2 |x|^2 .[ eq - final - non - lin - gamma ] as foretold , in order to find in terms of , we must solve for the root of a nonlinear equation , eq.([eq - final - non - lin - gamma ] ) .let be a -subset .suppose that we express each in the form u_b = e^i_b e^i_b e^i(_b + _ b)(i)^f(b ) , where are real parameters , where the vectors , and are orthonormal , and where is an indicator function which maps the set of all possible into .of course , .appendix [ app - param ] shows how to find the parameters for a given .the goal of this appendix is to solve the following minimization problem .if the value of the parameters and the vectors are allowed to vary , while keeping the vectors orthonormal and keeping all fixed , find vectors that are optimal , in the sense that they minimize a cost function .the cost function penalizes deviations of the diagonal matrices away from the 2d identity matrix .any choice of orthonormal vectors will be called * strong directions * and will be called a * weak direction * , or an * axis of the -subset*. 
an axis that minimizes the cost function will be called the * optimum axis of the -subset*. x_b,1 ^ 2 + x_b,2 ^ 2 = 1 .eq.([eq - vec - thetab ] ) expresses in terms of the fundamental " variables .likewise , , , and can be expressed in terms of these fundamental variables as follows : _ b = e^i _ b .we will use the simple matrix norm ( i.e. , the sum of the absolute value of each entry ) .we define the cost function ( lagrangian ) for our minimization problem to be the sum over of the distance between and the 2d identity matrix .thus , = 4 _ b ( _ b ) _ b .[ eq - dl - in - dgamma ] the variations represent degrees of freedom ( * dof s * ) , but they are not independent dofs , as they are subject to the following constraints . for all , is kept fixed during the variation of , so [ eq - vari - contr ] u_b = ( i _ b)u_b + e^i_be^i_b ( p_b + i _b)(i)^f(b ) + u_b ( i f(b))=0 .[ eq - vari - contr - ub ] ( we ve used the fact that ) .the vectors and are kept orthonormal ( i.e. , for all ) during the variation of , so eq.([eq - vari - contr - ub ] ) represents constraints .eq.([eq - vari - contr - ortho ] ) represents 3 constraints .eq.([eq - vari - contr - pb ] ) and eq.([eq - vari - contr - xb ] ) together represent constraints .thus , eqs.([eq - vari - contr ] ) altogether represent ( scalar ) equations in terms the ( scalar ) unknowns ( the unknowns are : 3 components of , 3 components of , and , for all , ). therefore , there are really only 3 independent dofs within these variations .next , we will express in terms of only 3 independent variations ( for independent variations , we will find it convenient to use and ) .once is expressed in this manner , we will be able to set to zero the coefficients of the 3 independent variations . _b= f(b)(p_b-_b ) .eqs.([eq - du - expanded ] ) constitute 4 constraints , but only 3 are independent . indeed ,if one dot - multiplies eq.([eq - du - expanded - trio ] ) by , one gets eq.([eq - du - expanded - singlet ] ) .so let us treat eq.([eq - du - expanded - singlet ] ) as a redundant statement and ignore it .dot - multiplying eq.([eq - du - expanded - trio ] ) by and separately , yields the following 3 constraints : we have succeeded in expressing in term of the 9 variations of the strong and weak directions . but not all of these 9 variations are independent due to the orthonormality of .our next goal is to express these 9 variations in terms of 3 that can be taken to be independent .b_b =- q_b w_z_j x_bj_j -f(b)p_b_j s_jz_j . substituting this expression for into eq.([eq - del - l - in - b ] ) for a new expression for . in the new expression for , we may set the coefficients of separately to zero .this yields : suppose we denote the two constraints of eq.([eq - fruit - of - optimiz ] ) by .these two constraints depend on the set of variables . using eqs.([eq - strong - in - kxy ] ) and the results of appendix [ app - param ] , the variables can all be expressed in terms of and .thus what we really have is for .these two equations can be solved numerically for the two unknowns .
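A corresponding numerical step applies here as well: once the two optimum-axis conditions are written as functions of the two unknowns, any standard multidimensional root finder can be used. The fragment below is schematic only; the constraint functions are placeholders for the actual expressions assembled from eqs. (eq-strong-in-kxy) and Appendix [app-param].

```python
# Schematic use of a multidimensional root finder for the two optimum-axis
# conditions.  The constraint functions below are placeholders only.
import numpy as np
from scipy.optimize import fsolve

def constraints(k):
    kx, ky = k
    g1 = kx + 0.3 * ky - 0.1          # placeholder first condition
    g2 = ky - 0.2 * kx ** 2 + 0.05    # placeholder second condition
    return [g1, g2]

kx, ky = fsolve(constraints, x0=np.zeros(2))
print("kx =", kx, " ky =", ky)
```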
|
A quantum compiler is a software program for decomposing ("compiling") an arbitrary unitary matrix into a sequence of elementary operations (SEO). The author of this paper is also the author of a quantum compiler called Qubiter. Qubiter uses a matrix decomposition called the cosine-sine decomposition (CSD), which is well known in the field of computational linear algebra. One way of measuring the efficiency of a quantum compiler is to count the number of CNOTs it uses to express an unstructured unitary matrix (a unitary matrix with no special symmetries); we will henceforth refer to this number as the CNOT count. In this paper, we show how to improve the CNOT count of Qubiter so that it matches the current world record, which is held by another quantum compiling algorithm that is also based on the CSD.
|
dementia is a decline in mental ability , caused by damage to brain cells , that interferes with daily life .activities of daily living are usually divided into basic and instrumental activities of daily living ( iadl ) .several criteria and methods have been developed as measuring tools to implement treatments and diagnoses . despite the efforts developed in this field , the relationship between iadl performance and mental activityis nowadays implemented using only simple statistical approaches like pearson s or spearman s correlations . on the other hand , catastrophe theory , particularly cusp catastrophe models ,have been used to describe several psychological processes and human activities ( drinking , sexual interactions , nursing turnover , etc ) .however , in those studies , the data were fit to a cusp surface without support from any phenomenological model , so that the physical reasons of those processes remain obscure . here, we introduce a physical representation of brain functions representing the brain tasks as creation of networks between several neurons .in order to support a brain task , a network between several neurons is created .this network is characterized by a correlation length , , that depends both on the topology and on the functionality of the network .the degree of metabolic activity necessary to support the task ( and the network ) is proportional to the volume of the network determined by this correlation length .this metabolic activity is equal to the energy used to maintain the function of the neurons and their links , , plus the energy required for the dynamic formation of the specific network , .however each brain task is not instantiated in its own isolated network .networks are shared between tasks resulting in connectivity hubs .when several cognitive processes share the same network , they may do so without a proportional increase in metabolic demand . in order to characterize this phenomenon, we introduce the concept of synaptic overlap .the degree of synaptic overlap is proportional to the mean shared area , which is energized by other processes along the network s correlation length .this characteristic network overlap has been well described and is often referred to as a network of networks .so , the energetic balance of the network is summarized as , where and are coefficients that convert the geometric characterization of the network into energy units and characterizes the synaptic overlap .equation describes the possible values of the system in the space determined by metabolic energy , synaptic overlap , and correlation length . since neuronal network set up is a synchronized response to an electrical stimulus it seems reasonable that a faster network configuration involves more energy .let us now assume that the metabolic energy for a cognitive task is proportional to the change rate of the correlation length between neurons , that is , .so , equation could be written as , where now and are functions derived from equation that depend , in general , on metabolic energy , synaptic overlap and time , and where is a potential function that corresponds to the riemann - hugoniot surface for different and values . 
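To make the geometry concrete before any data are introduced, the sketch below evaluates the stationary states of the standard cusp normal form, dx/dt = alpha + beta*x - x^3 (equivalently the stationary points of V(x) = x^4/4 - beta*x^2/2 - alpha*x, i.e. the Riemann-Hugoniot surface), and flags the region of control-parameter space in which two stable states coexist. The canonical form and the parameter values are used purely for illustration; the fitted model of the following sections builds its control parameters from the measured variables.

```python
# Illustrative evaluation of the standard cusp normal form.
# V(x) = x**4/4 - beta*x**2/2 - alpha*x, so dV/dx = x**3 - beta*x - alpha,
# whose real roots trace the Riemann-Hugoniot surface.  Inside the cusp
# region (27*alpha**2 < 4*beta**3) two stable states coexist, so the system
# can jump suddenly between them as the controls drift.
import numpy as np

def equilibria(alpha, beta):
    """Real roots of x**3 - beta*x - alpha = 0 (the equilibrium states)."""
    roots = np.roots([1.0, 0.0, -beta, -alpha])
    return np.sort(roots[np.abs(roots.imag) < 1e-9].real)

def is_bistable(alpha, beta):
    """True inside the cusp (bifurcation) set, where two stable states coexist."""
    return 27.0 * alpha ** 2 < 4.0 * beta ** 3

for alpha, beta in [(0.0, -1.0), (0.0, 1.5), (0.3, 1.5)]:
    print(f"alpha={alpha:+.2f} beta={beta:+.2f}",
          "equilibria:", equilibria(alpha, beta),
          "bistable:", is_bistable(alpha, beta))
```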
equation , or the equivalent potential , describes a cusp model that predicts sudden changes for values ; here and are known as asymmetry control parameter and bifurcation control parameter respectively .equation is thus a deterministic model that relates the energy in a cognitive task network to its correlation length .however , the brain networks are subject to a high level of noise .the coupling of millions of neurons in a network in order to do a task is necessarily subject to random variations . in order to apply this model to real data a probabilistic termshould be added to the model .this casts equation into a stochastic differential equation , where represents a diffusion process , that will be assumed to be constant , and which is a white noise wiener process .notice that is a langevin equation where the correlation length corresponds to the position of the particle under the potential .the corresponding fokker - planck equation for the probability density can be written as , +\sigma\frac{\partial^{2}}{\partial x^{2}}\rho\left(x , t\right).\label{eq : fokker - planck}\ ] ] nevertheless , equation involves two different characteristic times .changes in occur in the time of task processing and brain network assembling , that is , in seconds or minutes .alterations of , and consequently of , are due to the development of the neurodegenerative diseases that act in a time scale of years . since the variation of in time is faster than the change of , it can be assumed that changes very slowly over time , and consequently . from this, it is straightforward that , },\label{eq : prob_solution}\ ] ] where is a normalization constant .this last expression gives the probability density of obtaining a network of size for the steady state case , that is , if the system varies slowly over time .since the probability density for a network with correlation length is known , the entropy of the set of networks , can be calculated as , showing the natural evolution of the system .in order to evaluate our model we fit some real data .we should determine first how to model the correlation length of the network .on one hand , neurodegenerative diseases affect first the largest networks and this is reflected in the impairment of the more complex task .on the other hand , we should consider the evolution of the brain . from one organism to other , brain has growth in size and complexity . while the new evolved life forms are able to learn more complex task , their brain grow in new layers and connected networks .notice also that high frequency activity in brain has been associated to cognitive process implying that high functioning requires more energy .so , the correlation length of the network will be modeled as proportional to the network output .that is , a bigger network is assumed as needed in order to accomplish a more difficult task .data used in the preparation of this article were obtained from the alzheimer s disease neuroimaging initiative ( adni ) database ( adni.loni.usc.edu ) .the adni was launched in 2003 as a public - private partnership , led by principal investigator michael w. 
weiner , md .the primary goal of adni has been to test whether serial magnetic resonance imaging ( mri ) , positron emission tomography ( pet ) , other biological markers , and clinical and neuropsychological assessment can be combined to measure the progression of mild cognitive impairment ( mci ) and early alzheimer s disease ( ad ) .adni is a global research effort devote to the research of ad .the website group clinical , imaging , genetic and biospecimen biomarkers from normal aging to dementia stages .the standardized methods for imaging and biomarker collection and analysis are intended for facilitating a cohesive research worldwide .adni provides the collected information to all registered members .a sample of 1351 subjects was selected from adni cohort .all available data from these individuals gave a total of 3025 study visits .we selected for analysis : positron emission tomography fluorodeoxyglucose ( fdg ) standard uptake value ratio , total brain volume ( tbv ) , intracranial volume ( icv ) , as well as the functional activities questionnaire ( faq ) score .this questionnaire is the information obtained from caregivers about iadl performance of patients . for each subjectthe brain ratio ( br ) was calculated as the ratio between tbv and icv . for each variable to be fitted into the model , fdg , br and faq ,a linear transformation was applied to the data in order to normalize it to the interval $ ] . in the case of faq valuesthe transformation was applied in opposite direction .that is , the faq score increases as impairment of iadl increases but the normalized variable decreases as impairment of iadl increases .the network output is proposed as proportional to iadl ; the bifurcation ( ) and asymmetry ( ) control parameters are proposed as linear functions of the independent variables .that is , where , and stand for the normalized values of faq , fdg and br and , and are fitting coefficients .all the statistical procedures were made using r statistical software .the r package `` cusp '' calculate the cobb s pseudo- parameter as a measurement of the goodness of fit .cobb s pseudo- and pearson s corresponding to the linear model were calculated and compared each other .the software fits the data to the _ standard _ cusp model , where the bifurcation is centered at , and .however , by requiring to that , as boundary conditions , the data were fitted to .the cobb s correlation coefficient for the adni data was pseudo- that seems to be a much better fit compared to the pearson s correlation coefficient of the equivalent linear model .fitting coefficients of the riemann - hugoniot surface on space were , , , and .figure [ fig : plot_bifurcation ] shows the control plane of the cusp model and how the data distribute for and values .it can be seen that there is a preferential direction along the cusp surface , represented as a straight line . by translating and rotating the coordinate system ,so that lies over this straight line and applying the boundary conditions , can be expressed in its `` natural '' coordinate system ( , ) .figure [ fig : plot_transv ] is the representation of the data in the new coordinate system .it shows how the data distribute for values . here, it can be seen that two different results are possible , iadl task failure for low values of iadl performance , and success , for high values of iadl performance .control surface of the cusp model . 
shadowed area represents the bivaluated zone of the cusp .the straight line represents the most probable trajectory on the plane .arrow shows the general direction of aging .darkness of points is proportional to the value of the correlation length . ]change of possible values of the network correlation length along the most probable trajectory represented by line .the arrow shows the direction of aging .darkness of points is proportional to the value of the correlation length . ]entropy of the system as a function of .the maximum value of entropy is for . ]probability density of obtaining a network of size for different values of along the most probable evolution of the system . ]the entropy of the system , calculated according to and represented in figure [ fig : entropy ] , defines the possible evolution of the system in time .the system evolves in the general direction of the known aging processes of brain , represented with arrows in figures [ fig : plot_bifurcation ] and [ fig : plot_transv ] .however , the maximum value of entropy corresponds to the point of and , very close to the point where the data intersects the plane ( ) .older age is generally characterized by decreasing brain volume and a decline in brain glucose metabolism .our model shows that even if this declining process occurs slowly it could end in a catastrophic failures of iadl , that is , in dementia .even when older individuals are more likely to present multiple pathologies it has been observed that some very old people get dementia without the presence of any pathology .that is , even when age is associated to pathological processes a non small percentage of the oldest people get dementia without any pathology .as can be seen in figure [ fig : probs ] , the `` high energy '' states produce only networks with high output. however , when there is a loss of brain volume with older age and a lower energy use , a point is reached where the probability of producing a network with a very low output is not zero .that is , the probability of task failure suddenly becomes greater than zero .this probability of failure increases along the aging process while the probability of success decreases . at some point along this continuum, an individual will be diagnosed with dementia . at very low values of br andfdg the probability of success will be zero .our results show that functional brain decline is clearly observed through the measures of the energy consumption ( fdg ) and the brain volume ( br ) .dementia progression has been already associated with the lesser presence of brain energy consumption .furthermore , it has been observed that the decline in energy consumption increases in advanced disease stages pointing to a non linear relation between both magnitudes .other authors have linked the iadl impairment to brain atrophy and also abrupt changes of iadl for different levels of brain atrophy have been observed , very similar to those changes that our model predicts. however , there is no deterministic relation between those biomarkers and the onset of dementia . on the contrary ,an individual s decline could follow a random path through the surface determined by equation .furthermore , the precise moment when the subject falls into dementia can not be predicted because it is governed by a probability function . 
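The quantities shown in these figures can be reproduced schematically in a few lines, assuming the canonical cusp potential V(x) = x^4/4 - beta*x^2/2 - alpha*x used in the earlier sketch: the stationary density is taken proportional to exp(-V/sigma) and the entropy is -∫ rho ln(rho) dx. The control-parameter and noise values below are illustrative, not the fitted ADNI values; the weight of the low-x branch plays the role of the probability of task failure discussed above.

```python
# A minimal sketch, assuming the canonical cusp drift of the previous example:
# stationary density rho(x) ~ exp(-V(x)/sigma) with
# V(x) = x**4/4 - beta*x**2/2 - alpha*x, and entropy S = -int rho ln(rho) dx.
# Parameter values are illustrative, not the fitted ADNI values.
import numpy as np

x = np.linspace(-3.0, 3.0, 2001)
sigma = 0.2

def stationary_density(alpha, beta):
    V = x ** 4 / 4.0 - beta * x ** 2 / 2.0 - alpha * x
    rho = np.exp(-(V - V.min()) / sigma)       # subtract V.min() for numerical stability
    return rho / np.trapz(rho, x)              # normalise to unit probability

def entropy(rho):
    p = np.clip(rho, 1e-300, None)
    return -np.trapz(p * np.log(p), x)

for alpha, beta in [(0.5, -0.5), (0.1, 1.0), (-0.4, 1.5)]:
    rho = stationary_density(alpha, beta)
    p_fail = np.trapz(rho[x < 0], x[x < 0])    # weight of the low-output branch
    print(f"alpha={alpha:+.2f} beta={beta:+.2f}  S={entropy(rho):.3f}  P(low output)={p_fail:.3f}")
```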
beyond statistical inference or linear relationships ,a few mathematical models link the brain functioning with observed measures .however , these models are mainly focused into capturing the patterns of the disease instead of offering a general dynamics of the subject impairment progression . herewe offer a general framework that can be used to test the weight of clinical variables over the disease .role of pathological variables could be easily determined by rewriting and expression in equations .the influence of comorbidities or other factors usually used as covariables as age or genetic factors could be tested the same way .fitting coefficients should show if these variables need to be taken into account .for instance , it is clear that in equations the brain atrophy can be neglected from since the coeffcient is an order of magnitude smaller than the others .however , the inclusion of new variables should modulate the trajectory over the surface described by .so , the research over several variables should require much more data in order to show reliable results .it has been argued that dementia is the result of a pathological process acting on the brain and is fundamentally different from what is called healthy aging .however , based on the results of our model , normal aging results in a small , but continuous change in the brain that can drive loss of performance in iadls .this presents the provocative possibility that at least in some case ( e.g. , the oldest - old ) a dementia syndrome could be an end - point of otherwise normal aging . however , the solution posed here is only for the steady state .that is , the curve of figure [ fig : plot_transv ] represents the evolution of system only if changes occur slowly .stroke , infections , and the like cause abrupt changes to the system , and these are not accounted for in our model .we do not exclude the possibility that dementia could also appear as a consequence of sudden changes on the brain .while our model explains the general behavior of the data , the entropy of the system , shown in figure [ fig : entropy ] does not explain more advanced cases of dementia .this could mean that the model should not be applied to the more sparse networks that would be apparent in demented individuals .this is a novel approach not only in the field of dementia but more generally for neurodegenerative diseases . by applying only first principles of physics , in this case the laws of thermodynamics, we can show how cumulative slow changes in the brain can trigger a catastrophic change in the performance of the functional networks .this work was supported in part by funds from fundacio ace , institut catala de neurociencies aplicades , the estate of trinitat port - carb , the national institute on aging ( ag05133 ) and prodep project dsa/103.5/15/6986 from sep , mexico .data collection and sharing for this project was funded by the alzheimer s disease neuroimaging initiative ( adni ) ( national institutes of health grant u01 ag024904 ) and dod adni ( department of defense award number w81xwh-12 - 2 - 0012 ) .adni is funded by the national institute on aging , the national institute of biomedical imaging and bioengineering , and through generous contributions from the following : abbvie , alzheimer s association ; alzheimer s drug discovery foundation ; araclon biotech ; bioclinica , inc .; biogen ; bristol - myers squibb company ; cerespir , inc . ; cogstate ; eisai inc . ; elan pharmaceuticals , inc . ;eli lilly and company ; euroimmun ; f. 
hoffmann - la roche ltd and its affiliated company genentech , inc . ; fujirebio ; ge healthcare ; ixico ltd . ; janssen alzheimer immunotherapy research & development , llc . ; johnson & johnson pharmaceutical research & development llc . ; lumosity ; lundbeck ; merck & co. , inc . ; meso scale diagnostics , llc . ; neurorx research ; neurotrack technologies ; novartis pharmaceuticals corporation ; pfizer inc . ; piramal imaging ; servier ; takeda pharmaceutical company ; and transition therapeutics .the canadian institutes of health research is providing funds to support adni clinical sites in canada .private sector contributions are facilitated by the foundation for the national institutes of health ( www.fnih.org ) .the grantee organization is the northern california institute for research and education , and the study is coordinated by the alzheimer s therapeutic research institute at the university of southern california .adni data are disseminated by the laboratory for neuro imaging at the university of southern california .
|
Aging-associated brain decline often results in some kind of dementia. Even though dementia is a complex brain disorder, a physical model based on first principles can be used to describe its general behavior. A probabilistic model for the development of dementia is obtained and fitted to experimental data from the Alzheimer's Disease Neuroimaging Initiative. It is explained how dementia appears as a consequence of aging and why it is irreversible.
|
the next generation of sky surveys will provide reasonably accurate photometric redshift estimates , so there is considerable interest in the development of techniques which can use these noisy distance estimates to provide unbiased estimates of galaxy scaling relations .while there exist a number of methods for estimating photometric redshifts ( budavari 2009 and references therein ) , there are fewer for using these to estimate accurate redshift distributions ( padmanabhan et al . 2005 ; sheth 2007 ; lima et al . 2008; cunha et al . 2009 ) , the luminosity function ( sheth 2007 ) , or the joint luminosity - size , color - magnitude , etc . relations ( rossi & sheth 2008 ; christlein et al . 2009 ; rossi et al . 2010 ) . ideally , the output from a photometric redshift estimator is a normalized likelihood function which gives the probability that the true redshift is given the observed colors ( i.e. bolzonella et al .2000 ; collister & lahav 2004 ; cunha et al .let denote this quantity ; it may be skewed , bimodal , or more generally it may assume any arbitrary shape .let denote the mean or the most probable value of this distribution ( it does not matter which , although some of the logic which follows is more transparent if denotes the mean ) . often , ( sometimes with an estimate of the uncertainty on its value ) is the only quantity which is available .therefore , in section [ dndz ] we first consider how compares with the true redshift , and contrast the convolution and deconvolution methods for estimating while in section [ cfc ] we describe how to reconstruct the redshift distribution directly from colors .section [ pdf ] shows what this implies if one wishes to use the full distribution .section [ phil ] shows how to extend the logic to the luminosity function , and section [ phix ] to scaling relations , again by contrasting the convolution and deconvolution methods , and showing what generalization of is required from the photometric redshift codes if one wishes to do this .a final section summarizes our results . where necessary, we write the hubble constant as , and we assume a spatially flat cosmological model with , where and are the present - day densities of matter and cosmological constant scaled to the critical density .in what follows , we will use spectroscopic and photometric redshifts from the sdss to illustrate some of our arguments .details of how the early - type galaxy sample was selected are in rossi et al .( 2010 ) ; the photo- for this sample are from csabai et al .( 2003 ) .suppose that the true redshifts are available for a subset of the objects ; for now , assume that the subset is a random subsample of the objects in a magnitude limited catalog .ideally , this subset would have the same geometry as the full survey , as cross - correlating the objects with spectra and those without allows the use of other methods ( e.g. caler et al .2009 ) . in practice, this may be difficult to achieve and this is not required for the analysis which follows , provided that the photometric redshift estimator does not have spatially dependent biases ( e.g. , as a result of photometric calibrations varying across the survey ) . 
for the objects with spectroscopic redshifts , one can study the joint distribution of and ( see figure [ pzzeta ] ) .typically , most photometric redshift codes are constructed to return .the codes which do so are sometimes said to be unbiased , but they are not perfect : the scatter around the unbiased mean is of order .this scatter , combined with the fact that means that : the fact that is _ guaranteed _ to be biased is not widely appreciated .however , we show below that it matters little whether or are unbiased what matters is that the bias is accurately quantified . in particular , if and denote the distribution of and values in the subset of the data where both and are available , then what matters is that and , where are known .note that the algorithm in sheth ( 2007 ) assumes that , measured in the subset for which both and are available , also applies to the full sample for which is not available .since is measured in the full dataset , and is known , a deconvolution is then used to estimate the true .suppose , however , that one measured instead .then , because one could estimate the quantity on the left hand side by ` convolving ' the two measurables on the right hand side . for the data - subset in which both and are available ,this is correct by definition .clearly , to use this method on the larger dataset for which only is available , one must assume that in the subset from which it was measured remains accurate in the larger dataset .rossi et al . (2010 ) have shown that the deconvolution method accurately reconstructs the true distribution from .figure [ nzconv ] shows that the convolution approach also works well , even when only a random 5% of the full dataset is used to calibrate as displayed in figure [ pzzeta ] .thus , for the dataset in which both and are available , both the convolution and deconvolution approaches are valid , whether or not the means ( or , for that matter , the most probable values ) of and are unbiased , and however complicated ( skewed , multimodal ) the shape of these two distributions .this remains true in the larger dataset where only is known .however , whereas the convolution approach assumes that is the same in the calibration subset as in the full one , the deconvolution approach assumes that is the same .the integral in equation ( [ nz ] ) is really a sum over all the objects in the photometric dataset , where each object with estimated contributes to with weight : now , recall that was the mean ( or most probable ) value of a distribution returned by a photometric redshift code . in cases where the observed colours map to a unique value of , then this sum over is really a sum over , andthe expression above is really equation ( [ npz|c ] ) is one of the key results of this paper .although we arrived at equation ( [ npz|c ] ) by requiring the mapping be one - to - one ( as may be the case for , e.g. , lrgs ) , it is actually more general .this is because one can simply measure in the sample for which spectra are in hand , for the same reason that one could measure .in fact , is an easier measurement , since it does not depend on the output of a photo- code !the constraint on the mapping between and in the discussion above was simply to motivate the connection between photo- codes and the convolution method .once the connection has been made , however , there is no real reason to go through the intermediate step of estimating , since all photo- codes use the observed colors anyway . 
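To make the comparison concrete, the sketch below implements the convolution estimate on a small mock sample: p(z|ζ) is measured as a binned conditional distribution in a "spectroscopic" calibration subsample (a random 5 per cent of the objects), and N(z) for the full photometric sample is then estimated by stacking the appropriate row for each object. The mock photo-z scatter, the binning and the array names are illustrative assumptions, not properties of the SDSS sample shown in the figures.

```python
# A minimal sketch of the convolution estimate of N(z) on mock data.
# p(z | zeta) is measured in a small "spectroscopic" subsample, and the
# estimate of N(z) is the sum of the corresponding rows over the full
# photometric sample.  The mock (Gaussian scatter of 0.03 in zeta) and the
# binning are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(1)
z_true = rng.normal(0.25, 0.05, 50_000).clip(0.01, 0.6)        # mock true redshifts
zeta = (z_true + rng.normal(0.0, 0.03, z_true.size)).clip(0.0, 0.7)

edges = np.linspace(0.0, 0.7, 71)
centres = 0.5 * (edges[:-1] + edges[1:])

# "Spectroscopic" calibration subsample: a random 5 per cent of the objects.
calib = rng.random(z_true.size) < 0.05
counts, _, _ = np.histogram2d(zeta[calib], z_true[calib], bins=[edges, edges])
row = counts.sum(axis=1, keepdims=True)
p_z_given_zeta = np.where(row > 0, counts / np.where(row > 0, row, 1), 0.0)

# Convolution estimate: stack p(z | zeta_i) over the whole photometric sample.
izeta = np.clip(np.digitize(zeta, edges) - 1, 0, len(centres) - 1)
N_est = p_z_given_zeta[izeta].sum(axis=0)

N_true, _ = np.histogram(z_true, bins=edges)
for k in range(20, 40, 5):
    print(f"z ~ {centres[k]:.2f}:  estimated {N_est[k]:8.1f}   true {N_true[k]:8d}")
```

Run on the mock, the stacked estimate tracks the true redshift histogram bin by bin, which is the content of the convolution relation above.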
in this respect ,equation ( [ npz|c ] ) is the more direct and natural expression to work with than is equation ( [ npz|zeta ] ) .in particular , because is an observable , the convolution approach of equation ( [ npz|c ] ) is independent of any photo- algorithm . of course, if this method is to work , then the subsample with spectral information _ must _ be able to provide an accurate estimate of .the convolution method of the previous subsection provides a simple way of illustrating how one should use the output from photo- codes that actually provide a properly calibrated probability distribution for each set of colors , to estimate .it also shows in what sense the codes should be ` unbiased ' .in particular , equation ( [ npz|c ] ) suggests that one can estimate by summing over all the objects in the dataset , weighting each by its .this is because equation ( [ nlz|c ] ) shows that if does not have the same shape as , then use of will lead to a bias ; this is the pernicious bias which must be reduced whether or not equals the spectroscopic redshift is , in some sense , irrelevant .( in the case of a one - to - one mapping between and , is the same as the quantity which we discussed in the previous subsections . )satisfying is nontrivial .this is perhaps most easily seen by supposing that the template or training set consists of two galaxy types ( early- and late - types , say ) , for which the same observed colors are associated with two different redshifts . in this case , if the photo- algorithms are working well , then will be bimodal for at least some .however , if the sample of interest only contains lrgs , then may actually be unimodal . as a result , unless proper priors on the templates are used , or care has been taken to insure that the training set is representative of the sample of interest .we can perform a similar analysis of the luminosity function . in this case ,the key is to recognize that , in a magnitude limited survey , the quantity which is most directly affected by the photometric redshift error is not the luminosity function itself , but the luminosity distribution ( sheth 2007 ) . in a spectroscopic survey , differs from because one sees the brightest objects to larger distances : is the largest comoving volume to which an object with absolute magnitude could be seen .if we use to denote the absolute magnitude estimated using the photometric redshift , and its correct value , then sheth ( 2007 ) describes a deconvolution algorithm for estimating given measurements of and the assumption that , measured in a subset for which both and ( hence both and ) are available , also applies to the full photometric survey . following the discussion in the previous section, we could instead have measured , and then used the fact that to estimate the quantity on the left hand side by summing over the photometric catalog on the right hand side , weighting each object in it by ; note that this weight depends on .figure [ mzmphoto ] shows and ; notice how broad they are , and how much more skewed and biased is than .nevertheless , rossi et al . (2010 ) have shown that the deconvolution algorithm produces good results .figure [ nmconv ] shows that the convolution algorithm does as well. 
one estimates by dividing by .since this weight is the same for all objects with the same , one could have added an additional weighting term to the sum above to get one might have written , so the expression above shows explicitly why the photometric errors should be thought of as affecting and not . to make the connection to andthen it is worth considering how one computes from given the observed colors .if there were no -correction , then the luminosity in a given band would be determined from the observed apparent brightness by the square of the ( cosmology dependent ) luminosity distance the colors are not necessary . in practicehowever , one must apply a -correction ; this depends on the spectral type of the galaxy , and hence on its color . as a result , the mapping between and depends on and .but it is still true that both and are determined by .therefore , the spectroscopic subsample which was previously used to estimate also allows one to estimate .the quantity of interest in the previous section , , is simply the integral of over all .the quantity of interest here , , is the integral of over all .thus , equation ( [ nm ] ) becomes where the second to last expression writes the integral of over all as , and the final one writes the integral explicitly as a sum over the objects in the catalog .the expression above is the convolution - type estimate of ; it does not require a photometric redshift code .however , in principle , a photometric redshift code could output : the quantity such codes currently output , , is the integral of over all .the relevant weighted sum becomes where is the integral of over all , the sum is over all the objects in the catalog , and the method only works if .note that the luminosity density ( in solar units ) can , therefore , be written as the second to last line shows that one requires the average of summed over the distribution ; this is easily computed from distributions like those shown in the bottom panel of figure [ mzmphoto ] .the final expression writes this as a sum over the observed distribution of colors .although the previous section considered the luminosity function in a single band , it is clear that the photometric redshift codes could output , where is a set of absolute luminosities ( typically , these will be those associated with the various band passes from which the colors were determined ) .hence , the color magnitude relation , which is really a statement about the joint distribution in two bands , can be estimated by galaxy scaling relations can be estimated similarly , if we simply interpret as being the vector of observables which can include sizes , etc .( not just luminosities ) . in principle , quantities other than colors ( e.g. , apparent magnitudes , surface brightness , axis ratios ) can play a role in the photometric redshift determination ; this can be incorporated into the formalism simply by using to now denote the full set of observables from which the redshift and other intrisic quantities were estimated .if one wishes to use the output from a photo- code , rather than from the spectroscopic subset , one would use having checked that , in the spectroscopic subset , .we showed how previous work on deconvolution algorithms for making unbiased reconstructions of galaxy distributions and scaling relations ( sheth 2007 ; rossi & sheth 2008 ; rossi et al . 
2010 ) could be related to convolution - based methods .whereas deconvolution based methods require accurate knowledge of , the distribution of the photometric redshift given the true redshift , convolution based methods require accurate knowledge of . since is derived from photometry , this may more generally be written as , where is the vector of observed photometric parameters which were used to estimate the redshift . in both cases , and calibrated from a sample in which is known , and are then used in a larger sample where is not available .if the smaller training set has the same selection limits as the larger dataset ( e.g. , both have the same magnitude limit ) then both approaches are valid .we illustrated our arguments with measurements in the sdss ( figures [ pzzeta][nmconv ] ) .we also showed what additional information must be output from photometric redshift codes if their results are to be used in a convolution - like approach to provide unbiased estimates of galaxy scaling relations .in particular , we argued that only if the redshift distribution output by a photo- algorithm , , has the same shape as , can the algorithm be said to be unbiased . only in this case its output ( available for the full sample ) can be used in place of ( which is typically available for a small subset ) . the safest way to accomplishthis is for the training set to be a random subsample of the full dataset and to then tune the algorithm so that . if the training set is not representative , then care must be taken to ensure that does not yield biased results .obtaining spectra is expensive , so the question arises as to whether or not there is a more efficient alternative to the random sample approach . for the convolution method , which requires , the answer is clearly ` yes ' .this is because some color combinations ( e.g. the red sequence ) might give rise to a narrow distribution , whereas others may result in broader distributions .since it will take fewer objects to accurately estimate the shape of a narrow distribution than a broad one , observational effort would be better placed in obtaining spectra for those objects which produce broad distributions . for the deconvolution approach , one would like to preferentially target those redshifts which produce broader distributions for similar reasons .but , since is not known until the spectra are taken , this can not be done , so taking a random sample of the full dataset is the safest way to proceed .our methods permit accurate measurement of many scaling relations for which spectra were previously thought to be necessary ( e.g. the color - magnitude relation , the size - surface brightness relation , the photometric fundamental plane ) , so we hope that our work will permit photometric redshift surveys to provide more stringent constraints on galaxy formation models at a fraction of the cost of spectroscopic surveys .rks thanks l. da costa , m. maia , p. pellegrini , m. makler and the organizers of the des workshop in rio in may 2009 where he had stimulating discussions with c. cunha and m. lima about the relative merits of convolution and deconvolution methods , and the apc at paris 7 diderot and mpi - astronomie heidelberg , for hospitality when this work was written up .funding for the sdss and sdss - ii has been provided by the alfred p. 
sloan foundation , the participating institutions , the national science foundation , the u.s .department of energy , the national aeronautics and space administration , the japanese monbukagakusho , the max planck society , and the higher education funding council for england .the sdss web site is http://www.sdss.org/. the sdss is managed by the astrophysical research consortium for the participating institutions .the participating institutions are the american museum of natural history , astrophysical institute potsdam , university of basel , university of cambridge , case western reserve university , university of chicago , drexel university , fermilab , the institute for advanced study , the japan participation group , johns hopkins university , the joint institute for nuclear astrophysics , the kavli institute for particle astrophysics and cosmology , the korean scientist group , the chinese academy of sciences ( lamost ) , los alamos national laboratory , the max - planck - institute for astronomy ( mpia ) , the max - planck - institute for astrophysics ( mpa ) , new mexico state university , ohio state university , university of pittsburgh , university of portsmouth , princeton university , the united states naval observatory , and the university of washington .bolzonella m. , miralles j .-, pell r. 2000 , a , 363 , 476 budavri t. 2009 , apj , 695 , 747 caler m. , sheth r. k. , jain b. , 2009 , mnras , submitted ( arxiv:0811.2805 ) christlein d. , gawiser e. , marchesini d. , padilla n. 2009 , mnras , 1381 collister a. a. , lahav o. 2004 , pasp , 116 , 345 csabai i. , et al . 2003 ,aj , 125 , 580 cunha c. e. , lima m. , oyaizu h. , frieman j. , lin h. 2009 , mnras , 396 , 2379 lima m. , cunha c. e. , oyaizu h. , frieman j. , lin h. , sheldon e. s. 2008 , mnras , 390 , 118 padmanabhan n. , et al . 2005 , mnras , 359 , 237 rossi g. , sheth r. k. , 2008 , mnras , 387 , 735 rossi g. , sheth r. k. , park c. , 2010 , mnras , 401 , 666 sheth r. k. , 2007 , mnras , 378 , 709
|
In addition to the maximum-likelihood approach, there are two other methods commonly used to reconstruct the true redshift distribution from photometric redshift datasets: one uses a deconvolution, and the other a convolution. We show how these two techniques are related, and how this relationship can be extended to the study of galaxy scaling relations in photometric datasets. We then show what additional information photometric redshift algorithms must output so that they too can be used to study galaxy scaling relations, rather than just redshift distributions. We also argue that the convolution-based approach may permit a more efficient selection of the objects for which calibration spectra are required. Keywords: methods: analytical, statistical; galaxies: formation; cosmology: observations.
|
the nucleus is the largest and stiffest organelle in a eukaryotic cell .it is actively coupled to the dynamic cytoskeleton by means of a variety of scaffold proteins : contractile acto- myosin complexes , microtubule filaments constantly undergoing dynamic reorganization , and load bearing intermediate filaments .the nucleus has been found to translate and rotate during cell migration .it is reasonable to suppose that such motions are a result of active processes in the cytoplasm , involving the cytoskeleton and molecular motors .the positioning of the nucleus in the cellular environment is critical to many physiological functions such as migration , mitosis , polarization , wound healing , fertilization and cell differentiation .alterations to nuclear position have been implicated in a number of diseases . taken togetherthese studies suggest that the mechanical homeostatic balance of nuclear positioning and dynamics is intimately coupled with cellular geometry .while a number of molecular players have been implicated in this context , the role of actomyosin contractililty on nuclear dynamics has not been explored . in this paper , we show that cell geometry and active stresses are critical components in determining nuclear position and movements .fibroblast cells ( nih3t3 ) plated on micro- patterned fibronectin surfaces of varying shapes and aspect ratio were used to assess the effect of geometrical constraint on the translational and rotational movement of the nucleus .time - lapse imaging revealed a correlation between actin flow patterns and nuclear movement .we show that a hydrodynamic model of oriented filaments endowed with active contractile stresses , with the nucleus entering only as a passive inclusion , gives rise to the observed organized actin flow and nuclear rotation . while preparing the present work for submission , we became aware of two works with theoretical formulation and predicted behaviours similar to ours .the contexts in which these works are set is different from ours , i.e. , the dynamics of the cell nucleus is not the subject of these papers . in addition , the boundary conditions are different in detail .reference was in a taylor - couette geometry , i.e. , there is no medium inside the inner circle , and reference was in a circular geometry without a central inclusion .our observations suggest that nuclear rotation and circulating flows are an inherent property of the active cell interior under geometric confinement . that nuclear rotation is not a normally observed feature of cell dynamics suggests that the cell must possess other mechanisms to suppress it .we discuss these towards the end of the paper .* cell culture * : nih3t3 fibroblasts ( atcc ) were cultured in low glucose dmem ( invitrogen ) supplemented with 10 fetal bovine serum ( fbs ) ( gibco , invitrogen ) and 1 penicillin- streptomycin ( invitrogen ) .cells were maintained at in incubator with 5 co2 in humidified condition .cells were trypsined and seeded on fibronectin coated patterned surfaces for 3 hours before staining or imaging . 
for confocal imaging ,low well ibidi non- treated hydrophobic dishes were used .65,000 cells were seeded on each time on patterned surfaces ( with 10,000 patterns ) for 30 minutes , after which the non - settled cells were removed and media was re - added in the dishes .blebbistatin ( invitrogen ) were diluted from stock using filtered media .blebbistatin was used at concentration of 1.25 m .this minimizes the effect of any other solvent like dmso .microtubule was immunostained using -tubulin antibody ( 1:200 , abcam ) and alexa fluor 546 secondary ( 1:500 , invitrogen ) in cell plated on triangular pattern .the nucleus was labeled using hoechst ( 1:1000 ) .* preparation of pdms stamps and micro - contact printing * : pdms stamps were prepared from pdms elastomer ( sylgard 184 , dow corning ) and the ratio of curer to precursor used was 1 .the curer and precursor were mixed homogeneously before pouring onto the micropatterned silicon wafer .the mixture was degassed in the desiccator for at least 30 minutes to remove any trapped air bubbles and was then cured at for 2 hours , after which the stamps were peeled off from the silicon wafer .micropatterned pdms stamps were oxidized and sterilized under high power in plasma cleaner ( model pdc-002 , harrick scientific corp ) for 4 minutes .30 of 100 / ml fibronectin solution ( prepared by mixing 27 of 1xpbs to 1.5 g of 1mg / ml fibronectin and 1.5 of alexa 647 conjugated fibronectin ) was allowed to adsorb onto the surface of each pdms stamp under sterile condition for 20 minutes before drying by tissue .the pdms stamp was then deposited onto the surface of a low well non - treated hydrophobic dishes ( ibidi ) ( for high - resolution imaging ) to allow transferring of the micro - features .subsequently , the stamped dish was inspected under fluorescent microscope to verify the smooth transfer of fibronectin micro - patterns .surface of sample was then treated with 1ml of 2mg / ml pluronic f-127 for 2 hours to passivate non- fibronectin coated regions . * cell transfection * :transfection of various plasmids in wt nih3t3 cells was carried out using jetprime polyplus transfection kit . of plasmid was mixed properly in of jetprime buffer by vortexing and spinning , of jetprime reagent was then added and the mixture was again vortexed and spun .the mixture was incubated for 30 minutes and then added to 50 - 60 confluent culture in 35 mm dish .cells were kept in fresh media for 2 hours prior to addition of transfection mixture .cells were incubated for 20 hrs before plating them on the patterned substrates .* imaging * : phase contrast imaging of cells on different geometrical patterns was done on nikon biostation imq using 40x objective at in a humidified chamber with 5 .confocal time lapse imaging of cells transfected with various plasmids ( lifeact egfp , rfp and dsred er ) was carried out on nikon a1r using 60x , 1.4 na oil objective at in a humidified incubator with 5 .* image analysis and quantifications * : acquired images were processed and analysed using imagej software ( http://rsbweb.nih.gov/ij/index.html ) . 
to determine the translational coordinates and rotational angle of nucleus ,diagonally opposite nucleoli were manually tracked from the phase contrast image of the cell using the imagej plugin- mtrackj ( http://www.imagescience.org/meijering/software/mtrackj/ ) .the translational and rotational autocorrelation were calculated from the residual of the linear fit to corresponding curves- the detrended curves , thereby taking into account only the time scales relevant in our measurements .particle image velocimetry ( piv ) analysis was carried out using matlab piv toolbox - matpiv between consecutive image frames separated by 1min .images acquired were 512 x 512 pixels .the size of the interrogation window was chosen to be 32 x 32 with an overlap of 50 between the consecutive time frames .the `` single pass '' method was used for calculating the velocities .quantifications were done using custom written program in either labview 6.1 or matlab r2010a .all the graphs and curve fittings were carried out using originpro 8.1 ( originlab corporation , northampton , usa ) .to assess nuclear dynamics independent of cell migration , we used micro - patterned fibronectin- coated substrates to confine cells to regions of defined geometry and size .single cells were cultured on each patterned substrate and time lapse phase contrast imaging was carried out for about 8 hours ( or till the cell underwent mitosis ) .geometries with a variety of rotational symmetries circle , square , equilateral triangle and rectangles with aspect ratios 1:3 and 1:5 but the same cell spreading area ( ) were fabricated and used to study effect of cell shape on the translational and rotational movement of the nucleus .figure [ fig : fig_1]a - e shows color - coded intensity - profile images obtained by average - intensity projection of phase - contrast time lapse images for the above cases ( figure [ fig : fig_1]a - c are for rectangles with aspect ratio 1:1 , 1:3 and 1:5 , figure [ fig : fig_1]d and e are for triangle and circle ) . the dark color or low intensity at the vertices of the triangular and rectangular patterns shows the formation of stable contacts in that region , a feature absent on the circular pads . notethat the cell adheres much more stably on the triangular pattern than on the circle or the square .the translational and rotational movements were measured from the time lapse images , and show convincingly the influence of cell geometry on nuclear dynamics .figure [ fig : fig_1]f displays typical trajectories of the nucleus on triangular and rectangular geometries of same area . 
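A brief note on the quantification just described: splitting the tracked nucleolus pair into a translation of the midpoint and a rotation of one nucleolus about that midpoint, and then correlating the detrended angle, amounts to the few lines sketched below. This is a generic illustration, not the MTrackJ/MATLAB analysis code actually used.

```python
# A minimal sketch of the trajectory quantification: midpoint (translation)
# and unwrapped angle about the midpoint (rotation) from two tracked
# nucleoli, followed by the autocorrelation of the detrended angle
# (residuals of a linear fit).  Generic illustration only.
import numpy as np

def translation_and_rotation(x1, y1, x2, y2):
    """Midpoint track and unwrapped rotation angle (degrees) from two nucleoli."""
    xc, yc = 0.5 * (x1 + x2), 0.5 * (y1 + y2)
    theta = np.degrees(np.unwrap(np.arctan2(y1 - yc, x1 - xc)))
    return xc, yc, theta

def detrended_autocorrelation(t, y):
    """Normalised autocorrelation of the residuals of a linear fit to y(t)."""
    slope, intercept = np.polyfit(t, y, 1)
    r = y - (slope * t + intercept)
    r = r - r.mean()
    acf = np.correlate(r, r, mode="full")[r.size - 1:]
    return acf / acf[0]

if __name__ == "__main__":
    t = np.arange(0.0, 480.0, 1.0)                       # minutes
    drift = 0.002 * t                                     # slow translation of the centre
    phi = np.radians(0.5 * t)                             # ~0.5 deg/min rotation
    x1, y1 = drift + np.cos(phi), drift + np.sin(phi)     # one nucleolus
    x2, y2 = drift - np.cos(phi), drift - np.sin(phi)     # the opposite nucleolus
    xc, yc, theta = translation_and_rotation(x1, y1, x2, y2)
    acf = detrended_autocorrelation(t, theta + np.random.normal(0, 1, t.size))
    print("net rotation (deg):", round(theta[-1] - theta[0], 1))
    print("angle ACF at lags 0, 30, 60 min:", np.round(acf[[0, 30, 60]], 2))
```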
on rectangles , presumably because of narrower confinement , the nucleus undergoes mainly translation while on triangles ( as well as circles and squares ) the nucleus both rotates and translates , as shown in supplementary movie 1 - 5 .since motion out of plane is negligible , we resolve the dynamics into two - dimensional translation and rotation in the xy plane .translation is estimated through the instantaneous mean position of two nucleoli situated at roughly diametrically opposed points ( , ) and ( , ) .the top inset to figure [ fig : fig_1]f shows a typical translation trajectory .rotation is characterised by the coordinates of one nucleolus relative to this mean .a typical rotational track for the nucleus is shown in the bottom inset to figure [ fig : fig_1]f .although rotation of the nucleus is not a normal feature of cell cycle , both translational and rotational movement of the nucleus during cell migration has been reported for many cell types including nih3t3 which was used in all our experiments . for completeness , we document such motion here as well .figure [ fig : s1]a shows a representative dic image of a monolayer of nih3t3 cells cultured on glass bottom dishes .time lapse images ( figure [ fig : s1]b ) of three cells from this field of view are presented with arrows showing the position of the nucleolus .rotation and translation tracks of the nucleus are plotted for these cells in figure [ fig : s1]c and d respectively , showing large departures from its initial position and orientation . finally , in order to demonstrate that migration is not the underlying cause of the rotation we observe , we confine cells by plating them onto fibronectin patterns of various well - defined geometries , allowing us to study the effect of cell geometry alone on nuclear dynamics .the fraction of rotating nuclei decreases significantly in geometries with large aspect ratio whereas on more symmetric patterns namely equilateral triangles , squares and circles , the fraction is not significantly different ( figure [ fig : fig_2]a ) .except on rectangles , about of cells showed at least nuclear rotations in 8 hours .nuclear circularity as a function of cell shape is altered with changes in aspect ratio ( decreases from 0.9 to 0.7 , supplementary figure [ fig : s2]a ) but not with changes in rotational symmetry of constraints .the instantaneous linear velocity decreases marginally from circle to square to triangle as well as with increase in aspect ratio of rectangle ( 1:1 - 0.23 /min and 1:5 0.20 /min , see figure [ fig : fig_2]b and supplementary figure [ fig : s2]b ) .the mean rotational velocity decreases from circle ( /min ) to square ( /min ) to triangle ( /min ) ( figure [ fig : fig_2]c and supplementary figure [ fig : s2]c , d and e ) suggesting that rotation is sensitive to geometric constraints .however , the fraction of nuclei showing significant and systematic rotation is similar for these three shapes . to explore the possible role of myosin induced contractility in these phenomena, we turn now to the active hydrodynamic theory of the cell interior .we show that cytoplasmic flows produced by acto- myosin contractility are the minimal explanation for the observed rotation of the nucleus . 
to this end, we turn to the theoretical framework of active hydrodynamics .contractile stresses carried by actomyosin , given an arrangement of filaments compatible with the cell shape imposed by the pads and the presence of the nucleus as an internal obstacle , lead to organized flows that rotate the nucleus .more detailed propulsive elements , e.g. , pushing by microtubules anchored onto the nuclear surface , while possibly present in the cell , are not a necessary part of the mechanism .since the cell in the experiment is stretched , its height is smaller than its dimensions in the plane .we can therefore model the cell as a quasi - two - dimensional film with the hydrodynamics being cut off at a scale proportional to the height .we also assume an axisymmetric cell , and ignore actomyosin treadmilling and the on - off kinetics of the motors .this highly simplified view of the cell still exhibits some key features of the dynamics found in the experiment .we now present the equations of active hydrodynamics .the inner circular region represents the nucleus , which is taken to contain no active motor - filament complexes and is therefore modeled as a passive liquid drop of very high viscosity ( ) in effect undeformable .the outer annular region is the cytoplasm , which contains active orientable filaments . the inner fluid - fluid interface , i.e., the boundary between cytoplasm and nucleus , has tangential stress continuity and tangential velocity continuity , and the outer surface , the contact line of cell with pad , has no slip .we assume the filaments preferentially lie parallel to any surface with which they are in contact .in particular , they therefore lie tangent to both the inner and the outer boundaries .the cytoplasmic medium is taken to consist of filaments suspended in the cytosol of viscosity .we assume the filaments are in a state of well - formed local orientation whose manitude does not change so that it can be characterised completely by a unit vector or director " field , , at postion . associated with the filaments is an active stress , where the parameter is a measure of actomyosin activity , the concentration of filaments and myosin is assumed uniform .fluid flow in the cytoplasm is described by the hydrodynamic velocity field the equations of active hydrodynamics in steady state lead to a dynamic balance between shearing and relaxation of filaments , and force balance , ignoring inertia , with total stress tensor -p\delta_{ij } - w n_i n_j + \lambda_{kij}\frac{\deltaf}{\delta n_k}+\sigma^0_{ij}.\ ] ] here , , is the elastic free energy for the director , with the frank elastic constant k , , is a flow - orientation coupling , and is the ericksen stress .we will work with a completely symmetric stress , , built from , which will give the same velocity field , due to angular momentum conservation .the coefficient in represents in a -averaged sense the effects of confinement on the damping of velocities .it has two contributions : a viscous part arising because flows within an adhered cell of thickness in the vertical direction and no - slip at the base must in general have -gradients on a scale , and direct damping of flow through the kinetics of attachment and detachment of the cytoskeletal gel to the substrate . in our estimates belowwe retain only the viscous effect , so that simply has the effect of screening the hydrodynamics at in - plane length - scales larger than . 
including attachment - detachmentenhances .we use circular polar coordinates in the plane .since we assume axisymmetry , the radial velocity vanishes because incompressibility implies , and at both the interfaces . for force balance in the region corresponding to the nucleus we have to solve the equation where .the equation can be solved in terms of bessel functions , with the constraint that has to be at .continuity of tangential stress and velocity at the cytoplasm - nucleus interface gives the requisite number of boundary conditions .force balance in the azimuthal direction reads expressing the diretor the steady state equation for the orientation field reads where and are the symmetric and antisymmetric parts of the velocity gradient tensor .using , the component of can be recast as a first order differential equation for . thus , we have two first order equations , and , and one second order equation to solve , which we solve numerically . a noticable and robust feature of the solution ( inset to fig .5a ) is the presence of a maximum in the magnitude of the velocity at some distance from the nucleus .this results from a combination of vanishing velocity at the outer boundary and the nuclear centre , and continuity of velocity and shear stress at the fluid - fluid interface .note that our description does not include chiral effects , so that equivalent solutions with either sense of rotation are obtained . the competition between active stresses that promote flow and orientational relaxation , that inhibits it , is contained in the dimensionless combination .accurate estimates of parameters for our system are not easy to make . the cytoskeletal active stress , w , is generally argued to be in the range 50 - 1000 pa .frank constants for actin nematics appear to be 2 - 20 pn , as in ordinary thermotropic nematics .the thickness of the spread cell in our experiments is about of the lateral extent . for a spread cell area of therefore estimate m . taken together , this leads to .however , if attachment - detachment contributions to are included , will be lowered substantially . from the active hydrodynamic model we know that the system is quiescent for small values of this parameter .however , , for which we present the results , is already sufficient to produce a spontaneous flow .increasing leads to increasingly complicated flows which we have only begun to explore .we do not attempt a detailed comparison between the observed and the theoretical flow patterns .however , the conclusion about the maximum of the velocity being away from the nucleus rests purely upon the confining geometry , and we expect that the time and angle averaged velocity profile , measured from the experiment , will have a peak away from the nuclear boundary . for .e an unbounded , oriented active fluid , one expects spontaneous velocity gradients of order . in and it was shown that the presence of confinement on a scale modifies the above conclusion giving a characteristic rate where , where is the in - plane scale associated with observation . in our case , is .thus , the rotation rate should be of the order of . 
using the arguments of estimate turns out to be of the order of a few degrees / min .this is reassuringly consistent with the magnitude obtained from the experiment .the two predictions we can make based on this simple model are that actomyosin is crucial for nuclear rotation , and that the angle and time averaged angular velocity will be maximum away from the nucleus .we perform a series of experiments to check these . in the next sections, we study the contribution of actomyosin contractility , a critical cytoplasmic regulator of nuclear prestress , to the translational and rotational dynamics of the nucleus .we test the role of contractility on nuclear dynamics to validate the theoretical predictions based on active fluids with an inclusion .actomyosin contractility was altered by treating cells fully spread on geometric patterns with low concentration of blebbistatin an inhibitor of the myosin ii motor . to determineif the persistence in nuclear translation motion was dependent on contractility , the autocorrelation function ( acf ) , was plotted for control and blebbistatin treated cells ( figure [ fig : fig_3]a ) .blebbistatin treated cells exhibited a decreased correlation time scale for translational motion ( bottom inset to figure [ fig : fig_3]a , ) suggesting that actomyosin contractility is important for correlated translational movement of the nucleus .next , the nuclear rotation angle as a function of time was calculated from the xy rotation trajectories .a typical plot of angle versus time for the nucleus on geometric pattern is shown in supplementary figure [ fig : s3 ] . on treatment with blebbistatin , the instantaneous angular velocity significantly decreases to 1.0 /min when compared to control 1.6 /min ( top inset to figure [ fig : fig_3]a and supplementary figure [ fig : s4 ] ) . to ascertain the effect of actomyosin contractility on the persistence of nuclear rotation we computed the auto - correlation of angular movement with time .figure [ fig : fig_3]b shows plot of auto- correlation curve for control cells and cells treated with blebbistatin .inset ( below ) to figure [ fig : fig_3]b show that on perturbing actomyosin contractility , the persistence time of nuclear rotation decreases from 62 min in control to 33 min . in the next section, we study the role of actin flow patterns in regulating the nuclear dynamics .live cell fluorescence confocal imaging was carried out to simultaneously visualize actin flow dynamics and nuclear rotation on geometric patterns .cells were transfected with lifeact - gfp to label actin in live condition ( figure [ fig : fig_4 ] ) .time lapse confocal imaging of actin revealed a retrograde flow and its remodeling around the nucleus ( figure [ fig : fig_4]a ) . to quantify the flow pattern ( supplementary movie 6 ), we carried out particle image velocimetry ( piv ) analysis using matpiv .this revealed flow vectors tangential to the nuclear boundary with direction and magnitude correlated with that of the nuclear rotation as shown in figure [ fig : fig_4]a and supplementary movie 7. velocity field maps of actin flow were determined in small regions throughout the cell ( figure [ fig : fig_4]a , last panel and supplementary movie 8) . 
a circulating flow , required to rotate the nucleus , is clearly seen ( figure [ fig : fig_4]a , middle panel and supplementary movie 6 and 7 ) .interestingly , upon blebbistatin treatment , inward flow of actin ( supplementary movie 9 - 11 ) , presumably driven by treadmilling , was not significantly affected .however , the azimuthal speed can be seen ( figure [ fig : fig_4]b ) to decrease substantially , despite some scatter in the data .the circulation of flow around the nucleus was lost concurrent with the loss of nuclear rotation ( figure [ fig : fig_4]b , middle panel and supplementary movie 10 ) .further , we plot in figure [ fig : fig_5]aandb , the angle averaged azimuthal velocity , with and without blebbistatin respectively , inferred from piv as a function of radial distance from the centre of the nucleus . for comparisonwe also show the radial velocity , ( figure [ fig : fig_5 ] c and d ) . note that the graphs start from the edge of the nucleus . as predicted from the theory , the azimuthal velocity peaks away from the nuclear boundary in the control cells . in blebbistatin treated cells , by contrast, the velocity is 0 , leading to the loss of nuclear rotation .however , is small in both cases , albeit with slightly larger fluctuations in the presence of blebbistatin .in addition , time lapse imaging of microtubules labeled with tau - egfp show that the microtubule organizing centre ( mtoc ) undergoes translation dynamics while the nucleus exhibits both translational and rotational dynamics ( supplementary figure [ fig : s5 ] ) .the orientation and arrangement of microtubules showed a cage like structure around the nucleus ( supplementary movie 12 ) .this caging mechanism might help keep the nucleus relatively localized , thus enhancing the rotational effects of the torque generated by the actin flow .we also visualized the endoplasmic reticulum ( er ) to assess its role in nuclear dynamics .since er is contiguous with the nuclear envelope , it could either stretch or undergo continuous remodeling as the nucleus rotates .live cell imaging of er , during nuclear rotation , showed dynamic remodeling suggesting a minor role for er in nuclear rotation and reversals ( supplementary figure [ fig : s6 ] ) .our results show that geometric constraints are critical in determining the rotational dynamics of the cell nucleus .while a number of components including cytoskeleton and motor proteins have been implicated to drive nuclear dynamics in migrating cells , our results on single cells confined to specific geometries suggest a role for actomyosin contractility .square , circular , and triangular pads support mainly rotational motion , while long narrow geometries restrict it .the shape of the confining geometry further determines the magnitude of nuclear rotation ; a relatively faster rotating nucleus is seen on circular pattern than on squares and triangles .we offer a simple theoretical explanation for the rotation in which the nucleus is modelled as a nearly rigid inclusion in the cytoplasm treated as a fluid containing filaments endowed with intrinsic stresses .the result is an angular velocity profile with a maximum at a radial position intermediate between the nucleus and the cell periphery , as observed in the experiments , and nonzero at the nuclear surface , corresponding to nuclear rotation .the predicted magnitude of the rotation rate based on plausible estimates of material parameters are also consistent with the measurements . 
that blebbistatin treatment greatly suppresses the flow lends support to our proposed mechanism .the question arises why nuclei are not universally observed to rotate in cells under normal conditions .at least two mechanisms could contribute to suppressing the generic instability that leads to circulating flows .one , in the absence of a rigid geometry , the cell boundary is free to change shape .this would disrupt the imposed boundary orientation of the filaments , and hence the orderly pattern of active stresses needed to drive a coherent flow .two , the apical actin fibres , absent in square and circular geometries , present to some extent in triangular geometries , and very well formed in elongated geometries , bear down on and thus enhance the friction on the nucleus , suppressing its motion .collectively , our results highlight the importance of both cell geometric constraints and actomyosin contractility in determining nuclear homeostatic balance .a number of experiments have shown that alterations in cell geometry affect gene expression programs and cell cycle time , and lead to a switching of cell fates towards apoptosis or proliferation .we hope our work leads to a search for nuclear rotation in a wider range of systems and settings , whether such rotation has biologically significant consequences , and a deeper understanding of how the cell normally suppresses such effects ., m.s . , andthank the mechanobiology institute ( mbi ) at the national university of singapore ( nus ) for funding and mbi facility .a.m. thanks tcis , tifr hyderabad for support and hospitality , and s.r . acknowledges a j.c .bose fellowship 1 dahl kn , ribeiro aj , and lammerding j , _ nuclear shape , mechanics , and mechanotransduction _ , circ res 102(11):1307 - 1318 ( 2008 ) .crisp m , et al ._ coupling of the nucleus and cytoplasm : role of the linc complex _, j cell biol 172(1):41 - 53 ( 2006 ) .haque f , et al ._ sun1 interacts with nuclear lamin a and cytoplasmic nesprins to provide a physical connection between the nuclear lamina and the cytoskeleton _, mol cell biol 26(10):3738 - 3751 ( 2006 ) .houben f , ramaekers fc , snoeckx lh , and broers jl , _ role of nuclear lamina- cytoskeleton interactions in the maintenance of cellular strength _ , biochim biophys acta ( 2006 ) .wang n , tytell jd , and ingber de _ mechanotransduction at a distance : mechanically coupling the extracellular matrix with the nucleus _ , nat rev mol cell biol 10(1):75 - 82 ( 2009 ) .takiguchi k _ heavy meromyosin induces sliding movements between antiparallel actin filaments _ j biochem .109 : 520 - 527 ( 1991 ) king mc , drivas tg , and blobel g _ a network of nuclear envelope membrane proteins linking centromeres to microtubules _ , cell 134(3):427 - 438 ( 2008 ) .theriot ja , _ the polymerization motor _ , traffic 1(1):19 - 28 ( 2000 ) .tzur yb , wilson kl , and gruenbaum y , _ sun - domain proteins : velcro that links the nucleoskeleton to the cytoskeleton _ , nat rev mol cell biol 7(10):782 - 788 ( 2006 ) .zhang q , et al . 
_nesprin-2 is a multi - isomeric protein that binds lamin and emerin at the nuclear envelope and forms a subcellular network in skeletal muscle _, j cell sci 118(pt 4):673 - 687 ( 2005 ) .brosig m , ferralli j , gelman l , chiquet m , and chiquet - ehrismann r , _ interfering with the connection between the nucleus and the cytoskeleton affects nuclear rotation , mechanotransduction and myogenesis _ , int j biochem cell biol 42(10):1717 - 1728 ( 2010 ) .lee js , chang mi , tseng y , and wirtz d , _ cdc42 mediates nucleus movement and mtoc polarization in swiss 3t3 fibroblasts under mechanical shear stress _ , mol biol cell 16(2):871 - 880 ( 2005 ) .levy jr and holzbaur el , _ dynein drives nuclear rotation during forward progression of motile fibroblasts _, j cell sci 121(pt 19):3187 - 3195 ( 2008 ) . .luxton gw , gomes er , folker es , vintinner e , and gundersen gg , _ linear arrays of nuclear envelope proteins harness retrograde actin flow for nuclear movement _ , science 329(5994):956 - 959 ( 2010 ) .reinsch s and gonczy p , _ mechanisms of nuclear positioning _, j cell sci 111 ( pt 16):2283 - 2295 ( 1998 ) .starr da _ communication between the cytoskeleton and the nuclear envelope to position the nucleus _ , mol biosyst 3(9):583 - 589 ( 2007 ) .wu j , lee kc , dickinson rb , and lele tp , _ how dynein and microtubules rotate the nucleus _, j cell physiol 226(10):2666 - 2674 ( 2011 ) .hagan i and yanagida m , _ evidence for cell cycle - specific , spindle pole body - mediated , nuclear positioning in the fission yeast schizosaccharomyces pombe _, j cell sci 110 ( pt 16):1851 - 1866 ( 1997 ) .marchetti et.al , _ soft active matter _ rev .( in press ) ; arxiv:1207.2929 ramaswamy s , _ the mechanics and statistics of active matter _ , annu . rev . condens .phys . 1 , 323 - 345 ( 2010 ) joanny j - f , jlicher f , kruse k , and prost j _ hydrodynamic theory for multi - component active polar gels _ new j. phys . 9 , 422 ( 2007 ) jlicher f et.al . _ active behavior of the cytoskeleton _ phys . rep . 3 , 449 , ( 2007 ) toner j , tu y , and ramaswamy s , _ hydrodynamics and phases of flocks318 , 170 ( 2005 ) s. frthauer et.al . , _ the taylor couette motor : spontaneous flows of active polar fluids between two coaxial cylinders _ new .14 , 023001 ( 2012 ) woodhouse fg and goldstein re , _ spontaneous circulation of confined active suspensions _ phys .109 , 168105 ( 2012 ) simha ra and ramaswamy s _ hydrodynamic fluctuations and instabilities in ordered suspensions of self - propelled particles _ , phys rev lett 89 , 058101 ( 2002 ) voituriez r , joanny j - f , and prost j , _ spontaneous flow transition in active polar films _ , europhys .70 404 ( 2005 ) kruse k et al ._ asters , vortices , and rotating spirals in active gels of polar filaments _ phys .92 , 078101 ( 2004 ) liverpool tb and marchetti c _ organization and instabilities of active polar filaments _ , phys .90 , 138102 ( 2003 ) .joanny jf and prost j _ active gels as a description of the actin - myosin cytoskeleton _ hfsp journal , 3(2):94 - 104 ( 2009 ) .ramaswamy s and rao m _ active - filament hydrodynamics : instabilities , boundary conditions and rheology _ new j. phys .9 , 423 ( 2007 ) lai gh et.al ., _ self - organized gels in dna / f - actin mixtures without crosslinkers : networks of induced nematic domains with tunable density _ phys . rev . lett . 
101 , 218303 ( 2008 ) de gennes pg , and prost j , _ the physics of liquid crystals , second edition _ , clarendon press(1993 ) .stark h and lubensky tc _ poisson - bracket approach to the dynamics of nematic liquid crystals _, phys . rev .e 67 , 061709 ( 2003 ) .martin pc , parodi o , and pershan ps , _ unified hydrodynamic theory for crystals , liquid crystals , and normal fluids _ ,rev . a 6 , 2401 ( 1972 ) .landau l , and lifshitz em , _ theory of elasticity , third edition : volume 7 ( theoretical physics ) _, butterworth - heinemann(1986 ) .maitra a , and ramaswamy s,_(to be published ) _mazumder a and shivashankar gv , _ gold - nanoparticle - assisted laser perturbation of chromatin assembly reveals unusual aspects of nuclear architecture within living cells _ , biophys j 93(6):2209 - 2216 ( 2007 ) .mazumder a and shivashankar gv , _ emergence of a prestressed eukaryotic nucleus during cellular differentiation and development _, j r soc interface 7 suppl 3:s321 - 330 ( 2010 ) .kilian ka , bugarija b , lahn bt , and mrksich m , _ geometric cues for directing the differentiation of mesenchymal stem cells _usa 107(11):4872 - 4877 ( 2010 ) .ingber de , et al ._ cellular tensegrity : exploring how mechanical changes in the cytoskeleton regulate cell growth , migration , and tissue pattern during morphogenesis _ , int rev cytol 150:173 - 224 ( 1994 ) .gray ds , et al . _engineering amount of cell - cell contact demonstrates biphasic proliferative regulation through rhoa and the actin cytoskeleton _, exp cell res 314(15):2846 - 2854 ( 2008 ) .sun y , chen cs , and fu j _ forcing stem cells to behave : a biophysical perspective of the cellular microenvironment _ , annu rev biophys 41:519 - 542 ( 2012 ) .li q , kumar a and shivashankar gv , cellular geometry mediated apical stress fibers dynamically couples nucleus to focal adhesion ( under review ) .m. corresponding actin flow vectors ( middle panel ) and speed ( last panel ) was determined by particle image velocimetry ( piv ) analysis using matpiv for control ( a ) and blebbistatin ( b ) treated cells .the flow vectors have been scaled to 2 times for better visibility .color code : 0.0 - 1.03 / min . ] and from velocity vectors of actin flow for control ( a ) and ( c ) ; and blebbistatin treated cells ( b ) and ( d ) . each color represents single time point for cells .thick red curve is the mean of various such realizations ( 30 for control and 25 for blebbistatin treated cells).inset to 5a : a typical angular velocity vs. radial distance curve obtained by solving the equations , , and . ]m. ( b ) time lapse images of single cells from different regions ( cyan , red and green rectangles shown in ( a ) ) of the monolayer .time points are indicated at the top of each image .plot of rotational ( c ) and translational ( d ) movement for the three cells shown in ( b ) . ]
|
the nucleus of the eukaryotic cell functions amidst active cytoskeletal filaments , but its response to the stresses carried by these filaments is largely unexplored . we report here the results of studies of the translational and rotational dynamics of the nuclei of single fibroblast cells , with the effects of cell migration suppressed by plating onto fibronectin - coated micro - fabricated patterns . patterns of the same area but different shapes and/or aspect ratio were used to study the effect of cell geometry on the dynamics . on circles , squares and equilateral triangles , the nucleus undergoes persistent rotational motion , while on high - aspect - ratio rectangles of the same area it moves only back and forth . the circle and the triangle showed respectively the largest and the smallest angular speed . we show that our observations can be understood through a hydrodynamic approach in which the nucleus is treated as a highly viscous inclusion residing in a less viscous fluid of orientable filaments endowed with active stresses . lowering actin contractility selectively by introducing blebbistatin at low concentrations drastically reduced the speed and persistence time of the angular motion of the nucleus . time - lapse imaging of actin revealed a correlated hydrodynamic flow around the nucleus , with profile and magnitude consistent with the results of our theoretical approach . coherent intracellular flows and consequent nuclear rotation thus appear to be a generic property that cells must balance by specific mechanisms in order to maintain nuclear homeostasis .
|
maxwell s equations on the whole three - dimensional space are considered with initial conditions and inhomogeneity having support in a bounded domain that is not required to be convex ( or in a finite collection of such domains ) .the study of such problems leads to transparent boundary conditions , which yield the restriction of the solution to the domain .such boundary conditions are nonlocal in space and time , for both acoustic wave equations and maxwell s equations .there is a vast literature to tackle this problem in general for wave equations : fast algorithms for exact , nonlocal boundary conditions on a ball , local absorbing boundary conditions , perfectly matched layers , which were originally considered for electromagnetism in , and numerical coupling with time - dependent boundary integral operators .all the above approaches , except the last one , are inadequate for non - convex domains .the local methods fail because waves may leave and re - enter a non - convex domain .inclusion of a non - convex domain in a larger convex domain is computationally undesirable in situations such as a cavity or an antenna - like structure or a far - spread non - connected collection of small domains .the main objective of the present work is to transfer the programme of from acoustic wave equations to maxwell s equations : to propose and analyze a provably stable and convergent fully discrete numerical method that couples discretizations in the interior and on the boundary , without requiring convexity of the domain . like abboud _ et al . _ ( and later also ) for the acoustic wave equation , we start from a symmetrized weak first - order formulation of maxwell s equations . in the interiorthis is discretized by a discontinuous galerkin ( dg ) method in space together with the explicit leapfrog scheme in time .the boundary integral terms are discretized by standard boundary element methods in space and by convolution quadrature ( cq ) in time .this yields a coupled method that is explicit in the interior and implicit on the boundary .the choice of a cq time discretization of the boundary integral operators is essential for our analysis , and to a lesser extent also the choice of the leapfrog scheme in the interior .however , our approach is not specific to the chosen space discretizations which could , in particular , be replaced by conformal edge elements .while the general approach of this paper is clearly based on , it should be emphasized that the appropriate boundary integral formulation requires a careful study of the time - harmonic maxwell s equation .this is based on , with special attention to the appropriate trace space on the boundary and to the corresponding duality . due to the analogue of green s formula for maxwell s equations, the duality naturally turns out to be an anti - symmetric pairing .the calderon operator for maxwell s equation , which arises in the boundary integral equation formulation of the transparent boundary conditions , differs from the acoustic case to a large extent , and therefore the study of its coercivity property is an important and nontrivial point . 
similarly to the acoustic case , the continuous - time and discrete - time coercivity is obtained from the laplace - domain coercivity using an operator - valued version , given in , of the classical herglotz theorem .both the second and first order formulation of maxwell s equations are used .the spatial semi - discretization of the symmetrized weak first - order formulation of maxwell s equations has formally the same matrix vector formulation as for the acoustic wave equation studied in , with the same coercivity property of the calderon operator .because of this structural similarity , the stability results of , which are shown using the matrix vector setting , remain valid for the maxwell case without any modification . on the other hand ,their translation to the functional analytic setting differs to a great extent .therefore further care is required in the consistency analysis .in section [ section : recap helmholtz ] we recapitulate the basic theory for maxwell s equation in the laplace domain .based on buffa and hiptmair , and further on , we describe the right boundary space , which allows for a rigorous boundary integral formulation for maxwell s equations . then the boundary integral operators are obtained in a usual way from the single and double layer potentials . in section [ section : calderon ]we prove the crucial technical result of the present work , a coercivity property of the calderon operator for maxwell s equation in the laplace domain .this property translates to the continuous - time maxwell s equations later , in section [ subsection : calderon op for maxwell s eqn ] , via an operator - valued herglotz theorem .in section [ section : boundary int form ] we study the interior exterior coupling of maxwell s equations , resulting in an interior problem coupled to an equation on the boundary with the calderon operator .we derive a first order symmetric weak formulation , which is the maxwell analogue of the formulation of for the acoustic wave equation . together with the continuous - time version of the coercivity property of the calderon operator , this formulation allows us to derive an energy estimate . later on this analysisis transfered to the semi - discrete and fully discrete settings .section [ section : discretization ] presents the details of the discretization methods : in space we use discontinuous galerkin finite elements with centered fluxes in the domain , coupled to continuous linear boundary elements on the surface .time discretization is done by the leapfrog scheme in the interior domain , while on the boundary we use convolution quadrature based on the second - order backward differentiation formula .an extra term stabilizes the coupling , just as for the acoustic wave equation .the matrix vector formulation of the semidiscrete problem has the same anti - symmetric structure and the same coercivity property as for the acoustic wave equation , and therefore the stability results shown in can be reused here . in sections [ section : semidiscrete results ] and [ section : fully discrete results ]we revise the parts of the results and proofs of where they differ from the acoustic case , which is mainly in the estimate of the consistency error .finally , we arrive at the convergence error bounds for the semi- and full discretizations . 
to our knowledge ,the proposed numerical discretizations in this paper are the first provably stable and convergent semi- and full discretizations to interior exterior coupling of maxwell s equations .we believe that the presented analysis and the techniques , which we share with , can be extended further : to other discretization techniques for the domain , such as edge element methods , higher order discontinuous galerkin methods , and different time discretizations in the domain , together with higher order runge kutta based convolution quadratures on the boundary .for ease of presentation we consider only constant permeability and permittivity .however , it is only important that the permeability and permittivity are constant in the exterior domain and in a neighbourhood of the boundary . in the interiorthese coefficients may be space - dependent and discontinuous . in the latter casethe equations can be discretized in space with the dg method as described in .in this paper we focus on the appropriate boundary integral formulation and on the numerical analysis of the proposed numerical methods .numerical experiments are intended to be presented in subsequent work .concerning notation , we use the convention that vectors in are denoted by italic letters ( such as ) , whereas the corresponding boldface letters are used for finite element nodal vectors in , where is the ( large ) number of discretization nodes .hence , any boldface letters appearing in this paper refer to the matrix vector formulation of spatially discretized equations .functions defined in the domain are denoted by letters from the roman alphabet , while functions defined on the boundary are denoted by greek letters .let us consider the _ time - harmonic maxwell s equation _ , obtained as the laplace transform of the second order maxwell s equation ( with constant permeability and permittivity ) : where is the boundary of a bounded piecewise smooth domain ( or a finite collection of such domains ) , not necessarily convex , with exterior normal .we shortly recall some useful concepts and formulas regarding the above problem , based on and . for the usual trace we will use the notation .tangential _ and _ magnetic _ traces are defined , respectively , as these traces are also often called _ dirichlet trace _ and _ neumann trace _ , motivated by the analogue of green s formula for maxwell s equations ( for sufficiently regular functions ) : we introduce an important notation , the }_{\gamma}= \int_{\gamma}(\operatorname{\gamma}w \times \nu ) \cdot \operatorname{\gamma}v \,{\textrm{d}}\sigma , \ ] ] which appears on the right - hand side of .we note that the relation }_{\gamma}= { [ } \operatorname{\gamma}_t w , \operatorname{\gamma}_t v { ] } _ { \gamma} ] .then , the above mentioned _ proper trace space _ is given as : with norm the tangential trace satisfies the following analogue of the trace theorem .[ lemma : trace ineq ] the trace operator is continuous and surjective . the following lemma clarifies the role of the anti - symmetric pairing }_{{\gamma}} ] can be extended to a continuous bilinear form on . with thispairing the space becomes its own dual .the above results clearly point out that a natural choice of trace space is }_{{\gamma}}\big) ] denotes the jumps in the boundary traces .a further notation is the average of the inner and outer traces on the boundary : . on vectorsboth operations are acting componentwise .for every and , formula defines . because of the jump relations \ ! 
]} = & \ \hbox{id } , & { [ \![\gamma_n \circ { { \mathcal d}}(s)]\ ! ] } = & \ 0 , \\ { [ \![\gamma_t \circ { { \mathcal s}}(s)]\ ! ] } = & \ 0 , & { [ \![\gamma_n \circ { { \mathcal d}}(s)]\ ! ] } = & \ \hbox{id},\end{aligned}\ ] ] and are reconstructed from by .let us now define the boundary integral operators .as opposed to the general second order elliptic case , due to additional symmetries of the problem , they reduce to two operators and , see ( * ? ? ?* section 5 ) .they satisfy in ( * ? ? ?* section 5 ) the continuity of these operators was proven , without giving an explicit dependence on .such bounds are crucial in the analysis later , therefore we now show -explicit estimates for the boundary integral operators .our result is based on .[ lemma : int operator s bounds ] for the boundary integral operators are bounded as these estimates can be shown by adapting the arguments of ( * ? ? ?* section 4.2 ) .in particular , by using the anti - symmetric pairing }_{{\gamma}} ] ( acting componentwise on ) , the analogue of green s formula and using the definition of the traces , we obtain }_{{\gamma}}=&\ \ { [ } { [ \![\operatorname{\gamma}_n u]\ ! ] } , { \{\!\!\{\operatorname{\gamma}_t u\}\!\!\ } } { ] } _ { { \gamma}}+ { [ } { [ \![\operatorname{\gamma}_t u]\ ! ] } , -{\{\!\!\{\operatorname{\gamma}_n u\}\!\!\ } } { ] } _ { { \gamma}}\\ = & \ { [ } \operatorname{\gamma}_n^- u , \operatorname{\gamma}_t^- u { ] } _ { { \gamma}}- { [ } \operatorname{\gamma}_n^+ u , \operatorname{\gamma}_t^+ u { ] } _ { { \gamma}}\\ = & \ s \ \big(\|s{^{-1}}\operatorname{curl}u\|_{l^2({\mathbb{r}}^3\setminus{\gamma})}^2 + { \varepsilon}\mu \|u\|_{l^2({\mathbb{r}}^3\setminus{\gamma})}^2 \big).\end{aligned}\ ] ] we further obtain \ ! ] } \big\|_{{\mathcal{h}_\gamma}}^2 \\ & \leq \ c \big ( \|\operatorname{curl}u\|_{l^2({\mathbb{r}}^3\setminus{\gamma})^3}^2 + \|u\|_{l^2({\mathbb{r}}^3\setminus{\gamma})^3}^2 \big)\\ & = \ c |s|^2 \big ( \|s{^{-1}}\operatorname{curl}u\|_{l^2({\mathbb{r}}^3\setminus{\gamma})^3}^2 + |s|^{-2 } \|u\|_{l^2({\mathbb{r}}^3\setminus{\gamma})^3}^2 \big)\\ & \leq \ c |s|^2 \max\{1,|s|^{-2}({\varepsilon}\mu){^{-1}}\ } \big(\|s{^{-1}}\operatorname{curl}u\|_{l^2({\mathbb{r}}^3\setminus{\gamma})^3}^2 + { \varepsilon}\mu \|u\|_{l^2({\mathbb{r}}^3\setminus{\gamma})^3}^2 \big ) , \end{aligned}\ ] ] and for we use the fact that : \ ! ] } \big\|_{{\mathcal{h}_\gamma}}^2 \\ & \leq \ c ( { \varepsilon}\mu){^{-1}}\big(\|s{^{-1}}\operatorname{curl}\operatorname{curl}u\|_{l^2({\mathbb{r}}^3\setminus{\gamma})^3}^2 + \|s{^{-1}}\operatorname{curl}u\|_{l^2({\mathbb{r}}^3\setminus{\gamma})^3}^2 \big ) \\ & = \ c \big ( { \varepsilon}\mu \|s u\|_{l^2({\mathbb{r}}^3\setminus{\gamma})^3}^2 + ( { \varepsilon}\mu){^{-1}}\|s{^{-1}}\operatorname{curl}u\|_{l^2({\mathbb{r}}^3\setminus{\gamma})^3}^2 \big ) \\ & \leq \ c |s|^2 \max\{1 , |s|^{-2}({\varepsilon}\mu){^{-1}}\ } \big(\|s{^{-1}}\operatorname{curl}u\|_{l^2({\mathbb{r}}^3\setminus{\gamma})^3}^2 + { \varepsilon}\mu\|u\|_{l^2({\mathbb{r}}^3\setminus{\gamma})^3}^2 \big)\end{aligned}\ ] ] where , for the first inequalities in both estimates , we used the trace inequality of lemma [ lemma : trace ineq ] .extraction of factors and dividing through completes the proof .let us consider the first order formulation of maxwell s equations , in the following form : with appropriate initial and boundary conditions .if the initial conditions satisfy the last two equations , then they hold for all times , see , therefore these conditions are assumed to hold . 
the permeability and permittivityis denoted by and , respectively , and they are assumed to be positive constants , while denotes the electric current density . using the relation , the above equation can be written as the second order problem with .setting , applying laplace transformation , and writing instead of , we obtain the time - harmonic version .we recall an operator - valued continuous - time herglotz theorem from ( * ? ? ?* section 2.2 ) , which is crucial for transferring the coercivity result of lemma [ lemma : coercivity ] from the maxwell s equation in the laplace domain to the time - dependent maxwell s equation .we describe the result in an abstract hilbert space setting .let be a complex hilbert space , with dual and anti - duality .let and be both analytic families of bounded linear operators for , satisfying the uniform bounds : for any integer , we define the integral kernel for a function ,{{\mathcal h}}) ] , with , and for all .consider the second order formulation of maxwell s equations in three dimensions : , \\{ e}(x,0 ) = & \ { e}_0 & { \qquad \hbox { in } } & { \mathbb{r}}^3 , \\ { \partial}_t { e}(x,0 ) = & \ { h}_0 & { \qquad \hbox { in } } & { \mathbb{r}}^3.\end{aligned}\ ] ] let be a bounded lipschitz domain , with boundary , and further assume that the initial values and are supported within .we rewrite this problem as an interior problem over : , \\ { e}^-(x,0 ) = & \ { e}_0 & { \qquad \hbox { in } } & { \omega } , \\ { \partial}_t { e}^-(x,0 ) = & \ { h}_0 & { \qquad \hbox { in } } & { \omega},\end{aligned}\ ] ] and as an exterior problem over : , \\{ e}^+(x,0 ) = & \ 0 & { \qquad \hbox { in } } & { \omega}^+ , \\ { \partial}_t { e}^+(x,0 ) = & \ 0 & { \qquad \hbox { in } } & { \omega}^+.\end{aligned}\ ] ] the two problems are _ coupled _ by the transmission conditions : using the temporal convolution operators of section [ subsec : herglotz ] , the solution of the exterior problem is given as with boundary densities which satisfy the equation here is the temporal convolution operator with the distribution whose laplace transform is the calderon operator defined in . from now on ,we use maxwell s equations in their first order formulation on the interior domain ( and we omit the omnipresent superscript ) : , \ ] ] with the coupling through the calderon operator as where and .in addition , by we obtain where we also used .hence , and in the same way as in ( * ? ? ?* lemma 4.1 ) for the acoustic wave equation , the coercivity of the calderon operator for the time - harmonic maxwell s equation as given by lemma [ lemma : coercivity ] together with the operator - valued continuous - time herglotz theorem as stated in lemma 2.3 yields coercivity of the time - dependent calderon operator .[ lemma : time - cont coercivity ] with the constant from lemma [ lemma : coercivity ] we have that }_{{\gamma}}{\textrm{d}}t \\ & \\geq \beta c_t \int_0^t e^{-2t / t } \big ( ( { \varepsilon}\mu){^{-1}}\|{\partial}_t{^{-1}}{\varphi}(\cdot , t)\|_{{\mathcal{h}_\gamma}}^2 + \|{\partial}_t{^{-1}}\psi(\cdot , t)\|_{{\mathcal{h}_\gamma}}^2 \big ) { \textrm{d}}t \end{aligned}\ ] ] for arbitrary and for all ,{\mathcal{h}_\gamma}) ] with and , and with constant . a gronwall argument then yields the following energy estimate ; see ( * ? ? 
?* lemma 4.2 ) .[ lemma : general energy - like est ] let the functions \to [ 0,\infty) ] , and , { \mathcal{h}_\gamma}) ] }_{{\gamma}}= { { \mathcal f}}(t).\ ] ] then , with , analogously to , a symmetric weak form of is obtained on using }_{{\gamma}},\ ] ] and using for the boundary term . here denotes the standard inner product .the coupled weak problem then reads : find and such that }_{{\gamma}}+ ( { j},w ) \\ & \\hphantom{({\varepsilon}{\partial}_t{e},w ) } = { { \textstyle \frac12}}(\operatorname{curl}{h } , w ) + { { \textstyle \frac12}}({h } , \operatorname{curl}w ) - { { \textstyle \frac12}}{[}\mu{^{-1}}{\varphi } , \operatorname{\gamma}_t w { ] } _ { { \gamma}}+ ( { j},w ) , \\ & \ ( \mu { \partial}_t{h},z ) = -{{\textstyle \frac12}}(\operatorname{curl}{e } , z ) - { { \textstyle \frac12}}({e } , \operatorname{curl}z ) + { { \textstyle \frac12}}{[}\operatorname{\gamma}_t { e } , \operatorname{\gamma}_t z { ] } _ { { \gamma}}\\ & \\hphantom{(\mu { \partial}_t{h},z ) } = -{{\textstyle \frac12}}(\operatorname{curl}{e } , z ) - { { \textstyle \frac12}}({e } , \operatorname{curl}z ) - { { \textstyle \frac12}}{[}\psi , \operatorname{\gamma}_t z { ] } _ { { \gamma}},\\ & \ \biggl{[}{\binom{\xi}{\eta } } , b({\partial}_t ) { { \binom{{\varphi}}{\psi}}}\biggr{]}_{{\gamma}}= { { \textstyle \frac12}}\big ( { [ } \xi,\mu{^{-1}}\operatorname{\gamma}_t { e}{]}_{{\gamma}}+ { [ } \eta,\operatorname{\gamma}_t { h}{]}_{{\gamma}}\big ) \end{aligned}\ ] ] hold for arbitrary , and . while this weak formulation is apparently non - standard for maxwell s equations , we will see that it is extremely useful , in the same way as the analogous formulation proved to be for the acoustic case in . testing with , and , in , by using we obtain }_{{\gamma}}+ ( { j},{e } ) , \\ & \ ( \mu { \partial}_t{h},{h } ) = -{{\textstyle \frac12}}(\operatorname{curl}{e } , { h } ) - { { \textstyle \frac12}}({e } , \operatorname{curl}{h } ) - { { \textstyle \frac12}}{[}\psi , \operatorname{\gamma}_t { h}{]}_{{\gamma}},\\ & \ \biggl{[}{\binom{{\varphi}}{\psi } } , b({\partial}_t ) { { \binom{{\varphi}}{\psi}}}\biggr{]}_{{\gamma}}= { { \textstyle \frac12}}\big ( { [ } { \varphi},\mu{^{-1}}\operatorname{\gamma}_t { e}{]}_{{\gamma}}+ { [ } \psi,\operatorname{\gamma}_t { h}{]}_{{\gamma}}\big ) , \end{aligned}\ ] ] and summing up the three equations yield }_{{\gamma}}= ( { j},{e } ) .\ ] ] for , the coercivity of the continuous - time calderon operator , as stated in lemmas [ lemma : time - cont coercivity ] and [ lemma : general energy - like est ] , yields that the electromagnetic energy satisfies the energy estimate ( with ) for arbitrary .for the spatial discretization we use , as an example , the central flux discontinuous galerkin ( dg ) discretization from ( see also ) in the interior and continuous linear boundary elements on the surface . we triangulate the bounded polyhedral domain by simplicial triangulations , where denotes the maximal element diameter . for our theoretical resultswe consider a quasi - uniform and contact - regular family of such triangulations with ; see e.g. for these notions .we adopt the following notation from ( * ? ? ?* section 2.3 ) : the faces of , decomposed into boundary and interior faces : . 
the normal of an interior face is denoted by .it is kept fixed and is the outward normal of one of the two neighbouring mesh elements .we denote by that neighbouring element into which is directed .the outer faces of are used as the triangulation of the boundary .the dg space of vector valued functions , which are elementwise linear in each component , is defined as the boundary element space is taken as the corresponding nodal basis functions are denoted by and , respectively . jumps and averages over faces are denoted analogously as for trace operators on , see section [ subsection : boundary integral op ] : \!]}_f = \operatorname{\gamma}_f^-w - \operatorname{\gamma}_f^+w { \qquad \hbox { and } \qquad } { \{\!\!\{w\}\!\!\}}_f = { { \textstyle \frac12}}(\operatorname{\gamma}_f^-w + \operatorname{\gamma}_f^+w),\ ] ] where is the usual trace onto the face .we often omit the subscript as it will always be clear from the context .the discrete operator with centered fluxes was presented in ( * ? ? ?* section 2.3 ) : \ ! ] } , { \{\!\!\{w_h\}\!\!\ } } { ] } _ { f } .\end{aligned}\ ] ] by the arguments of the proof of lemma 2.2 in , we obtain that the discrete curl operator satisfies the discrete version of green s formula , }_{\gamma}.\ ] ] the operator is well defined on , with the broken sobolev space which is a hilbert space with natural norm and seminorm and , respectively . using the above discrete operator ,the semidiscrete problem reads as follows : find and such that for all and , [ eq : semidiscrete problem ] }_{{\gamma}}+ ( { j},w_h ) , \\[1 mm ] \label{eq : dg - bem } & \ ( \mu { \partial}_t{h}_h , z_h ) = -{{\textstyle \frac12}}(\operatorname{curl}_h { e}_h , z_h ) - { { \textstyle \frac12}}({e}_h , \operatorname{curl}_h z_h ) - { { \textstyle \frac12}}{[}\psi_h , \operatorname{\gamma}_t z_h { ] } _ { { \gamma}},\\[2 mm ] \nonumber & \ \biggl{[}{\binom{\xi_h}{\eta_h } } , b({\partial}_t ) { \binom{{\varphi}_h}{\psi_h } } \biggr{]}_{{\gamma}}= { { \textstyle \frac12}}\big ( { [ } \xi_h,\mu{^{-1}}\operatorname{\gamma}_t { e}_h{]}_{{\gamma}}+ { [ } \eta_h,\operatorname{\gamma}_t { h}_h { ] } _ { { \gamma}}\big ) .\end{aligned}\ ] ] all expressions are to be interpreted in a piecewise sense if necessary .we collect the nodal values of the semidiscrete electric and magnetic field into the vectors , and similarly the nodal vectors of the boundary densities are denoted by and .upright boldface capitals always denote matrices of the discretization .we obtain the following coupled system of ordinary differential equations and integral equations for the nodal values : the matrix denotes the symmetric positive definite mass matrix , while the other matrices are defined as which happens to be a symmetric matrix , and }_{{\gamma } } , \qquad { { \mathbf c}}_0=\mu{^{-1}}{{\mathbf c}}_1 .\ ] ] the matrix is given by where the blocks have entries }_{{\gamma}}{\qquad \hbox { and } \qquad } { { \mathbf k}}(s)|_{kk ' } = { { \textstyle \frac12}}{[}b_{k'}^{\gamma } , k(s ) b_k^{\gamma}{]}_{{\gamma}}.\ ] ] for this matrix we have the following coercivity estimate . 
[ lemma : coercivity - discrete ] with and from lemma [ lemma : coercivity ] , the matrix satisfies for and for all , where the mass matrix , for the inner product corresponding to the norm on , is defined by .the result follows from lemma [ lemma : coercivity ] on noting that for the vectors and and the corresponding boundary functions in we have }_{{\gamma}}\ ] ] and .let us emphasize the following observation : _ the above matrix vector formulation is formally the same as the one for the acoustic wave equation in ( * ? ? ?* section 5.1 ) , with the same coercivity estimate for the boundary operator by lemmas [ lemma : coercivity - discrete ] and [ lemma : time - cont coercivity ] . as an important consequence, the stability results proven in hold for the present case as well . _the choice of a dg method in the interior and of continuous boundary elements for the spatial discretizations is not necessary for our analysis .other space discretization methods , for instance the ones going back to raviart and thomas , ndlec , and many others , detailed in the excellent survey article , or locally divergence - free methods such as , could also be used as long as they yield a matrix vector formulation of the form and a coercivity estimate as in lemma [ lemma : coercivity - discrete ] . following ( * ? ? ?* section 2.3 ) we give a short recap of convolution quadrature and introduce some notation . for more details see and .convolution quadrature ( cq ) discretizes the convolution by the discrete convolution where the weights are defined as the coefficients of in the present paper we choose which corresponds to the second - order backward difference formula . from , it is known that the method is of order two , for functions that are sufficiently smooth including their extension by to negative values of .an important property of this discretization is that it preserves the coercivity of the continuous - time convolution in the time discretization .we have the following result .[ , lemma 2.3 ] [ lemma : cq coercivity ] in the setting of lemma [ lemma : herglotz ] condition ( i ) implies , for small enough and with , for any function with finite support .combining lemma [ lemma : coercivity ] and lemma [ lemma : cq coercivity ] yields the following coercivity property of the cq time - discretization of the time - dependent calderon operator considered in lemma [ lemma : time - cont coercivity ] .[ lemma : time - discrete coercivity ] in the situation of lemma [ lemma : time - cont coercivity ] , we have for and that }_{{\gamma}}\\ & \ \geq \beta c_t { { \mathit{\delta}t}}\sum_{n=0}^n e^{-2t_n / t } \big ( ( { \varepsilon}\mu){^{-1}}\|({\partial}_t^{{\mathit{\delta}t}}){^{-1}}{\varphi}(\cdot , t_n)\|_{{\mathcal{h}_\gamma}}^2 + \|({\partial}_t^{{\mathit{\delta}t}}){^{-1}}\psi(\cdot , t_n)\|_{{\mathcal{h}_\gamma}}^2 \big ) \end{aligned}\ ] ] for all sequences and in , with for a ( which depends only on and tends to 1 as goes to zero ) . similarly to , we use the leapfrog or strmer verlet scheme ( see , e.g. 
, ) in the interior : where the last substep of the previous step and the first substep can be combined to a step from to when no output at is needed : this is coupled with convolution quadrature on the boundary^{n+1/2 } = { \binom{{{\mathbf c}}_0^t \bar{\boldsymbol{e}}^{n+1/2}}{{{\mathbf c}}_1^t { \boldsymbol{h}}^{n+1/2 } } } + { \binom{0}{- \alpha { { \mathit{\delta}t}}^2 \mu{^{-1}}{{\mathbf c}}_1^t { { \mathbf m}}{^{-1}}{{\mathbf c}}_1 \dot{{\boldsymbol \psi}}^{n+1/2 } } } , \ ] ] where the operation is averaging in time and .the second term on the right - hand side is a stabilizing term , with a parameter .the role of this extra term becomes clear from the proof of the stability result for the acoustic wave equation ( * ? ? ?* lemma 8.1 ) , which applies to the maxwell case as well . like for the acoustic case ,the choice yields a stable scheme under the cfl condition .up to a factor 2 this is the cfl condition for the leapfrog scheme for the equation with natural boundary conditions . in each time step , a linear system with the matrix needs to be solved for and , where and by the coercivity lemma [ lemma : coercivity ] , is positive definite .moreover , is symmetric positive definite .using that the obtained discrete system is of the same form and with the same coercivity property as for the acoustic wave equation , the stability results carry over from section 6 of .only minor technical modifications are needed , such as using the appropriate energy and norms .the only point where the analysis of the semidiscrete problem deviates from the acoustic case is the consistency error estimates , which require special care .we consider a system with additional inhomogeneities \to l^2(\omega)^3 ] , which will later be obtained as the system of error equations with the defects of an interpolation of the exact solution .the coupled system }_{{\gamma}}+ ( j_h , w_h ) , \\[1 mm ] \label{eq : dg - bem - stab } & \ ( \mu { \partial}_t{h}_h , z_h ) = -{{\textstyle \frac12}}(\operatorname{curl}_h { e}_h , z_h ) - { { \textstyle \frac12}}({e}_h , \operatorname{curl}_h z_h ) - { { \textstyle \frac12}}{[}\psi_h , \operatorname{\gamma}_t z_h { ] } _ { { \gamma}}+(g_h , w_h ) , \\[2 mm ] \nonumber & \ \biggl{[}{\binom{\xi_h}{\eta_h } } , b({\partial}_t ) { \binom{{\varphi}_h}{\psi_h } } \biggr{]}_{{\gamma}}= { { \textstyle \frac12}}\big ( { [ } \xi_h,\mu{^{-1}}\operatorname{\gamma}_t { e}_h{]}_{{\gamma}}+ { [ } \eta_h,\operatorname{\gamma}_t { h}_h { ] } _ { { \gamma}}\big ) \\ & \ \hphantom{\biggl{[}{\binom{\xi_h}{\eta_h } } , b({\partial}_t ) { \binom{{\varphi}_h}{\psi_h } } \biggr{]}_{{\gamma}}= } + ( \xi_h,\rho_h)_{\gamma}+(\eta_h,\sigma_h)_{\gamma}\end{aligned}\ ] ] where denotes the inner product on , has the matrix - vector formulation where is the boundary mass matrix with entries . 
the solution of this system can be bounded in terms of by the stability results proven in lemma 6.16.3 in .we immediately translate the stability lemmas of into the functional analytic setting .the energy estimate of lemma 6.1 of becomes the following .[ lemma : semidiscr stability - energy estimate ] the semidiscrete energy satisfies the bound , for , provided that and .the estimates for the boundary functions of lemma 6.3 of now translate into the following .[ lemma : semidiscr stability - boundary functions ] for , the boundary functions are bounded as provided that , , and .we consider the projection of functions on and to continuous piecewise linear finite element functions by interpolation : let denote the operator of piecewise linear ( with respect to the triangulation ) and continuous interpolation in , and let denote the operator of piecewise linear continuous interpolation on . since the normal vector is constant on every face of , we then have which implies that maps into .moreover , this yields the very useful relation as is seen by noting that it is because of that we work in the following with interpolation operators rather than orthogonal projections .we recall the standard results for the interpolation errors .[ lemma : interpolation error ] there exists a constant , independent of , such that for all , the following interpolation error estimate is a standard result for boundary element approximations , see .[ lemma : interpolation error - boundary ] there exists a constant , independent of , such that for all , we remark that for piecewise smooth boundaries just the piecewise regularity is needed . for the boundary functions we have the following interpolation error bounds .[ lemma : interpolation error - calderon ] there exists a constant , increasing at most polynomially in and independent of , such that for any for all , { \mathcal{h}_\gamma}\cap h^{3/2}({\gamma})^3)$ ] with and .the proof is similar to that of lemma 7.2 in : first we bound the action of the blocks of , then we use plancherel s formula to bound the action of the convolution operator . by the boundedness of the boundary integral operators lemma [ lemma : int operator s bounds ] , for we obtain then , lemma [ lemma : interpolation error - boundary ] yields a similar estimate holds for the blocks , andso we obtain using plancherel s formula and causality then yields the stated bound . we study the defects ( or consistency errors ) obtained on inserting the interpolated solution into the semidiscrete variational formulation .these defects are defined by }_{{\gamma}}\\ & ( d_h^h , z_h ) = ( \mu { \partial}_t{i_h}{h},z_h ) - { { \textstyle \frac12}}(\operatorname{curl}_h { i_h}{e},z_h ) - { { \textstyle \frac12 } } ( { i_h}{e},\operatorname{curl}_h z_h ) \\ & \hphantom{(d_h^h , z_h ) = } + { { \textstyle \frac12}}{[}\pi_h \psi,\operatorname{\gamma}_t z_h{]}_{{\gamma}}\\ & ( \xi_h , d_h^\psi)_{\gamma}+ ( \eta_h , d_h^{\varphi})_{{\gamma}}= \biggl{[}{\binom{\xi_h}{\eta_h } } , b({\partial}_t ) { \binom{\pi_h{\varphi}}{\pi_h\psi } } \biggr{]}_{{\gamma}}\\ & \hphantom{(\xi_h , d_h^\psi)_{\gamma}+ ( \eta_h , d_h^{\varphi})_{{\gamma}}= } - { { \textstyle \frac12}}\big ( { [ } \xi_h,\mu{^{-1}}\operatorname{\gamma}_t { i_h}{e}{]}_{{\gamma}}+ { [ } \eta_h,\operatorname{\gamma}_t { i_h}{h}{]}_{{\gamma}}\big)\end{aligned}\ ] ] for all and . 
these defects are bounded as follows .[ lemma : defect bounds ] if the solution of maxwell s equations is sufficiently smooth , then the defects satisfy the first - order bounds , for , the constant grows only polynomially with .we begin with the defect .we have for , }_{{\gamma}},\end{aligned}\ ] ] where we used the discrete green s formula .since , the boundary term vanishes by the relation .we further note that and because is a continuous function and so has no jumps on inner faces .the exact solution satisfies maxwell s equation and hence subtracting the two equations therefore yields with the interpolation error bounds of lemma [ lemma : interpolation error ] the right - hand terms are estimated as times the norm of . we thus conclude that similarly we estimate the defect for the magnetic equation . for the boundary defects we have for all , using the boundary equation , }_{{\gamma}}\ ] ] where are given by which is bounded by in the norm by lemmas [ lemma : interpolation error - calderon ] and [ lemma : interpolation error ] .it then follows that also the defects , which are interpolated by , are bounded in the same way , using lemma [ lemma : hdiv - dual ] : if we differentiate twice with respect to time before estimating and commute interpolations and time derivatives , this yields the stated bound for the boundary defects . [ theorem : semidiscrete error bound ] assume that the initial data and have their support in .let the initial values of the semidiscrete problem be chosen as the interpolations of the initial values : and . if the solution of maxwell s equations is sufficiently smooth , then the error of the dg bem semidiscretization satisfies , for , the first - order error bound where the constant grows at most polynomially in .we insert the interpolated solution into the semidiscrete variational formulation and apply the stability lemmas , lemmas [ lemma : semidiscr stability - energy estimate ] and [ lemma : semidiscr stability - boundary functions ] , to the error equations that have the defects in the role of the inhomogeneities .we then use the defect bounds of lemma [ lemma : defect bounds ] to arrive at a first - order error bound for .the interpolation error estimates of lemma [ lemma : interpolation error ] and [ lemma : interpolation error - calderon ] together with the triangle inequality then complete the proof .similarly to the semidiscrete case , the stability analysis of the full discretization only depends on the formulation of the fully discrete problem and , which again coincides with the acoustic case in form and relevant properties .hence , the analysis of the full discretization can be carried over directly from ( * ? ? ?* section 8) .the original results are again translated into the current functional analytic setting .we show stability results under the cfl condition the fully discrete electric and magnetic field satisfies the inequality below .[ lemma : fully discr stability - interior ] under the cfl condition and for a stabilization parameter , the discrete energy is bounded , at , by where is independent of , and . using , the above result also yields a bound on .for the boundary densities we have the following fully discrete estimate .[ lemma : fully discr stability - boundary ] under the cfl condition and for a stabilization parameter , the discrete boundary functions are bounded , at , by where is independent of , and . 
the following convergence estimate for the full discretization is then shown in the same way as in the proof of theorem 9.1 of , using the consistency errors of the spatial discretization given in section [ subsection : consistency ] , using known error bounds of the leapfrog scheme and convolution quadratures , and applying lemmas [ lemma : fully discr stability - interior ] and [ lemma : fully discr stability - boundary ] .[ theorem : fully discrete error bound ] assume that the initial conditions and , and the inhomogeneity have their supports in . let the initial values of the semidiscrete problem be chosen as the interpolations of the initial values : and .if the solution of maxwell s equations is sufficiently smooth , and under the cfl condition and with a stabilization parameter , the error of the dg bem and leapfrog convolution quadrature discretization and is bounded , at , by where the constant grows at most polynomially in .we have given a stability and error analysis of semi- and full discretizations of maxwell s equations in an interior non - convex domain coupled with time - domain boundary integral equations for transparent boundary conditions . a key result for the analysis of this problem is the coercivity estimate of the calderon operator proved in lemma [ lemma : coercivity ] , which is preserved under trace - space conforming boundary discretizations and translates from the laplace domain to the continuous - time domain by the operator - valued version of herglotz theorem as restated in lemma [ lemma : herglotz ] and to the convolution quadrature time discretization by lemma [ lemma : time - discrete coercivity ] .another important aspect is that the symmetrized weak formulation , as first proposed in for the acoustic wave equation , is preserved under space discretization to yield a finite - dimensional system of the form . in this paperthe space discretization is exemplified by a dg discretization in the interior and continuous boundary elements .other interior discretizations that are commonly used for maxwell s equations , such as edge elements , could equally be used as long as they lead to a matrix formulation .similarly , other trace - space conforming boundary elements such as raviart thomas elements could be used , since they also preserve the coercivity of lemma [ lemma : coercivity - discrete ] .once the matrix formulation has the structure with the coercivity of lemma [ lemma : coercivity - discrete ] , the analysis in shows stability of the spatial semi - discretization and of the full discretization with a stabilized leapfrog method .together with estimates for the consistency error , which we derive in sections [ subsection : interpolation error bounds ] and [ subsection : consistency ] in an exemplary way for the particular space discretization considered , we then obtain error bounds for the semi - discretization . 
moreover , using known error bounds for the consistency error of the leapfrog method and of the convolution quadrature time discretization on the boundary , we finally obtain error bounds of the full discretization .we claim no originality on the constituents of the discretization of maxwell s equation in space and time , in the interior and on the boundary .the novelty of this paper is the stability and error analysis of their coupling .it is remarkable that , in spite of the fundamentally different functional - analytic framework , the stability analysis extends directly from the acoustic to the maxwell case .this becomes possible because we show here that the coercivity and the matrix formulation of the discretization are of the same type for both maxwell and the acoustic case . on the other hand ,the analysis of the consistency errors depends strongly on the functional - analytic setting and is different for discretizations of maxwell s equations and the acoustic wave equation .we thank two anonymous referees for their helpful comments .we are grateful for the helpful discussions on spatial discretizations with ralf hiptmair ( eth zrich ) during a birs workshop ( 16w5071 ) in banff .this work was supported by the deutsche forschungsgemeinschaft ( dfg ) through sfb 1173 .approximation of integral equations by finite elements .error analysis . in r.dautray and j .- l .lions , editors , _ mathematical analysis and numerical methods for science and technology _ , volume 4 , chapter xiii , pages 359370 .springer , 1990 .
|
maxwell s equations are considered with transparent boundary conditions , for initial conditions and inhomogeneity having support in a bounded , not necessarily convex three - dimensional domain or in a collection of such domains . the numerical method only involves the interior domain and its boundary . the transparent boundary conditions are imposed via a time - dependent boundary integral operator that is shown to satisfy a coercivity property . the stability of the numerical method relies on this coercivity and on an anti - symmetric structure of the discretized equations that is inherited from a weak first - order formulation of the continuous equations . the method proposed here uses a discontinuous galerkin method and the leapfrog scheme in the interior and is coupled to boundary elements and convolution quadrature on the boundary . the method is explicit in the interior and implicit on the boundary . stability and convergence of the spatial semidiscretization are proven , and with a computationally simple stabilization term , this is also shown for the full discretization .
|
the minority game ( mg ) is a game of repeated choice of ( odd ) non - communicating agents where the agents ending up in the minority receive positive pay - offs .this is a variant of the el farol bar problem .existence of a phase transition has already been reported in the mg .one of the aims of the study of mg is to bring the populations in either choices sufficiently close to and in doing so to keep the time of convergence and fluctuations low .as is readily seen , a random choice would make the time of convergence virtually zero , but the fluctuation would be of order .this makes the system socially very inefficient .this is because , although agents could be in the minority at the same time , due to fluctuation this number may be much smaller than that , thereby making the system socially inefficient due to poor resource utilization . to increase social efficiency ,this fluctuation has to be minimised .several adaptive strategies were studied to improve upon this situation . however , regarding the fluctuation , even for the most complex strategies only the pre - factor could be made smaller and the fluctuation still scales with .a large misuse of resource is therefore likely in this situation .there have been some recent attempts to study a generalization of the mg problem in the kolkata paise restaurant ( kpr ) problem , where there are agents and choices ( restaurants ) .keeping a finite comfort level ( equal in most cases ) for the restaurants ( say , one agent per choice ) one arrives at similar problem of finding the most efficient strategy ( non - dictated ) for resource allocation .it was shown that a simple crowd - avoiding , stochastic , non - dictated strategy led to the most efficient resource allocation for the problem in a very short convergence time ( practically independent of the system size ) .stochastic strategies have been used also for the mg problem .recently , dhar _ et al . 
_ used the kpr strategy in the mg problem to find that the fluctuation could be made arbitrarily small in a convergence time of the order .this , of course , is the best possible strategy so far in terms of resource allocation in the mg .however , this situation differs a bit from the classic mg problem .the most important difference is that , being a crowd - avoiding strategy , it requires that the agents know not only whether they were in the minority or majority the previous evening , but also the number of excess people in the majority if he or she were there .the present study intends to determine if knowing the number of excess is indeed necessary in reaching a state with minimum fluctuations .we show that even in a more realistic situation where exact knowledge of the number of excess people is not available to the agents , it is possible to reach a state with minimum fluctuations .a natural relaxation of the complete knowledge of the excess crowd would be to make a guess about the actual value .we show analytically for an idealized situation , where all agents make the same guess , that if the guess value is smaller than twice the real value , the minimum fluctuation state ( or the absorbing state ) can be reached .however , for any larger error , a residual fluctuation proportional to the excess error made , stays in the system in the steady state .further , in a generalized and more realistic version , where the guesses differ and are random for each agent in each step ( annealed disorder ) , we analytically and numerically find similar behavior in the fluctuations in the system only in terms of the average guess value this time .we show that depending upon the guessing power of the agents , there is a continuous phase transition between full ( minimum fluctuation or absorbing phase ) and partial ( system with residual fluctuation or active phase ) efficiencies for resource allocation in this problem .we also analyze the case where the knowledge of the excess population is completely unknown to the agents .it is shown both analytically and numerically that zero fluctuation can be reached by following an annealing schedule . to further incorporate the effects of real situations, we consider the fraction of agents who decide completely randomly . when the number of random traders is 1 , although the fluctuation can be at its minimum , he or she is always in the majority .however , for more than one random traders , this situation can be avoided , maintaining a minimal fluctuation .the rest of the paper is organized as follows : in sec .[ sec:2 ] , we discuss different strategies followed by the agents to minimize the fluctuation .we show that by tuning a suitable parameter , an active - absorbing phase transition takes place .we elaborate on other strategies where the agent s knowledge of the excess population is only partial or is completely absent . in sec .iii we discuss the effects of random traders . finally , in sec .iv , we give concluding remarks .in the strategy followed in ref . , the agents in the majority shift with the probability and the agents in the minority remain with their choice ( ) , where the total population ( ) is divided as and with , where and are the populations in the two choices at time . 
in this strategy , the agents can reach the zero fluctuation limit in time .although the resource utiliszation is maximum in that case , its distribution is highly asymmetric in the sense that after the dynamics stops in the limit , the agents in the minority ( majority ) stay there for ever ; hence , only one group always benefits . apart from this , in this strategy the knowledge of is made available to all the agents , which is not a general practice in mg . in the following subsections ,we introduce several variants of the above mentioned strategy .primarily we intend to find if it is possible to avoid the freezing of the dynamics while keeping the fluctuation as low as possible .we then discuss if it is possible to achieve such states without knowing the magnitude of .let us assume that the agents know the value of .we then intend to find a strategy where the dynamics of the game does not stop and the fluctuation can be made as small as required . to do that we propose the following strategy : the shifting probability of the agents in majority is [ where and is a constant ] and that from the minority remains zero .we will see that a steady state can be reached in this model where the fluctuation can be arbitrarily small .now , to understand when such a steady state value is possible , note that when the transfer of the crowd from majority to minority is twice the difference of the crowd , the minority then will become the majority and will have the same amount of excess people as in the initial state .quantitatively , if the initial populations were and roughly , and if is shifted from majority to minority , then the situation would continue to repeat itself , as the transfer probability solely depends on the excess crowd . of course , this is possible only when .more formally , if the steady - state value of is , then the steady state condition would require this , on simplification yields either or for different values of and .the solid lines shows the analytical results for the pure and annealed disordered cases .both match very well with the simulation points .inset shows the log - log plot near the critical point for the disordered case , confirming .all simulation data are shown for .,width=321 ] clearly , for , would be the valid solution , since the above equation predicts a negative value for , which indicated no steady - state saturation until it decreases to zero .therefore , one can predict a phase transition of the active - absorbing type by tuning the value of .when , the system will reach the minimum fluctuation state where and the dynamics stops ( the dynamics will differ qualitatively for and ; see the appendix ) . for , however , a residual fluctuation will remain in the system , keeping it active .physically , this would mean that until the guessed value of the crowd is not too incorrect ( twice as large ) , the agents can still find the minimum fluctuation state .however , when the guess becomes too wild , a fluctuation remains in the system .therefore , it is now possible to define an order parameter for the problem as and its saturation values behaves as when and when , for , with giving the order parameter exponent for this continuous transition . in fig .[ op ] we plot the results of numerical simulation ( ) as well as the analytical expression for the order parameter . we find satisfactory agreement . 
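The behaviour just described is easy to reproduce numerically. Since the formulas above are not reproduced here, the switching rule in the following Monte Carlo sketch is written in an assumed concrete form: each agent in the majority switches with probability p = D'(t)/(N/2 + D'(t)), where D(t) is the true excess, D'(t) = q*D(t) is the guessed excess and q is the constant introduced above, while minority agents never switch. With this assumed form, balancing the expected transfer against twice the excess gives a nontrivial steady state D_s/N = (q-2)/(2q) for q > 2 and the absorbing, minimum-fluctuation state for q <= 2, consistent with the continuous transition described above.

```python
# Monte Carlo sketch of the stochastic-strategy minority game (assumed rule:
# majority agents switch with probability p = q*Delta/(N/2 + q*Delta)).
import numpy as np

def run_game(N=10001, q=1.5, T=3000, seed=1):
    rng = np.random.default_rng(seed)
    n_A = int(0.7 * N)                       # start with a large imbalance
    excess = np.empty(T)
    for t in range(T):
        M = max(n_A, N - n_A)                # majority population
        delta = M - N / 2.0                  # true excess (>= 1/2 for odd N)
        guessed = q * delta                  # guessed excess
        p = min(1.0, guessed / (N / 2.0 + guessed))
        movers = rng.binomial(M, p)          # majority agents that switch
        n_A += movers if n_A < N - n_A else -movers
        excess[t] = abs(n_A - N / 2.0)
    return excess

for q in (0.5, 1.5, 2.5, 3.5):
    x = run_game(q=q)
    print(f"q = {q:3.1f}:  <Delta>/N over last 500 steps = {x[-500:].mean() / 10001:.4f}"
          f"   (fixed point max((q-2)/(2q), 0) = {max((q - 2) / (2 * q), 0):.4f})")
```

With these (assumed) ingredients the run shows a residual excess close to the deterministic fixed point for q > 2 and an essentially vanishing excess for q <= 2, i.e. the active-absorbing transition at q = 2.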
in this simple situation ,it is possible to calculate the time dependent behavior of the order parameter both at and above the critical point .suppose , at a given instant , the populations in the two choices and are and respectively with . by definition , the amount of the population to be shifted from to using this strategy would be when is small compared to , i.e. , when is close to or for large time if . clearly , and , giving ( assuming population inversion ; see the appendix for a general treatment ) therefore , the time evolution of the order parameter reads neglecting the last term and integrating , one gets .\ ] ] the above equation signifies an exponential decay of the order parameter in the subcritical region ( ) .it also specifies a time scale as diverging as the critical point is approached .these behaviors are confirmed by numerical simulations . in eq .( [ trns ] ) , the approximation was made to keep the leading order term only .if , however , the first correction term is kept , the expression becomes the time evolution equation of the order parameter then becomes now , if we consider the dynamics exactly at the critical point , i.e. , , then the first term in the right - hand - side is zero . the last term can be neglected .therefore , the order parameter becomes in the long time limit , giving .therefore we see that under this simple approximation , the usual mean field active - absorbing transition exponents are obtained .these are confirmed using numerical simulations as well .a general solution of the dynamics ( valid for all values at all times ) is shown in the appendix , which in the limiting cases yield the results mentioned above . in the above mentioned strategy one can find a steady state for any value of the fluctuation .however , unlike the common practice in mg , the value of is exactly known to all the agents . in the disordered case for different values .the estimate is .inset shows the uncollapsed data .the straight line at the critical point gives .simulation data is shown for .,width=321 ] in the disordered case for different system sizes ( ) at .the estimate is .inset shows the uncollapsed data .the linear part in the inset confirms .,width=302 ] here we consider the case where each agent can only make a guess about the value of .therefore for the -th agent where is a uniformly distributed random number in the range $ ] and is an annealed variable ( i.e. , changes at each time step randomly ) with .clearly , where ( ) denotes average of over randomness . with the analogy of the previous case , we expect a transition from zero to finite activity near . in fig . [ op ]we plot the steady state values of the order parameter against .clearly , the active - absorbing transition takes place at ( note that this is the same point where the transition for the pure case took place ) . note that irrespective of whether population inversions occurs , one can generally write with this leads to first consider the steady state where ( above the critical point ) .after simplification , the above equation reduces to .\ ] ] one can numerically compare the solution of this equation with the simulations , which agrees well ( see fig . [ op ] ) .a small expansion of the above equation yields , giving . 
also , for small the dynamical equation would yield ( at or above the critical point ) now , by neglecting the square term ( in presence of the linear term above the critical point ) one would obtain and by keeping the square term ( in absence of the linear term at the critical point ) one would obtain .thus all the exponents of the pure case are recovered for annealed disorder .one can numerically verify the above exponent values using the following scaling form of the order parameter ( writing ) where is the space dimension , which we take to be 4 in this mean field limit . at the critical point, the order parameter follows a power - law relaxation ( see inset of fig .[ nu - rand ] ) with .( in eq .( [ eq : dis ] ) ) the average pay - offs of the agents are plotted for different values having different ranges as indicated .the monotonic decay with increasing clearly indicates that agents with higher are more likely to be in the majority ( see last para of sec .iic).,width=321 ] in fig .[ nu - rand ] we plot against . by knowing , can be tuned to get data collapse .the estimate of is .similarly , in fig .[ z - rand ] we plot against . again by tuning , data collapse can be found .the estimate for comes out to be . thus the analytical estimates are verified and the scaling relation is satisfied .we have also tried the case of quenched disorder ( s are fixed for the agents for all time ) . above the critical point ,when population inversion occurs , this would imply that agents with higher would change side with higher probability and are more likely to be in the majority . a plot of average pay - off for agents having different values verify this statement by showing the monotonic decay of the pay - off with increasing ( fig .[ quench_epsi ] ) . in mg ,information about the excess crowd is generally unknown to the agents . in the strategies mentioned above, the excess population is known to the agents either exactly or approximately . herewe consider the case , where knowledge of is not known to the agents .the agents follow a simple time evolution function for the time variation of the excess population . to begin with, we consider an annealing schedule where is taken close to . in figs .[ quench ] we plot the time variation of the actual value of excess population as well as .we see that decreases very quickly .furthermore , it appears that there is a simple relation between and such that this implies that in time .therefore , in this strategy , even if the actual value of the excess crowd is not supplied to the agents , they can find a state where the fluctuation practically vanishes .are plotted for different functional forms of .left : in log - linear scale the excess population are plotted for exponential decay .right : for power law ( decay , with different values of ) . for the simulations.,width=321 ] we have also checked whether this is true for some other functions as well .we have taken functional forms such as for all cases mentioned above we plot ( see figs . 
[ quench ] ) and to check if they are equal .we conclude that this relation is not dependent on the functional form of ( as long as it is not too fast ; see discussions below ) .the response of the order parameter to the assumed trial function can be somewhat understood as follows : the dynamical equation for would be where .considering the case ( when population inversion takes place ) one would arrive at a general solution of the above equation will be of the form where is a constant .this continuum limit is valid only for the functions that do not decay too fast . considering , one can show that the dominant term of the solution will be of the form as seen numerically .however , evaluation of eq .( [ sol ] ) for for gives therefore , ( as in eq .( [ eq : rel ] ) ) is only true when , which is the measure of slowness required in to reduce . in the case where is large or decays too fast ( than the limit mentioned above ) , one would simply have ( following eq .( [ qdyn ] ) ) , would in fact saturate to a finite value and no population inversion will take place .this is also seen numerically .according to the strategies mentioned above , if the excess population is known to the agents ( which in this case is in fact a measure of the stock s price ) the fluctuations can have any small value .however , in real markets , there are agents who follow certain strategies depending on the market signal ( chartists ) and also some agents may decide completely randomly ( random traders ) . herewe intend to investigate the effect of having random traders in the market , while the rest of the populations follow the strategies mentioned above .when a single random trader is present , even when , that trader would choose randomly irrespective of whether he or she is in the minority or majority .this will create a crossover between majority and minority with an average time of two time steps . in this way , the asymmetry in the resource distribution can be avoided . however , that single agent will always be in the majority . as is discussed in sec .ii , when all agents follow the strategy described by eq .( [ strategy ] ) , after some initial dynamics , implying that they do not change side at all .however , with a single random trader , in an average time period 2 , as he or she selects alternatively between the two choices , the rest of the population is divided equally between the two choices and it is the random trader who creates the majority by always making himself or herself a loser . this situation can be avoided when there is more than one random trader . in that case, it is not possible always to have them in the majority .there will be configurations where some of the random traders can be in the minority , making their time period of wining to be 2 ( due to the symmetry of the two choices ) .the absorbing state ( for ) , therefore , never appears with random traders , though the fluctuation becomes non - zero for more than one random traders .however , if the number of random traders ( , where is the fraction of random traders ) is increased , the fluctuation in the excess population will grow eventually to (see fig .[ noise ] ) .therefore , the most effective strategy could be the one in which ( i ) the fluctuation is minimum and ( ii ) the average time period of gain will be 2 for all the agents , irrespective of the fact whether they are random traders or chartists .these two are satisfied when the number of random traders is 2 . 
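A short follow-up to the sketch given earlier (same assumed switching rule): a number R of random traders pick a side uniformly at random at every step, while the remaining agents follow the strategy with an exactly known excess (q = 1), for which the pure system settles at near-zero excess and essentially stops. The run below only illustrates the fluctuation level; it does not track which individual agents end up in the majority.

```python
# Follow-up sketch: R random traders plus N - R strategic agents (assumed rule
# p = Delta/(N/2 + Delta) applied only to strategic agents on the majority side).
import numpy as np

def run_with_random_traders(N=10001, R=2, q=1.0, T=3000, seed=3):
    rng = np.random.default_rng(seed)
    strategic_A = int(0.6 * (N - R))             # strategic agents on side A
    excess = np.empty(T)
    for t in range(T):
        random_A = rng.binomial(R, 0.5)          # random traders choosing A
        n_A = strategic_A + random_A
        M = max(n_A, N - n_A)
        delta = M - N / 2.0
        p = min(1.0, q * delta / (N / 2.0 + q * delta))
        # only strategic agents on the majority side may switch
        strategic_majority = strategic_A if n_A > N - n_A else (N - R) - strategic_A
        movers = rng.binomial(strategic_majority, p)
        strategic_A += movers if n_A < N - n_A else -movers
        excess[t] = abs(n_A - N / 2.0)
    return excess

for R in (0, 1, 2, 100, 2000):
    x = run_with_random_traders(R=R)
    print(f"R = {R:4d} random traders:  <Delta> over last 500 steps = {x[-500:].mean():8.2f}")
```

In such a run a handful of random traders keeps the dynamics alive at a near-minimal level of fluctuation, while a large number of them drives the fluctuation back up, in line with the discussion above.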
furthermore , if one incorporates the random traders in the strategy described in sec .iib , even the knowledge of the excess population will not be exactly needed to reach a state of very small fluctuations .are plotted against for different fractions of the random traders . for the simulations.,width=340 ]in the stochastic strategy minority game , a very efficient strategy is described by eq .( [ strategy ] ) , where the agents very quickly ( in time ) get divided almost equally ( and ) between the two choices . this strategy guarantees that a single cheater , who does not follow this strategy , will always be a loser .however , the dynamics in the system stops very quickly , making the resource distribution highly asymmetric ( people in the majority stays there for ever ) thereby making this strategy socially unacceptable .we present here modifications in the above mentioned strategy to avoid this absorbing state .the presence of a single random trader ( who picks between the two choices completely randomly ) will avoid this absorbing state and the asymmetric distribution .however , this will always make that random trader a loser .but the presence of more than one random trader will avoid that situation too , making the average time period of switching between majority and minority for all the traders ( irrespective of whether they are chatists or random traders ) to be 2 .we show ( in sec .[ sec:2 ] ) that by varying a parameter , the agents can achieve any value of the fluctuation .this is an active - absorbing type phase transition and we also find the critical exponents analytically ( , , ) , which are well supported by numerical simulations .then we go on to reduce the knowledge of the agents about the excess populations , which was exactly known to the agents in the earlier strategies .we assume that the agents can only make a guess about the excess population .we show using numerical simulations and also using approximate analytical calculations that when value of the average guess of the agents are not too bad ( less than twice the actual value ) they can still reach the state of zero fluctuation .once again the fluctuation values increase continuously , when the guess becomes worse .this is again an active - absorbing type phase transition with similar values for critical indices .next we consider the case when the knowledge of excess crowd is completely absent for the agents . in this casethe agents assume a time variation ( annealing schedule , see sec .iic ) for the excess population and they do not look at the actual value .it is shown for several choices of the functional form that the actual value of the excess population essentially follows the assumed form and thereby goes to zero in finite time ( depending on the assumed functional form ) .again we show both analytically and numerically that for slow enough annealing schedule the attainment of zero fluctuation in time can be guaranteed .finally , as mentioned before , we also consider the effect of having random traders in the market who decide absolutely randomly in sec .we have presented several stochastic strategies for minority game .we have shown that utilizing the existence of a continuous transition ( from to ) the agents can find the state of zero or arbitrarily small amount of fluctuation ( thereby making the system socially efficient ) in time with or without knowing the excess population . 
presence or evolution of a few ( minimum 2 ) random traders in the population will not only help the entire population to get out of the absorbing state ( for ) , but also will have a minimal fluctuation or maximum social efficiency . stated briefly , as a population of agents try to evolve a strategy such that each of them individually belong to minority among the two choices and , of them will follow the stochastic strategy given by eq .( [ strategy ] ) , while the rest 2 will follow a purely random strategy .this will ensure , as shown here , that fluctuation ( ) will have arbitrarily small value ( giving maximum social efficiency ) achieved in time and the absorbing state will never appear while everyone will have an average period of 2 in the minority / majority .decrease in the number of random traders will either enforce the absorbing state or indefinite stay in the majority for the random trader , while increase in the number beyond 2 will increase , eventually converging to its value .as shown , even precise value of in eq .( [ strategy ] ) in unnecessary and appropriate guesses ( see sec .iib ) or appropriate annealing of ( see sec .iic ) can achieve almost the same results .we are extremely thankful for the suggestions made by an anonymous referee for a generalized version of the eq .( [ diff1 ] ) and its solution , as outlined in the appendix and also for pointing out eqs .( [ ann ] ) and ( [ satu ] ) . 0.5 cmin the calculations leading to eqs .( [ diffo ] ) and ( [ diff1 ] ) we have assumed that the two choices alternatively becomes minority and majority ( population inversion happens ) . that of course is our concern while studying the active phase ( ) .however , as is clearly seen from the strategy , this population inversion only happens when .so , a more general solution of the dynamics valid for all can be done as follows : consider the auxiliary variable . putting this in eq .( [ diff1 ] ) and neglecting term , one arrives at the recursion relation clearly , which leads to for the special case of one gets back eq .( [ eq : delta ] ) .also , one can always define the time scale for the above equation as near the critical point , the already obtained power - law divergence ( ) is recovered .e. moro , in _ advances in condensed matter and statistical mechanics _ ,edited by e. korutcheva and r. cuerno ( nova science publishers , new york , 2004 ) , arxiv:0402651v1 ; a. de martino and m. marsili , j. phys .a , * 39 * , r465 ( 2006 ) .
|
we show that in a variant of the minority game problem , the agents can reach a state of maximum social efficiency , where the fluctuation between the two choices is minimum , by following a simple stochastic strategy . by imagining a social scenario where the agents can only guess about the number of excess people in the majority , we show that as long as the guessed value is sufficiently close to the reality , the system can reach a state of full efficiency or minimum fluctuation . a continuous transition to less efficient condition is observed when the guessed value becomes worse . hence , people can optimize their guess for excess population to optimize the period of being in the majority state . we also consider the situation where a finite fraction of agents always decide completely randomly ( random trader ) as opposed to the rest of the population who follow a certain strategy ( chartist ) . for a single random trader the system becomes fully efficient with majority - minority crossover occurring every 2 days on average . for just two random traders , all the agents have equal gain with arbitrarily small fluctuations .
|
in this article we continue our mathematical analysis of a model for the degradation of host tissue by extracellular bacteria .this model was introduced in and consists of a reaction - diffusion equation coupled with an ordinary differential equation . in proved the existence of solutions to the time - dependent problem and the convergence to a limit problem in the ` large - degradation - rate ' limit .here we turn to the question of existence and behaviour of travelling - wave solutions .there is an increasing interest in models which support the understanding of bacterial infections , and we refer to and for further background and references .this paper is in effect concerned with the specific issue of how rapidly a bacterial infection in , for example , a burn wound may invade the underlying tissue ( with dire potential consequences for the patient , notably mortality due to septicemia ) . for the type of model with which we are concerned here, the relevant invasion speed is expected to be governed by the corresponding travelling - wave problem .typically , the smallest possible wave - speed is realized by a large class of solutions .accordingly determining the minimal speed of travelling waves becomes a central question ( with obvious implications for the amount of time available for medical treatment , for instance ) . in a dimensionless form , the model in given by the equations where describes the concentration of degradative enzymes , the volume fraction of healthy tissue and are positive constants .the key parameter here is the _ degradation - rate _ , which is very large in practice .equations , are considered in a time - space cylinder , with the upper half space of as the spatial domain . finally , the system is complemented by initial conditions for and , a neumann condition on the lateral boundary for and a decay condition for and in the far field . in gave a precise mathematical formulation and proved the existence and uniqueness of solutions to a slightly more general system , including the possibility of a diffusion term in .one noteworthy aspect of , is the convergence of solutions to the solution of a stefan - like free boundary problem as the degradation rate tends to infinity .this _ large - degradation - rate limit _ was identified by a formal asymptotic analysis in and was proved in .reaction - diffusion systems of the general form where is vector - valued , is a given nonlinearity and is a diagonal positive - semi - definite matrix , appear in a lot of different scientific areas .one - dimensional travelling waves are solutions on of the special form where is called the speed and the profile of this travelling wave .the question of existence and behaviour of travelling waves is of enormous interest in many of the applications and pertinent results for the vector case remain restricted to rather specific systems .the system , has , as we will see in remark [ rem : mono ] , one stable equilibrium in and one unstable equilibrium in and therefore belongs to the class of monostable systems .scalar monostable equations where are well - studied , especially the famous fisher equation , that is with , introduced in .the rigorous analysis of equations of this type also started in the 1930s with the work of kolmogorov , petrovskii and piskunov . 
under an extra assumption on proved the existence of travelling waves for all speeds , where can be found explicitly in terms of by a linearisation about ( corresponding to a degenerate node in the travelling - wave phase plane ) .moreover , they proved that the solutions to with initial data decaying sufficiently fast propagate with speed . for more general monostable propagation speed was found to be either equal to or larger than and therefore one distinguishes between a _ linear _ or _ nonlinear _ selection of the propagation speed ( the terminology _ pulled _ and _ pushed _ fronts , respectively , having an equivalent meaning ) .aronson and weinberger ( see also hadeler and rothe and for other pioneering work on such matters and for a recent review ) proved that for general monostable , in both the linear and the nonlinear selection case , the propagation speed for solutions with initial data decaying sufficiently fast is given by the minimal speed of travelling waves .they showed that monotonic travelling waves exist for all speeds and none for , where ; solutions of with sufficiently rapidly decaying initial data propagate with speed .for the nonlinear selection cases , , rothe and roquejoffre proved that , if the initial data decays sufficiently rapidly , the large - time solutions to not only propagate with speed but also approach the profile of a travelling wave with minimal speed .whereas the connection between large - time behaviour and existence of travelling waves for monostable equations is satisfactorily resolved , the distinction between nonlinear or linear selection is still a challenging question , see for example , , .only a few rigorous results for general monostable nonlinearities are available . in a variational characterisation of travelling waves and a concrete criterion for whether linear or nonlinear selection occurs for a given equation was derived .even fewer general analytical results are available for monostable _ systems_. an existence theorem for travelling waves was proved in for monotone monostable systems , which are systems of the form in which the jacobian matrix has only nonnegative off - diagonal elements .results on the existence of travelling waves and the long time behaviour of for monostable gradient systems , that is for and nonlinearities with , were given in . to the best of our knowledgethere are no more general results on the question of whether linear or nonlinear selection will occur . 
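For the scalar Fisher-KPP case just recalled, the linearly selected value can be computed directly: with f(u) = u(1-u), a front decaying like exp(-lambda*xi) at the leading edge must satisfy lambda^2 - c*lambda + f'(0) = 0, so c(lambda) = lambda + f'(0)/lambda and the minimal admissible speed is c* = 2*sqrt(f'(0)) = 2. The following minimal sketch (scalar toy case only, not the two-component system studied below) makes this explicit.

```python
# Linear spreading speed for the scalar Fisher-KPP equation u_t = u_xx + u(1-u):
# the dispersion relation of the linearization about u = 0 is c(lam) = lam + f'(0)/lam,
# and the linearly selected speed is its minimum over lam > 0.
import numpy as np

fprime0 = 1.0                                   # f'(0) for f(u) = u(1 - u)
lam = np.linspace(0.05, 5.0, 2000)
c_of_lam = lam + fprime0 / lam                  # speed admitting decay rate lam
i = np.argmin(c_of_lam)
print(f"numerical  c* = {c_of_lam[i]:.4f} at lambda* = {lam[i]:.4f}")
print(f"analytical c* = {2 * np.sqrt(fprime0):.4f} at lambda* = {np.sqrt(fprime0):.4f}")
```

For c below this minimum the roots of the quadratic become complex, so the front would oscillate about zero and no monotone profile can exist at that speed; this is the scalar prototype of the decay-rate analysis carried out for the system below.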
in this articlewe prove that for all there exist monotone travelling waves for the system , for all speeds and no speeds .the minimal speed in general depends on the parameters .we prove that for all , where is explicitly given in terms of , the minimal speed is larger than the value obtained from a linearisation at the unstable equilibrium .surprisingly enough , for the minimal speed of travelling waves is identical to the minimal speed of travelling waves for the stefan - like limit problem that was formulated in .our analysis is based on two main facts .one is the monotone structure of the system , which makes possible the use of comparison principles for the parabolic problem .the second is a remarkable reduction in order of the travelling - wave equations that occurs when is given by the minimal speed of travelling waves of the large - degradation limit .we obtain the existence of travelling waves with speed for , .finally , a comparison argument allows us to prove that nonlinear selection occurs for sufficiently large values of , the minimal speed in this regime being identical to .this paper is organised as follows . in section [ sec - exist ]we prove the existence of travelling waves for the reaction - diffusion system , . in section [ sec - limit ]we recall the formulation of the large degradation limit problem and consider travelling - wave solutions for this problem . in section [ sec - speed ]we return to the reaction - diffusion system and investigate the selection of the minimal speed .section [ sec - conv ] deals with the convergence of travelling waves for the reaction - diffusion system to travelling waves of the stefan - like free boundary problem as the reaction rate approaches infinity .finally , we give some conclusions and remarks on open problems in section [ sec - concl ] .in this section we prove the existence of monotone travelling waves .first we fix some notation and make some remarks .[ rem : mono ] a system of the general form is called _ monotone _if the off - diagonal elements of the jacobian matrix are non - negative and _ strictly monotone _ if they are positive ( see ) .a system of the form with two stationary points is called _ monostable _ if one of the stationary points is stable and the other is unstable .+ the system , is of the form with it follows that , is a monotone but not strictly monotone system .further , and are the only stationary points of , and we obtain with one positive and one negative eigenvalue , and with two negative eigenvalues .therefore , is a monostable monotone system .as remarked before , for a one - dimensional travelling wave of , with speed , the functions are solutions of , on .therefore have to satisfy the _ travelling - wave equations _ we restrict our investigations to functions taking values only in ] then we observe that moreover and by the maximum principle , which we apply once for the scalar equation and once for the scalar equation , we deduce that ,\label{eq : dom -- u - eps}\end{gathered}\ ] ] thus which proves , .the estimate follows from and .+ in addition , by comparing on with we obtain which yields . by similar argumentsone proves that corresponding properties hold for travelling - wave solutions of , .let be a monotone travelling wave for , .define to be the positive solutions of then hold . 
we prove now the first statement in theorem [ the - exist ] .[ prop : ex - tw - c0 ] for each , where there exists a monotone travelling wave for , .moreover , the value is finite .assume first that and fix an arbitrary and a subsequence such that by lemma [ lem : volpert ] there exists a sequence of monotone travelling waves for , , with if and such that since travelling waves are invariant under space shifts , we can assume without loss of generality that by , the monotonicity of and , there exists such that for all , ),\label{eq : conv - u - i}\\ w_{{\varepsilon}_i } & \to & w\quad\text { pointwise almost everywhere in } { \ensuremath{\mathbb{r}}}\label{eq : conv - w - i}\end{aligned}\ ] ] hold for a subsequence .multiplying , by a function and integrating we deduce due to , we can pass to the limit in these equations and get follows that solve , and , by a bootstrapping argument , that are smooth .moreover , and yield that and by lemma [ lem - tw - basic ] we obtain that and that is a monotone travelling wave .since was arbitrary , the first part of the proposition is proved . to prove that , let be two smooth strictly monotonically decreasing functions with then there exists a constant such that for all and all we can estimate the same ratios for all and by a constant depending only on )} ] . by lemma [ lem : volpert ] and the definition of in it follows that in particular, is finite .we now complete the proof of theorem [ the - exist ] .[ prop : c0=cmin ] there is no monotone travelling wave for , with speed .assume that with , satisfies , .we then obtain that from and we deduce that a differentiation in yields that which gives , together with , that with as in . using ,in we obtain that where is independent of . from lemma [ lem : volpert ] and the definition of , and we deduce that which is a contradiction to our assumption .the reaction - diffusion system , converges to a stefan - like free boundary problem as tends to infinity , see . for solutions of this limit problem holds and the spatial domain splits in a region where , and a region where and . if we denote their common boundary at time by then satisfy and a continuity and jump condition on , \,&=\ , 0,\label{st - gamma2}\\ - [ \nabla{u}_\infty(t,.)\cdot\nu(t,.)]\,&=\,\gamma [ { w}_\infty(t,.)]\vec{v}(t,.)\cdot\nu(t,.)\label{st - gamma } , \end{aligned}\ ] ] where and are the velocity and the unit normal of the free boundary , pointing into , and $ ] denotes the jump across the free boundary from the region to .+ as for the reaction - diffusion system , travelling - wave solutions are given by a speed and _ profile functions _ , we are interested in monotone travelling waves which connect unity and zero , due to the shift invariance of travelling waves , the condition and the monotonicity of we can assume that from - we then obtain and the continuity and jump condition [ prop : ex - tw - lim ] for all , where there exists a unique solution of - .this solution is given by where for , there does not exist any solution .we deduce that and , hold if and only if holds , with as in .similarly and are satisfied if and only if satisfies with . since we obtain that the jump condition is satisfied if and only if satisfies . 
finally , by and , the solution is nonnegative if and only if .in this section we prove that for sufficiently large values a nonlinear selection principle determines the minimal speed of travelling waves .the threshold is obtained explicitly in terms of the given constant .first , we have to analyse the behaviour of travelling - wave solutions at infinity .the linear selection principle for the minimal speed is based on the analysis of the linearised system at the unstable stationary point of , . in the next lemmawe show that solutions have to decay exponentially to zero as tends to infinity .[ lem - decay - infty+ ] let be a monotone travelling wave .then where is a negative root of the cubic equation with the definitions the system , is equivalent to and the linearized system at is given by the eigenvalues of are the solutions of . since is negative at and becomes positive as there exists a positive eigenvalue .the other two solutions of satisfy the equation which has , depending on the values of , either two negative roots , one repeated negative root or two complex - conjugate roots with negative real part . since converges to zero as , the curve is for sufficiently large values of contained in the stable manifold and converges exponentially to zero ; see for example section 2.7 . by theorem xiii.4.5there exists a solution of the linearized system and with as , where is the real part of an eigenvalue of .one checks that has no eigenvector with a component equal to zero ; therefore holds as .let us show that in fact is real .assume that with .then describes , as , a spiral around the origin contained in the plane spanned by the real and imaginary part of an eigenvector of with eigenvalue .but then , since the difference between and decays exponentially faster than , has to take values outside the set , which is a contradiction to the assumption that is a monotone travelling wave .this shows that is real .thus follows from and .the equation connects the speed and the decay rate at of a travelling wave .we now further analyze this relation .[ lem : curve - lambda - c ] for all and all there exists a unique value such that satisfies for .the function attains a positive minimum at a unique value and are given by moreover with from we obtain that is given by that is strictly positive and that tends to infinity as or .therefore the positive minimum is attained at a value and holds . by thisimplies moreover , by , one checks that , is equivalent to , .in particular , has only one zero and we deduce that for and for .[ cor - c(lambda ) ] the minimal speed of travelling waves satisfies the estimate according to lemma [ lem - decay - infty+ ] for a monotone travelling wave with speed , a negative root of exists . on the other hand the minimal value of such that has a negative solution .we now state the result corresponding to lemma [ lem - decay - infty+ ] if we consider approaching .[ lem - decay - infty- ] let be a monotone travelling wave and let .then where is given by for the limits in , are either equal to or equal to . with the system ,is equivalent to and the linearized system at is given by the matrix has the positive eigenvalues and one negative eigenvalue since as we deduce that is for sufficiently small contained in the unstable manifold of at . 
using theorem xiii.4.5 we obtain the existence of a solution of the linearized system and a with as , where or .one checks that holds for .thus , if then the trajectory of as has to be tangential to the eigenspace of corresponding to the eigenvalue .on the other hand this eigenspace is spanned by a vector , where for .therefore leaves the region , which is a contradiction to .this proves that for and implies that the trajectory of for is tangential to the eigenspace corresponding to the eigenvalue , which is spanned by the vector .therefore and follows from . bywe deduce and thus holds .[ cor : decay- ] let .if we consider two different monotone travelling waves for , , the one with the lower speed converges faster to unity as approaches . for a monotone travelling wave , by lemma [ lem - decay - infty- ] the convergence of to unity as is exponential with rate for and rate for , where is given by .since we deduce that both convergence rates are decreasing with .it was observed in that the travelling - wave equations , are for the speed defined in remarkable in being equivalent to a system of two first - order equations .[ lem : red ] let .then is a monotone travelling wave if and only if satisfy multiplying by and using we see that and are equivalent for .next we obtain from that and we see that implies .conversely , from and we deduce that whose solutions are the condition that converge exponentially to unity as implies that and therefore holds .this reduction allows us to prove the existence of a travelling wave with speed for , by a phase - plane analysis for , .[ prop - red ] for all there exists a monotone travelling wave for , . as tends to infinity , decay exponentially to zero with decay rate the system ,has the two stationary points and .we define the set and observe that is an invariant region for , ( see figure [ fig : red ] ) . + nextwe consider the linearisation at of , which is given by for .the eigenvalues of this linear system are , as defined in , and an eigenvector with eigenvalue is given by one checks that holds for the components of and deduces that the eigenspace corresponding to intersects the set defined in ., the incoming arc the stable manifold at and the arrows the direction field.,height=245 ] by the stable manifold theorem , see for example , theorem 2.7 , there exists a trajectory that , taking as its parameter variable , converges to as and starts at in , since the stable manifold is tangential to the eigenspace corresponding to the eigenvalue .following this trajectory back with decreasing we can not leave , since otherwise the trajectory would stay in as increases and thus could not reach any point in at . in , with decreasing , the trajectory has to be monotone in both components and therefore has to approach the stationary point .thus the trajectory connects to and satisfies , . by lemma [ lem : red ]this shows that is a monotone travelling wave for , . moreover are , for sufficiently large , in the stable manifold of , and we deduce that they converge exponentially fast to zero , with decay rate given by .the existence of a travelling wave with speed implies immediately the following estimate .the minimal speed of travelling waves satisfies as we will see , for sufficiently large values of the minimal speed is identical to the value . 
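The connecting-orbit construction used in this proof can be illustrated on the scalar Fisher travelling-wave equation (the reduced first-order system of this paper is not reproduced here, so this is only a stand-in for the phase-plane technique): one leaves the saddle point (1,0) along its unstable eigendirection and follows the orbit; for c >= 2 it stays in the invariant region where the profile is nonnegative and connects monotonically to the origin, while for c < 2 the origin is a spiral point and the orbit leaves that region.

```python
# Hedged phase-plane illustration on the scalar Fisher travelling-wave equation
#     U'' + c U' + U(1 - U) = 0,   U(-inf) = 1,  U(+inf) = 0,
# shooting along the unstable eigendirection of the equilibrium (U, V) = (1, 0).
import numpy as np
from scipy.integrate import solve_ivp

def shoot(c, eps=1e-6, xi_max=60.0):
    mu_plus = (-c + np.sqrt(c * c + 4.0)) / 2.0     # unstable eigenvalue at (1, 0)
    y0 = [1.0 - eps, -eps * mu_plus]                # leave (1,0) along (-1, -mu_plus)
    rhs = lambda xi, y: [y[1], -c * y[1] - y[0] * (1.0 - y[0])]
    sol = solve_ivp(rhs, (0.0, xi_max), y0, rtol=1e-9, atol=1e-12)
    return sol.y[0]                                 # U along the orbit

for c in (1.5, 2.0, 2.5):
    U = shoot(c)
    verdict = "monotone front" if U.min() >= -1e-6 else "U goes negative: no monotone front"
    print(f"c = {c:3.1f}:  min U along the orbit = {U.min():+.4f}   ({verdict})")
```

This mirrors, in the simplest possible setting, the invariant-region and stable/unstable-manifold argument used in the proposition above.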
in this sectionwe further investigate the deacy of travelling waves to zero as tends to .with this aim we analyze the functions defined in lemma [ lem : curve - lambda - c ] : is the speed of a travelling wave with decay rate at .[ rem - decay - red ] corresponding to the reduction , of the travelling - wave system , , we find that the equation for the possible decay rates factorises for the speed .the value is for all a negative root of with .the decay rate of is given by the other negative root of this equation , which is the value defined in .[ lem : decay ] for consider the values as defined in lemma [ lem : curve - lambda - c ] and the values as given in , .then there exists a unique , which is is given explicitely by , such that moreover , for hold , see figure [ fig : dec ] .we have proved in lemma [ lem : curve - lambda - c ] that the functions attain their minimum at a unique value , for convenience we recall that by proposition [ prop - red ] , remark [ rem - decay - red ] and the definition of in lemma [ lem : curve - lambda - c ] holds .next we see from that is strictly decreasing in and that since by there is a unique value such that holds . by , and we deduce that and by , that by , and this yields finally one derives from , and that is given by to prove the inequalities , we first observe that implies that for there is a between and such that . by ,we conclude that and deduce that lies between and . since is monotonically decreasing in we obtain from that which proves , .the conclusions of lemma [ lem : decay ] are illustrated in figure [ fig : dec ] .+ for different values of and the decay rates of the travelling waves with speed as found in proposition [ prop - red].,height=283 ] by the previous results we can now compare the decay of two different travelling waves as , similarly as in corollary [ cor : decay- ] for the convergence to unity as . [ lem : comp - decay ] let and assume that is a monotone travelling wave with speed . then , as , decay slower to zero than does the travelling wave obtained in proposition [ prop - red ] . bythe function is monotonically decreasing for . from and we therefore deduce that since we have assumed that this implies that the travelling wave decays with a rate . on the other hand , by proposition [ prop - red ] ,the rate of the exponential deacy of as is given by .in this section we prove that for , where is given in lemma [ lem : decay ] , the minimal speed of travelling waves for the reaction - diffusion system , is identical to the minimal speed of travelling waves for the stefan - like limit problem - .in particular there is nonlinear selection of the minimal speed for .this result follows from a comparison principle which is formulated in the next theorem . in general , invariant region argumentsdo not apply for elliptic systems , but here a shift parameter is chosen to play the role of the time parameter in the proof of comparison principles for parabolic systems .[ the - sel ] let and be two monotone travelling waves and assume that .let denote the decay rates at of and respectively .then holds and , as tends to infinity , can not converge exponentially slower to zero than do .in particular , a travelling wave has minimal speed if and only if its decay rate at is the minimal one among all travelling waves .let us assume that holds . 
since the travelling wave converges by corollary [ cor : decay- ] faster to as than does .in particular since we have assumed that , the decay of at is slower than the decay of and we deduce from lemma [ lem - decay - infty+ ] that this implies that there is a shift , such that for holds and such that there exists a with from equations , we obtain for assume that , which gives and . then yields which is a contradiction .+ if then implies and yields which is also a contradiction .thus we deduce that .the final conclusion of the theorem follows now by a contradiction argument .the comparison principle theorem [ the - sel ] and lemma [ lem : comp - decay ] imply that for no monotone travelling wave exists with lower speed than .[ cor : minspeed ] for the minimal speed of travelling waves is given by assume there is a monotone travelling wave with and let be the monotone travelling wave with speed which we have found in proposition [ prop - red ] .by lemma [ lem : comp - decay ] the functions decay slower to zero at than do , which is a contradiction to theorem [ the - sel ] .therefore holds and , recalling , the conclusion follows .we complete our investigations by proving that the travelling waves of , are , for large values of , close to a travelling waves of the limit problem - .[ prop : speed - lim - k ] let , , be a sequence of monotone travelling waves for , with speed and then , as tends to infinity , where is the unique travelling - wave solution of the limit problem with speed and .we recall that , that and that by , holds uniformly in , where was defined in .this yields the existence of a subsequence and monotone decreasing functions with and such that integrating equation over , we obtain and by fatou s lemma we see that which implies that by , we obtain that and for . fromwe deduce that the equations and , yield the estimate and we obtain that and as . by and, holds and we further deduce from that which gives , substracting , for .further we find from that holds . by fatou s lemmathis implies that and as . by and since the limits as of , we deduce from ( * ? ? ?* lemma 2.4 ) .+ the equations , yield that satisfy for all and , according to - , we can pass in this equation to the limit .this yields since are monotone decreasing from unity to zero and satisfy , we deduce that there is a such that therefore yields that and that the jump condition has to be satisfied .this shows that is a travelling wave with speed of the limit system , .we conclude our investigations with a brief summary and discussion of our results .our results on the existence of travelling waves for the system , and the selection mechanism of the minimal speed are summarized in figure [ fig : dec2 ] . as in the preceding figurewe have plotted the functions which give for a travelling wave with speed the possible rates of the exponential decay to zero at . by the circles , squares and diamonds in figure [ fig : dec2 ]we have indicated the decay rates which in fact are realized by a travelling wave : presuming that there is linear selection for travelling waves exists for all speeds . by theorem [ the - sel ] the decay rates of these travelling waves corresponds to values on the increasing branch of .this behaviour changes for : the decay rate of is on the decreasing branch of the solution curve . by corollary [ cor : minspeed ] is the minimal speed and the decay rates of travelling waves with larger speeds are on the increasing branch . 
+ to give an explanation of what happens in the nonlinear selection regime the speed falls below the minimal speed we consider the linearization of , at , which was given in .for all speeds in a neighbourhood of the linearized system has two negative eigenvalues ; the stable manifold for , at is two - dimensional .one checks that the eigenspace corresponding to a negative eigenvalue intersects with the set . a monotone travelling wave exists for and approaches tangentially to the eigenspace corresponding to the larger negative eigenvalue ( ` slow decay ' ) , see figure [ fig : dec2 ] .we expect also for an orbit connecting with .such an orbit will also converge to tangentially to the eigenspace corresponding to the larger negative eigenvalue but comes from the ` wrong ' side , taking negative values for . for the threshold there still exists a monotone travelling wave ; this travelling wave approaches tangentially to the eigenspace of the smaller negative eigenvalue ( ` fast decay ' ) .travelling waves often determine the long - time behaviour of solutions for the initial - value problem for , .typically , for solutions with sufficiently fast decaying initial data , the propagation speed of pertubations from the unstable equilibrium is given by the minimal speed of travelling waves .the proof of such a result , as well as the uniqueness of travelling waves , for the system , is not in the scope of the present article .nevertheless our result that the travelling wave with minimal speed has the fastest decay supports that conjecture . for the reaction - diffusion system , in arbitrary space - dimension analysis yields a family of supersolutions : consider for a travelling wave with speed and an arbitrary real number .the functions defined by satisfy since is negative is a supersolution . in order to construct a subsolution onehas to control the dimension - depending correction term in . in view of the applications ,the robustness of the wave - speed to changes in the parameter values is a valuable feature : see the explicit formula for the minimal speed and proposition [ prop : speed - lim - k ] . for the mathematical analysis of reaction - diffusion _ systems _ and the selection of the minimal speed the model that we have derived is a good paradigm .we prove that nonlinear selection occurs and determine explicitly the minimal speed .one crucial ingredient is the exact first integral obtained in lemma [ lem : red ] , which is a special property of the system , .other results , in particular the comparison principle theorem [ the - sel ] and the observation that the fastest decay is realized by a travelling wave with minimal speed , can be extended to general monotone systems .d. g. aronson and h. f. weinberger .nonlinear diffusion in population genetics , combustion , and nerve pulse propagation . in _ partial differential equations and related topics ( program , tulane univ ., new orleans , la . , 1974 ) _ , pages 549 .lecture notes in math . , vol . 446 .springer , berlin , 1975 .m. lucia , c. b. muratov , and m. novaga .linear vs. nonlinear selection for the propagation speed of the solutions of scalar reaction - diffusion equations invading an unstable equilibrium ., 57:616636 , 2004 .
|
we study travelling - wave solutions for a reaction - diffusion system arising as a model for host - tissue degradation by bacteria . this system consists of a parabolic equation coupled with an ordinary differential equation . for large values of the ` degradation - rate parameter ' solutions are well approximated by solutions of a stefan - like free boundary problem , for which travelling - wave solutions can be found explicitly . our aim is to prove the existence of travelling waves for all sufficiently large wave - speeds for the original reaction - diffusion system and to determine the minimal speed . we prove that for all sufficiently large degradation rates the minimal speed is identical to the minimal speed of the limit problem . in particular , in this parameter range , _ nonlinear _ selection of the minimal speed occurs .
|
there has been considerable recent activity in the area of inference under shape constraints , that is , inference about a ( say ) function under the constraint that satisfies certain qualitative properties , such as monotonicity or convexity on certain subsets of its domain .this approach is appealing for two main reasons : first , such shape constraints are sometimes direct consequences of the problem under investigation ( see , e.g. , hampel , , or wang et al . , ) , or they are at least plausible in many problems .it is then desirable that the result of the inference reflect this fact .there is also the hope that imposing these constraints will improve the quality of the resulting estimator in some sense .the second reason is that alternative nonparametric estimators such as , for example , kernel estimators , typically require the choice of a tuning parameter such as a bandwidth .a good choice for such a tuning parameter is usually far from trivial and injects a certain amount of subjectivity into the estimator .in contrast , inference under shape constraints often results in an explicit solution that does not depend on a tuning parameter . in the context of density estimation , grenander ( ) derived the nonparametric maximum likelihood estimator of a density function that is nonincreasing on a half - line .this estimator is given explicitly by the left derivative of the least concave majorant of the empirical distribution function .however , this result does not carry over to the problem of estimating a unimodal density with unknown mode , as then the nonparametric mle does not exist ; see , for example , birg ( ) .even if the mode is known , the estimator suffers from inconsistency near the mode , the so - called spiking problem ; see , for example , woodroofe and sun ( ) .these results are unfortunate since the constraint of unimodality is cited as a reasonable assumption in many problems. it was argued in walther ( ) that log - concave densities are an attractive and natural alternative choice to the class of unimodal densities : the class of log - concave densities is a subset of the class of the unimodal densities , but it contains most of the commonly used parametric distributions and is thus a rich and useful nonparametric model .moreover , it was shown in walther ( ) that the nonparametric mle of a univariate log - concave density exists and can be computed with readily available algorithms . due to these attractive properties , there has been considerable recent research activity about the statistical properties of the mle , computational aspects , applications in modeling and inference , as well as about the multivariate case . 
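Since the Grenander estimator mentioned above has such a simple explicit form, a small sketch may make it concrete: it is the vector of slopes of the least concave majorant of the empirical distribution function, which can be obtained as the upper convex hull of the ECDF points. The code below is only an illustration, not taken from the article; the function and variable names are ours, and it assumes distinct observations from a nonincreasing density on the half-line.

```python
import numpy as np

def grenander(x):
    """Grenander estimator of a nonincreasing density on [0, inf).

    Returns the knots t_0 < ... < t_m of the least concave majorant (LCM)
    of the empirical c.d.f. and the estimated density value on each
    interval (t_{k-1}, t_k], i.e. the left derivative of the LCM.
    Assumes the sample values are distinct.
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    # ECDF points, with the origin (0, 0) prepended
    pts = np.column_stack([np.concatenate(([0.0], x)),
                           np.arange(n + 1) / n])
    # build the upper (concave) hull with a monotone-chain style scan
    hull = [0, 1]
    for i in range(2, n + 1):
        hull.append(i)
        while len(hull) >= 3:
            a, b, c = pts[hull[-3]], pts[hull[-2]], pts[hull[-1]]
            # drop b if it lies on or below the chord from a to c
            if (b[1] - a[1]) * (c[0] - a[0]) <= (c[1] - a[1]) * (b[0] - a[0]):
                hull.pop(-2)
            else:
                break
    knots = pts[hull, 0]
    cdf = pts[hull, 1]
    density = np.diff(cdf) / np.diff(knots)   # slopes = estimated density values
    return knots, density

# toy usage with exponential data, which has a nonincreasing density
rng = np.random.default_rng(0)
knots, dens = grenander(rng.exponential(size=200))
```

The returned slopes are automatically nonincreasing, which is the monotonicity constraint built into the estimator.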
as an example, figure [ fig0 ] shows a scatterplot of measurements on 569 individuals from the wisconsin breast cancer data set ; see section [ applications ] for a more detailed description .the data were clustered using a two - component normal mixture model fitted with the em - algorithm ; see , for example , fraley and raftery ( ) .the contour lines of the fitted normal components are shown in the left plot , while the right plot shows the contour lines that obtain when the normal mle is replaced by the log - concave mle in the em algorithm .the log - concave mle automatically adapts to the multivariate skewness of the data and results in a superior clustering : each observation is either a benign or a malignant instance .these labels were not used for the fitting but can be employed to assess the quality of the clustering .the em algorithm with the log - concave mle resulted in 121 misclassified instances versus 144 for the gaussian mle .this article gives an overview of recent results about inference and modeling with the log - concave mle .section [ basics ] gives some basic properties and applications of log - concave distributions .section [ statprop ] addresses the mle and its statistical properties .computational aspects are surveyed in section [ computation ] , while section [ multivariate ] describes recent advances in the multivariate setting .section [ applications ] reviews applications of the log - concave mle for various modeling and inference problems .section [ outlook ] lists some open problems for future work .a function on is log - concave if it is of the form for some concave function .a prime example is the normal density , where is a quadratic in .further , most common univariate parametric densities are log - concave , such as the normal family , all gamma densities with shape parameter , all weibull densities with exponent , all beta densities with both parameters , the generalized pareto and the logistic density ; see , for example , marshall and olkin ( ) .log - concave functions have a number of properties that are desirable for modeling : marginal distributions , convolutions and product measures of log - concave distributions are again log - concave ; see , for example , dharmadhikari and joag - dev ( ) .notably , the first two properties are not true for the class of unimodal densities .log - concave distributions may be skewed , and this flexibility is relevant in a number of applications ; see , for example , section [ applications ] . on the other hand , log - concave distributions necessarily have subexponential tails and nondecreasing hazard rates ; see , for example , karlin ( ) and barlow and proschan ( ) .there are several alternative characterizations and designations for the class of univariate log - concave distributions : ibragimov ( ) proved that these are precisely the distributions whose convolution with a unimodal distribution is always unimodal ; thus , log - concave distributions are sometimes referred to as strongly unimodal .log - concave densities are also precisely the polya frequency functions of order 2 , as well as precisely those densities for which the location family has monotone likelihood ratio in ; see karlin ( ) .log - concave distribution models have been found useful in economics ( see , e.g. , an , , ; bagnoli and bergstrom , and caplin and nalebuff , ) , in reliability theory ( see , e.g. , barlow and proschan , ) and in sampling and nonparametric bayesian analysis ( see , e.g. 
, gilks and wild , ; dellaportas and smith , and brooks , ) .recent advances in inference have led to fruitful applications of log - concave distributions in other areas such as clustering , some of which will be discussed in section [ applications ] .if are i.i.d . observations from a univariate log - concave density ( [ logconcave ] ) , then the nonparametric mle exists , is unique , and is of the form , where is continuous and piecewise linear on ] ; see walther ( ) , rufibach ( ) or pal , woodroofe and meyer ( ) .an example is plotted in figure [ fig1 ] .consistency of with respect to the hellinger metric was established in pal , woodroofe and meyer ( ) , while dmbgen and rufibach ( ) provide results on the uniform consistency on compact subsets of the interior of the support : if belongs to a hlder class with exponent ] .further , under some regularity conditions , the c.d.f . of is asymptotically equivalent to the empirical c.d.f . : if then is of order uniformly over compact subsets of the interior of the support .moreover , on the set of knots of .the resulting uniform -consistency of outperforms , for example , c.d.f.s of kernel estimators using a nonnegative kernel with optimally chosen bandwidth .while empirical evidence suggests that performs well over the whole line , establishing the corresponding theoretical results is still an open problem .balabdaoui , rufibach and wellner ( )derive the pointwise limiting distributions of , , and likewise for and , where is the smallest integer such that .they show that these limiting distributions depend on the `` lower invelope '' of an integrated brownian motion process minus a drift term that depends on .maximizing the log - likelihood function under the constraint is equivalent to maximizing over the set of all concave functions ; see silverman ( ) .due to the piecewise linear form of the solution , one can write this as a finite - dimensional optimization problem as follows : for the ordered data write and denote the slope between and by , .then the optimization problem is to maximize under the constraint that the vector belongs to the cone . 
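The displayed criterion is not reproduced here symbol for symbol, but a standard way to write it in this literature is the "adjusted" log-likelihood: the average of a candidate log-density phi over the data minus the integral of exp(phi), with phi piecewise linear between the ordered observations, so that the maximiser automatically integrates to one. The sketch below is only our reconstruction of that standard form; the closed-form segment integrals follow from the piecewise-exponential shape.

```python
import numpy as np

def logcon_objective(phi, x):
    """Adjusted log-likelihood for a candidate log-density.

    phi : values of the candidate concave log-density at the ordered,
          distinct sample points x[0] < ... < x[n-1]; the log-density is
          taken piecewise linear between them.
    Returns mean(phi) minus the integral of exp(phi) over [x[0], x[-1]].
    """
    x = np.asarray(x, float)
    phi = np.asarray(phi, float)
    dx, dphi = np.diff(x), np.diff(phi)
    # exact integral of exp(linear) on each segment, with the flat-segment limit
    with np.errstate(divide="ignore", invalid="ignore"):
        seg = dx * (np.exp(phi[1:]) - np.exp(phi[:-1])) / dphi
    flat = np.isclose(dphi, 0.0)
    seg[flat] = dx[flat] * np.exp(phi[:-1][flat])
    return phi.mean() - seg.sum()

def in_cone(phi, x, tol=1e-12):
    """Membership in the constraint cone: the slopes between consecutive
    points must be nonincreasing, i.e. the interpolant must be concave."""
    slopes = np.diff(phi) / np.diff(x)
    return bool(np.all(np.diff(slopes) <= tol))
```

The optimisation problem described in the text is then to maximise `logcon_objective` over vectors `phi` satisfying `in_cone`.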
is a concave function on which needs to be maximized over the convex cone .this is precisely the type of problem for which the iterative convex minorant algorithm ( icma ) was developed ; see groeneboom and wellner ( ) and jongbloed( ) .the key idea of that algorithm is to approximate the concave function locally around the current candidate solution by a quadratic form , which is then maximized by a newton procedure over the cone by using the pool - adjacent - violators algorithm .this procedure is then iterated to the final solution .walther ( ) , pal , woodroofe and meyer ( ) and rufibach ( ) successfully employ the icma for this problem .the last reference gives a very detailed description of the algorithm and also compares the icma to several other algorithms that can be used for this problem , such as an interior point method ; see , for example , terlaky and vial ( ) .the icma shows a clearly superior performance in these simulation studies .recently , dmbgen , hsler and rufibach ( ) have computed the log - concave mle with an active set algorithm ; see , for example , fletcher ( ) .active set algorithms have the attractive property that they find the solution in finitely many steps , while the iterations of the icma have to be terminated by a stopping criterion .it appears that the active set algorithm provides the most efficient method for computing the mle to date .both the icma and the active set algorithm for computing the log - concave mle are available with the package `` ` logcondens ` , '' which is accessible from `` ` cran ` .'' an alternative way to compute the mle with convex programming algorithms is described in koenker and mizera ( ) .another advantage of the log - concave mle is that sampling from is quite straightforward : first , compute the c.d.f . at the ordered sample by integrating the piecewise exponential function .next , generate a random index with .then generate $ ] and set .if set , otherwise set .then has density .the definition of a log - concave density does not depend on the underlying dimension ; see ( [ logconcave ] ) . 
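The sampling recipe just described can be sketched directly. Under its natural reading, one picks a segment between consecutive knots with probability equal to the mass it carries under the fitted piecewise-exponential density, and then inverts the exponential conditional c.d.f. on that segment with a uniform variate. The names and interface below are ours, not those of any package.

```python
import numpy as np

def sample_piecewise_logconcave(knots, phi, size, rng=None):
    """Draw samples from the density proportional to exp(phi), with phi
    piecewise linear between `knots` (for instance the ordered data of a
    fitted log-concave MLE) and zero outside [knots[0], knots[-1]]."""
    rng = np.random.default_rng() if rng is None else rng
    knots = np.asarray(knots, float)
    phi = np.asarray(phi, float)
    dx, dphi = np.diff(knots), np.diff(phi)
    slope = dphi / dx
    # unnormalised mass of each segment, exact for piecewise exponentials
    with np.errstate(divide="ignore", invalid="ignore"):
        mass = (np.exp(phi[1:]) - np.exp(phi[:-1])) / slope
    flat = np.isclose(slope, 0.0)
    mass[flat] = dx[flat] * np.exp(phi[:-1][flat])
    prob = mass / mass.sum()

    seg = rng.choice(mass.size, size=size, p=prob)   # step 1: pick a segment
    u = rng.uniform(size=size)                       # step 2: a uniform variate
    s, a, w = slope[seg], knots[seg], dx[seg]
    out = np.empty(size)
    fl = np.isclose(s, 0.0)
    out[fl] = a[fl] + u[fl] * w[fl]                  # flat segment: uniform draw
    nz = ~fl
    # invert G(t) = (exp(s (t - a)) - 1) / (exp(s w) - 1) on sloped segments
    out[nz] = a[nz] + np.log1p(u[nz] * np.expm1(s[nz] * w[nz])) / s[nz]
    return out
```

The only ingredients are the knots and the fitted log-density values at the knots, which is why sampling from the estimator is essentially free once it has been computed.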
the fact that the mle does not require the choice of a tuning parameter makes its use even more attractive in a multivariate setting , where , for example , a kernel estimator requires the difficult choice of a bandwidth matrix .the structure of the multivariate mle is analogous to the univariate case ; see , for example , cule , samworth and stewart ( ) : the support of the mle is the convex hull of the data , and there is a triangulation of this convex hull such that is linear on each simplex of the triangulation .figure [ fig2 ] depicts an example for two - dimensional data .the multivariate mle has already shown promise in a number of applications ; see section [ applications ] .the computation of the mle requires an approach that is different from the univariate setting , as the multivariate piecewise linear structure of does not allow to write this optimization problem in terms of a simple ordering of the slopes .cule , samworth and stewart ( ) show how the mle can be computed by solving a nondifferentiable convex optimization problem using shor s -algorithm ; see kappel and kuntsevich ( ) .cule , samworth and stewart ( ) report a robust and accurate performance of this algorithm , which they implemented in the ` r ` package ` logconcdead ` ; see cule , gramacy and samworth ( ) .however , the computation time increases quickly with sample size and dimension .cule , samworth and stewart ( ) report computation times of about 1 sec for observations in two dimensions , to 37 min for a sample of size in four dimensions .it is therefore desirable to develop faster algorithms for this problem .cule , samworth and stewart ( ) investigate the finite sample performance of the multivariate mle via a simulation study .they compare the mean integrated squared error of the mle with that of a kernel estimator with gaussian kernel and a bandwidth that is either chosen to minimize the mean integrated squared error ( using knowledge about the density that would not be available in practice ) or determined by an empirical bandwidth selector based on least squares cross validation .the mle outperforms both of these estimators except for small sample sizes , and the improvement can be quite dramatic . on the other hand , in view of the work of birg and massart ( ) , it seems unlikely that the mle will achieve optimal rates of convergence in dimensions , due to the richness of the class of concave functions .it would thus be helpful to have theoretical results about the performance of the multivariate mle .deriving such results is an open problem .one of the most fruitful applications of log - concave distributions has been in the area of clustering . 
a principled and successful approach to assign the observations to clustersis via the mixture model , where the mixture proportions are nonnegative and sum to unity , and the component distributions model the conditional density of the data in the cluster ; see , for example , mclachlan and peel ( ) .typically one assumes a parametric formulation for the component distributions , such as the normal model ; see , for example , fraley and raftery ( ) .then the em algorithm provides an elegant solution to fit the above mixture model and to assign the data to one of the components : the em algorithm iteratively assigns the data based on the current maximum likelihood estimates of the component distributions , and then updates those estimates based on these assignments .an important advantage of using a mixture model for clustering is that it provides not only an assignment of the data to the components , but also a measure of uncertainty for this assignment via the posterior probabilities that the observation belongs to the component : .a disadvantage of this approach is that it depends on the parametric formulation in several important ways : if the parametric model is misspecified , then the accuracy of the clustering may deteriorate and the measure of uncertainty may be considerably off .for some data , such as those in figure [ fig1 ] , no appropriate parametric model may be available .another disadvantage is that each parametric model requires a different implementation of the em algorithm based on certain theoretical derivations ; see , for example , mclachlan and krishnan ( ) .therefore , it is desirable to have an em - type clustering algorithm with nonparametric component distributions .this would allow for a universal software implementation with flexible component distributions . as was expounded in sections [ introduction ] and [ basics ], the class of log - concave distributions provides a flexible model , and , moreover , the mle exists .thus , one may attempt to mimic the em - type clustering algorithm that works so well in the parametric context .this idea was successfully carried out in chang and walther ( ) and in cule , samworth and stewart ( ) . in related work ,eilers and borgdorff ( ) use a nonparametric smoother in place of the log - concave mle in the m - step , with a penalty term that moves the estimate toward a log - concave function .chang and walther ( ) report a clear improvement compared to the parametric em algorithm when the parametric model is not correct , and a performance that is almost similar to the gaussian em algorithm in the case where the gaussian model is correct .thus , the use of log - concave component distributions provides a flexible methodology for clustering , and this flexibility does not entail any noticeable penalty in the special case where a parametric model is appropriate .chang and walther ( ) also consider a multivariate extension by modeling each component distribution with log - concave marginals and a normal copula for the dependence structure .this simple multivariate extension avoids the more challenging task of estimating a multivariate log - concave density , but it is flexible enough for many situations . 
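The E-step/M-step cycle just described does not depend on how the component densities are fitted, which is precisely what makes it possible to swap the Gaussian MLE for the log-concave MLE. The skeleton below is our own illustration of that structure; `fit_density` is a user-supplied stand-in for a (weighted) density fitter, for example a weighted log-concave MLE as provided by the R packages cited above, which we do not reproduce here.

```python
import numpy as np

def em_mixture(x, k, fit_density, n_iter=50, rng=None):
    """Generic EM for a k-component mixture with pluggable component fits.

    fit_density(x, weights) must return a callable evaluating the fitted
    density; plugging in a weighted log-concave MLE gives the log-concave
    EM discussed in the text, a Gaussian fit gives the usual parametric EM.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, float)
    n = len(x)
    # random soft initialisation of the responsibilities
    resp = rng.dirichlet(np.ones(k), size=n)
    for _ in range(n_iter):
        # M-step: mixture weights and component densities from responsibilities
        pi = resp.mean(axis=0)
        dens = [fit_density(x, resp[:, j]) for j in range(k)]
        # E-step: posterior probability that observation i belongs to component j
        lik = np.column_stack([pi[j] * dens[j](x) for j in range(k)])
        resp = lik / lik.sum(axis=1, keepdims=True)
    return pi, dens, resp
```

The final `resp` matrix contains the posterior membership probabilities, which is the measure of assignment uncertainty referred to in the text.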
figure [ fig3 ] compares the fitted components with those for the gaussian model for simulated bivariate data .the log - concave model automatically picks up the skewness in the -direction and results in a noticeably improved error rate for the clustering ; see chang and walther ( ) for details.=1 cule , samworth and stewart ( ) extend this approach by using the multivariate log - concave mle for each component .they apply the log - concave em algorithm to the wisconsin breast cancer data of street et al .( ) and obtain only 121 misclassified instances compared to 144 with the gaussian em algorithm .figure [ fig4 ] shows a scatterplot of the data and the fitted log - concave mixture .the contour plots of the fitted components from the gaussian em algorithm and the log - concave em algorithm are given in figure [ fig0 ] .c + developing principled methodology for selecting an appropriate number of components is an open problem .methodology for testing for the presence of mixing in the log - concave model is given by walther ( ) and walther ( ) , where the latter approach uses the fact that a log - concave mixture allows the representation for some and a concave function .while log - concave distributions allow for flexible modeling , the structure provided by a log - concave estimator has turned out to result in advantageous properties in a number of other inference problems : dmbgen and rufibach ( ) use the fact that the hazard rate of a log - concave density is automatically monotone and construct a simple plug - in estimator of the hazard rate which is nondecreasing .rates of convergence for automatically translate to rates for the hazard rate estimator .mller and rufibach ( ) report an improved performance for certain problems in extreme value theory when employing a log - concave estimator .dmbgen , hsler and rufibach ( ) show how the assumption of log - concavity allows the estimation of a distribution based on arbitrarily censored data using the em algorithm .they replace the log - likelihood function by a function that is linear in .this function can be interpreted as the conditional expectation of the log - likelihood function given the available data and represents the e - step in the em algorithm .the m - step consists of maximizing this function using the active set algorithm described in section [ computation ] .balabdaoui , rufibach and wellner ( ) investigate the mode of as an estimator of the mode of .estimation of the mode of a unimodal density has received considerable attention in the literature .typically , some choice of bandwidth or tuning parameter is required due to the problems with the mle of a univariate density described in section [ introduction ] .the mle of a log - concave density does not suffer from this problem and provides an estimate of the mode as a by - product .balabdaoui , rufibach and wellner ( ) establish the limiting distribution of this estimator and show that the estimator is optimal in the asymptotic minimax sense .log - concave distributions constitute a flexible nonparametric class which allows modeling and inference without a tuning parameter .the mle has favorable theoretical performance properties and can be computed with available algorithms .these advantageous properties have resulted in tangible improvements in a number of relevant problems , such as in clustering and when handling censored data . as for future work, there is clearly the potential for similar improvements in a host of other problems , such as regression ( see , e.g. 
, eilers , ) or cox regression under shape constraints on the hazard rate .further , it would be useful to study the consequences of model misspecification .for example , the mode of the log - concave mle is a useful tool for data analysis. it would thus be interesting to investigate how far off this mode can be from the population mode in the case where the population distribution is unimodal but not log - concave .the outstanding performance of the multivariate mle reported in the simulation studies in cule , samworth and stewart ( ) lends importance to a theoretical investigation of its convergence properties .finally , it would be desirable to develop faster algorithms for computing the multivariate mle . for modeling with heavier , algebraic tails , it may be of interest to consider the more general class of -concave densities ; see avriel ( ) , borell ( ) and dharmadhikari and joag - dev ( ) .first results about nonparametric estimation and computational issues in this class were obtained in koenker and mizera ( ) and seregin ( ) .thanks to kaspar rufibach and a referee for comments and several references , to richard samworth for providing figures , and to jon wellner for bringing the work of arseni seregin to my attention .work supported by nsf grant dms-05 - 05682 and nih grant 1r21ai069980 .pal , j. , woodroofe , m. and meyer , m. ( 2007 ) . estimating a polya frequency function . in _complex datasets and inverse problems : tomography , networks and beyond _( r. liu , w. straderman , c .- h .zhang , eds . ) 239249 .ims , beachwood , oh . street , w. m. , wolberg , w. h. and mangasarian , o. l. ( 1993 ) .nuclear feature extraction for breast tumor diagnosis .is & t / spie 1993 international symposium on electronic imaging : science and technology , san jose , ca , 1905 , 861870 .
|
log - concave distributions are an attractive choice for modeling and inference , for several reasons : the class of log - concave distributions contains most of the commonly used parametric distributions and thus is a rich and flexible nonparametric class of distributions . further , the mle exists and can be computed with readily available algorithms . thus , no tuning parameter , such as a bandwidth , is necessary for estimation . due to these attractive properties , there has been considerable recent research activity concerning the theory and applications of log - concave distributions . this article gives a review of these results .
|
during the past two decades , there has been much interest in various nonparametric and semi - parametric techniques to model time series data with possible nonlinearity .both estimation and specification testing problems have been systematically examined for the case where the observed time series satisfy a type of stationarity . for more details and recent developments ,see robinson , fan and gijbels , hrdle _ et al . _ , fan and yao , gao , li and racine and the references therein .as pointed out in the literature , the stationarity assumption seems too restrictive in practice .for example , when tackling economic and financial issues from a time perspective , we often deal with non - stationary components . in reality , neither prices nor exchange rates follow a stationary law over time .thus practitioners might feel more comfortable avoiding restrictions like stationarity for processes involved in economic time series models .there is much literature on parametric linear and nonlinear models of non - stationary time series , but very little work has been done in nonparametric and semi - parametric nonlinear cases . in nonparametric estimation of nonlinear regression and autoregression of non - stationary time series models and continuous - time financial models ,existing studies include phillips and park , karlsen and tjstheim , bandi and phillips , karlsen _ et al . _ , schienle and wang and phillips .recently , gao _ et al . _ considered nonparametric specification testing in both autoregression and cointegration models.=-1 consider a nonparametric regression model of the form where and are non - stationary time series , is an unknown function defined in and is a sequence of strictly stationary errors .we may apply a nonparametric method to estimate , where is a sequence of positive weight functions ; see karlsen __ and wang and phillips . as pointed out in the literature for the case where the dimension of is larger than three , may not be estimated by with reasonable accuracy due to `` the curse of dimensionality '' .the curse of dimensionality problem has been clearly illustrated in several books , such as silverman , hastie and tibshirani , green and silverman , fan and gijbels , hrdle _ et al . _ , fan and yao and gao .there are several ways to circumvent the curse of dimensionality .perhaps one of the most commonly used methods is semi - parametric modelling , which is taken to mean partially linear modelling in this context . in this paper , we propose using a partially linear model of the form where is an unknown -dimensional vector ; is some continuous function ; is a sequence of either stationary or non - stationary regressors , as assumed in a1 below ; is a null recurrent markov process ( see section [ s2 ] below for detail ) ; and is an error process . as discussed in section [ s3.2 ] below , can be relaxed to be either stationary and heteroscedastic or non - stationary and heteroscedastic .an advantage of the partially linear approach is that any existing information concerning possible linearity of some of the components can be taken into account in such models . were among the first to study this kind of partially linear model .it has been studied extensively in both econometrics and statistics literature . with respect to development in the field of semi - parametric timeseries modelling , various estimation and testing issues have been discussed for the case where both and are strictly stationary ( see , e.g. 
, hrdle _ et al ._ and gao ) since the publication of robinson . for the case where is a sequence of either fixed designs or strictly stationary regressors butthere is some type of unit root structure in , existing studies , such as juhl and xiao , have discussed estimation and testing problems . to the best of our knowledge ,the case where either is a sequence of non - stationary regressors or both and are non - stationary has not been discussed in the literature .this paper considers the following two cases : ( a ) where is a sequence of strictly stationary regressors and is a sequence of non - stationary regressors ; and ( b ) where both and are non - stationary . in this case, model ( [ cgl1.3 ] ) extends some existing models ( robinson , hrdle _ et al . _ , juhl and xiao and gao ) from the case where is a sequence of strictly stationary regressors to the case where is a sequence of non - stationary regressors .since the invariant distribution of the null recurrent markov process does not have any compact support , however , the semi - parametric technique used in stationary time series can not be directly applicable to our case . in this paper , we will develop a new semi - parametric estimation method to address such new technicalities when establishing our asymptotic theory .the main objective of this paper is to derive asymptotically consistent estimators for both and involved in model ( [ cgl1.3 ] ) . in a traditional stationary timeseries regression problem , some sort of stationary mixing condition is often imposed on the observations to establish asymptotic theory . in this paper , it is interesting to find that the proposed semi - parametric least - squares ( sls ) estimator of is still asymptotically normal with the same rate as that in the case of stationary time series when certain smoothness conditions are satisfied .in addition , our nonparametric estimator of is also asymptotically consistent , although the rate of convergence , as expected , is slower than that for the stationary time series case .the rest of the paper is organized as follows .the estimation method of and and some necessary conditions are given in section [ s2 ] .the main results and some extensions are provided in section [ s3 ] .section [ s4 ] provides a simulation study .an analysis of an economic data set from the united states is given in section [ s5 ] .an outline of the proofs of the main theorems is given in section [ s6 ] .supplementary material section gives a description for a supplemental document by chen , gao and li , from which the detailed proofs of the main theorems , along with some technical lemmas , are available .let be a markov chain with transition probability and state space , and be a measure on . throughout the paper , assumed to be -irreducible harris recurrent , which makes asymptotics for semi - parametric estimation possible .the class of stochastic processes we are dealing with in this paper is not the general class of null recurrent markov chains .instead , we need to impose some restrictions on the tail behavior of the distribution of the recurrence time of the chain .this is what we are interested in : a class of null recurrent markov chains . 
a markov chain is null recurrent if there exist a small non - negative function ( the definition of a small function can be found in the supplemental document ) , an initial measure , a constant and a slowly varying function such that \sim \frac{1}{\gamma(1+\beta ) } n^{\beta}l_{f}(n ) \qquad \mbox{as } n\rightarrow\infty,\vadjust{\goodbreak}\ ] ] where stands for the expectation with initial distribution and is the usual gamma function .it is shown in karlsen and tjstheim that when there exist some small measure and small function with and , , such that then is null recurrent if and only if where and is the invariant measure as defined in karlsen and tjstheim . furthermore ,if ( [ cgl2.3 ] ) holds , by lemma 3.4 in karlsen and tjstheim , is a strongly consistent estimator of , where , in which is the conventional indicator function and is a small set as defined in karlsen and tjstheim .we then introduce a useful decomposition that is critical in the proofs of asymptotics for nonparametric estimation in null recurrent time series .let be a real function defined in .we now decompose the partial sum into a sum of independent and identically distributed ( i.i.d . )random variables with one main part and two asymptotically negligible minor parts .define where the definitions of and will be given in the supplemental document .then from nummelin s result , we know that is a sequence of i.i.d . random variables . in the decomposition ( [ cgl2.4 ] ) of , plays the role of the number of observations .it follows from lemma 3.2 in karlsen and tjstheim that and converge to zero almost surely when they are divided by .furthermore , karlsen and tjstheim show that if ( [ cgl2.2 ] ) holds and latexmath:[ ] is assumed in a2(ii ) and a3(ii ) , we have = e \bigl [ \bigl(x_t^{\tau }\theta_0 + g(v_t ) + \epsilon_t \bigr)|v_t = v \bigr ] = h(v)^{\tau}\theta_0 + g(v).\ ] ] this implies that ] . in the case where is a sequence of stationary random variables , various estimation methods for and in model ( [ cgl1.3 ] ) have been studied by many authors ( see , e.g. , robinson , hrdle _ et al . _ and gao ) .we now propose an sls estimation method based on the kernel smoothing .for every given , we define a kernel estimator of by where is a sequence of weight functions given by in which is a probability kernel function and is a bandwidth parameter . replacing by in model ( [ cgl1.3 ] ) and applying the sls estimation method, we obtain the sls estimator , , of by minimizing over .this implies where , , and . and then estimated by this kind of estimation method has been studied in the literature ( see , e.g. , hrdle _ et al . _ ) . when is a sequence of either fixed designs or stationary regressors with a compact support , the conventional weighted least - squares estimators ( [ cgl2.10 ] ) and ( [ cgl2.11 ] ) work well in both the large and small sample cases .since the invariant distribution of null recurrent markov chain might not have any compact support , it is difficult to establish asymptotic results for the estimators ( [ cgl2.10 ] ) and ( [ cgl2.11 ] ) owing to the random denominator problem involved in .hence , to establish our asymptotic theory , we apply the following weighted least - squares estimation method ( see , e.g. , robinson ) . 
define where and is a sequence of positive numbers satisfying some conditions .furthermore , let throughout this paper , we propose to estimate by and by as may be seen from equation ( [ cgl2.9 ] ) , further discussion on the semi - parametric estimation method depends heavily on the structure of and .this paper is concerned with the following two cases : ( i ) where is a sequence of strictly stationary regressors and independent of ; and ( ii ) where is a sequence of non - stationary regressors with the non - stationarity being generated by . before stating the main assumptions, we introduce the definition of mixing dependence .the stationary sequence is said to be mixing if as , where in which denotes a sequence of fields generated by . since its introduction by rosenblatt , mixing dependence is a property shared by many time series models ( see , e.g. , withers and gao ) . for more details about limit theorems for mixing processes , we refer to lin and lu and the references therein .the following assumptions are necessary to derive the asymptotic properties of the semi - parametric estimators .there exist an unknown function and a stationary process such that .\(i ) suppose that is a stationary ergodic markov process with =0 ] for some , where stands for the euclidean norm .furthermore , we suppose that ] , >0 ] for some .furthermore , the process is mixing with where is the mixing coefficient of .\(i ) the invariant measure of the null recurrent markov chain has a uniformly continuous density function .\(ii ) let , and be mutually independent .let be the density function of let furthermore , there exists a sequence of fields such that is adapted to . with probability 1 , where is the conditional density function of given .\(i ) the function is differentiable and the derivative is continuous in .in addition , for large enough where is the derivative of , the definitions of and are given in a4 above.=1 \(ii ) the function is differentiable and the derivative is also continuous in .in addition , for large enough and where is small enough .\(i ) the probability kernel function is a continuous and symmetric function having some compact support .\(ii ) the sequences and both satisfy as for some .moreover , [ rem2.1 ] ( i ) while some parts of assumptions a1a3 may be non - standard , they are justifiable in many situations .condition a1 assumes that is generated by .this is satisfied when the conditional mean function ] .condition a1 is also commonly used in the stationary case ( see , e.g. , linton ) .there are various examples in this kind of situation ( see , e.g. , in the univariate case where , in which is a sequence of i.i.d .errors with =0 ] , and independent of . in this case , = v ] is positive definite .if is strictly stationary and independent of , then as , suppose that both and are non - stationary .if , in addition , is satisfied , then ( [ cgl3.2 ] ) still holds .[ rem3.1 ] ( i ) theorem [ thm3.1 ] shows that the standard normality can still be an asymptotic distribution of the sls estimate even when non - stationarity is involved .theorem [ thm3.1](ii ) further shows that the conventional rate of is still achievable when the non - stationarity in is purely generated by and certain conditions are imposed on the functional forms of and .\(ii ) since the asymptotic distribution and asymptotic variance in ( [ cgl3.2 ] ) are mainly determined by the stationary sequences and , the above conclusion extends theorem 2.1.1 of hrdle _ et al . 
_ for the case when , and are all strictly stationary .in addition , when is assumed to be strictly stationary and independent of in theorem [ thm3.1](i ) , the covariance matrix reduces to the covariance matrix of of the form ) ( x_1 - e[x_1])^{\tau } ] ] .while it is difficult to consider some general non - stationarity for , it is possible to consider a general inhomogeneous case in assumption [ as3.1 ] to allow for a bivariate functional form of such that the non - stationarity of is caused by both the involvement of and the dependence on . in this case , may be estimated nonparametrically by where , in which both are probability kernel functions and are bandwidth parameters for .assumption [ as3.2](ii ) allows for inclusion of endogeneity , heteroscedasticity and deterministic trending . in the case where we have either or with =e[e_t|v_t]=0 ] or = e[\sigma(u_t ) e_t ] = e[\epsilon_t] ] . under assumptions [ as3.1][as3.3 ] , model ( [ cgl1.3 ] )can be written as either \\[-8pt ] x_t & = & h \biggl(v_t , \frac{t}{n } \biggr ) + u_t , \nonumber\end{aligned}\ ] ] where or , or \\[-8pt ] x_t & = & h \biggl(v_t , \frac{t}{n } \biggr ) + u_t , \nonumber\end{aligned}\ ] ] where or .estimation of and in ( [ eq3.2a ] ) is similar to what has been proposed in section [ s2 ] .since model ( [ eq3.2b ] ) is a semi - parametric additive model , one will need to estimate based on the form with before both and can be individually estimated using the marginal integration method as developed in section 2.3 of gao .in both cases , one will need to replace in ( [ eq2.1 ] ) and in ( [ cgl2.12 ] ) by of ( [ eq3.1 ] ) and , respectively . since the establishment and the proofs of the corresponding results of theorems [ thm3.1 ] and [ thm3.2 ] for models ( [ eq3.2a ] ) and ( [ eq3.2b ] ) involve more technicalities than those given in appendices b and c of the supplemental document , we wish to leave the discussion of models ( [ eq3.2a ] ) and ( [ eq3.2b ] ) to a future paper .to illustrate our estimation procedure , we consider a simulated example and a real data example in this section . throughout the section , the uniform kernel }(v)$ ]is used . a difficult problem in simulationis the choice of a proper bandwidth . from the asymptotic results in section [ s3 ], we can find that the rates of convergence are different from those in the stationary case with being replaced by . in practice ,we have found it useful to use a semi - parametric cross - validation method ( see , e.g. , section 2.1.3 of hrdle _ et al . _ ) .[ ex4.1 ] consider a partially linear time series model of the form where with and is a sequence of i.i.d .random variables generated from , is generated by an ar model of the form in which is a sequence of i.i.d .random variables generated from , and are mutually independent .we then choose the true value of as , the true form of as and consider the following cases for . , where is a sequence of i.i.d . random variables , , where is defined as in case ( i ) ..0lll@ & & ae & se + 200 & & 0.0137 & 0.0144 + 700 & & 0.0117 & 0.0086 + 1200 & & 0.0064 & 0.0062 + 200 & & 0.0172 & 0.0215 + 700 & & 0.0149 & 0.0126 + 1200 & & 0.0079 & 0.0108 + .0lll@ & & ae & se + 200 & & 0.1158 & 0.0575 + 700 & & 0.0894 & 0.0341 + 1200 & & 0.0628 & 0.0210 + 200 & & 0.1391 & 0.0582 + 700 & & 0.1299 & 0.0437 + 1200 & & 0.1075 & 0.0367 + for the case of with sample size ; the solid line is the true line , and the dashed curve is the estimated curve . 
] for the case of with sample size ; the solid line is the true line , and the dashed curve is the estimated curve . ]it is easy to check that the random walk defined in this example corresponds to a null recurrent process and the assumptions in section [ s2 ] are satisfied here .we choose sample sizes and as the number of replications in the simulation .the simulation results are listed in tables [ t1 ] and [ t2 ] and the plots are given in figures [ f1][f6 ] .for the case of with sample size ; the solid line is the true line , and the dashed curve is the estimated curve . ] for the case of with sample size ; the solid line is the true line , and the dashed curve is the estimated curve . ] for the case of with sample size ; the solid line is the true line , and the dashed curve is the estimated curve . ] for the case of with sample size ; the solid line is the true line , and the dashed curve is the estimated curve . ]the performance of is given in table [ t1 ] .the `` ae '' in table [ t1 ] is defined by , where is the value of in the replication .`` se '' is the standard error of . from table[ t1 ] , we find that the estimator of performs well in the small and medium sample cases and it improves when the sample size increases.=1 the performance of the nonparametric estimator is given in table [ t2 ] .the `` ae '' in table [ t2 ] is the mean of the absolute errors in 1000 replications .the absolute error is defined by , where for , and are the maximum and minimum of the random walk , respectively .`` se '' in table [ t2 ] is the standard error . from table [ t2 ] , we find that the nonparametric estimate of performs well in our example and it improves when the sample size increases .figures [ f1][f3 ] compare the true nonparametric regression function and its nonparametric estimator for the case of when the sample sizes are 200 , 700 and 1200 , respectively .figures [ f4][f6 ] compare the true nonparametric regression function with its nonparametric estimator for the case of when the sample sizes are 200 , 700 and 1200 , respectively .the solid line is and the dashed line is the nonparametric estimator .we can not forecast the trace of the random walk because of its non - stationarity .hence , we estimate the true regression function according to the scope of and we can not estimate in other points out of the scope since there is not enough sample in the neighborhood of each of such points .that is why the scopes of the abscissa axis are different in figures [ f1][f6 ] .we can also find that the performance of the nonparametric estimate of improves as the sample size increases.=-1we use monthly observations on the u.s .share price indices , long - term government bond yields and treasury bill rates from jan/1957dec/2009 .the data are obtained from the international monetary fund s ( imf ) international financial statistics ( ifs ) .the share price series used is ifs series 11162zf .the long - term government bond yield , which is the 10-year yield , is from the ifs series 11161zf .the treasury bill rate is from ifs series 11160czf .figure [ f7](a)(c ) gives the data plots of the share prices , the long - term bond yields and the treasury bill rates . 
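Several of the concrete choices in this example are not reproduced here, so the following sketch only illustrates the general recipe behind the simulation and the semi-parametric least-squares fit: a random-walk covariate, a stationary AR(1) covariate, Nadaraya-Watson smoothing on the nonstationary covariate with a uniform kernel, and a regression of the smoothed-out residuals. Every concrete value below (the true theta, the placeholder g, the bandwidth, the noise level) is our own assumption, and the trimming/weighting used in the paper is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
n, theta = 700, 1.0
h = 2.0 * n ** (-0.2)                            # ad hoc bandwidth, not the paper's

# nonstationary regressor: a random walk (null recurrent, beta = 1/2)
v = np.cumsum(rng.normal(size=n))
# stationary regressor: an AR(1) process
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
g = lambda s: np.sin(s)                          # placeholder for the unknown g
y = x * theta + g(v) + 0.1 * rng.normal(size=n)

# Nadaraya-Watson smooths of x and y on v, uniform kernel as in the simulations
K = lambda u: 0.5 * (np.abs(u) <= 1.0)
W = K((v[None, :] - v[:, None]) / h)             # W[t, j] = K((v_j - v_t) / h)
W = W / W.sum(axis=1, keepdims=True)
x_hat, y_hat = W @ x, W @ y

# semi-parametric least squares: regress (y - y_hat) on (x - x_hat)
xt, yt = x - x_hat, y - y_hat
theta_hat = (xt @ yt) / (xt @ xt)
g_hat = W @ (y - x * theta_hat)                  # plug-in estimate of g at the v_t
print(theta_hat)
```

With a scalar coefficient this reduces to a single ratio; in the vector case the same formula becomes a small linear system.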
over the period of jan/1957dec/2009 with 624 observations .( a ) treasury bill rates ; long - term bond yields ; ( c ) share prices.[f7 ] ] to see whether there exist some statistical evidences for the three series to have the unit root type of non - stationarity , we carry out a dickey fuller ( df ) unit root test on the three series .we first fit the data by an model of the form where share price at time or long - term bond yield at time or treasury bill rate at time .then , by using the least - squares estimation method , we estimate the parameter for the three series : for the share price series , ; for the long - term bond yield series , ; and for the treasury bill rate series , .then we calculate the dickey fuller statistics and compare them with the critical values at the significance level .the simulated values for the long - term bond yields , treasury bill rates and share prices are , and , respectively .in addition , we also employ an augmented df test and the nonparametric test proposed in gao _ et al . _ for checking the unit root structure of .the resulting values are very similar to those obtained above . and in case a.[f8 ] ] therefore , both the estimation results and the simulated values suggest that there is some strong evidence for accepting the null hypothesis that a unit root structure exists in these series at the significance level .we then consider the following modelling problem : x_t & = & h(v_t ) + u_t,\end{aligned}\ ] ] where case a : is the share price , is the long - term bond yield and is the treasury bill ; and case b : is the long - term bond yield , is the share price and is the treasury bill . for case a , the resulting estimator of is and the plots of the estimates of and are given in figure [ f8 ] . for caseb , the resulting estimator of is and the the plots of the estimates of and are given in figure [ f9].=1 and in case b.[f9 ] ] figures [ f8 ] and [ f9 ] show that increases in treasury bill rates tend to lead to increases in long - term bond yields and decreases in share prices .such findings are supported by the theory of finance and consistent with existing studies .moreover , figures [ f7][f9 ] clearly indicate our new findings that both null recurrent non - stationarity and nonlinearity can be simultaneously exhibited in the share price , the long - term bond yield and the treasury bill rate variables . 
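The unit-root check described above (fit an AR(1) by least squares and form the Dickey-Fuller statistic from the estimated coefficient) is easy to reproduce in outline. The sketch below is a generic version, not the authors' code: it uses the simplest no-intercept, no-augmentation variant and returns a statistic that would be compared with Dickey-Fuller, not normal, critical values.

```python
import numpy as np

def dickey_fuller(y):
    """Fit y_t = rho * y_{t-1} + e_t by least squares and return
    (rho_hat, df_stat) with df_stat = (rho_hat - 1) / se(rho_hat).
    No intercept, no lag augmentation."""
    y = np.asarray(y, float)
    y0, y1 = y[:-1], y[1:]
    rho = (y0 @ y1) / (y0 @ y0)
    resid = y1 - rho * y0
    sigma2 = resid @ resid / (len(y1) - 1)
    se = np.sqrt(sigma2 / (y0 @ y0))
    return rho, (rho - 1.0) / se

# toy usage: a pure random walk should typically fail to reject the unit root
rng = np.random.default_rng(2)
rho_hat, df_stat = dickey_fuller(np.cumsum(rng.normal(size=600)))
```

Estimated coefficients close to one together with statistics above the (negative) critical value are what lead to accepting the unit-root structure, as reported for the three series.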
due to the cointegrating relationship among the stock price , the treasury bill rate and the long - term bond yield variables, our experience suggests that models ( [ eq3.2a ] ) and ( [ eq3.2b ] ) might be more suitable for this empirical study .we will have another look at this data after models ( [ eq3.2a ] ) and ( [ eq3.2b ] ) have been fully studied .in this section , we provide only one key lemma and then an outline of the proofs of theorems [ thm3.1 ] and [ thm3.2 ] .the detailed proofs of the theorems are available from the supplemental document by chen , gao and li .[ lem6.1 ] under the conditions of theorem [ thm3.1 ] , we have as , proof of theorem [ thm3.1 ] in view of lemma [ lem6.1 ] and the decomposition & = & \sum_{t=1}^{n}\widetilde{x}_{t}\widetilde{g}(v_{t})f_t + \sum_{t=1}^{n}\widetilde{x}_{t}\epsilon_{t}f_t - \sum_{t=1}^{n}\widetilde{x}_{t}f_t \biggl(\sum_{k=1}^{n}w_{nk}(v_{t})\epsilon_{k } \biggr),\end{aligned}\ ] ] in order to prove theorem [ thm3.1 ] , we need only to show that for large enough \sum_{t=1}^{n}\widetilde{x}_{t}f_t \biggl\{\sum_{k=1}^{n}w_{nk}(v_{t } ) \epsilon_{k } \biggr\ } & = & \mathrm{o}_p\bigl(\sqrt{n}\bigr ) , \label{eq : jiti2}\\[-2pt ] n^{-1/2}\sum_{t=1}^{n}\widetilde{x}_{t}\epsilon_{t}f_t & \stackrel{d}{\longrightarrow } & n ( 0 , \sigma_{\epsilon , u } ) , \label{eq : jiti3}\end{aligned}\ ] ] where .recall that , where . in order to prove ( [ eq : jiti1])([eq : jiti3 ] ) , it suffices to show that for large enough \sum_{t=1}^{n } \overline{u}_t\widetilde{g}(v_{t } ) f_t & = & \mathrm{o}_p\bigl(\sqrt{n}\bigr ) , \label{eq : jiti4b}\\[-2pt ] \sum_{t=1}^{n } \widetilde{g}(v_{t})\widetilde{h}(v_{t } ) f_t & = & \mathrm{o}_p\bigl(\sqrt{n}\bigr ) , \label{eq : jiti4}\\[-2pt ] \sum_{t=1}^{n } u_t\overline{\epsilon}_t f_t & = & \mathrm{o}_p\bigl(\sqrt{n}\bigr ) , \label{eq : jiti5a}\\[-2pt ] \sum_{t=1}^{n } \overline{u}_t\overline{\epsilon}_t f_t & = & \mathrm{o}_p\bigl(\sqrt{n}\bigr ) , \label{eq : jiti5}\\[-2pt ] \sum_{t=1}^{n } \widetilde{h}(v_{t})\overline{\epsilon}_t f_t & = & \mathrm{o}_p\bigl(\sqrt{n}\bigr ) , \label{eq : jiti6}\\[-2pt ] \sum_{t=1}^{n } \overline{u}_t \epsilon_{t } f_t & = & \mathrm{o}_p\bigl(\sqrt{n}\bigr ) , \label{eq : jiti6a}\\[-2pt ] \sum_{t=1}^{n } \widetilde{h}(v_{t } ) \epsilon_t f_t & = & \mathrm{o}_p\bigl(\sqrt{n}\bigr ) , \label{eq : jiti7}\\[-2pt ] n^{-1/2}\sum_{t=1}^{n}u_t \epsilon_{t}f_t&\stackrel{d}{\longrightarrow } & n ( 0 , \sigma_{\epsilon , u } ) , \vspace*{-2pt } \label{eq : jiti8}\end{aligned}\ ] ] where and . in the following ,we verify equations ( [ eq : jiti4a])([eq : jiti8 ] ) to complete the proofs of theorem [ thm3.1](i ) and theorem [ thm3.1](ii ) .note that , for theorem [ thm3.1](i ) , equations ( [ eq : jiti4 ] ) , ( [ eq : jiti6 ] ) and ( [ eq : jiti7 ] ) hold trivially . by the continuity of and , we have for , \\[-9.5pt ] & & \quad = \frac{g^{\prime}(v_t)}{n(n)h}\sum_{j=1}^n(v_j - v_t ) k \biggl(\frac{v_j - v_t}{h } \biggr ) \bigl(1+\mathrm{o}_p(1)\bigr ) .\nonumber\vspace*{-2pt}\end{aligned}\ ] ] thus , in view of ( [ appendix1 ] ) and lemma 3.4 of karlsen and tjstheim , in order to prove ( [ eq : jiti4a ] ) , it suffices to show that for large enough where .this kind of procedure of replacing by and ignoring a small - order term as involved in ( [ appendix1 ] ) will be used repeatedly throughout the proofs in appendices b and c of the supplemental document .we then may show that ( [ eq : jiti4b ] ) holds . 
similarly to ( [ appendix1 ] ) and ( [ appendix2 ] ) , we need only to show that where . the detailed derivations for ( [ appendix2 ] ) and ( [ appendix3 ] ) are available from appendix b of the supplemental document .the detailed proofs of ( [ eq : jiti5a ] ) , ( [ eq : jiti5 ] ) , ( [ eq : jiti6a ] ) and ( [ eq : jiti8 ] ) are also available from appendix b. this will complete the proof of theorem [ thm3.1](i ) .we then may prove theorem [ thm3.1](ii ) by completing the proofs of ( [ eq : jiti4 ] ) , ( [ eq : jiti6 ] ) and ( [ eq : jiti7 ] ) , which are again available from appendix b of the supplemental document .proof of theorem [ thm3.2 ] by the definition of , we have \\[-9.5pt ] & = & \sum_{t=1}^{n } w_{nt}(v)\bigl(\epsilon_t + g(v_t)-g(v)\bigr ) + \sum_{t=1}^{n } w_{nt}(v)x_{t}(\theta_0-\widehat{\theta}_n ) .\nonumber\vspace*{-2pt}\vadjust{\goodbreak}\end{aligned}\ ] ] let and .then , we have since is assumed to be stationary and mixing , by corollary 5.1 of hall and heyde and an existing technique to deal with the bias term ( see , e.g. , the proof of theorem 3.5 of karlsen _ et al . _ ) , we have as by ( [ cglb.43])([cglb.45 ] ) , it is sufficient to show that the proof of ( [ cglb.46 ] ) may then be completed by theorem [ thm3.1 ] and assumptions a1a6 .the details are available from appendix b of the supplemental document .this completes an outline of the proofs of theorems [ thm3.1 ] and [ thm3.2 ] .this work was started when the first and third authors were visiting the second author in 2006/2007 .the authors would all like to thank the editor , the associate editor and two references for their constructive comments on an earlier version .thanks also go to the australian research council discovery grants program for its financial support under grant numbers dp0558602 and dp0879088 .
|
in this paper , we consider a partially linear model of the form , , where is a null recurrent markov chain , is a sequence of either strictly stationary or non - stationary regressors and is a stationary sequence . we propose to estimate both and by a semi - parametric least - squares ( sls ) estimation method . under certain conditions , we then show that the proposed sls estimator of is still asymptotically normal with the same rate as for the case of stationary time series . in addition , we also establish an asymptotic distribution for the nonparametric estimator of the function . some numerical examples are provided to show that our theory and estimation method work well in practice . ,
|
it is a challenging problem for modeling and simulating non - equilibrium gas flows over a wide range of flow regimes .the difficulty arises from the different temporal and spatial scales associated with the flows at different regimes .for instance , for transition or free - molecule flows the typical time and length scales are the mean - collision time and the mean - free - path , respectively , while for continuum flows the hydrodynamic scale is much larger than the kinetic scale . for multiscale flows that involve different flow regimes ,one popular numerical strategy is to use the hybrid approach , which divides the flow domain into some subdomains and simulating the flow in each subdomain using different modeling according to the specific dynamics .for instance , in hybrid particle - continuum approaches , the domain is divided into some macro and micro subdomains , where particle - based methods such as molecular - dynamics ( md ) or direct simulation monte carlo ( dsmc ) are used in the micro parts , while the continuum navier - stokes equations are used in the macro parts .usually a buffer zone is employed in hybrid methods between neighboring subdomains to exchange flow information using different strategies .a common feature of the hybrid method is that they are based on _ numerical coupling _ of solutions from different flow regimes , which are limited to systems with a clear scale separation and may encounter significant difficulties for flows with a continuous scale variation . recently, some efforts have been made to develop numerical schemes for multiscale flows based on kinetic models ( e.g. the boltzmann equation or simplified models ) .such kinetic schemes attempt to provide a unified description of flows in different regimes by discretizing the same kinetic equation dynamically , so that the difficulties of hybrid methods in simulating cross - scale flows can be avoided .an example of kinetic schemes is the well - known discrete ordinate method ( dom ) ) , which is powerful for flows in the kinetic regime , but may encounter difficulties for near continuum flow computation due to the limitation of small time step and large numerical dissipations . in order to overcome this problem ,some asymptotic preserving ( ap ) schemes were developed ( e.g. 
, ) , which can recover the euler solutions in the continuum regime , but may encounter difficulties for the navier - stokes solutions .therefore , it is still desirable to design kinetic schemes that can work efficiently for flows in a wide ranges of regimes .the recent unified gas - kinetic scheme ( ugks ) provides a dynamical multiscale method which can get accurate solutions in both continuum and free molecular regimes .the ugks is a finite - volume scheme for the boltzmann - bgk equation , and the particle velocity is discretized into a discrete velocity set , like the dom .however , the update of the discrete distribution function considers the coupling of particle transport and collision process in one time step , and so the time step is not limited by the collision term .furthermore , the ugks adopts the local integral solution of the bgk equation in the reconstruction of the distribution function at cell interfaces for flux evaluation , which allows the scheme to change dynamically from the kinetic to hydrodynamic physics according to the local flow condition .it is noted , however , in the original ugks an additional evolutionary step for macroscopic variables is required such that extra computation costs are demanded .an alternative simpler ugks method , i.e. , the so - called discrete unified gas kinetic scheme ( dugks ) , was proposed recently .this scheme is also a finite - volume discretization of the boltzmann - bgk equation .but unlike the ugks , the evolution is based on a modified distribution function instead of the original one , which removes the implicitness in the update process of ugks . at the same time , the evolution of macroscopic variables is not required any more .furthermore , the distribution function at a cell interface is constructed based on the evolution equation itself instead of the local integral solution , so that the reconstruction is much simplified without scarifying the multiscale dynamics .the dugks has the same modeling mechanism as the original ugks .the dugks has been applied successfully to a number of gas flows ranging from continuum to transition regimes .the previous dugks is designed for low - speed isothermal flows where the temperature variation is neglected . in many non - equilibrium flows , however , temperature may change significantly ( e.g. high mach number flows ) or change differently from continuum flows ( e.g. micro - flows ) . under such circumstance , it is necessary to track the temperature evolution in addition to the fluid dynamics . in this work , a full dugks is developed for non - equilibrium gas flows where temperature variation is included .the scheme is constructed based on the bgk - shakhov model which can yield a correct prandtl number in the continuum regime .the rest of this paper is organized as follows . in sec .ii , the full dugks is presented and some discussions on its properties are presented . in sec .iii , a number of numerical tests , ranging from subsonic and hypersonic flows with different knudsen numbers , are conducted to validate the method . in sec .iv , a brief summary is given .in kinetic theory , the bgk model uses only one single relaxation time , which leads to a fixed unit prandtl number . in order to overcome this limitation , a number of improved models , such as the bgk - shakhov model and the ellipsoidal statistical model ,have been proposed based on different physical consideration . 
in -dimensional space, the bgk - shahkov model can be expressed as ,\ ] ] where is the velocity distribution function for particles moving in -dimensional physical space with velocity at position and time . here is a vector of length , consisting of the rest components of the particle velocity in 3-dimensional ( 3d ) space ; is a vector of length representing the internal degree of freedom of molecules ; is the relaxation time relating to the dynamic viscosity and pressure with , and is the shakhov equilibrium distribution function given by =f^{eq}+f_{\tpr},\ ] ] where is the maxwellian distribution function , is the prandtl number , is the peculiar velocity with being the macroscopic flow velocity , is the heat flux , is the gas constant , and is the temperature . the maxwellian distribution function is given by where is the density .the conservative flow variables are defined by the moments of the distribution function , where is the collision invariant , is the total energy , and is the international energy with being the specific heat capacity at constant volume . the pressure is related to density and temperature through an ideal equation of state , , and the heat flux is defined by the specific heat capacities at constant pressure and volume are and , respectively , and so the specific heat ratio is the stress tensor is defined from the second - order moment of the distribution function , the dynamic viscosity usually depends on the inter - molecular interactions . for example , for hard - sphere ( hs ) or variable hard - sphere ( vhs ) molecules , where is the index related to the hs or vhs model , is the viscosity at the reference temperature .the evolution of the distribution function depends only on the -dimensional particle velocity and is irrelevant to and . in order to remove the dependence of the passive variables , two reduced distribution functionscan be introduced , from eq ., we can obtain that and the heat flux and the stress tensor can be computed as the evolution equations for and can be obtained from eq . , [ eq : bgk - gh ] ,\ ] ] ,\ ] ] where the reduced shakhov distribution functions and are given by [ eq : reduced - bgk ] with ,\ ] ] ^{eq},\ ] ] and g^{eq}.\ ] ] with the definitions of the conserved variables , it is easy to verify that the collision terms and satisfy the following conservative laws , full dugks is constructed based on the two reduced kinetic equations .the scheme is a finite volume formulation of the kinetic equations . for simplicity ,we rewrite eq . in the following form , ,\ ] ] for or .the domain is decomposed into a set of control volumes ( cells ) , then the integration of eq . over cell centering at from time to with time step leads to ,\ ] ] where the midpoint rule is used for the time integration of the convection term , and the trapezoidal rule for the collision term .such treatment ensures the scheme is of second - order accuracy in time . here is the micro flux across the cell interface , where and are the volume and surface of cell , is the outward unit vector normal to the surface , and and are the cell - averaged values of the distribution function and collision term , respectively , e.g. , the update rule given by eq . is implicit due to the term which requires the conserved variables . in order to remove this implicity, we employ a technique as used in the development of the isothermal dugks , i.e. , we introduce a new distribution function , then eq . 
can be rewritten as where it is noted that from the conservative properties of the collision operators given by eq . , we can obtain that therefore , in practical computations we can track the distribution function and instead of the original ones , which can evolve explicitly according to eq ., provided the micro - flux at the cell interface at is obtained .in addition to the conserved variables , the heat flux and stress tensor can also be obtained from . actually , it can be shown that the key in evaluating is to reconstruct the distribution function at the cell interface .to do so we integrate eq .along the characteristic line within a half time step , ,\ ] ] where is a point at the interface of cell , and the trapezoidal rule is again used to evaluate the collision term .it is noted that the formulation is also implicit due to the collision term .similar to the treatment for , we introduce another distribution function to remove the implicity , then eq . ( [ eq : phi - face0 ] ) can be rewritten as where therefore , once is obtained , the distribution function can be determined from eq . .it is noted that the conserved variables can also be obtained from and like eq ., which means that can be obtained from directly .furthermore , the heat flux can also be determined from , then the shakhov distribution at cell interface and time can be evaluated , and subsequently the original distribution function can be calculated from eq . as now the task is to determine the .this is achieved through a reconstruction of the profile of in each cell .first , we determine the the cell - averaged distribution function at the cell center from the tracked distribution function . from eqs ., , and , we can obtain that it should be noted that and are related . actually , from eqs .and we can obtain that with this relation the computation can be simplified as noted in the following subsection .assuming that in each cell is linear , then we have where is the slope of in cell . as an example , in fig .[ fig : cell ] a 1d case is shown . in this case , in order to reconstruct the distribution function at the cell interface , the distribution function is approximated as the slope in each cell can be reconstructed from the cell - averaged values using some numerical limiters .for example , in the 1d case shown in fig .[ fig : cell ] , we can use the van leer limiter , i.e. , \dfrac{|s_1||s_2|}{|s_1|+|s_2|},\ ] ] where in summary , the procedure of the dugks at each time step can be listed as follows ( assuming is the cell interface of cell centered at ) : 1 .calculate the micro flux at cell interface and at time 1 .calculate from at each cell center with velocity according to eq .reconstruct the gradient of ( i.e. , ) in each cell using certain numerical limiters , e.g. , eq . in 1d case ; 3 .reconstruct the distribution function at according to eq . ;determine the distribution function at cell interface at time according to eq . ;calculate the conserved variables and heat flux from , see eqs . and ; 6 .calculate the original distribution function at cell interface and from and according to eq . ; 7 .calculate the micro flux through each cell interface from according to eq . ; 2 .calculate at cell center and time according to eq .update the cell - averaged in each cell from to according to eq . .the particle velocity is continuous in the above procedure . 
in practical computations, the velocity space will be discretized into a set of discrete velocities .usually the discrete velocity set is chosen as the abscissas of certain quadrature rules such as the gaussian - hermite or newton - cotes formula , and the integrals in the above procedure will be replaced by the quadrature .for example , the conserved variables can be computed as },\ ] ] where is the associate quadrature weights .we now discuss some important properties of the dugks .first , we will show the dugks has the asymptotic preserving ( ap ) property , namely ( i ) the time step is independent of the particle collision time for all knudsen numbers , and ( ii ) the scheme is consistent with the navier - stokes equations in the continuum limit . regarding the time step ,it is noted that the particle transport and collisions are coupled in the reconstruction of the interface distribution function , which is necessary for an ap scheme .this coupling also releases the constraint on the collision - time and the time step as in the operator - splitting schemes , and the time - step can be determined by the courant - friedrichs - lewy ( cfl ) condition , where is the cfl number , is the minimal grid spacing , is the maximum discrete velocity , and is the maximum flow velocity . determined in this way does not dependent on the relaxation time , and the dugks is uniformly stable with respect to the knudsen number .regarding point ( ii ) , it is noted that in the continuum limit as , the distribution function in a cell given by eq . can be approximated as where is the slope of at the cell interface .furthermore , follow the procedure given in the appendix b of ref . , we can show that then , with the aids of these results , we can obtain from eqs ., , and that ( refer to appendix b of ref . ) which recovers the chapman - enskog approximation for the navier - stokes solution .this fact suggests that the dugks can be viewed as a navier - stokes solver in the continuum limit .it is also note that the use of the mid - point and trapezoidal rules in eqs . and as well as the linear reconstruction of the distribution function at the cell interface ensures a second - order accuracy in both space and time in the continuum limit . on the other hand , in the free - molecule limit where , we can find from eq . that , and then from eq .that .furthermore , the relationship between and as shown in eq .gives that , which is just the collision - less limit .finally we point out some key differences between the present dugks and the ugks which is also designed for all knudsen number flows , although both share many common features such as multi - dimensional nature , ap property , and coupling of particle transport and collision .the first key difference is that the cell - averaged conserved variables and heat flux in each cell are required to evolve along with the cell - averaged distribution functions in the ugks , because the collision term is discretized with the trapezoidal rule and the evaluation of the implicit part needs these quantities .however , with the newly introduced distribution function , the implicity in the collision term is removed in the dugks , and and are not required to evolve .the second key difference between dugks and ugks lies in the reconstruction of the distribution function at cell interfaces . 
in the ugks ,the interface distribution function is constructed based on the integral solution of the kinetic equation with certain approximations , while in the present dugks it is constructed based on the characteristic solution which is much simpler .the third difference is that the dugks is solely based on the single relaxation kinetic model due to its combination of the distribution function and the collision term , but the ugks can be extended to the full boltzmann collision term as well . despite of these differences , we will show in next section that the present dugks can yield numerical predictions nearly the same as the ugks . ) and temperature ( ) ; right : stress ( ) and heat flux ( ) ., title="fig:",scaledwidth=48.0% ] ) and temperature ( ) ; right : stress ( ) and heat flux ( ) . , title="fig:",scaledwidth=48.0% ] ) and temperature ( ) ;right : stress ( ) and heat flux ( ) ., title="fig:",scaledwidth=48.0% ] ) and temperature ( ) ; right : stress ( ) and heat flux ( ) ., title="fig:",scaledwidth=48.0% ] and .left : density ( ) and temperature ( ) ; right : stress ( ) and heat flux ( ) ., title="fig:",scaledwidth=48.0% ] and .left : density ( ) and temperature ( ) ; right : stress ( ) and heat flux ( ) ., title="fig:",scaledwidth=48.0% ] , ) with different cell sizes and cfl number 0.95.,title="fig:",scaledwidth=48.0% ] , ) with different cell sizes and cfl number 0.95.,title="fig:",scaledwidth=48.0% ]the present dugks will be validated by a number of test problems in different flow regimes in this section .the problems include 1d and 2d subsonic / supersonic flows . in the simulationsthe van leer limiter will be used in the reconstruction of interface distribution function .the first test case is the argon shock structure from low to high mach numbers .the results of the present dugks simulations will be compared with the boltzmann solution , dsmc result , and ugks prediction .the densities , velocities , and temperatures at upstream ( , , ) and downstream ( , , ) satisfy the rankine - hugoniou conditions .the prandtl number and specific heat ratio for argon are and , respectively , and the viscosity depends on the temperature , , where relates to the inter - molecular interactions . the mean - free - path is related to the viscosity as , in the simulations the flow variables are normalized by the corresponding upstream quantities , and the characteristic density , length , velocity , and time are choosen to be , , , and , respectively . the computational domain is chosen to be . a uniform mesh with 100 cellsis used so that the mesh space is .the discrete velocity set is determined by the newton - cotes quadrature with 101 points distributed uniformly in ] are used to discretize the velocity space , and the newton - cotes quadrature is used to evaluate the velocity moments .the cfl number is set to be 0.95 in all simulations , and the output time is . in all cases the internal freedomis set to be so that the ratio of specific heats is . in order to make a comparison with the ugks , changes from to as in ref . , such that the flow ranges from continuum to free - molecular regimes . ).,title="fig:",scaledwidth=48.0%]).,title="fig:",scaledwidth=48.0%]).,title="fig:",scaledwidth=48.0% ] figure [ fig : tube10 ] shows the density , temperature , and velocity profiles as , as well as the ugks results and the solution of collision - less boltzmann equation ( see appendix a ) . 
in this case the corresponding knudsen number at the left boundary is about 12.77 and the flow falls in free molecular regime .it can be seen that the dugks results agree excellent with the collision - less boltzmann solution and the ugks data .as decreases to 0.1 , the flow falls in the slip regime .the results of the dugks in this case is shown in fig .[ fig : tube01 ] and compared with the solutions of the ugks method and collision - less boltzmann equation .the results of the dugks and ugks are nearly identical and some clear deviations from the collision - less boltzmann solutions , which is not surprising since collision effects are significant in such case . ).,title="fig:",scaledwidth=48.0%]).,title="fig:",scaledwidth=48.0%]).,title="fig:",scaledwidth=48.0% ] the results for are shown in fig .[ fig : tube-5 ] , where the exact solution of the euler equations , the results of gks scheme for navier - stokes equations ( bgk - ns ) , and the results of the ugks scheme , are shown together . in this casethe flow is in the continuum regime and the ugks becomes a shock capturing scheme for the euler equations .it can be seen that the dugks results agree well with those of the bgk - ns and ugks methods , but some deviations from the euler solution are observed . particularly , numerical oscillation appears at the contact wave , which may come from the numerical limiter in the reconstruction of flow variables at cell interfaces .we now test the unified property of the dugks with the two - dimensional riemann problem with constant initial data in each quadrant .the solution of the euler equations for this problem can have a number of different configurations with different initial setups , and a variety of numerical studies have been reported in the past two decades .here we choose one of the typical configurations as listed in ref . , where the initial condition is given by in our simulations , we set and .a uniform mesh is employed to discretize the physical domain , and the cfl number is set to be 0.5 in all simulations . as in the one - dimensional shock tube test, a reference viscosity at reference temperature is employed to characteristic the rarefication of the gas , and the local viscosity is determined by with . at the four boundariesthe boundary conditions are set to be , where is the outward unit normal vector . ).,scaledwidth=48.0% ] we first present the results as .the reference mean - free - path and the collision time are both in the order of , and the flow is in the continuum regime . in the simulationa discrete velocity set based on the half - range gauss - hermite quadrature is employed .the density contour at is shown in fig .[ fig : riemann-7 ] .it is clear that in this case the dugks becomes a shock capture scheme since now and .the configuration is also in excellent agreement with the solution of euler equations by different numerical methods ( e.g. , ) .we now test the dugks for the problem in free - molecular regime by choosing , respectively . in this casethe flow is highly nonequilibrium although the flow field is smooth . in order to capture the nonequilibrium effects ,the particle velocity space is discretized with a mesh points based on the half - range gauss - hermite quadrature . furthermore, a uniform mesh with cells is used in the physical space which is sufficient to obtain well - resolved solutions . 
in fig .[ fig : riemann2d ] the density , temperature , velocity magnitude ( ) , and streamlines , are shown at .for comparison , the results from the solution of collision - less boltzmann are also presented ( see appendix b ) .it can be seen that the flow patterns predicted by the dugks are quite similar to those of the collision - less boltzmann equation .the difference may be due to the boundary conditions used in the present simulation where a finite domain is used , while the solutions of the collision - less boltzmann equation are in the whole infinite domain .also , even with , there is still particle collision in the current dugks computation .however , the overall structure of the two solutions are in good agreement .the agreement with available data in both continuum and free molecular regimes suggests that the dugks has the nice dynamic adaptive property for multi - regime flows , which is desirable for multiscale flow simulations . .in ( a)-(c ) , the background and dashed lines are from the collision - less boltzmann equation , and the solid lines are the dugks results . in ( d ) , the dashed lines are the solutions of collision - less boltzmann equation , and the solid lines are the dugks results.,title="fig:",scaledwidth=48.0% ] . in ( a)-(c ) , the background and dashed lines are from the collision - less boltzmann equation , and the solid lines are the dugks results . in ( d ) , the dashed lines are the solutions of collision - less boltzmann equation , and the solid lines are the dugks results.,title="fig:",scaledwidth=48.0% ] . in ( a)-(c ) ,the background and dashed lines are from the collision - less boltzmann equation , and the solid lines are the dugks results . in ( d ) , the dashed lines are the solutions of collision - less boltzmann equation , and the solid lines are the dugks results.,title="fig:",scaledwidth=48.0% ] . in ( a)-(c ) , the background and dashed lines are from the collision - less boltzmann equation , and the solid lines are the dugks results . in ( d ) , the dashed lines are the solutions of collision - less boltzmann equation , and the solid lines are the dugks results.,title="fig:",scaledwidth=48.0% ]the multiscale nature of gas flows involving different regimes leads to significant difficulties in numerical simulations . in this paper a discrete unified gas kinetic scheme in finite - volume formulation is developed for multi - regime flows based on the shakhov kinetic model . with the use of discrete characteristic solution of the kinetic equation in the determination of distribution at cell interfaces , the transport and collision mechanisms are coupled together in the flux evaluation , which makes the dugks a dynamic multiscale approach for flow simulation which distinguishes it from many other numerical methods based on operator splitting approach .the coupling treatment of transport and collision also makes the dugks have some nice features such as multi - dimensional nature and the asymptotic preserving properties .the dugks is validated by several test problems ranging from continuum to free molecular flows .the numerical results demonstrate the accuracy and robustness of the scheme for multi - regime flow simulations .the tests also show that the dugks exhibits proper dynamic property according to local flow information , which is important for capturing multiscale flows . 
in the present simulations ,a fixed discrete velocity set is used for each test case .the computational efficiency can be greatly improved by adopting adaptive velocity techniques .for the shock tube problem , the solution of the collision - less boltzmann equation is by taking velocity moments of , we can obtain the conserved variables , \erfc(-\tilde{u}_1)+(u_1+x / t)(2 rt_2/\pi)^{1/2 } \exp(-\tilde{u}_2 ^ 2)\right\}\nonumber\\ & + & \dfrac{\rho_2}{4}\left\{[u_2 ^ 2+(k+3)rt_2]\erfc(\tilde{u}_2)-(u_2+x / t)(2 rt_2/\pi)^{1/2 } \exp(-\tilde{u}_2 ^ 2)\right\},\end{aligned}\ ] ] where , and is the complementary error function defined by the 2d riemann problem , the solution of the collision - less boltzmann equation is then the conserved variables can be obtained by taking the velocity moments of , \erfc(\tilde{v}_1)\nonumber\\ & & + \dfrac{\rho_2}{4}\left[\left({2rt_2}/{\pi}\right)^{1/2}e^{-\tilde{u}_2 ^ 2}+u_2\erfc(-\tilde{u}_2)\right]\erfc(\tilde{v}_2)\nonumber\\ & & + \dfrac{\rho_3}{4}\left[\left({2rt_3}/{\pi}\right)^{1/2}e^{-\tilde{u}_3 ^ 2}+u_3\erfc(-\tilde{u}_3)\right]\erfc(-\tilde{v}_3)\nonumber\\ & & + \dfrac{\rho_4}{4}\left[-\left({2rt_4}/{\pi}\right)^{1/2}e^{-\tilde{u}_4 ^ 2}+u_4\erfc(\tilde{u}_4)\right]\erfc(-\tilde{v}_4),\end{aligned}\ ] ] \erfc(\tilde{u}_1)\nonumber\\ & & + \dfrac{\rho_2}{4}\left[-\left({2rt_2}/{\pi}\right)^{1/2}e^{-\tilde{v}_2 ^ 2}+v_2\erfc(-\tilde{v}_2)\right]\erfc(-\tilde{u}_2)\nonumber\\ & & + \dfrac{\rho_3}{4}\left[\left({2rt_3}/{\pi}\right)^{1/2}e^{-\tilde{v}_3 ^ 2}+v_3\erfc(-\tilde{v}_3)\right]\erfc(-\tilde{u}_3)\nonumber\\ & & + \dfrac{\rho_4}{4}\left[-\left({2rt_4}/{\pi}\right)^{1/2}e^{-\tilde{v}_4 ^ 2}+v_4\erfc(\tilde{v}_4)\right]\erfc(\tilde{u}_4),\end{aligned}\ ] ] and with \left({2rt_1}/{\pi}\right)^{1/2}\nonumber\\ & & + \left[(k+2)rt_1+u_1 ^ 2+v_1 ^ 2\right]\erfc(\tilde{u}_1)\erfc(\tilde{v}_1),\end{aligned}\ ] ] \left({2rt_2}/{\pi}\right)^{1/2}\nonumber\\ & & + \left[(k+2)rt_2+u_2 ^ 2+v_2 ^ 2\right]\erfc(-\tilde{u}_2)\erfc(\tilde{v}_2),\end{aligned}\ ] ] \left({2rt_3}/{\pi}\right)^{1/2}\nonumber\\ & & + \left[(k+2)rt_3+u_3 ^ 2+v_3 ^ 2\right]\erfc(-\tilde{u}_3)\erfc(-\tilde{v}_3),\end{aligned}\ ] ] \left({2rt_4}/{\pi}\right)^{1/2}\nonumber\\ & & + \left[(k+2)rt_4+u_4 ^ 2+v_4 ^ 2\right]\erfc(\tilde{u}_4)\erfc(-\tilde{v}_4),\end{aligned}\ ] ] 50 g. a. radtke , j .- p .m. praud and n. g. hadjiconstantinou , phil .a * 371 * , 20120182 ( 2012 ) .s. t. oconnell and p. a. thompson , phys .e * 52 * , r5792 ( 1995 ) .t. werder , j. h. walther , and p. koumoutsakos , j. comput .phys . * 205 * , 373 ( 2005 ) .w. e , b. engquist , and z. y. huang , phys .b * 67 * , 092101 ( 2003 ). m. k. borg , d. a. lockerby , and j. m. reese , j. comput .255 * , 149 ( 2013 ) .h. a. carlson , r. roveda , i. d. boyd , and g. v. candler , aiaa paper 2004 - 1180 ( 2004 ) . j. y. yang and j. c. huang , j. comput . phys . * 120 * , 323 ( 1995 ) . z. h. li and h. x. zhang , j. comput. phys . * 193 * , 708 ( 2004 ) .a. n. kudryavtsev and a. a. shershnev , j. sci .* 57 * , 42 ( 2013 ) .s. pieraccini and g. puppo , j. sci . comput . *32 * , 1 ( 2007 ) .m. bennoune , m. lemo and l. mieussens , j. comput . phys . *227 * , 3781 ( 2008 ) .f. filbet and s. jin , j. comput .phys . * 229 * , 7625 ( 2010 ) .g. dimarco and l. pareschi , numer .51 * , 1064 ( 2013 ) .k. xu and j .- c .huang , j. comput .phys . * 229 * , 7747 ( 2010 ) .huang , k. xu , and p.yu , commun .* 12 * , 662 ( 2012 ) .p. l. bhatnagar , e. p. gross , and m. krook , phys. rev . * 94 * , 511 ( 1954 ) .z. l. guo , k. 
xu , and r. j. wang , phys .e * 88 * , 033305 ( 2013 ) .e. m. shakhov , fluid dyn .* 3 * , 95 ( 1968 ) .l. h. holway , phys .fluids * 9 * , 1658 ( 1966 ) . l. mieussens , j. comput .phys . * 253 * , 138 ( 2013 ) .b. van leer , j. comput . phys . * 23 * , 276 ( 1977 ) .k. xu , j. comput .phys . * 171 * , 289 ( 2001 ) . c. liu and k. xu , arxiv:1405.4479 [ math.na ] ( 2014 ) .s. harris , _ an introduction to the theory of the boltzmann equation _ ( dover publications , new york , 2004 ) .g. a. bird , _ molecular gas dynamics and the direct simulation of gas flows _( clarendon press , oxford , 1994 ) .t. ohwada , phys .fluids a * 5 * , 217 ( 1993 ) .k. xu and j .- c .huang , i m a j. appl . math . * 76 * , 698 ( 2011 ) .g. a. bird , phys .fluids * 13 * , 1172 ( 1970 ) .g. a. sod , j. comput .* 22 * , 1 ( 1978 ) . c. w. schulz - rinne , siam j. math24 * , 76 ( 1993 ) .c. w. schulz - rinne , j. p. collins , and h. m. glaz , siam j. sci .* 14 * , 1394 ( 1993 ) .t. zhang and y. zheng , siam j. math .anal * 21 * , 593 ( 1990 ) .t. chang , g .- q .chen , and s. yang , disc .. syst . * 1 * , 555 ( 1995 ) ; _ ibid _ * 6 * , 419 ( 2000 ) .lax and x .- d .liu , siam j. sci .* 19 * , 319 ( 1998 ) .a. kurganov and e. tadmor , numer .methods partial differential eq . * 18 * , 584 ( 2002 ) .b. shizgal , j. comput .phys . * 41 * , 309 ( 1981 ) .s. z. chen , k. xu , c. b. li , and q. d. cai , j. comput .phys . * 231 * , 6643 ( 2012 ) .s. brulla and l. mieussens , j. comput .phys . * 266 * , 226 ( 2014 ) .
|
this paper is a continuation of our earlier work [ z.l . guo _ et al . _ , phys . rev . e * 88 * , 033305 ( 2013 ) ] where a multiscale numerical scheme based on kinetic model was developed for low speed isothermal flows with arbitrary knudsen numbers . in this work , a discrete unified gas - kinetic scheme ( dugks ) for compressible flows with the consideration of heat transfer and shock discontinuity is developed based on the shakhov model with an adjustable prandtl number . the method is an explicit finite - volume scheme where the transport and collision processes are coupled in the evaluation of the fluxes at cell interfaces , so that the nice asymptotic preserving ( ap ) property is retained , such that the time step is limited only by the cfl number , the distribution function at cell interface recovers to the chapman - enskog one in the continuum limit while reduces to that of free - transport for free - molecular flow , and the time and spatial accuracy is of second - order accuracy in smooth region . these features make the dugks an ideal method for multiscale compressible flow simulations . a number of numerical tests , including the shock structure problem , the sod tube problem with different degree of non - equilibrium , and the two - dimensional riemann problem in continuum and rarefied regimes , are performed to validate the scheme . the comparisons with the results of dsmc and other benchmark data demonstrate that the dugks is a reliable and efficient method for multiscale compressible flow computation .
|
the occultation of distant stars by solar system bodies ( asteroids , dwarf planets , tnos , etc ) provides a method to characterise the nature of the solar system bodies to a resolution that can not be matched except by space probe observations .an occultation recording consists of an earth station observing the star and asteroid coalescence and monitoring the light output over time ( the light curve of the occultation ) . as the asteroid occults the star ,the light flux is reduced .the recording aims to capture the time ( utc ) when the light flux changes and the manner in which it changes to determine a chord through the body .the recording also allows detection of the presence ( if any ) of an atmosphere , and satellites or ring structures . with several earth stations observing the same event ,a series of adjacent chords can be drawn , providing more information about the asteroid and environs .the diameter of the parent body can be more precisely estimated , the body shape can be examined for oblateness , and any satellites or ring structures can have their orbits determined .all these measurements depend on the time stamp of each image in the occultation recording being referenced to a known time standard such as utc .accuracy of timebase should be to within a millisecond .in the case of a recent occultation of 10199 chariklo , a km diameter member of the centaur group orbiting between saturn and uranus , there were fourteen observing stations , spread across more than 1000 km of south america , of which eight observed the occultation .the occultation was remarkable because it was the first observation of two rings , of 7 and 3 km width , orbiting the primary body at a distance of 391 and 405 km .unfortunately , there were disparities in absolute time consensus between two of the observing stations , housed in the same observatory , of 1.6 seconds .the shadow transit speed of the occultation was calculated to be 21.6 km sec , and so this disparity represents a disjunction of about 35 km in measurements from two side - by - side stations .the measurements were able to be adjusted because the two systems were side - by - side and observed the same event , and previous observations indicated one system had a record of temporal fidelity while the other was known to have unexplained offsets up to 2.5 seconds from true .there were also timing disparities found with one other station which also observed the occultation , but it could not be adjusted for , because the offset was not able to be characterised . 
consequently , the information from this station was not used in the data reduction for the observation of the rings of chariklo .most occultation systems in use today rely on either global positioning systems ( gps ) based time sources for fidelity to utc , or use network time protocol ( ntp ) as the method to synchronise the imaging system computer clock with a stratum 1 timeserver through a link to the internet .previous methods of timestamping using the reception of specialised radio broadcasts such as radio wwv in the americas or radio vnc in australia are no longer available or soon to be phased out .the aim for accuracy of the timestamp is to be within a millisecond of true , and several gps based devices exist for the purpose of time - stamping analog video ( cvbs ) , including those devised by blackbox camera , alexander meier elektronik , pfd systems and iota .specialised digital occultation camera systems such as phot , pico , poets and moris trigger their acquisitions based on gps signals stated to be within a millisecond of utc .camera systems originally intended for astrophotography generally use the ntp based pc system clock to provide the header timestamp .the time stamp can be either written onto the image itself ( in the case of analogue video ) or embedded in the image header ( for digital video or fits files ) .the duration of a camera exposure has previously been verified in popular literature , by imaging the raster of an analog video screen ( cathode ray tube ) .the timebase of a raster is well controlled , and for short exposures provides an elegant solution .counting raster lines represented the basis for broadcast camera accreditation before digital cameras became common .unfortunately , crt displays are becoming rare , and connecting to the timebase of the display to provide utc synchronisation requires high voltage interfacing .time stamp fidelity to utc has previously been verified to field resolution ( 16.7 msec for ntsc , 20 msec for pal ) .this is done by observing an optical event such as a flashing light emitting diode ( led ) whose time of illumination is well established from electrical measurements and fiducial sources such as the 1pps signal from global positioning system ( gps ) receivers . while this does not establish the duration of the image , it does give an upper limit for the timestamp of the particular frame where the 1pps led was observed .one way to verify an image timestamp is to provide an optical device which is crafted to indicate the passage of time in an unambiguous way .when a camera under test takes an image of the device , the image contains information which can be decoded to produce an image start and image stop time .we describe such a device in this paper .it consists of an array of 500 leds , of which only one led is illuminated at any one time , and only for a short time ( e.g. 2 msec ) .the array begins its first ( top - left ) led illumination at a utc integer second boundary and over the course of that second illuminates each led for 2 msec , one after the other down the first column , then down the subsequent columns to the right .this is therefore a moving dot of light , and the camera system being verified records the moving dot of light . 
in each image , some of the leds are illuminated due to the camera recording during the time when those leds were active , while others are dark , and the position of the illuminated leds in each image provides an unambiguous optical timestamp .the device uses an internal gps receiver for reference to utc , and as per good metrology practice , has a timebase which is accurate to better than a tenth of the basic unit of measure ( i.e. for a 2 msec measure , the accuracy should be .2 msec ) . the time period from illumination ofthe first led to the last is the sweep time .the device provides 4 sweep time settings ; 1 second , 2 seconds , 5 seconds and 10 seconds . in this paperwe describe the results for the 1 second sweep as this has the most rigorous timing requirements , and use the 1 second , 2 second , and the 5 second sweep as a basis for testing two different cameras in sections [ sec : test1 ] and [ sec : test2 ] .we also describe an image analysis program for pc , mac , and linux , which can decode the clock display .source code , wiring diagrams and built applications are provided to aid the construction and use of the system .the present verification system has been named sexta ( southern exposure timing array ) after work by dangl - see acknowledgements - and was developed to verify image timestamps for a digital occultation camera and recorder system developed by the present authors .the camera system under test is set up to view the sexta panel , as shown in figure [ fig1 ] .the item numbers of this list correspond to figure [ fig1 ] items . 1 . a panel of 500 leds .the first led is illuminated at ut boundary , and each successive led glows for a set time . for a 1-second sweep ,the time is 2 msec .the illumination pattern is down a column , then to the right .a `` 1pps '' led that flashes with the arrival of the 1pps signal from the gps .the fiducial point of the 1pps led is the off - to - on transition .a `` lock '' led to show the dot matrix display ( dmd ) panel is locked to gps ; 4 .an `` almanac - ok '' led to indicate that the gps almanac is current ; 5 .a 7-segment led array next to the panel of leds to indicate ut hours , minutes , and seconds , and the number of satellites in the gps fix ; and 6 .an array of ten leds to indicate the last digit of ut integer seconds ( 0 - 9 ) . from figure [ fig1 ] , exposure start time is 12:34:56.038 ; exposure end time is 12:34:56.070 ut .the system has 5 satellites in view , the almanac is current and the panel is locked to gps . because the image does not contain a ut integer boundary , the 1pps led is not lit . with the camera and recorder under testwe take images of the panel .each image shows the optical timestamp provided by sexta , and is internally timestamped by the camera system using its own method .a timing analysis of the 1 second sweep was performed , with each led illumination being measured for duration .the results of the analysis are shown below in table [ tab1 ] ..the analysis of timing of the 1 sec sweep . 
[ cols="<,^",options="header " , ] [ tab1 ] the target of 2 msec illumination per led was met , with the error being less than the desired 0.2 msec accuracy .an oscilloscope was connected to the unit to determine the latency between the gps fiducial signal ( the 1pps ) and the illumination of the first led .this was measured at 0.165 msec .the latency between 1pps and ut unit seconds led changeover was measured at 0.036 msec .the latency between 1pps and the 1pps led illumination was measured at 0.007 msec .all of these times are below the 0.2 msec accuracy required .the sexta display is placed in the imaging system field of view at focus , and powered up .a reference image pattern is displayed for ten seconds , which the imaging system should acquire to aid the reader application with positional information of the leds on the panel .when the gps acquires a fix , the 1pps begins flashing , and the 500-led array engages in its synchronisation process , taking around 3 minutes to acquire lock .the lock led is illuminated when the process is complete .the gps may take some time ( 15 minutes ) to download a current almanac , which contains the gps - ut offset ; this is necessary to ensure sexta is producing a correct time stamp , and the currency of the almanac is indicated by the a - ok led .sexta is ready when the panel scrolls across at the determined rate , the 1pps led flashes every second , and the almanac - ok led and lock led are illuminated constantly .the 7-segment led array indicates ut hh : mm : ss and the number of satellites in the fix .the imaging system then acquires images of the panel at the desired settings . when saved , the images are analysed to determine the congruency of the timestamp as saved by the imaging system , and the sexta - delivered optical timestamp contained in the image . to ease the chore of reading optical timestamps , a reader application has been produced to automate the process .the application reads fits files ( and other common formats ) and can extract timestamp and exposure duration information from the fits header if these are present . in figure[ fig2 ] , the expected position of each led is shown by blue markers , with red markers where the led brightness is above the threshold level .the optical , fits derived and file creation time stamps are compared at the bottom of the window .the watec 120n analog video camera has been used for occultation recordings of pluto and main belt asteroids .the camera has the ability to synthesise long duration exposures by accumulating ( stacking ) short duration images in an internal buffer , then output the stacked images in accordance with the ntsc / pal standard .figure [ fig3 ] shows three consecutive video frames from a pal watec 120n camera with no accumulation ( i.e. 
40 msec of imaging time per picture ) while recording a view of sexta .the images were time - stamped with a commercially available video time inserter ( iota - vti , videotimers inc , ca , usa ) ; the vti time stamp being shown at the bottom of each image , and circled on the left image in orange .the middle image has the sexta derived optical time stamp provided in the section below the image , along with the file creation time stamp .the first item of note is that there is very little dead time between frames .the left picture ends with the 499 led on the sexta panel being illuminated at the end of the fifth ut integer second .the middle picture shows the 500 led of the fifth second illuminated and then 19 leds ( 38 msec ) in the sixth second .the right picture shows the led for the 38 msec partially illuminated , indicating that the dead time is much less than 2 msec .the second item of note is that the middle of the illuminated led band is twice as bright as the other illuminated leds .this is because the watec camera records in an interlaced manner , with the even raster lines being exposed first , and the odd raster lines being exposed second ( thus seeing different times , even though they are adjacent to each other on the image ) .the bright led occurs where the second field begins exposure while the first field is still being exposed .the amount of field overlap or separation varies with exposure settings and must be determined for a given camera at a given setting .the third item of note is that the sexta central image timestamp reads 40 msec before the iota - vti timestamp .this is due to the delay induced by the buffer system of the camera and is the instrument delay ( i d ) time .it is common with analog video integrating cameras , with the amount of i d varying between different devices and settings , but constant for a given device and setting .the i d must be subtracted from the iota - vti timestamp to obtain the correct timestamp .the issue of i d varying with analog camera settings is a long standing problem .no automated means of addressing it presently exists .it remains a task for the operator to compile a table of i d for each camera setting , and then manually apply it where time values are needed .the sbig ( santa barbara instruments group ) family of cameras have been used for tno occultations on several occasions . 
unlike the video camera described above , theses cameras are driven by software running on a tethered pc ; this controls the camera gain , initiates image acquisition , and defines the region of interest ( roi ) on the ccd chip ; and the camera downloads the roi to storage on the computer .the camera control software and host pc are therefore critical parts of the imaging system .image timestamps are derived from the pc system clock , which is synchronised to ut by means of the network time protocol ( ntp ) .ntp requires an active internet connection to operate , and makes use of ntp servers available on the internet , to determine ut to a variable degree of error .the ntp system can , when connected to low stratum number ntp timeservers over a low latency network connection , offer pc system times which are within tens of milliseconds of true ut .we tested two sbig ccd cameras ; the st10xe using ccdops v5.6 ( santa barbara instruments group , ca , usa ) as the control program , and the st8 using both maximdl v5.03 ( diffraction limited , ottawa , canada ) and ccdsoft v5.00.210 ( software bisque , co , usa ) as the control programs .all cameras had mechanical shutters and a usb connection to the pc .we installed the sbig st10xe camera + ccdops on a windows 7 32-bit computer with a core i7 processor , 4 gb ram , a 1 tb 5400 rpm hard drive , and provided with an adsl2 + network connection of mbps .ntp was synchronised using a human machine interface called dimension4 ( d4 ) , freely available as a download , which allows the user to run ntp as a service on windows 7 machines , synchronise to designated ntp servers , and maintain a log of the offsets from ut over time .d4 was peered with a server from the australian ntp pool , and a log of the offsets was collected during the camera testing run .the offset time during the run was + 115 msec , i.e the pc was ahead of ut by this amount .the camera was set to take images with an exposure duration of 250 msec , and as frequently as the camera and pc software could work , which was an image about every 4 seconds .the imaging run was over 17 minutes ( 266 frames ) , which would be a reasonable period for a tno occultation recording .the sexta panel was configured to have a sweep time of 5 seconds , giving a temporal resolution of 10 msec per led illumination time .we installed the sbig st8 camera , maximdl and ccdsoft on a windows 7 64 bit computer with a core i7 processor , 8 gb ram , and provided ntp services via a lan stratum 3 ntp timeserver synced to two regional stratum 1 timeservers ( time.uwa.edu.au and dns.iinet.net.au ) .ntp on the pc was synchronised using tardis , a freely available interface with updates running every 60 seconds .the camera was set to take images of 1.9 second duration , and the sexta panel was configured for a sweep time of 2 seconds , giving a temporal resolution of 4 msec .we took 100 images with each control program , then rotated the camera housing 180 degrees so that the ccd saw the sexta panel sweeping from right to left instead of the normal left - to - right , and repeated the 100 exposures .this was to elucidate any effect that the mechanical shutter might have on the imaging exposure , as the shutter is not instantaneous in its operation but sweeps over the field always in one direction with respect to the ccd .the major finding was that the ccdops program wrote a timestamp to the header which resolved to the second , and no further .thus , if an image was begun at 01h 23 m 45.678s , the fits header would be 
written as 01:23:45.000 .this produced most of the error between optical timestamps and fits .the fits time was the image start time ( which is what the fits standard requires for the date - obs field ) , rather than the image central time which would be what an astronomer would use in calculations . secondly , the pc clock was ahead of ut by 115 msec at the time of the imaging run , as indicated by d4 .this is a high offset for an ntp synced computer , and a more reasonable result would be around 20 msec .possible reasons were the short time that d4/ntp was running on the computer ( about 4 hours before the imaging run ) which is known to cause larger offsets .we graphed the time ( see figure [ fig4 ] ) within any ut second when an image was started ( as measured by optical time stamp ) .we measured the error between fits and optical start time , which should have been between zero and one second ( due to the integer second resolution of the fits timestamp ) if the pc was synchronised perfectly with ut .we found that the knee in the graph occurred at the optical fraction second time around 875 to 895 msec , rather than 1000 msec .the disparity of 105 to 125 msec is in good agreement with the d4-reported offset of + 115 msec .thirdly , imaging cadence ( see figure [ fig5 ] ) was not particularly steady .the average image cadence was 3.91 sec , with a jitter of -32 to + 408 msec .this jitter would be difficult or impossible to detect using fits information as it presently stands .the fits header exposure times ( image duration ) were very consistent and agreed well with the optical information .the maximum and minimum exposure was 240 msec and 260 msec , with the mean and standard deviation being 250.9 msec.0034 msec .imaging time to download time was 0.25 vs 3.91 seconds , which is acceptable for a testing regime .see figure [ fig6 ] .the maximdl program wrote timestamps resolved to the centisecond , while ccdsoft wrote millisecond resolution timestamps to the fits header , but the delay between header start time and optical start time for maximdl was 802.2 msec average , with excursions of msec ; while with ccdsoft , the delay was much less , being 79.1 msec average , with excursions of msec .this represents an order of magnitude improvement in timestamp accuracy with ccdsoft .image cadence for maximdl and the st8 with a 1.9 sec exposure was 9.8535 sec average , with msec jitter .ccdsoft and the st8 had a very similar cadence of 9.9893 sec average , and an identical jitter of msec .image duration was identical for both maximdl and ccdsoft , with an average of 1.92 sec msec .the mechanical shutter introduced a small ( but measurable ) left - to - right time bias across the ccd of around 14 msec .that is , an event recorded near the leading edge of the ccd ( which opens to light first , and which we consider here as the left edge ) is delayed less than an event recorded near the trailing edge of the ccd ( which opens to light only when the shutter has traversed the ccd ) .this was confirmed when the camera was rotated 180 degrees with respect to the sexta panel , so that the sexta sweep was from right - to - left .see figure [ fig7 ] .the three ccd camera control software programs examined produced timestamps with widely varying fidelities to ut .the worst case was ccdops , with a delay of 1 second from true , due to integer second time recordings , and a cadence jitter of 400 msec .add to this an unknown ntp offset and it is easy to appreciate the difficulties experienced by the chariklo 
researchers mentioned in section 2 .the best case was ccdsoft , with a delay of less than 100 msec from true , and a cadence jitter of 40 msec - an order of magnitude improvement over the worst case .the ntp offset remains an undocumented quantity .the image duration variability for all programs was msec , and we speculate that this may be due to the exposure being timed by the hardware in the camera rather than the host pc .the most severe timing issue was the fact that the ntp offset was not recorded in the fits header by any of the programs tested .because of this , some other means must be employed to verify that ntp is operational and has reasonably low offsets .if this is not done , the fits timestamp would be in error from true by an unknown number of seconds .the imaging cadence variation is less amenable to simple fixes , and may depend on what the host computer is doing ( i.e. other housekeeping tasks ) .this topic is beyond the scope of this discussion .this examination of two ntp based cameras with three commercial programs is a good beginning , but can not be considered exhaustive testing of any system .it is entirely possible that further testing may uncover outlier events which compound any errors detected here by orders of magnitude .the sexta result for a given camera and recorder system does not necessarily provide assurance that the camera system will continue to perform in the same way in the future .such assurance would come from repeated testing over some reasonable period of time .the watec analog video camera and gps - based video time inserter examined here have been found to be stable and consistent in behaviour .this offers confidence that results obtained in the future can be relied upon .the sbig cameras and ntp - based time references examined here have more variable results which could compromise occultation recording timings to the extent seen in the chariklo occultation mentioned in section [ sec : motivation ] .some avenues of exploration remain to improve the method .a system for verifying time - stamped image time and duration , to 2 msec precision and within 1 msec of gps fiducial time , is described .the system is very low cost and requires minimal assembly .parts are readily obtainable .source code and wiring diagrams and a built app with source code for analysing the image time stamps are provided and available for download .the supplementary data with additional documentation , web links for more references , build notes , source code , and applications for windows 7 + , macos 10.7 + , and linux ubuntu 12.04 can be found at : http://www.kuriwaobservatory.com/sexta/ http://www.tonybarry.net/the original anecdotal work that inspired the development of sexta can be found at:- http://www.dangl.at/exta/exta_e.htm the author , gerhard dangl , has stated : - _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ `` ... 
because of the relative complex design required for this functions and the big display with a large number of leds the goal was not to develop this device for reproduction .so at the moment it will stay as a very useful prototype device . ''( page dated october 10 , 2012 , retrieved on 2014 - 07 - 24 08:41:00 ut ) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ as such , this work can not be tested or verified , or used by others for further investigation ; and this absence led to the development of sexta .the authors would also like to thank mr .edward dobosz of the western sydney amateur astronomy group for assistance with the testing of an sbig camera .t.b . was supported by a joint research engagement ( jre ) grant from the university of sydney ( 2012 2014 ) .
|
we describe an image timestamp verification system to determine the exposure timing characteristics and continuity of images made by an imaging camera and recorder , with reference to coordinated universal time ( utc ) . the original use was to verify the timestamps of stellar occultation recording systems , but the system is applicable to lunar flashes , planetary transits , sprite recording , or any area where reliable timestamps are required . the system offers good temporal resolution ( down to 2 msec , referred to utc ) and provides exposure duration and interframe dead time information . the system uses inexpensive , off - the - shelf components , requires minimal assembly and requires no high - voltage components or connections . we also describe an application to load fits ( and other format ) image files , which can decode the verification image timestamp . source code , wiring diagrams and built applications are provided to aid the construction and use of the device . occultations standards minor planets , asteroids instrumentation : miscellaneous methods : observational techniques : miscellaneous
|
the magic telescope on the canary island of la palma , located 2200 m above sea level at 2845 and 1754 , is an imaging atmospheric cherenkov telescope designed to achieve a low energy threshold , fast positioning , and high tracking accuracy .the magic design , and the currently ongoing construction of a second telescope ( magicii ; ) , pave the way for ground - based detection of gamma - ray sources at cosmological distances down to less than 25gev .after the discovery of the distant blazars 1es1218 + 304 at a redshift of =0.182 and 1es1011 + 496 at =0.212 , the most recent breakthrough has been the discovery of the first quasar at very high energies , the flat - spectrum radio source 3c279 at a redshift of =0.536 .these observational results were somewhat surprising , since the extragalactic background radiation in the mid - infrared to near - infrared wavelength range was believed to be strong enough to inhibit propagation of gamma - rays across cosmological distances .the apparent low level of pair attenuation of gamma - rays greatly improves the prospects of searching for very high energy gamma - rays from gamma - ray bursts ( grbs ) , cf .their remarkable similarities with blazar flares , albeit at much shorter timescales , presumably arise from the scaling behavior of relativistic jets , the common physical cause of these phenomena .since most grbs reside at large redshifts , their detection at very high energies relies on the low level of absorption .moreover , the cosmological absorption decreases with photon energy , favoring magic to discover grbs due to its low energy threshold . due to the short life times of grbs andthe limited field of view of imaging atmospheric cherenkov telescopes , the drive system of the magic telescope has to meet two basic demands : during normal observations , the 72-ton telescope has to be positioned accurately , and has to track a given sky position , i.e. , counteract the apparent rotation of the celestial sphere , with high precision at a typical rotational speed in the order of one revolution per day . for catching the grb prompt emission and afterglows , it has to be powerful enough to position the telescope to an arbitrary point on the sky within a very short time and commence normal tracking immediately thereafter . to keep the system simple , i.e. , robust , both requirements should be achieved without an indexing gear .the telescope s total weight of 72 tons is comparatively low , reflecting the use of low - weight materials whenever possible .for example , the mount consists of a space frame of carbon - fiber reinforced plastic tubes , and the mirrors are made of polished aluminum . in this paper, we describe the basic properties of the magic drive system . 
in section [ sec2 ] , the hardware components and mechanical setup of the drive system are outlined .the control loops and performance goals are described in section [ sec3 ] , while the implementation of the positioning and tracking algorithms and the calibration of the drive system are explained in section [ sec4 ] .the system can be scaled to meet the demands of other telescope designs as shown in section [ sec5 ] .finally , in section [ outlook ] and section [ conclusions ] we draw conclusions from our experience of operating the magic telescope with this drive system for four years .the drive system of the magic telescope is quite similar to that of large , alt - azimuth - mounted optical telescopes .nevertheless there are quite a few aspects that influenced the design of the magic drive system in comparison to optical telescopes and small - diameter imaging atmospheric cherenkov telescopes ( iacts ) .although iacts have optical components , the tracking and stability requirements for iacts are much less demanding than for optical telescopes . like optical telescopes , iacts track celestial objects , but observe quite different phenomena : optical telescopes observe visible light , which originates at infinity and is parallel . consequently , the best - possible optical resolution is required and , in turn , equal tracking precision due to comparably long integration times , i.e. , seconds to hours .in contrast , iacts record the cherenkov light produced by an electromagnetic air - shower in the atmosphere , induced by a primary gamma - ray , i.e. , from a close by ( 5 km - 20 km ) and extended event with a diffuse transverse extension and a typical extension of a few hundred meters . due to the stochastic nature of the shower development, the detected light carries inherently limited information , normally improving with the energy , i.e. , with the shower - particle multiplicity . as the cherenkov light is emitted under a small angle off the particle tracks , these photons do not even point directly to the source as in optical astronomy .nevertheless , the shower points towards the direction of the incoming gamma - ray and thus towards its source on the sky . for this reason its origin can be reconstructed by analyzing its image .modern iacts achieve an energy - dependent pointing resolution for individual showers of 6 - 0.6 .these are the predictions from monte carlo simulations assuming , amongst other things , ideal tracking .this sets the limits achievable in practical cases .consequently , the required tracking precision must be at least of the same order or even better .although the short integration times , on the order of a few nanoseconds , would allow for an offline correction , this should be avoided since it may give rise to an additional systematic error . to meet one of the main physics goals , the observation of prompt and afterglow emission of grbs , positioning of the telescope to their assumed sky position is required in a time as short as possible .alerts , provided by satellites , arrive at the magic site typically within 10 s after the outburst , while the life times of grbs show a bimodal distribution with a peak between 10 s and 100 s . to achieve a positioning time to any position on the sky within a reasonable time inside this window , i.e.
less than a minute , a very light - weight but sturdy telescope and a fast - acting and powerful drive system is required .the implementation of the drive system relies strongly on standard industry components to ensure robustness , reliability and proper technical support .its major drive components , described hereafter , are shown on the pictures in fig .[ figure2 ] .the azimuth drive ring of 20 m diameter is made from a normal railway rail , which was delivered in pre - bent sections and welded on site .its head is only about 74 mm broad and has a bent profile .the fixing onto the concrete foundation uses standard rail - fixing elements , and allows for movements caused by temperature changes .the maximum allowable deviation from the horizontal plane as well as deviation from flatness is mm , and from the ideal circle it is =8 mm . the rail support was leveled with a theodolite every 60 cm with an overall tolerance of mm every 60 cm . in betweenthe deviation is negligible .each of the six bogeys holds two standard crane wheels of 60 cm diameter with a rather broad wheel tread of 110 mm .this allows for deviations in the 11.5m - distance to the central axis due to extreme temperature changes , which can even be asymmetric in case of different exposure to sunlight on either side . for the central bearing of the azimuth axis ,a high - quality ball bearing was installed fairly assuming that the axis is vertically stable . for the elevation axis , due to lower weight ,a less expensive sliding bearing with a teflon layer was used .these sliding bearings have a slightly spherical surface to allow for small misalignments during installation and some bending of the elevation axis stubs under load .the drive mechanism is based on duplex roller chains and sprocket wheels in a rack - and - pinion mounting .the chains have a breaking strength of 19 tons and a chain - link spacing of 2.5 cm .the initial play between the chain links and the sprocket - wheel teeth is about 3mm-5 mm , according to the data sheet , corresponding to much less than an arcsecond on the telescope axes .the azimuth drive chain is fixed on a dedicated ring on the concrete foundation , but has quite some radial distance variation of up to 5 mm .the elevation drive chain is mounted on a slightly oval ring below the mirror dish , because the ring forms an integral part of the camera support mast structure .commercial synchronous motors ( type designation bosch rexroth mhd112c-058 ) are used together with low - play planetary gears ( type designation alpha gts210-m02 - 020b09 , ratio 20 ) linked to the sprocket wheels .these motors intrinsically allow for a positional accuracy better than one arcsecond of the motor axis . having a nominal power of 11kw, they can be overpowered by up to a factor five for a few seconds .it should be mentioned that due to the installation height of more than 2200 m a.s.l ., due to lower air pressure and consequently less efficient cooling , the nominal values given must be reduced by about 20% .deceleration is done operating the motors as generator which is as powerful as acceleration .the motors contain 70 nm holding brakes which are not meant to be used as driving brake .the azimuth motors are mounted on small lever arms . 
in order to follow the small irregularities of the azimuthal drive chain ,the units are forced to follow the drive chain , horizontally and vertically , by guide rolls .the elevation - drive motor is mounted on a nearly 1 m long lever arm to be able to compensate the oval shape of the chain and the fact that the center of the circle defined by the drive chain is shifted 356 mm away from the axis towards the camera .the elevation drive is also equipped with an additional brake , operated only as holding brake , for safety reasons in case of extremely strong wind pressure .no further brakes are installed on the telescope . the design of the drive system control is based on digitally controlled industrial drive units , one for each motor .the two motors driving the azimuth axis are coupled to have a more homogeneous load transmission from the motors to the structure compared to a single ( more powerful ) motor .the modular design allows the number of coupled devices to be increased dynamically if necessary . at the latitude of la palma, the azimuth track of stars can exceed 180° in one night . to allow for continuous observation of a given source at night without reaching one of the end positions in azimuth , the allowed range for movements in azimuth spans from az = -90° to az = +318° , where az = 0° corresponds to geographical north , and az = 90° to geographical east . to keep slewing distances as short as possible ( particularly in case of grb alerts ), the range for elevational movements spans from +100° to -70° in zenith distance , where the change of sign implies a movement _ across the zenith_. this so - called _ reverse mode _ is currently not in use , as it might result in hysteresis effects of the active mirror control system , still under investigation , due to shifting of weight at zenith .the accessible range in both directions and on both axes is limited by software to the mechanically accessible range . for additional safety ,hardware end switches are installed , directly connected to the drive controller units , initiating a fast , controlled deceleration of the system when activated .
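to make the handling of this extended azimuth range concrete , the following sketch shows one way to pick the drive - axis angle for a requested sky azimuth such that the slew from the current position is minimized . it is an illustration only , not the magic control code ; the function and variable names are ours , and only the range limits are taken from the text above .

```python
# Minimal sketch: choose the drive-axis azimuth for a requested sky azimuth.
# The drive range [-90 deg, +318 deg] exceeds 360 deg, so some sky azimuths
# have two valid drive angles; we pick the one closest to the current position.
# Illustration only, not the MAGIC control software.

AZ_MIN, AZ_MAX = -90.0, 318.0  # allowed drive range in degrees

def drive_azimuth(sky_az_deg, current_drive_deg):
    """Return the reachable drive angle for sky_az_deg with the shortest slew."""
    candidates = [sky_az_deg + k * 360.0 for k in (-2, -1, 0, 1, 2)]
    reachable = [a for a in candidates if AZ_MIN <= a <= AZ_MAX]
    if not reachable:
        raise ValueError("sky azimuth outside the mechanically allowed range")
    return min(reachable, key=lambda a: abs(a - current_drive_deg))

# Example: a source at sky azimuth 300 deg can be reached either at -60 deg
# or at +300 deg on the drive axis, depending on where the telescope is now.
print(drive_azimuth(300.0, current_drive_deg=-80.0))   # -> -60.0
print(drive_azimuth(300.0, current_drive_deg=200.0))   # -> 300.0
```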
to achieve an azimuthal movement range exceeding 360° , one of the two azimuth end - switches needs to be deactivated at any time . therefore , an additional _ direction switch _ is located at an azimuth of 164° , short - circuiting the end switch currently out of range .the motion control system similarly uses standard industry components .the drive is controlled by the feedback of encoders measuring the angular positions of the motors and the telescope axes .the encoders on the motor axes provide information to micro controllers dedicated to motion control , initiating and monitoring every movement .professional built - in servo loops take over the suppression of oscillations .the correct pointing position of the system is ensured by a computer program evaluating the feedback from the telescope axes and initiating the motion executed by the micro controllers .additionally , the motor - axis encoders are also evaluated to increase accuracy .the details of this system , as shown in figure [ figure2 ] , are discussed below .the angular telescope positions are measured by three shaft - encoders ( type designation hengstler ac61/1214eq.72olz ) .these absolute multi - turn encoders have a resolution of 4096 ( 12 bit ) revolutions and 16384 ( 14 bit ) steps per revolution , corresponding to an intrinsic angular resolution of 1.3 arcmin per step .one shaft encoder is located on the azimuth axis , while two more encoders are fixed on either side of the elevation axis , increasing the resolution and allowing for measurements of the twisting of the dish ( fig .[ figure3 ] ) . all shaft encoders used are watertight ( ip67 ) to withstand the extreme weather conditions occasionally encountered at the telescope site .the motor positions are read out at a frequency of 1 khz from 10 bit relative rotary encoders fixed on the motor axes . due to the gear ratio of more than one thousand between motor and load , the 14 bit resolution of the shaft encoder system on the axes can be interpolated further using the position readout of the motors . for communication with the axis encoders ,a canbus interface with the canopen protocol is in use ( operated at 125 kbps ) .the motor encoders are directly connected by an analog interface .the three servo motors are connected to individual motion controller units ( _ dkc _ , type designation bosch rexroth , dkc ecodrive03.3 - 200 - 7-fw ) , serving as intelligent frequency converters .an input value , given either analog or digital , is converted to a predefined output , e.g. , command position , velocity or torque .all command values are processed through a chain of built - in controllers , cf .[ figure4 ] , resulting in a final command current applied to the motor .this internal chain of control loops maintains the movement of the motors at a frequency of 1 khz , fed back by the rotary encoders on the corresponding motor axes .several safety limits ensure damage - free operation of the system even under unexpected operation conditions .these safety limits are , e.g. , software end switches , torque limits , current limits or control - deviation limits . to synchronize the two azimuth motors , a master - slave setup is used .while the master is addressed by a command velocity , the slave is driven by the command torque output of the master .this operation mode ensures that both motors can apply their combined force to the telescope structure without oscillations .
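as a quick cross - check of the figures quoted above , the angular step of the 14 bit axis encoders and the gain from interpolating with the motor encoders through the gear can be estimated as follows . this is our own back - of - the - envelope arithmetic ; in particular , the total gear ratio used below is only an assumed round number consistent with the " more than one thousand " stated in the text .

```python
# Back-of-the-envelope check of the position-feedback resolution (our arithmetic).

steps_per_rev_axis = 2 ** 14            # 14-bit shaft encoder on the telescope axis
axis_step_arcmin = 360.0 * 60.0 / steps_per_rev_axis
print(f"axis encoder step: {axis_step_arcmin:.2f} arcmin")   # ~1.32 arcmin

# Interpolating with the motor encoders: assumed total gear ratio of ~1500
# (the text only states 'more than one thousand'), 10-bit motor encoder.
gear_ratio = 1500                        # assumption for illustration
motor_steps_per_rev = 2 ** 10
motor_step_arcsec = 360.0 * 3600.0 / (motor_steps_per_rev * gear_ratio)
print(f"interpolated motor step: {motor_step_arcsec:.2f} arcsec")  # sub-arcsecond
```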
in principle it is possible to use a bias torque to eliminate play .this feature was not used because the play is negligible anyhow .the master for each axis is controlled by presetting a rotational speed defined by a voltage on its analog input .the input voltage is produced by a programmable micro controller dedicated to analog motion control , produced by z&b ( _ macs _ , type designation macs ) .the feedback is realized through a 500-step emulation of the motor 's rotary encoders by the dkcs .elevation and azimuth movement is regulated by individual macss .the macs controller itself communicates with the control software ( see below ) through a canbus connection .it turned out that in particular the azimuth motor system seems to be limited by the large moment of inertia of the telescope ( note that the exact numbers depend on the current orientation of the telescope ) . at the same time , the requirements on the elevation drive are much less demanding . for the magic ii drive system several improvements have been provided : * 13 bit absolute shaft - encoders ( type designation heidenhain roq425 ) are installed , providing an additional sine - shaped output within each step .this allows for a more accurate interpolation and hence a better resolution than a simple 14 bit shaft - encoder .these shaft - encoders are also water tight ( ip64 ) , and they are read out via an endat2.2 interface . * all encoders are directly connected to the dkcs , providing additional feedback from the telescope axes themselves .the dkc can control the load axis in addition to the motor axis , providing more accurate positioning , faster movement through improved oscillation suppression and better motion control of the system . * the analog transmission of the master 's command torque to the slave is replaced by a direct digital communication ( ecox ) of the dkcs .this allows for more robust and precise slave control .furthermore , the motors can be coupled with relative angular synchronism , allowing deformations of the structure to be suppressed by keeping the axis connecting both motors stable . * a single professional programmable logic controller ( plc ) , in german : _ speicherprogrammierbare steuerung _ ( sps , type designation rexroth bosch , indracontrol sps l20 ) , replaces the two macss .connection between the sps and the dkcs is now realized through a digital profibus dp interface substituting the analog signals . * the connection from the sps to the control pc is realized via an ethernet connection . since ethernet is more commonly in use than canbus , soft- and hardware support is much easier .the drive system is controlled by a standard pc running a linux operating system and custom - designed software based on root and the positional astronomy library _ slalib _ .algorithms specialized for the magic tracking system are imported from the modular analysis and reconstruction software package ( mars ) also used in the data analysis .whenever the telescope has to be positioned , the relative distance to the new position is calculated in telescope coordinates and then converted to motor revolutions .then , the micro controllers are instructed to move the motors accordingly .
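the conversion step just mentioned amounts to a single scaling by the total gear ratio between motor and telescope axis . a minimal sketch ( ours ; the numerical ratio is an assumed placeholder , not a value given in the text ) :

```python
# Sketch: convert a requested axis movement into motor revolutions.
# Illustrative only; GEAR_RATIO is an assumed placeholder value.
GEAR_RATIO = 1500.0          # motor revolutions per axis revolution (assumption)

def axis_move_to_motor_revs(delta_axis_deg):
    """Motor revolutions needed for a relative axis movement of delta_axis_deg."""
    return delta_axis_deg / 360.0 * GEAR_RATIO

# Example: a 5 deg repositioning of one axis
print(axis_move_to_motor_revs(5.0))   # ~20.8 motor revolutions
```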
since the motion is controlled by the feedback of the encoders on the motor axes , not on the telescope axes , backlash and other non - deterministic irregularities can not easily be taken into account .thus it may happen that the final position is still off by a few shaft - encoder steps , although the motor itself has reached its desired position . in this case, the procedure is repeated up to ten times .after ten unsuccessful iterations , the system would go into error state . in almost all cases the command position is reached after at most two or three iterations . if a slewing operation is followed by a tracking operation of a celestial target position , tracking is started immediately after the first movement without further iterations .possible small deviations , normally eliminated by the iteration procedure , are then corrected by the tracking algorithm . to track a given celestial target position ( ra / dec , j2000.0 , fk5 ) , astrometric and misalignment corrections have to be taken into account .while astrometric corrections transform the celestial position into local coordinates as seen by an ideal telescope ( alt / az ) , misalignment corrections convert them further into the coordinate system specific to the real telescope . in case of magic , this coordinate system is defined by the position feedback system .the tracking algorithm controls the telescope by applying a command velocity for the revolution of the motors , which is re - calculated every second .it is calculated from the current feedback position and the command position required to point at the target five seconds ahead in time .the timescale of 5 s is a compromise between optimum tracking accuracy and the risk of oscillations in case of too short a timescale . as a crosscheck, the ideal velocities for the two telescope axes are independently estimated using dedicated astrometric routines of slalib . for security reasons ,the allowable deviation between the determined command velocities and the estimated velocities is limited .if an extreme deviation is encountered the command velocity is set to zero , i.e. , the movement of the axis is stopped .the observation of grbs and their afterglows in very - high energy gamma - rays is a key science goal for the magic telescope .given that alerts from satellite monitors provide grb positions a few seconds after their outburst via the _ gamma - ray burst coordination network _ , typical burst durations of 10 s to 100 s demand a fast positioning of the telescope .the current best value for the acceleration has been set to 11.7 mrad / s² .it is constrained by the maximum constant force which can be applied by the motors .consequently , the maximum allowed velocity can be derived from the distance between the end - switch activation and the position at which a possible damage to the telescope structure , e.g. ruptured cables , would happen . from these constraints ,the maximum velocity currently in use , 70.4 mrad / s , was determined .note that , as the emergency stopping distance evolves quadratically with the travel velocity , a possible increase of the maximum velocity would drastically increase the required braking distance .
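the tracking loop described above can be summarized in a short schematic . this is our own reconstruction , not the magic software : ` target_axis_angle ` stands for the full astrometric plus pointing - model transformation , the 5 s lookahead and 1 s update interval are taken from the text , and the tolerance used for the velocity cross - check is a placeholder of our choosing .

```python
# Schematic tracking loop (our reconstruction, not the actual control software).
# Every second, a command velocity is derived from the current axis feedback and
# the position the axis should reach 5 s ahead; it is sanity-checked against an
# independently estimated velocity and zeroed if the two disagree too strongly.
import time

LOOKAHEAD_S = 5.0            # aim for the target position 5 s in the future
UPDATE_S = 1.0               # command velocity recomputed once per second
MAX_DEVIATION_DEG_S = 0.05   # assumed absolute tolerance for the cross-check

def track(target_axis_angle, read_axis_feedback, estimate_axis_velocity,
          set_command_velocity, now=time.time):
    while True:
        t = now()
        current = read_axis_feedback()                   # deg, from shaft encoder
        wanted = target_axis_angle(t + LOOKAHEAD_S)      # deg, astrometry + model
        v_cmd = (wanted - current) / LOOKAHEAD_S         # deg / s
        v_est = estimate_axis_velocity(t)                # independent estimate
        if abs(v_cmd - v_est) > MAX_DEVIATION_DEG_S:
            v_cmd = 0.0                                  # stop axis on gross error
        set_command_velocity(v_cmd)
        time.sleep(UPDATE_S)
```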
as safety procedures require ,an emergency stop is completely controlled by the dkcs themselves with the feedback of the motor encoder , ignoring all other control elements .currently , automatic positioning by 180° in azimuth to the target position is achieved within 45 s .the positioning time in elevation is not critical in the sense that the probability to move a longer path in elevation than in azimuth is negligible . allowing the telescope drive to make use of the reverse mode , the requirement of reaching any position in the sky within 30 s is well met , as distances in azimuth are substantially shortened .the motor specifications allow for a velocity more than four times higher . in practice, the maximum possible velocity is limited by the acceleration force , at slightly more than twice the current value .the actual limiting factor is the braking distance that allows a safe deceleration without risking any damage to the telescope structure .with the upgraded magic ii drive system , during commissioning in 2008 august , a maximum acceleration and deceleration of 30 mrad / s² and 90 mrad / s² and a maximum velocity of 290 mrad / s and 330 mrad / s could be reached . with these values the limits of the motor power are exhausted .this allowed a movement of 180°/360° in azimuth within 20 s/33 s .the intrinsic mechanical accuracy of the tracking system is determined by comparing the current command position of the axes with the feedback values from the corresponding shaft encoders .these feedback values represent the actual position of the axes with highest precision whenever they change their feedback values . at these instances ,the control deviation is determined , representing the precision with which the telescope axes can be operated . in the case of an ideal mount this would define the tracking accuracy of the telescope . in figure [ figure5 ] the control deviation measured for 10.9 h of data taking in the night of 2007 july 22/23 and on the evening of july 23 is shown , expressed as absolute deviation on the sky taking both axes into account . in almost all cases it is well below the resolution of the shaft encoders , and in 80% of the time it does not exceed 1/8 of this value .this means that the accuracy of the motion control , based on the encoder feedback , is much better than 1 arcmin on the sky , which is roughly a fifth of the diameter of a pixel in the magic camera ( 6 arcmin ) . in the case of a real telescope , ultimate limits of the tracking precision are given by the precision with which the correct command value is known .its calibration is discussed hereafter .to calibrate the position command value , astrometric corrections ( converting the celestial target position into the target position of an ideal telescope ) and misalignment corrections ( converting it further into the target position of a real telescope ) have to be taken into account .the astrometric correction for the pointing and tracking algorithms is based on a library for calculations usually needed in positional astronomy , _ slalib _ . key features of this library are the numerical stability of the algorithms and their well - tested implementation .the astrometric corrections in use ( fig .[ figure6 ] ) , performed when converting a celestial position into the position as seen from earth 's center ( apparent position ) , take into account precession and nutation of the earth and annual aberration , i.e.
, apparent displacements caused by the finite speed of light combined with the motion of the observer around the sun during the year .next , the apparent position is transformed to the observer 's position , taking into account atmospheric refraction , the earth 's rotation , and diurnal aberration , i.e. , the motion of the observer around the earth 's rotation axis .some of these effects are so small that they are only relevant for nearby stars and optical astronomy .but as optical observations of such stars are used to _ train _ the misalignment correction , all these effects are taken into account .imperfections and deformations of the mechanical construction lead to deviations from an ideal telescope , including the non - exact alignment of axes , and deformations of the telescope structure . in the case of the magic telescopes the optical axis of the mirror is defined by an automatic alignment system .this active mirror control is programmed not to change the optical axis once defined , but only controls the optical point spread function of the mirror , i.e. , it does not change the center of gravity of the light distribution .this procedure is applied whenever the telescope is observing , including any kind of calibration measurement for the drive system . the precision of the axis alignment of the mirrors is better than 0.2 and can therefore be neglected .consequently , to assure reliable pointing and tracking accuracy , mainly the mechanical effects have to be taken into account .therefore the tracking software employs an analytical pointing model based on the tpoint telescope modeling software , also used for optical telescopes . this model , called _ pointing model _ , parameterizes deviations from the ideal telescope .calibrating the pointing model by mispointing measurements of bright stars , which allows the necessary corrections to be determined , is a standard procedure .once calibrated , the model is applied online .since an analytical model is used , the source of any deviation can be identified and traced back to components of the telescope mount . corrections are parameterized by alt - azimuthal terms , i.e. , derived from vector transformations within the proper coordinate system .the following possible misalignments are taken into account : zero point corrections ( _ index errors _ ) : : trivial offsets between the zero positions of the axes and the zero positions of the shaft encoders .azimuth axis misalignment : : the misalignment of the azimuth axis in north - south and east - west direction , respectively . for magic these corrections can be neglected .non - perpendicularity of axes : : deviations from right angles between any two axes in the system , namely ( 1 ) non - perpendicularity of azimuth and elevation axes and ( 2 ) non - perpendicularity of elevation and pointing axes . in the case of the magic telescope these corrections are strongly bound to the mirror alignment defined by the active mirror control .non - centricity of axes : : the once - per - revolution cyclic errors produced by de - centered axes .this correction is small , and thus difficult to measure , but the most stable correction throughout the years . bending of the telescope structure : : ( 1 ) a possible constant offset of the mast bending , and ( 2 ) a zenith angle dependent correction .the latter describes the camera mast bending , which originates from magic 's single thin - mast camera support strengthened by steel cables .
elevation hysteresis : : this is an offset correction introduced depending on the direction of movement of the elevation axis .it is necessary because the sliding bearing , having a stiff connection with the encoders , has such a high static friction that in case of reversing the direction of the movement , the shaft - encoder will not indicate any movement for a small and stable rotation angle , even though the telescope is rotating .since this offset is stable , it can easily be corrected after it is fully passed .the passage of the hysteresis is currently corrected offline only . since the primary feedback is located on the axis itself , corrections for irregularities of the chain mounting or sprocket wheels are unnecessary .another class of deformations of the telescope - support frame and the mirrors is non - deterministic and , consequently , poses an ultimate limit on the precision of the pointing . to determine the coefficients of a pointing model , calibration data is recorded .it consists of mispointing measurements depending on altitude and azimuth angle .bright stars are tracked with the telescope at positions uniformly distributed in local coordinates , i.e. , in altitude and azimuth angle .the real pointing position is derived from the position of the reflection of a bright star on a screen in front of the magic camera .the center of the camera is defined by leds mounted on an ideal circle around the camera center .having enough of these datasets available , correlating ideal and real pointing position , a fit of the coefficients of the model can be made , minimizing the pointing residual .a 0.0003 lux , 1/2 " high - sensitivity standard pal ccd camera ( type designation wat-902h ) equipped with a zoom lens ( type : computar ) is used for the mispointing measurements .the camera is read out at a rate of 25 frames per second using a standard frame - grabber card in a standard pc .the camera has been chosen because it provides adequate performance and easy readout , due to the use of standard components , at a very low price .the tradeoff for the high sensitivity of the camera is its high noise level in each single frame recorded .since there are no rapidly moving objects within the field of view , a high picture quality can be achieved by averaging typically 125 frames ( corresponding to 5 s ) .an example is shown in figure [ figure7 ] .this example also illustrates the high sensitivity of the camera , since both pictures of the telescope structure have been taken with the residual light of less than a half - moon . in the background individual stars can be seen .depending on the installed optics , stars up to magnitude 12 are visible . with our optics and a safe detection threshold the limiting magnitude is typically slightly above 9 for direct measurements and on the order of 4 - 5 for images of stars on the screen .an example of a calibration - star measurement is shown in figure [ figure8 ] . using the seven leds mounted on a circle around the camera center ,the position of the camera center is determined . only the upper half of the instrumented area is visible , since the lower half is covered by the lower lid , holding a special reflecting surface in the center of the camera .the led positions are evaluated by a simple cluster - finding algorithm looking at pixels more than three standard deviations above the noise level .the led position is defined as the center of gravity of its light distribution , and its search region by the surrounding black - colored boxes .
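the spot detection described above boils down to a noise estimate followed by a thresholded center - of - gravity computation inside each search region . the following numpy sketch is our own simplified illustration ( the fixed noise threshold , the region handling and the synthetic test image are ours , not from the paper ) :

```python
# Minimal sketch of the spot finder: noise estimate plus centre of gravity of all
# pixels more than 3 sigma above the noise, inside a predefined search region.
# Simplified illustration; the real analysis works on averaged video frames.
import numpy as np

def spot_center(image, region, noise_cut=30.0, n_sigma=3.0):
    """Return (row, col) centre of gravity of the bright spot in `region`.

    `region` is a (row_slice, col_slice) tuple; `noise_cut` is a fixed level
    below which pixels are assumed to be noise-dominated.
    """
    patch = image[region].astype(float)
    noise = patch[patch < noise_cut]          # noise-dominated pixels
    mean, rms = noise.mean(), noise.std()
    mask = patch > mean + n_sigma * rms
    if not mask.any():
        return None                           # no significant spot found
    rows, cols = np.nonzero(mask)
    weights = patch[mask] - mean
    r0 = np.average(rows, weights=weights) + region[0].start
    c0 = np.average(cols, weights=weights) + region[1].start
    return r0, c0

# Example on a synthetic frame with one bright spot
img = np.random.normal(10.0, 2.0, (120, 160))
img[40:44, 60:64] += 100.0
print(spot_center(img, (slice(30, 60), slice(50, 80))))   # ~ (41.5, 61.5)
```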
for simplicity the noise level is determined just by calculating the mean and the root - mean - square within the individual search regions below a fixed threshold dominated by noise . since three points are enough to define a circle , from all possible combinations of detected spots the corresponding circle is calculated . in case of misidentified leds , which sometimes occur due to light reflections from the telescope structure ,the radius of the circle will deviate from the predefined radius .thus , any such misidentified circles are discarded .the radius determination can be improved further by applying small offsets of the non - ideal led positions .the radius distribution is gaussian , with a resolution at the millimeter level on the camera plane .the center of the ring is calculated as the average of all circle centers after quality cuts . in this setup, the large number of leds guarantees operation even in case one led could not be detected due to damage or scattered light . to find the spot of the reflected star itself , the same cluster - finder is used to determine its center of gravity .this gives reliable results even in case of saturation .only very bright stars , brighter than magnitude 1.0 , are found to saturate the ccd camera asymmetrically . using the position of the star , with respect to the camera center , the pointing position corresponding to the camera center is calculated .this position is stored together with the readout from the position feedback system .the difference between the telescope pointing position and the feedback position is described by the pointing model . investigating the dependence of these differences on zenith and azimuth angle , the correction terms of the pointing model can be determined .its coefficients are fit by minimizing the absolute residuals on the celestial sphere .figure [ figure9 ] shows the residuals , taken between 2006 october and 2007 july , before and after application of the fit of the pointing model . for convenience ,offset corrections are applied to the residuals before correction .thus , the red curve is a measurement of the alignment quality of the structure , i.e. , the pointing accuracy with offset corrections only . by fitting a proper model ,the pointing accuracy can be improved to a value below the intrinsic resolution of the system , i.e. , below shaft - encoder resolution . in more than 83% of all cases the tracking accuracy is better than 1.3 arcmin and it hardly ever exceeds 2.5 arcmin .the few datasets exceeding 2.5 arcmin are very likely due to imperfect measurement of the real pointing position of the telescope , i.e. , the center of gravity of the star light .the average absolute correction applied ( excluding the index error ) is on the order of 4 arcmin .given the size , weight and structure of the telescope this proves a very good alignment and low sagging of the structure .the elevation hysteresis , which is intrinsic to the structure , the non - perpendicularity and the non - centricity of the axes are all on the order of 3 arcmin , while the azimuth axis misalignment is 0.6 arcmin .these numbers are well in agreement with the design tolerances of the telescope . the ultimate limit on the achievable pointing precision is set by effects which are difficult to correlate or measure , and by non - deterministic deformations of the structure or mirrors .for example , the azimuth support consists of a railway rail with some small deformations in height due to the load , resulting in a wavy movement difficult to parameterize .
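the led ring fit sketched above ( a circle through every triplet of detected spots , rejection of triplets whose radius deviates from the nominal ring radius , and averaging of the surviving centers ) can be written down compactly . the snippet below is our own illustration ; the nominal radius , tolerance and test geometry are arbitrary values , not the ones used on the telescope .

```python
# Sketch of the LED ring fit: a circle is computed from every combination of three
# detected spots; combinations whose radius deviates from the nominal ring radius
# are discarded, and the ring centre is the mean of the remaining circle centres.
import itertools
import numpy as np

def circle_from_3_points(p1, p2, p3):
    """Centre and radius of the circle through three 2-d points (None if collinear)."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    centre = np.array([ux, uy])
    return centre, np.linalg.norm(centre - np.array(p1))

def ring_centre(spots, nominal_radius, tol):
    centres = []
    for triplet in itertools.combinations(spots, 3):
        fit = circle_from_3_points(*triplet)
        if fit is not None and abs(fit[1] - nominal_radius) < tol:
            centres.append(fit[0])     # keep only circles close to the known radius
    return np.mean(centres, axis=0) if centres else None

# Example: 7 LEDs on a circle of radius 100 around (5, -3), plus one outlier spot
angles = np.linspace(0, np.pi, 7)
leds = [(5 + 100*np.cos(a), -3 + 100*np.sin(a)) for a in angles]
print(ring_centre(leds + [(40.0, 10.0)], nominal_radius=100.0, tol=2.0))  # ~ [5, -3]
```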
for the wheels on the six bogeys , simple , not precisely machined crane wheels were used , which may amplify horizontal deformations .other deformations are caused by temperature changes and wind loads , which are difficult to control for telescopes without a dome , and which can not be modeled .it should be noted that the azimuth structure can change its diameter by up to 3 cm due to day - night temperature differences , indicating that thermal effects have a non - negligible and non - deterministic influence .like every two - axis mount , an alt - azimuth mount has a blind spot near its upward position , resulting from misalignments of the axes which are impossible to correct by moving one axis or the other . from the size of the applied corrections it can be derived that there is a small blind spot around zenith .although the magic drive system is powerful enough to keep on track pointing about 6 arcmin away from zenith , for safety reasons , i.e. , to avoid fast movements under normal observation conditions , a corresponding observation limit around zenith has been set .such fast movements are necessary to change the azimuth position from moving the telescope upwards in the east to downwards in the south . in the case of an ideal telescope pointing exactly at zenith , even an infinitely fast azimuth movement would be required . with each measurement of a calibration - star also the present pointing uncertainty is recorded .this allows for monitoring of the pointing quality and for offline correction . in figure [ figure10 ] the evolution of the measured residuals over the years is shown ( figure [ figure10 ] caption : since the distribution is asymmetric , quantiles are shown , from bottom to top , at 5% , 13% , 32% , 68% , 87% and 95% ; the dark grey region corresponds to the region between the 32% and 68% quantiles ) .the continuous monitoring has been started in march 2005 and is still ongoing .quantiles are shown since the distribution can be asymmetric depending on how the residuals are distributed on the sky .the points have been grouped , where the grouping reflects data taken under the same conditions ( pointing model , mirror alignment , etc . ) .it should be noted that the measured residuals depend on zenith and azimuth angle , i.e. , the distributions shown are biased due to inhomogeneous distributions on the sky in case of low statistics .therefore the available statistics is given in table [ table2 ] ( table [ table2 ] caption : available statistics corresponding to the distributions shown in figure [ figure10 ] ; especially in cases of low statistics the shown distribution can be influenced by an inhomogeneous distribution of the measurements on the local sky ; the dates given correspond to dates for which a change in the pointing accuracy , for example a change to the optical axis or the application of a new pointing model , is known ) . the mirror focusing can influence the alignment of the optical axis of the telescope , i.e. , it can modify the pointing model .therefore a refocusing of the mirrors can worsen the tracking accuracy , which is later corrected by a new pointing model .
although the automatic mirror control is programmed such that a new calibration should not change the center of gravity of the light distribution , it happened sometimes in the past due to software errors .the determination of the pointing model also relies on a good statistical basis , because the measured residuals are of a similar magnitude as the accuracy of a single calibration - star measurement .the visible improvements and deteriorations are mainly a consequence of new mirror focusing and the subsequent implementation of new pointing models .the improvement over the past year is explained by the gain in statistics . on average the systematic pointing uncertainty was always better than three shaft - encoder steps ( corresponding to 4 arcmin ) , most of the time better than 2.6 arcmin and well below one shaft - encoder step , i.e. 1.3 arcmin , in the past year .except for changes to the pointing model or the optical axis , as indicated by the bin edges , no degradation or change with time of the pointing model or a worsening of the limit given by the telescope mechanics could be found .with the aim of reaching lower energy thresholds , the next generation of cherenkov telescopes will also include larger and heavier ones .therefore more powerful drive systems will be needed .the scalable drive system of the magic telescope is suited to meet this challenge . with its synchronous motors and their master - slave setup ,it can easily be extended to larger telescopes at moderate costs , or even scaled down to smaller ones using less powerful components .consequently , telescopes in future projects , with presumably different sizes , can be driven by similar components , resulting in a major simplification of maintenance . with the current setup ,a tracking accuracy at least of the order of the shaft - encoder resolution is guaranteed .the pointing accuracy , already including all possible pointing corrections , is dominated by dynamic and unpredictable deformations of the mount , e.g. , temperature expansion .currently , efforts are ongoing to implement the astrometric subroutines as well as the application of the pointing model directly into the programmable logic controller .a first test will be carried out within the dwarf project soon .the direct advantage is that the need for a control pc is eliminated . additionally , with a more direct communication between the algorithms calculating the nominal position of the telescope mechanics and the control loop of the drive controller , a real - time , and thus more precise , position control can be achieved . as a consequence , the position controller can directly be addressed , even when tracking , and the outermost position control - loop is closed internally in the drive controller .this will ensure an even more accurate and stable motion .interferences from external sources , e.g.
wind gusts , could be counteracted at the moment of appearance by the control on very short timescales , on the order of milliseconds .an indirect advantage is that with a proper setup of the control loop parameters , such a control is precise and flexible enough that a cross - communication between the master and the slaves can also be omitted .since all motors act as their own master , in such a system a broken motor can simply be switched off or mechanically decoupled without influencing the general functionality of the system .an upgrade of the magic i drive system according to the improvements applied for magic ii is ongoing .the scientific requirements demand a powerful , yet accurate drive system for the magic telescope . in its hardware installation and software implementation ,the installed drive system exceeds its design specifications as given in section [ design ] . at the same time the system performs reliably and stably , showing no deterioration after five years of routine operation .the mechanical precision of the motor movement is almost ten times better than the readout on the telescope axes .the tracking accuracy is dominated by random deformations and hysteresis effects of the mount , but these are still negligible compared to the measurement of the position of the telescope axes .the system features integrated tools , like an analytical pointing model . fast positioning for gamma - ray burst follow - up is achieved on average within less than 45 seconds , or , if movements _ across the zenith _ are allowed , 30 seconds .thus , the drive system makes magic the best suited telescope for observations of these phenomena at very high energies . for the second phase of the magic project and particularly for the second telescope ,the drive system has been further improved . by design ,the drive system is easily scalable from its current dimensions to larger and heavier telescope installations as required for future projects .the improved stability is also expected to meet the stability requirements necessary when operating a larger number of telescopes .the authors acknowledge the support of the magic collaboration , and thank the iac for providing excellent working conditions at the observatorio del roque de los muchachos .the magic project is mainly supported by bmbf ( germany ) , mci ( spain ) , infn ( italy ) .we thank the construction department of the mpi for physics for their help in the design and installation of the drive system , as well as eckart lorenz for some important comments concerning this manuscript . r. m. w. acknowledges financial support by the mpg .his research is also supported by the dfg cluster of excellence `` origin and structure of the universe '' .e. lorenz , new astron . 48 ( 2004 ) 339 . j. cortina et al .( magic collab . ) , in : proc .ray conf . ,pune , india , august 2005 , vol . 5 , 359 .f. goebel ( magic collab . ) , in : proc .ray conf . , july 2007 , merida , mexico , preprint ( arxiv:0709.2605 ) . e. aliu et al .( magic collab . ) , science , 16 october 2008 ( 10.1126/science.1164718 ) . j. albert et al .( magic collab . ) , apj 642 ( 2006 ) l119 .j. albert et al .( magic collab . ) , apj 667 ( 2008 ) l21 .j. albert et al .( magic collab . ) , science 320 ( 2008 ) 1752 .r. s. somerville , j. r. primack , and s. m. faber , mon . not . r . astron . soc . 320 ( 2001 ) 504 .kneiske , t. m. , 2008 , chin .j. astron .suppl . 8 , 219 . m. g. hauser and e. dwek , ara&a 39 ( 2001 ) 249 . t. m. kneiske , t. bretz , k. mannheim , and d. h. hartmann , a&a 413 ( 2004 ) 807 .k.
mannheim , d. hartmann , and b. funk , apj 467 ( 1996 ) 532. j. albert et al .( magic collab . ) , apj 667 ( 2007 ) 358 .w. s. paciesas et al ., apjs 122 ( 1999 ) 465 .t. bretz , d. dorner , and r. wagner , in : proc .ray conf . , august 2003 , tsukuba , japan , vol . 5 , 2943 . t. bretz , d. dorner , r. m. wagner , and b. riegel , in : proc . towards a network of atmospheric cherenkov detectors vii , april 2005 , palaiseau , france, http://root.cern.ch .p. t. wallace , tpoint a telescope pointing analaysis system , 2001 t. bretz and r. wagner , in : proc .ray conf . , august 2003 , tsukuba , japan , vol . 5 , 2947t. bretz and d. dorner , in : proc . towards a network of atmospheric cherenkov detectors vii , april 2005 , palaiseau ,t. bretz and d. dorner , in : international symposium on high energy gamma - ray astronomy , july 2008 .t. bretz , in : proc .ray conf . , pune , india , august 2005 , vol . 4 , 315 .d. dorner and t. bretz , in : proc . towards a network of atmospheric cherenkov detectors vii , april 2005 , palaiseau , france , p. 571575 .w. fricke , h. schwan , t. lederle , et al ., verffentlichungen des astronomischen rechen - instituts heidelberg 32 ( 1988 ) 1 .gamma - ray burst coordination network + http://gcn.gsfc.nasa.gov . c. baixeras et al .( magic coll . ) , in : proc .ray conf . ,pune , india , august 2005 , vol . 5 , 227 .wallace , slalib position astronomy library 2.5 - 3 , programmer s manual , 2005 , http://star-www.rl.ac.uk/star/docs/sun67.htx/sun67.html .b. riegel , t. bretz , d. dorner , and r. m. wagner , in : proc .ray conf . ,pune , india , august 2005 , vol . 5 , 219 .t. bretz et al ., in : proc . of workshop on blazar variability across the electromagnetic spectrum , pos(blazars2008)074 , palaiseau , france
|
the magic telescope is an imaging atmospheric cherenkov telescope , designed to observe very high energy gamma - rays while achieving a low energy threshold . one of the key science goals is fast follow - up of the enigmatic and short lived gamma - ray bursts . the drive system for the telescope has to meet two basic demands : ( 1 ) during normal observations , the 72-ton telescope has to be positioned accurately , and has to track a given sky position with high precision at a typical rotational speed in the order of one revolution per day . ( 2 ) for successfully observing grb prompt emission and afterglows , it has to be powerful enough to position to an arbitrary point on the sky within a few ten seconds and commence normal tracking immediately thereafter . to meet these requirements , the implementation and realization of the drive system relies strongly on standard industry components to ensure robustness and reliability . in this paper , we describe the mechanical setup , the drive control and the calibration of the pointing , as well as present measurements of the accuracy of the system . we show that the drive system is mechanically able to operate the motors with an accuracy even better than the feedback values from the axes . in the context of future projects , envisaging telescope arrays comprising about 100 individual instruments , the robustness and scalability of the concept is emphasized . magic , drive system , iact , scalability , calibration , fast positioning
|
in everyday life we constantly face tasks we must perform in the presence of sensory uncertainty . a natural and efficient strategy is then to use probabilistic computation .behavioral experiments have established that humans and animals do in fact use probabilistic rules in sensory , motor and cognitive domains . however , the implementation of such computations at the level of neural circuits is not well understood . in this work ,we ask how distributed neural computations can consolidate incoming sensory information and reformat it so it is accessible for many tasks .more precisely , how can the brain simultaneously infer marginal probabilities in a probabilistic model of the world ?previous efforts to model marginalization in neural networks using distributed codes invoked limiting assumptions , either treating only a small number of variables , allowing only binary variables , or restricting interactions .real - life tasks are more complicated and involve a large number of variables that need to be marginalized out , requiring a more general inference architecture . here we present a distributed , nonlinear , recurrent network of neurons that performs inference about many interacting variables .there are two crucial parts to this model : the representation and the inference algorithm .we assume that brains represent probabilities over individual variables using probabilistic population codes ( ppcs ) , which were derived by using bayes ' rule on experimentally measured neural responses to sensory stimuli . here for the first time we link multiple ppcs together to construct a large - scale graphical model . for the inference algorithm ,many researchers have considered loopy belief propagation ( lbp ) to be a simple and efficient candidate algorithm for the brain . however, we will discuss one particular feature of lbp that makes it neurally implausible .instead , we propose that an alternative formulation of lbp known as tree - based reparameterization ( trp ) , with some modifications for continuous - time operation at two timescales , is well - suited for neural implementation in population codes .we describe this network mathematically below , but the main conceptual ideas are fairly straightforward : multiplexed patterns of activity encode statistical information about subsets of variables , and neural interactions disseminate these statistics to all other encoded variables for which they are relevant .in section [ ppc ] we review key properties of our model of how neurons can represent probabilistic information through probabilistic population codes .section [ trp ] reviews graphical models , loopy belief propagation , and tree - based reparameterization . in section [ sec : neuraltrp ] , we merge these ingredients to model how populations of neurons can represent and perform inference on large multivariate distributions .section [ experiments ] describes experiments to test the performance of the network .we summarize and discuss our results in section [ conclusion ] .neural responses vary from trial to trial , even to repeated presentations of the same stimulus .this variability can be expressed as the likelihood function p( r | s ) , the probability of a neural response r given a stimulus s .experimental data from several brain areas responding to simple stimuli suggests that this variability often belongs to the exponential family of distributions with linear sufficient statistics , p( r | s ) ∝ φ( r ) exp( h( s ) · r ) , where h( s ) depends on the stimulus - dependent mean and fluctuations of the neuronal response and φ( r ) is independent of the stimulus .
for a conjugate prior , the posterior distribution will also have this general form , i.e. , the log - posterior is linear in the neural response r .this neural code is known as a linear ppc : it is a probabilistic population code because the population activity collectively encodes the stimulus probability , and it is linear because the log - likelihood is linear in r . in this paper , we assume responses are drawn from this family , although incorporation of more general ppcs with nonlinear sufficient statistics is possible .an important property of linear ppcs , central to this work , is that different projections of the population activity encode the natural parameters of the underlying posterior distribution .for example , if the posterior distribution is gaussian ( figure [ fig : ppcfig ] ) , then two linear projections of the activity encode the linear and quadratic natural parameters of the posterior .these projections are related to the expectation parameters , the mean and the variance , in the usual way for a gaussian : the variance is the negative reciprocal of twice the quadratic natural parameter , and the mean is the linear natural parameter multiplied by the variance .a second important property of linear ppcs is that the variance of the encoded distribution is inversely proportional to the overall amplitude of the neural activity .intuitively , this means that more spikes means more certainty ( figure [ fig : ppcfig ] ) .the most fundamental probabilistic operations are the product rule and the sum rule .linear ppcs can perform both of these operations while maintaining a consistent representation , a useful feature for constructing a model of canonical computation . for a log - linear probability code like linear ppcs, the product rule corresponds to weighted summation of neural activities . in contrast , to use the sum rule to marginalize out variables , linear ppcs require nonlinear transformations of population activity . specifically , a quadratic nonlinearity with divisive normalization performs near - optimal marginalization in linear ppcs .quadratic interactions arise naturally through coincidence detection , and divisive normalization is a nonlinear inhibitory effect widely observed in neural circuits .alternatively , near - optimal marginalizations on ppcs can also be performed by more general nonlinear transformations . in sum ,ppcs provide a biologically compatible representation of probabilistic information . ( figure [ fig : ppcfig ] caption : ( a ) two projections of the population activity encode the natural parameters of the posterior . ( b ) corresponding posteriors over stimulus variables determined by the responses in panel a ; the gain or overall amplitude of the population code is inversely proportional to the variance of the posterior distribution . )
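as a concrete toy illustration of these two properties , the snippet below builds a linear ppc for a gaussian posterior and checks that ( i ) the natural parameters are linear read - outs of the activity and ( ii ) summing the activities of two populations encoding the same variable multiplies the encoded posteriors ( precisions and precision - weighted means add ) . the tuning weights and activity patterns are deliberately simplistic choices of ours , not the ones used in the paper .

```python
# Toy illustration of a linear PPC for a Gaussian posterior (our construction).
# Each neuron i contributes h_i(s) = a_i * s + b_i * s**2 to the log posterior,
# so log p(s | r) = (r.a) s + (r.b) s**2 + const.  The two projections r.a and
# r.b are the natural parameters; summing activities multiplies posteriors.
import numpy as np

rng = np.random.default_rng(0)
n = 50
a = rng.normal(size=n)                  # linear sufficient-statistic weights
b = -np.abs(rng.normal(size=n))         # quadratic weights (kept negative here)

def decode(r):
    eta1, eta2 = r @ a, r @ b           # natural parameters from linear read-outs
    var = -1.0 / (2.0 * eta2)
    return eta1 * var, var              # mean, variance of the encoded Gaussian

r1 = np.abs(rng.normal(1.0, 0.3, n))    # two arbitrary non-negative activity patterns
r2 = np.abs(rng.normal(2.0, 0.3, n))

m1, v1 = decode(r1)
m2, v2 = decode(r2)
m12, v12 = decode(r1 + r2)              # product rule = summed population activity

# the precisions (1/var) and precision-weighted means should simply add:
print(np.isclose(1/v12, 1/v1 + 1/v2), np.isclose(m12/v12, m1/v1 + m2/v2))  # True True
```

note that scaling an activity pattern up by a gain factor scales the quadratic natural parameter and hence shrinks the decoded variance , which is the " more spikes means more certainty " property mentioned above .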
in a pairwise model ,we allow singleton factors for variable nodes in a set of vertices , and pairwise interaction factors for pairs in the set of edges that connect those vertices . the joint distribution is then the product of these singleton and pairwise compatibility functions , up to normalization .the inference problem of interest in this work is to compute the marginal distribution for each variable .this task is generally intractable .however , the factorization structure of the distribution can be used to perform inference efficiently , either exactly in the case of tree graphs , or approximately for graphs with cycles .one such algorithm is called belief propagation ( bp ) .bp iteratively passes information along the graph in the form of messages from one node to another , using only local computations that summarize the relevant aspects of other messages upstream in the graph ; each message at a given iteration is computed from the factors local to the sending node and from the messages that arrived at the sending node from its neighbors at the previous iteration .the estimated marginal , called the ` belief ' at a node , is proportional to the local evidence at that node and all the messages coming into that node .similarly , the messages themselves are determined self - consistently by combining incoming messages except for the previous message from the target node .this message exclusion is critical because it prevents evidence previously passed by the target node from being counted as if it were new evidence .this exclusion only prevents overcounting on a tree graph , and is unable to prevent overcounting of evidence passed around loops .for this reason , bp is exact for trees , but only approximate for general , loopy graphs .if we use this algorithm anyway , it is called ` loopy ' belief propagation ( lbp ) , and it often has quite good performance .multiple researchers have been intrigued by the possibility that the brain may perform lbp , since it gives `` a principled framework for propagating , in parallel , information and uncertainty between nodes in a network '' .
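for readers who want the message - passing equations in executable form , here is a minimal loopy bp implementation for discrete pairwise models . it is a generic textbook - style sketch written by us , not code from the paper ; it makes the " exclude the previous message from the target " rule explicit .

```python
# Minimal loopy belief propagation for a discrete pairwise model (generic sketch).
# psi[i]       : unary potential for node i, shape (K,)
# psi2[(i, j)] : pairwise potential, shape (K, K), stored once per edge
import numpy as np

def loopy_bp(psi, psi2, n_iters=50):
    nodes = list(psi.keys())
    edges = list(psi2.keys())
    m = {}                                   # messages m[(i, j)]: node i -> node j
    neighbors = {i: [] for i in nodes}
    for i, j in edges:
        K = len(psi[i])
        m[(i, j)] = np.ones(K) / K
        m[(j, i)] = np.ones(K) / K
        neighbors[i].append(j)
        neighbors[j].append(i)

    for _ in range(n_iters):
        new_m = {}
        for (i, j) in m:                     # update message i -> j
            pot = psi2[(i, j)] if (i, j) in psi2 else psi2[(j, i)].T
            # product of unary potential and all incoming messages EXCEPT from j
            prod = psi[i].copy()
            for k in neighbors[i]:
                if k != j:
                    prod *= m[(k, i)]
            msg = pot.T @ prod               # sum over the states of node i
            new_m[(i, j)] = msg / msg.sum()
        m = new_m

    beliefs = {}
    for i in nodes:
        b = psi[i].copy()
        for k in neighbors[i]:
            b *= m[(k, i)]
        beliefs[i] = b / b.sum()
    return beliefs

# Example: 3 binary variables on a loop with attractive couplings
attract = np.array([[2.0, 1.0], [1.0, 2.0]])
psi = {0: np.array([0.7, 0.3]), 1: np.array([0.5, 0.5]), 2: np.array([0.5, 0.5])}
psi2 = {(0, 1): attract, (1, 2): attract, (0, 2): attract}
print(loopy_bp(psi, psi2))
```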
despite the conceptual appeal of lbp , it is important to get certain details correct : in an inference algorithm described by nonlinear dynamics , deviations from ideal behavior could in principle lead to very different outcomes .one critically important detail is that each node must send different messages to different targets to prevent overcounting .this exclusion can render lbp neurally implausible , because neurons can not readily send different output signals to many different target neurons .some past work simply ignores the problem ; the resultant overcounting destroys much of the inferential power of lbp , often performing worse than more naive algorithms like mean - field inference .one better option is to use different readouts of population activity for different targets , but this approach is inefficient because it requires many readout populations for messages that differ only slightly , and requires separate optimization for each possible target .other efforts have avoided the problem entirely by performing only unidirectional inference of low - dimensional variables that evolve over time .appealingly , one can circumvent all of these difficulties by using an alternative formulation of lbp known as tree - based reparameterization ( trp ) .insightful work by wainwright , jaakkola , and willsky revealed that belief propagation can be understood as a convenient way of refactorizing a joint probability distribution , according to approximations of local marginal probabilities .for pairwise interactions , the joint distribution can be rewritten as a product of singleton ` pseudomarginal ' distributions , one per variable , and of edge factors given by the joint pseudomarginal over the two variables on each edge divided by the product of the corresponding singleton pseudomarginals ( figure [ fig : trp2]a , b ) ; these pseudomarginals are the outcome of loopy belief propagation .the name pseudomarginal comes from the fact that these quantities are always locally consistent with being marginal distributions , but they are only globally consistent with the true marginals when the graphical model is tree - structured . these pseudomarginals can be constructed iteratively as the true marginals of a different joint distribution on an isolated tree - structured subgraph .compatibility functions from factors remaining outside of the subgraph are collected in a residual term .this regrouping leaves the joint distribution unchanged .the factors of the tree - structured part are then rearranged by computing the true marginals on its subgraph , again preserving the joint distribution . in subsequent updates ,we iteratively refactorize using the marginals along different tree subgraphs ( figure [ fig : trp2]c ) . ( figure [ fig : trp2 ] caption : ( a ) a pairwise model on a tree graph . ( b ) an alternative parameterization of the same distribution in terms of the marginals . ( c ) two trp updates for a nearest - neighbor grid of variables . ) typical lbp can be interpreted as a sequence of local reparameterizations over just two neighboring nodes and their corresponding edge .pseudomarginals are initialized at time zero using the original singleton and pairwise factors .
at iteration ,the node and edge pseudomarginals are computed by exactly marginalizing the distribution built from previous pseudomarginals at iteration : notice that , unlike the original form of lbp , operations on graph neighborhoods do not differentiate between targets .trp s operation only requires updating pseudomarginals , in place , using local information .these are appealing properties for a candidate brain algorithm .this representation is also nicely compatible with the structure of ppcs : different projections of the neural activity encode the natural parameters of an exponential family distribution .it is thus useful to express the pseudomarginals and the trp inference algorithm using vectors of sufficient statistics and natural parameters for each clique : . for a model with at most pairwise interactions, the trp updates ( [ bptrp ] ) can be expressed in terms of these natural parameters as where is the number of neighbors of , and , and are matrices and nonlinear functions ( for vertices and edges ) that are determined by the particular graphical model ( see below ) .since the natural parameters reflect log - probabilities , the product rule for probabilities becomes a linear sum in , while the sum rule for probabilities must be implemented by nonlinear operations on . in the concrete case of a gaussian graphical model ,the joint distribution is given by , where and are the natural parameters , and the linear and quadratic functions and are the sufficient statistics .when we reparameterize this distribution by pseudomarginals , we again have linear and quadratic sufficient statistics : two for each node , , and five for each edge , .each of these vectors of sufficient statistics has its own vector of natural parameters , and . to approximate the marginal probabilities ,the trp algorithm initializes the pseudomarginals to and . to update , we must extract the matrices and nonlinear functions that recover the univariate marginal distribution of a bivariate gaussian . for , this marginalis using this , we can determine the form of the weight matrices and the nonlinear functions in the trp updates ( [ paramupdates ] ) . where is the element of .notice that these nonlinear functions are all quadratic functions with a linear divisive normalization .an important feature of the trp updates is that they circumvent the ` message exclusion ' problem of lbp .the trp update for the singleton terms , ( [ bptrp ] ) and ( [ paramupdates ] ) , includes contributions from _ all the neighbors _ of a given node .there is no free lunch , however , and the price is that the updates at time depend on previous pseudomarginals at two different times , and .the latter update is therefore instantaneous information transmission , which is not biologically feasible . to overcome this limitation ,we observe that the brain can use fast and slow timescales instead of instant and delayed signals .we convert the update equations to continuous time , and introduce auxiliary variables which are lowpass - filtered versions of on a slow timescale : .the nonlinear dynamics of ( [ paramupdates ] ) are then updated on a faster timescale according to where the nonlinear terms depend only on the slower , delayed activity . 
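as a concrete illustration of the marginalization nonlinearity, here is the univariate marginal of a bivariate gaussian written in the information form p(x) ∝ exp(h·x − xᵀ j x / 2), which is one standard choice of natural parameters (the paper's exact sign and scaling conventions may differ); both outputs are quadratic in the inputs divided by a linear term, i.e. a quadratic nonlinearity with divisive normalization, as stated above:

```python
import numpy as np

def gaussian_marginal(h, J):
    """marginalize x_j out of a bivariate gaussian in information form,
    p(x_i, x_j) ∝ exp(h·x - 0.5 * x^T J x), returning the natural parameters
    (h_i', J_ii') of the marginal over x_i.  both outputs are quadratic in the
    inputs with a linear divisive normalization."""
    h_i, h_j = h
    J_ii, J_ij, J_jj = J[0, 0], J[0, 1], J[1, 1]
    return h_i - J_ij * h_j / J_jj, J_ii - J_ij ** 2 / J_jj

# sanity check against moment matching
h = np.array([1.0, -0.5])
J = np.array([[2.0, 0.6], [0.6, 1.5]])
Sigma = np.linalg.inv(J)
mu = Sigma @ h
h_m, J_m = gaussian_marginal(h, J)
assert np.isclose(mu[0], h_m / J_m) and np.isclose(Sigma[0, 0], 1.0 / J_m)
```

the mean and variance of the marginal are then h_m / J_m and 1 / J_m, respectively.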
by concatenating these two sets of parameters , , we obtain a coupled multidimensional dynamical system which represents the approximation to the trp iterations : here the weight matrix and the nonlinear function inherit their structure from the discrete - time updates and the lowpass filtering at the fast and slow timescales . to complete our neural inference network , we now embed the nonlinear dynamics ( [ dseq ] ) into the population activity . since different projections of the neural activity in a linear ppc encode natural parameters of the underlying distribution , we map neural activity onto by where is a rectangular embedding matrix that projects the natural parameters and their low - pass versions into the neural response space .these parameters can be decoded from the neural activity as , where is the pseudoinverse of . applying this basis transformation to ( [ dseq ] ) , we have .we then obtain the general form of the updates for the neural activity where and correspond to the linear and nonlinear computational components that integrate and marginalize evidence , respectively .the nonlinear function on inherits the structure needed for the natural parameters , such as a quadratic polynomial with a divisive normalization used in low - dimensional gaussian marginalization problems , but now expanded to high - dimensional graphical models .figure [ nn ] depicts the network architecture for the simple graphical model from figure [ fig : trp2]a , both when there are distinct neural subpopulations for each factor ( figure [ nn]a ) , and when the variables are fully multiplexed across the entire neural population ( figure [ nn]b ) .these simple , biologically - plausible neural dynamics ( [ eq : neuraltrp ] ) represent a powerful , nonlinear , fully - recurrent network of ppcs which implements the trp update equations on an underlying graphical model .a. ( * b * ) a cartoon shows how the same distribution can be represented as distinct projections of the distributed neural activity , instead of as distinct populations . in both cases , since the neural activities encode log - probabilities , linear connections are responsible for integrating evidence while nonlinear connections perform marginalization.,scaledwidth=90.0% ]we evaluate the performance of our neural network on a set of small gaussian graphical models with up to 400 interacting variables .the networks time constants were set to have a ratio of .figure [ fig : dynamics ] shows the neural population dynamics as the network performs inference , along with the temporal evolution of the corresponding node and pairwise means and covariances .the neural activity exhibits a complicated timecourse , and reflects a combination of many natural parameters changing simultaneously during inference .this type of behavior is seen in neural activity recorded from behaving animals .a.,scaledwidth=80.0% ] figure [ fig : performance ] shows that our recurrent neural network accurately infers the marginal probabilities , and reaches almost the same conclusions as loopy belief propagation .the data points are obtained from multiple simulations with different graph topologies , including graphs with many loops .figure [ fig : noiseperformance ] verifies that the network is robust to noise even when there are few neurons per inferred parameter ; adding more neurons improves performance since the noise can be averaged away . 
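the claim that a linear readout of a ppc recovers the encoded parameters, and that population noise is averaged away as the number of neurons per parameter grows, can be checked with a toy embedding; the random choice of the embedding matrix and the population sizes below are hypothetical, not the model used for the figures:

```python
import numpy as np

rng = np.random.default_rng(0)
n_params, n_neurons = 20, 200                   # ten neurons per encoded parameter
U = rng.standard_normal((n_neurons, n_params))  # hypothetical embedding matrix
U_pinv = np.linalg.pinv(U)

theta = rng.standard_normal(n_params)           # natural parameters (+ slow copies)
r = U @ theta                                   # distributed population activity
assert np.allclose(U_pinv @ r, theta)           # exact linear decode (full column rank)

# independent noise on the population is attenuated in the decoded parameters
noise = 0.5 * rng.standard_normal(n_neurons)
decode_error = np.linalg.norm(U_pinv @ (r + noise) - theta)
print(decode_error)                             # shrinks as neurons per parameter grow
```

without noise the decode is exact because the embedding has full column rank, and with noise the decoded error shrinks roughly as one over the square root of the number of neurons per parameter, consistent with the robustness results quoted above.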
anddensely connected graphs with up to 25 variables .the expectation parameters ( means , covariances ) of the pseudomarginals closely match the corresponding parameters for the true marginals . ]square grid , in the presence of independent spatiotemporal gaussian noise of standard deviation 0.1 times the standard deviation of each signal .( * b * ) expectation parameters ( means , variances ) of the node pseudomarginals closely match the corresponding parameters for the true marginals , despite the noise .results are shown for one or five neurons per parameter in the graphical model , and for no noise ( i.e. infinitely many neurons).,scaledwidth=100.0% ]we have shown how a biologically - plausible nonlinear recurrent network of neurons can represent a multivariate probability distribution using population codes , and can perform inference by reparameterizing the joint distribution to obtain approximate marginal probabilities .our network model has desirable properties beyond those lauded features of belief propagation .first , it allows for a thoroughly distributed population code , with many neurons encoding each variable and many variables encoded by each neuron .this is consistent with neural recordings in which many task - relevant features are multiplexed across a neural population .second , the network performs inference in place , without using a distinct neural representation for messages , and avoids the biological implausibility associated with sending different messages about every variable to different targets .this virtue comes from exchanging multiple messages for multiple timescales .it is noteworthy that allowing two timescales prevents overcounting of evidence on loops of length two ( target to source to target ) .this suggests a novel role of memory in static inference problems : a longer memory could be used to discount past information sent at more distant times , thus avoiding the overcounting of evidence that arises from loops of length three and greater. it may therefore be possible to develop reparameterization algorithms with all the convenient properties of lbp but with improved performance on loopy graphs .previous results show that the quadratic nonlinearity with divisive normalization is convenient and biologically plausible interpretable , but this precise form is not necessary : other pointwise neuronal nonlinearities are capable of producing the same high - quality marginalizations in ppcs . in a distributed code ,the precise nonlinear form at the neuronal scale is not important as long as the effect on the parameters is the same .more generally , however , different nonlinear computations on the parameters implement different approximate inference algorithms .the distinct behaviors of such algorithms as mean - field inference , generalized belief propagation , and others arise from differences in their nonlinear transformations .even gibbs sampling can be described as a noisy nonlinear message - passing algorithm .although lbp and its generalizations have strong appeal , we doubt the brain will use this algorithm exactly. the real nonlinear functions in the brain may implement even smarter algorithms . 
to identify the brain s algorithm, it may be more revealing to measure how information is represented and transformed in a low - dimensional latent space embedded in the high - dimensional neural responses than to examine each neuronal nonlinearity in isolation .the present work is directed toward this challenge of understanding computation in this latent space .it provides a concrete example showing how distributed nonlinear computation can be distinct from localized neural computations .learning this computation from data will be a key challenge for neuroscience . infuture work we aim to recover the latent computations of our network from artificial neural recordings generated by the model .successful model recovery would encourage us to apply these methods to large - scale neural recordings to uncover key properties of the brain s distributed nonlinear computations .* acknowledgments : * xp and rr were supported by a grant from the mcnair foundation and by the intelligence advanced research projects activity ( iarpa ) via department of interior / interior business center ( doi / ibc ) contract number d16pc00003 .
|
behavioral experiments on humans and animals suggest that the brain performs probabilistic inference to interpret its environment . here we present a new general - purpose , biologically - plausible neural implementation of approximate inference . the neural network represents uncertainty using probabilistic population codes ( ppcs ) , which are distributed neural representations that naturally encode probability distributions , and support marginalization and evidence integration in a biologically - plausible manner . by connecting multiple ppcs together as a probabilistic graphical model , we represent multivariate probability distributions . approximate inference in graphical models can be accomplished by message - passing algorithms that disseminate local information throughout the graph . an attractive and often accurate example of such an algorithm is loopy belief propagation ( lbp ) , which uses local marginalization and evidence integration operations to perform approximate inference efficiently even for complex models . unfortunately , a subtle feature of lbp renders it neurally implausible . however , lbp can be elegantly reformulated as a sequence of tree - based reparameterizations ( trp ) of the graphical model . we re - express the trp updates as a nonlinear dynamical system with both fast and slow timescales , and show that this produces a neurally plausible solution . by combining all of these ideas , we show that a network of ppcs can represent multivariate probability distributions and implement the trp updates to perform probabilistic inference . simulations with gaussian graphical models demonstrate that the neural network inference quality is comparable to the direct evaluation of lbp and robust to noise , and thus provides a promising mechanism for general probabilistic inference in the population codes of the brain .
|
many open problems related to nonlinear partial differential equations ( pdes ) of mathematical physics concern the extreme behavior which can be exhibited to their solutions . by this we mean , among other , questions concerning the maximum possible growth of certain norms of the solution of the pde . from the physics point of view, these norms measure different properties of the solution , such as generation of small scales in the case of the sobolev norms .the question of the maximum possible growth of solution norms is also intrinsically linked to the problem of existence of solutions to pde problems in a given functional space .more specifically , the loss of regularity of a solution resulting from the formation of singularities usually manifests itself in an unbounded growth of some solution norms in finite time , typically referred to as `` blow - up '' .while problems of this type remain open for many important pdes of mathematical physics , most attention has been arguably given to establishing the regularity of the three - dimensional ( 3d ) navier - stokes equations , a problem which has been recognized by the clay mathematics institute as one of its `` millennium problems '' .analogous questions also remain open for the 3d inviscid euler equation .the problem we address in the present study is how the transient growth of solutions to certain nonlinear pdes is affected by the presence of noise represented by a suitably defined stochastic forcing term in the equation .more specifically , the key question is whether via some interaction with the nonlinearity such stochastic forcing may enhance or weaken the growth of certain solution norms as compared to the deterministic case . in particular , in the case of systems exhibiting finite - time blow - up in the deterministic caseit is interesting to know whether noise may accelerate or delay the formation of a singularity , or perhaps even prevent it entirely .these questions are of course nuanced by the fact that they may be considered either for individual trajectories or in suitable statistical terms .we add that transient growth in linear stochastic systems is well understood and here we focus on the interaction of the stochastic forcing with a particular type of nonlinearity .since this study is ultimately motivated by questions concerning extreme behavior in hydrodynamic models , we will focus our attention on the simplest model used in this context , namely , the one - dimensional ( 1d ) stochastic burgers equation defined on a periodic interval ] ( `` '' means `` equal to by definition '' ) . in equation the stochastic forcing is given by a random field , .therefore , at any point our solution becomes a random variable for in some probability space .we add that , while for other systems , such as e.g. the schrdinger equation , one may also consider multiplicative noise , for models of the type one typically studies additive noise . 
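the norm whose growth is in question below is the enstrophy, the squared sobolev seminorm (1/2)∫(∂u/∂x)² dx of the solution; for a real periodic sample on the unit interval it can be evaluated from the fft, as in the following sketch (the factor 1/2 and the unit - period normalization are assumed here and should be checked against one's own conventions):

```python
import numpy as np

def enstrophy(u):
    """0.5 * integral over [0,1) of (du/dx)^2 for a real periodic sample u,
    computed from its Fourier coefficients (unit period assumed)."""
    n = len(u)
    u_hat = np.fft.rfft(u) / n                 # coefficients of exp(2*pi*i*k*x)
    k = np.arange(1, len(u_hat))
    return float(np.sum(4.0 * np.pi ** 2 * k ** 2 * np.abs(u_hat[1:]) ** 2))

x = np.linspace(0.0, 1.0, 512, endpoint=False)
print(enstrophy(np.sin(2.0 * np.pi * x)))      # should be close to pi^2 ≈ 9.87
```

the same expression in terms of the fourier coefficients is the one used later for the discretized solution.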
a common approach to modelling stochastic excitation in pde systemsis to describe it in terms of gaussian white noise associated with an infinite - variance wiener process .however , as will be discussed in section [ sec : noise ] and appendix [ sec : white ] , such noise model does not ensure that individual solutions are well defined in the sobolev space and is therefore not suitable for the problem considered here .thus , for the remainder of this paper , we shall restrict our attention to the case where is the derivative of a wiener process with finite variance , which is the most `` aggressive '' stochastic excitation still leaving problem well - posed in ( precise definition is deferred to section [ sec : noise ] ) .we add that the stochastic burgers equation may be regarded as a special case of the kardar - parisi - zhang equation which has received some attention in the literature .we now briefly summarize important results from the literature relevant to the stochastic burgers equation .the existence and uniqueness of solutions has been proven in for the problem posed on the real line and in for a bounded domain with dirichlet boundary conditions . in all cases ,solutions can be regarded as continuous -valued random processes . for the bounded domain ( the case which we are interested in ), convergence of numerical schemes has been established in for the finite - difference approaches and in for galerkin approximations . however , in both cases only dirichlet boundary conditions were considered . the case with the periodic boundary conditions has been recently considered in for a larger class of burgers - type equations and an abstract numerical scheme . given its significance in hydrodynamics , the key quantity of interest in our study will be the seminorm of the solution referred to as _ enstrophy _ in the deterministic setting ( in ) , where burgers equation is known to be globally well - posed , its solutions generically exhibit a steepening of the gradients ( driven by the nonlinearity ) followed by their viscous dissipation when the linear dissipative term starts to dominate .this behavior is manifested by an initial growth of enstrophy , which peaks when the solution builds up the steepest front , followed by its eventual decay to zero . as a point of reference, we illustrate this generic behavior in figure [ fig : det ] in which the results were obtained by solving system with , and an `` extreme '' initial condition designed to produce a maximum enstrophy growth over ] for a stopping time , or , where , for some ; we observe that is a compact set in both cases. _ sample path _ : for a given , a sample path is the function defined by . _second - order continuous process _ : an -valued stochastic process is said to be second - order if for all ; it is said to be continuous if all sample paths are continuous ; we denote by the space of continuous second - order -valued processes with the norm the above supremum exists because is taken to be compact ._ fourier basis _ : the sobolev space introduced earlier , cf . 
, is contained in , which has an orthonormal basis ; hereafter , will refer to elements of the fourier basis , where is the imaginary unit ; any real - valued function can be written as a fourier series in which the fourier coefficients have the property , where the overbar denotes complex conjugation ; we can furthermore express the norm of in terms of its fourier coefficients as _ trigonometric basis _ : in addition to the fourier basis introduced above , when sampling noise it will also be useful to consider the corresponding orthonormal trigonometric basis with elements defined as follows _ solution space _ : as mentioned above , a solution to at any given is a random variable ; we will assume the solution to be a continuous second - order -valued stochastic process ; in particular , this implies that for any time ] in the deterministic case provides * * an upper bound for the growth of the enstrophy of the expected value }\e({\mathbb{e}}[u(t)]) ] , * when the noise magnitude increases proportionally to the initial enstrophy , the same growth of the expected value of the enstrophy is observed as in the deterministic case ; this leads us to conclude that inclusion of stochastic forcing does not trigger any new mechanisms of enstrophy amplification .the remainder of this paper is divided as follows : in the next section we describe our model of noise and discuss some properties of the stochastic solutions ; the numerical approach is introduced in section [ sec : numer ] , whereas the computational results are presented and discussed in section [ sec : results ] ; conclusions are presented in section [ sec : final ] , whereas some technical material is deferred to two appendices .as is customary in the standard theory of stochastic partial differential equations ( spdes ) , we write the stochastic burgers equation in the differential form where in which is a constant and is a cylindrical wiener process . in other words , is formally given by where are i.i.d standard brownian motions and are scaling coefficients . when , is an infinite - variance wiener process and is gaussian white noise .however , in this paper we focus on noise representations with -summable coefficients , such as so that has a _finite _ variance , meaning that it is square - integrable in , i.e. , , with the norm for such a finite - variance wiener process , the term in equation will be referred to as the _ gaussian colored noise_. while gaussian white noise is commonly used in the literature on spdes , this choice is not , in fact , suitable for the present study .as explained in introduction , we are interested here in the effects of stochastic excitation on the enstrophy , cf . , which is not defined for the gaussian white noise , as demonstrated in appendix [ sec : white ] . on the other hand , gaussian colored noise with the structure given in will ensure that this quantity is well defined .different notions of solution of system can be considered . due to the lack of smoothness of the noise term, we do not expect to obtain solutions defined in the classical sense ( i.e. 
, solutions continuously differentiable with respect to the independent variables ) .one can , however , define the notion of a _ mild solution _ as in where is an operator acting on functions in defined such that , and thus also acts on with we remark that other notions of solution also exist , for example , the notion of a _ weak solution _ as defined in .we then have the following for dirichlet boundary conditions , initial condition , positive parameters , gaussian white noise and almost every , there exists a unique solution of .moreover , such a solution is continuous in time and square - integrable in space , that is , ;l^2) ] and ] denote the vector of fourier coefficients in which each is a -valued stochastic process . applying the galerkin discretization procedurewe then obtain a ( finite ) system of stochastic ordinary differential equations ( sodes ) for the fourier coefficients where the differential operators are represented as diagonal matrices whereas the nonlinear term is represented in terms of a convolution sum , denoted ] , where are -valued stochastic processes obtained by converting representation from the trigonometric basis to the fourier basis , i.e. , the scaling coefficients in are determined by the coefficients , defined in , so that finally , we also discretize the initial condition with ] as the corresponding vector of fourier coefficients .the approximate solution is then obtained via a semi - implicit euler method , defined as in which the dissipative term is treated implicitly whereas the nonlinear and the stochastic terms are treated explicitly .some remarks are in order .first of all , is square - integrable with respect to the probability measure if and only if is square - integrable for every , with the equality next , we discuss the noise term ( or , to be precise , the noise matrix ) ] ) . in the monte carlo approach we repeat the process described in for a sequence of noise samples , where is a _ sampling discretization parameter _ , thus obtaining a sequence of solution samples .we recall that the quantity of interest that we wish to compute is the enstrophy of the solution defined in . for an approximate solution enstrophy can be directly computed from its fourier coefficients , however , in the stochastic setting , there are two distinct quantities of interest corresponding to and : one can either consider the _ enstrophy of the expected value _ of the solution , i.e. , )=\sum_{k=1}^k4\pi^2k^2|{\mathbb{e}}[\hat{u}^{k , n}_{k , n}]|^2 , \qquad n=0,\ldots , n,\ ] ] or the _ expected value of the enstrophy _ of the solution , i.e. , =\sum_{k=1}^k4\pi^2k^2{\mathbb{e}}[|\hat{u}^{k , n}_{k , n}|^2 ] , \qquad , n=0,\ldots , n.\ ] ] estimates of both these quantities can be obtained using the _ average _ estimator ( * ? ? 
?* section 4.4 ) [ eq : eest ] ) & \approx\sum_{k=1}^k4\pi^2k^2\left|\frac{1}{s}\sum_{s=1}^s\hat{u}^{k , n , s}_{k , n , s}\right|^2 , \label{eq : eest1 } \\ { \mathbb{e}}[{\mathcal{e}}(u^{k , n}_n ) ] & \approx\sum_{k=1}^k4\pi^2k^2\frac{1}{s}\sum_{s=1}^s\left|\hat{u}^{k , n , s}_{k , n , s}\right|^2 .\label{eq : eest2}\end{aligned}\ ] ] we observe that in we use averages to estimate the expected value and the second moment of the fourier coefficients , which can be expressed as & \approx \hat{u}^{k , n , s}_{k , n}:=\frac{1}{s}\sum_{s=1}^s \hat{u}^{k , n , s}_{k , n , s } , \label{eq : mc}\\ { \mathbb{e}}[|\hat{u}^{k , n}_{k , n}|^2 ] & \approx \hat{w}^{k , n , s}_{k , n}:=\frac{1}{s}\sum_{s=1}^s\left|\hat{u}^{k ,n , s}_{k , n , s}\right|^2.\end{aligned}\ ] ] the two quantities in are related via jensen s inequality ) \ ; \le \ ; { \mathbb{e}}[{\mathcal{e}}(u^{k , n}_n ) ] .\label{eq : jensen}\ ] ] we now discuss the convergence of the method with respect to the number of samples , in particular , the convergence of the average solution to the expected value .it is clear from and our previous discussion that , for any discretization parameters and , the -valued random variables are well - defined and square - integrable , so that \ ] ] define a discrete second - order -valued stochastic process .we can then use standard tools of probability to show the convergence of the estimators to the corresponding expected values , as made precise by the following assertions .[ lem : convs ] the discrete stochastic process defined in converges to the expected value ] . to streamline the exposition ,the proof can be found in the appendix .[ conv : nks ] under assumptions [ ass : convk ] and [ ass : convn ] , the discrete stochastic process defined in converges to the expected value ] , where is the continuous stochastic process that solves , in the sense that -u^{k , n , s}_n\right\|_{h_p^1 } \geq\epsilon\right)\underset{k , n , s\rightarrow\infty}{\longrightarrow } 0,\ ] ] valid for all ; moreover , there are constants such that we have a bound of the form -u^{k , n , s}_n\right\|_{h_p^1}\geq\epsilon\right ) \leq { \frac{c}{\epsilon^2}\left(k^{-2\alpha}+n^{-2\beta}+s^{-1}\right ) } , \qquad \text{for all } \ n=0,\ldots , n,\ ] ] valid for all and for a constant depending on , , and .using chebyshev s inequality , we obtain -u^{k , n , s}_n\right\|_{h_p^1}\geq\epsilon\right ) & \leq\frac{1}{\epsilon^2}\int_{\left\{\left\|{\mathbb{e}}[u(t)]-u^{k , n , s}_n\right\|_{h_p^1}\geq\epsilon\right\}}\left\|{\mathbb{e}}[u(t)]-u^{k , n , s}_n\right\|^2_{h_p^1}d{\mathbb{p}}(\omega)\\ & \leq\frac{1}{\epsilon^2}\int_\omega\left\|{\mathbb{e}}[u(t)]-u^{k , n , s}_n\right\|^2_{h_p^1}d{\mathbb{p}}(\omega)\\ & = \frac{1}{\epsilon^2}\left\|{\mathbb{e}}[u(t)]-u^{k , n , s}_n\right\|^2_{l^2(\omega , h_p^1)}\\ & \leq{\frac{c}{\epsilon^2}(k^{-2\alpha}+n^{-2\beta}+s^{-1})},\end{aligned}\ ] ] where the last step follows from theorem [ conv : nks ] .in this section we use the numerical approach introduced above to study the effect of the stochastic excitation with the structure described in section [ sec : noise ] on the enstrophy growth in the solutions of burgers equation .more specifically , we will address the question formulated in introduction , namely , whether or nor the presence of noise can further amplify the maximum growth of enstrophy characterized in the deterministic setting in .we will do so by studying how the growth of the two quantities , ) ] introduced in section [ sec : sample ] , is affected by the 
stochastic excitation as a function of the initial enstrophy . unless indicated otherwise, we will consider a time interval of length and will solve system subject to _ optimal _ initial condition which is designed to produce the largest possible growth of enstrophy at time for all initial data in with enstrophy .the procedure for obtaining such optimal initial data is discussed in and the optimal initial conditions corresponding to and different time windows are shown in figure [ fig : g ] .we see in this figure that , as increases , the form of the optimal initial data changes from a `` shock wave '' to a `` rarefaction wave '' . for and ranging from to ( arrows indicate the directions of increase of ).,scaledwidth=50.0% ] in our numerical solution of the stochastic burgers equation , cf ., we used the following values of the discretization parameters : dealiased complex fourier modes ( corresponding to grid points in the physical space ) , time steps and monte carlo samples .the convergence of our numerical procedure was verified and the indicated values of the numerical parameters represent a reasonable trade - off between accuracy and the computational cost .in the subsections below we first recall some properties of the extreme enstrophy growth in the deterministic setting and then discuss the effect of the noise on the enstrophy growth over time and globally as a function of .the deterministic case will serve as a reference and here we summarize some key facts about the corresponding maximum enstrophy growth .the reader is referred to studies for additional details .as illustrated in figure [ fig : det ] , a typical behavior of the solutions to burgers equation involves a steepening of the initial gradients , which is manifested as a growth of enstrophy , followed by their dissipation when the enstrophy eventually decreases . the key question is how the enstrophy at some fixed time , or the maximum enstrophy } { \mathcal{e}}(t) ] .these results are illustrated in figure [ fig : edet]a , b , where we can also see that for very short evolution times growth only linear in is observed ( this is because for small the solutions do not have enough time to produce sharp gradients ) . 
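a compact sketch of a single step of the time discretization described above — fourier - galerkin in space, semi - implicit euler in time with implicit diffusion and explicit nonlinearity and noise — is given below; the per - mode noise amplitudes, the absence of dealiasing and the treatment of the mean mode are simplifications rather than the paper's exact choices:

```python
import numpy as np

def burgers_step(u_hat, dt, nu, sigma, lam, rng):
    """one semi-implicit Euler step for the Fourier-Galerkin stochastic Burgers
    system on [0,1): diffusion implicit, nonlinearity and noise explicit.
    u_hat : rfft coefficients divided by n (coefficients of exp(2*pi*i*k*x))
    lam   : assumed per-mode noise amplitudes (a decaying sequence)
    no dealiasing is applied in this sketch."""
    n = 2 * (len(u_hat) - 1)                    # physical grid size
    k = np.arange(len(u_hat))
    u = np.fft.irfft(u_hat * n)                 # back to physical space
    nonlin = -1j * np.pi * k * np.fft.rfft(u * u) / n   # -(1/2) d(u^2)/dx in Fourier
    xi = rng.standard_normal(len(u_hat)) + 1j * rng.standard_normal(len(u_hat))
    dW = np.sqrt(dt / 2.0) * xi                 # complex Wiener increments
    dW[0] = 0.0                                 # keep the forcing mean-free
    return (u_hat + dt * nonlin + sigma * lam * dW) / (1.0 + dt * nu * (2.0 * np.pi * k) ** 2)
```

a full run repeats this step n times from the chosen (optimal) initial condition and records the enstrophy after every step; the monte carlo experiments repeat such runs for independent noise samples.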
since for increasing maximum growth of enstrophy is achieved for different , the power - law behavior is obtained by taking a maximum of or } { \mathcal{e}}(t) ] on the initial enstrophy for the optimal initial data with in the range from to .arrows indicate the direction of increasing and the dashed lines correspond to the power law .,title="fig : " ] .45 at a final time and ( b ) the maximum enstrophy } { \mathcal{e}}(t) ] and the enstrophy of the expected value of the solution ) ] and the enstrophy of the expected value of the solution ) ] and ] and ] corresponding to the largest noise level are a numerical artefact resulting from an insufficient number of monte carlo samples .this is due to the fact that increased noise levels slow down the convergence of the monte carlo approach , a phenomenon which may in principle be deduced from theorems [ conv : nks ] and [ thm : convnks ] by noting the possible dependence of the constant on the noise magnitude ..45 ] ( see previous figure for details).,title="fig : " ] .45 ] ( see previous figure for details).,title="fig : " ] .45 ] ( see previous figure for details).,title="fig : " ] .45 ] ( see previous figure for details).,title="fig : " ] .45 ) ] ( dotted lines ) and the enstrophy of the deterministic solution ( thick solid line ) as functions of time for the initial condition with , and different noise levels in the range from to ( the direction of increase of is indicated by arrows).,title="fig : " ] in this section we analyze how the diagnostic quantities [ eq : diag ] ) , & \qquad & { \mathcal{e}}({\mathbb{e}}[u^{k , n}(t ) ] ) , \label{eq : diagt } \\ \max_{t \in [ 0,t ] } & { \mathbb{e}}({\mathcal{e}}[u^{k , n}(t ) ] ) , & \qquad\max_{t \in [ 0,t ] } & { \mathcal{e}}({\mathbb{e}}[u^{k , n}(t ) ] ) \label{eq : diagmax}\end{aligned}\ ] ] for some given depend on the initial enstrophy and whether the presence of the stochastic excitation modifies the power - law dependence of the quantities on as compared to the deterministic case ( cf .section [ sec : deterministic ] ) .we will do this in two cases , namely , when for different values of the initial enstrophy the noise level is fixed and when it is proportional to . 
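both diagnostic quantities compared in the following subsections — the enstrophy of the expected value and the expected value of the enstrophy — are estimated by plain sample averaging of the fourier coefficients, as described earlier; a minimal sketch (the array shapes are illustrative):

```python
import numpy as np

def enstrophy_estimators(u_hat_samples):
    """u_hat_samples : complex array of shape (S, K) holding S Monte Carlo samples
    of the Fourier coefficients u_hat_k, k = 1..K, at one fixed time.
    returns (enstrophy of the expected value, expected value of the enstrophy)."""
    k = np.arange(1, u_hat_samples.shape[1] + 1)
    w = 4.0 * np.pi ** 2 * k ** 2
    ens_of_mean = np.sum(w * np.abs(u_hat_samples.mean(axis=0)) ** 2)
    mean_of_ens = np.sum(w * (np.abs(u_hat_samples) ** 2).mean(axis=0))
    return ens_of_mean, mean_of_ens

rng = np.random.default_rng(1)
samples = rng.standard_normal((1000, 32)) + 1j * rng.standard_normal((1000, 32))
a, b = enstrophy_estimators(samples)
assert a <= b        # Jensen's inequality, eq. ([eq:jensen]) above
```

the assertion at the end is jensen's inequality quoted above: the enstrophy of the mean can never exceed the mean of the enstrophy.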
in regard to the first case , in figures [ fig : fxtens]a and [ fig : fxtens]b we show the dependence of the quantities and with on for different fixed noise levels .the quantities ) ] for different time horizons are plotted as functions of for small and large noise levels , respectively , in figures [ fig : smlsgmens ] and [ fig : bigsgmens ] .these plots are therefore the stochastic counterparts of figure [ fig : edet ] representing the deterministic case .we see that with a fixed both ) ] saturate at a level depending on the noise magnitude ( figure [ fig : fxtens]a ) .analogous behavior is observed for a fixed noise level and increasing time intervals in figures [ fig : smlsgmens ] and [ fig : bigsgmens ] , from which we can also conclude that when we maximize the quantities ) ] over all considered values of , then the resulting quantity will scale proportionally to , which is the same behavior as observed in the deterministic case ( figure [ fig : edet ] ) .the process of maximizing with respect to is represented schematically in figures [ fig : smlsgmens ] and [ fig : bigsgmens ] as `` envelopes '' of the curves corresponding to different values of .as regards the behavior of the quantities , for every noise level we observe a transition from a noise - dominated behavior , where } { \mathbb{e}}({\mathcal{e}}[u^{k , n}(t)]) ] grows with ( figure [ fig : fxtens]b ) .as regards the latter regime , corresponding to large values of and whose lower bound is an increasing function of the noise magnitude , we observe that for sufficiently large the growth of the quantity } { \mathbb{e}}({\mathcal{e}}[u^{k , n}(t)]) ] of the expected value of the enstrophy ) ] ( dotted lines ) and the enstrophy of the deterministic solution ( thick solid line ) as functions of the initial enstrophy for the initial condition with , and different noise levels in the range from to ( the direction of increase of is indicated by arrows),title="fig : " ] .45 and ( b ) the maximum values attained in ] ( dashed lines ) , the enstrophy of the expected value of the solution ) ] on , in figure [ fig : diagonal]a we observe a superlinear growth which is however slower than characterizing the deterministic case ( in fact , from the data it is not entirely obvious if this dependence is strictly in the form of a power law ) .concerning the quantity } { \mathbb{e}}({\mathcal{e}}[u^{k , n}(t)]) ] and ( b ) the maximum expected value of the enstrophy } { \mathbb{e}}({\mathcal{e}}[u^{k , n}(t)]) ] in ( a ) and } { \mathbb{e}}({\mathcal{e}}[u^{k , n}(t)]) ] obtained in the deterministic case , whereas the thin black solid line in ( a ) represents the power law .,title="fig : " ] .45 } { \mathcal{e}}({\mathbb{e}}[u^{k , n}(t)]) ] on the initial enstrophy using the initial conditions and with noise magnitudes proportional to , cf . 
, with in the range from to ( arrowindicate the direction of increase of ) .the parameter is chosen to maximize } { \mathcal{e}}({\mathbb{e}}[u^{k , n}(t)]) ] in ( b ) .the thick black solid line corresponds to the quantity } { \mathcal{e}}(t) ] and the enstrophy of the expected value of the solution ) ] and ) ] being lower than the deterministic enstrophy , can be therefore interpreted in terms of the stochastic excitation having the effect of an increased dissipation of the expected value of the solution .as regards the expected value of the enstrophy , we observed in figure [ fig : diagonal]a that in the limit the quantity ) ] , so that ,l_p^2))$ ] , we can write ,{\mathbb{c}}))}^2 = \sum_{k\in{\mathbb{z}}}{\mathbb{e}}\left[\sup_{0\leq t\leq t}|\hat{y}_k|^2\right]<\infty,\ ] ] so that and \phi_k;\ ] ] now each of the coefficients in the sum above can be bounded as so that the second term in is also in with ,{\mathbb{c}}))}^2<\infty\end{aligned}\ ] ] which follows from the summability of .* analysis of the third term * : writing it in terms of a fourier series we obtain ( for with the cases and handled similarly ) which is a random variable with the second moment given by from this we see that the third term is in but not in , as for any we have so , we conclude that while the first two terms on the right - hand side of are in ( and hence also in ) , the third one is only in and not in .thus , for any , being the left - hand side of , is in but not in , and consequently the enstrophy obtained with gaussian white noise is not well defined .( to simplify the notation here denotes the complex conjugate of ) -u^{k , n , s}_n\right\|^2_{l^2(\omega , h_p^1 ) } & = \int_\omega\left\|{\mathbb{e}}\left[u^{k , n}_n\right]-u^{k , n , s}_n\right\|^2_{h_p^1}d{\mathbb{p}}(\omega)\\ & = \int_\omega\sum_{k\in{\mathbb{z}}}(1 + 4\pi^2k^2)\left|{\mathbb{e}}\left[\hat{u}^{k , n}_{k , n}\right]-\hat{u}^{k , n , s}_{k , n}\right|^2d{\mathbb{p}}(\omega)\\ & = \sum_{k\in{\mathbb{z}}}(1 + 4\pi^2k^2)\int_\omega\left|{\mathbb{e}}\left[\hat{u}^{k , n}_{k , n}\right]-\frac{1}{s}\sum_{s=1}^s\hat{u}^{k , n , s}_{k , n , s}\right|^2d{\mathbb{p}}(\omega)\\ & = \sum_{k\in{\mathbb{z}}}(1 + 4\pi^2k^2)\int_\omega\left|{\mathbb{e}}\left[\hat{u}^{k , n}_{k , n}\right]\right|^2-{\mathbb{e}}\left[\hat{u}^{k , n}_{k , n}\right]\left(\frac{1}{s}\sum_{s=1}^s\hat{u}^{k , n , s}_{k , n , s}\right)^\ast\\ & \hspace{1cm}-\left({\mathbb{e}}\left[\hat{u}^{k , n}_{k , n}\right]\right)^\ast\frac{1}{s}\sum_{s=1}^s\hat{u}^{k , n , s}_{k , n , s}+\left|\frac{1}{s}\sum_{s=1}^s\hat{u}^{k , n , s}_{k , n , s}\right|^2d{\mathbb{p}}(\omega)\\ & = \sum_{k\in{\mathbb{z}}}(1 + 4\pi^2k^2)\bigg(\left|{\mathbb{e}}\left[\hat{u}^{k , n}_{k , n}\right]\right|^2-{\mathbb{e}}\left[\hat{u}^{k , n}_{k , n}\right]\left({\mathbb{e}}\left[\hat{u}^{k , n}_{k , n}\right]\right)^\ast\\ & \hspace{1cm}-\left({\mathbb{e}}\left[\hat{u}^{k , n}_{k , n}\right]\right)^\ast{\mathbb{e}}\left[\hat{u}^{k , n}_{k , n}\right]+\int_\omega\frac{1}{s^2}\sum_{s , s'=1}^s\hat{u}^{k , n , s}_{k , n , s}\left(\hat{u}^{k , n , s}_{k , n , s'}\right)^\ast d{\mathbb{p}}(\omega)\bigg)\\ & = \sum_{k\in{\mathbb{z}}}(1 + 4\pi^2k^2)\left(-\left|{\mathbb{e}}\left[\hat{u}^{k , n}_{k , n}\right]\right|^2+\frac{s}{s^2}{\mathbb{e}}\left[\left|\hat{u}^{k , n}_{k , n}\right|^2\right]+\frac{s^2-s}{s^2}\left|{\mathbb{e}}\left[\hat{u}^{k , n}_{k , n}\right]\right|^2\right)\\ & = \sum_{k\in{\mathbb{z}}}(1 + 4\pi^2k^2)\frac{1}{s}\left({\mathbb{e}}\left[\left|\hat{u}^{k , n}_{k , 
n}\right|^2\right]-\left|{\mathbb{e}}\left[\hat{u}^{k , n}_{k , n}\right]\right|^2\right)\\ & = \sum_{k\in{\mathbb{z}}}(1 + 4\pi^2k^2)\frac{1}{s}{\mathbb{e}}\left[\left|\hat{u}^{k , n}_{k , n}-{\mathbb{e}}\left[\hat{u}^{k , n}_{k , n}\right]\right|^2\right]\\ & = \frac{1}{s}\left\|u^{k , n}_{n}-{\mathbb{e}}\left[u^{k , n}_{n}\right]\right\|^2_{l^2(\omega , h_p^1)}.\end{aligned}\ ] ] m. dozzi , e. t. kolkovska , and j.a .lopez - mimbela . , chapter finite - time blowup and existence of global positive solutions of a semi - linear spde with fractional noise , pages 95108 .springer optimization and applications .springer , 2013 .
|
this study considers the problem of the extreme behavior exhibited by solutions to burgers equation subject to stochastic forcing . more specifically , we are interested in the maximum growth achieved by the `` enstrophy '' ( the sobolev seminorm of the solution ) as a function of the initial enstrophy , in particular , whether in the stochastic setting this growth is different than in the deterministic case considered by ayala & protas ( 2011 ) . this problem is motivated by questions about the effect of noise on the possible singularity formation in hydrodynamic models . the main quantities of interest in the stochastic problem are the expected value of the enstrophy and the enstrophy of the expected value of the solution . the stochastic burgers equation is solved numerically with a monte carlo sampling approach . by studying solutions obtained for a range of optimal initial data and different noise magnitudes , we reveal different solution behaviors and it is demonstrated that the two quantities always bracket the enstrophy of the deterministic solution . the key finding is that the expected values of the enstrophy exhibit the same power - law dependence on the initial enstrophy as reported in the deterministic case . this indicates that the stochastic excitation does not increase the extreme enstrophy growth beyond what is already observed in the deterministic case . keywords : stochastic burgers equation ; extreme behavior ; enstrophy ; singularity formation ; monte carlo
|
with the process of globalization , the contacts between people of different areas become much more frequent than before , which makes the epidemic turn into an increasingly huge challenge for human beings . in the recent 10 years, the worldwide deluges of epidemics happened in human societies have been more frequent ( e.g. , the sars in 2003 , the h5n1 in 2006 and h1n1 in 2009 ) .the epidemic spreads through interactions of human or even animals , and it appears more powerful to damage as the interactions between people or animals get stronger , under the current global environment , the effective precaution actions should be made and taken according to available studies .complex networks , as models of interactions in many real areas , such as society , technology and biology , have provided a practical perspective for studying the epidemic spreading processes accompanying the real interactions .the spreading processes can be regarded as dynamic processes on complex networks .hence , studying the characteristics and underlying mechanisms of epidemic spreading on complex networks , which can be applied to a wide range of areas , ranging from computer virus infections , epidemiology such as the spreading of h1n1 , sars and hiv , to other spreading phenomena on communication and social networks , such as rumor propagation , has attracted many scientists attentions .there are two main aspects of such studies .one is to aim at setting the spreading mechanisms .for the epidemic spreading , the classical models include sis ( susceptive - infected - susceptive ) model and sir ( susceptive - infected - remove ) model .the other one is focused on researching the dynamic processes of epidemic spreading on networks with different topological structures which have an essential effect on the dynamic processes of epidemic spreading . in regular , random and small - world networks, the studies of epidemic spreading all found that the dynamic processes undergo a phase transition : the effective spreading rate needs to exceed a critical threshold for a disease to become epidemic .however , accompanied by the discovery that more and more networks have a scale - free distribution of degree , the absence of a critical threshold has been revealed in the studies of epidemic spreading in scale - free networks . epidemic spreading always follows the mobilities of human and animal .recently , many empirical studies and theoretical analysis on the pattern of animal foraging , migration and human traveling have presented that these mobility patterns possess a levy flights property with an exponent ( for human) .levy flights means when human and animal travel , the step size follows a power - law distribution .apparently , these mobility patterns can not be depicted only by the topology of networks ; hence , the characteristics of epidemic spreading following animal and human mobility pattern should be observed on a network with specific spatial structure .this work is to aim at abstracting a network to describe the levy flights mobility pattern and study the dynamic process of epidemic spreading on spatial network . in this paperwe study how the mobility pattern affects epidemic spreading from the network perspective , and especially , pay more attention to the extremely rapid spreading epidemics .we construct a weighted network with levy flights spatial structure to describe the levy flights mobility pattern . 
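a levy flight in the sense used here is a random walk whose jump lengths follow a power law; a short sketch of drawing such jump lengths by inverse - transform sampling is given below (the lower cutoff d_min and the exact exponent convention are illustrative):

```python
import numpy as np

def levy_steps(n, alpha=-2.0, d_min=1.0, rng=None):
    """jump lengths with density P(d) ∝ d**alpha for d >= d_min (alpha < -1),
    drawn by inverse-transform sampling."""
    rng = rng or np.random.default_rng()
    u = rng.random(n)
    return d_min * (1.0 - u) ** (1.0 / (alpha + 1.0))

steps = levy_steps(100_000, alpha=-2.0)
# a log-log histogram of `steps` is approximately a line of slope alpha
```

with the exponent set near -2, the drawn lengths mimic the empirical mobility pattern referred to above.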
in this network, each node denotes a small area and the weight on the edge denotes the communication times or the quantities people or animal flow between the corresponding two small areas . and , as all the individuals only have limited energy , there must be a cost constraint on the mobility .considering this limitation , we let the consumed energy by one communication between any two small areas be proportional to the geographical distance between them .we hypothesize that the distributions of human and animal mobility energy are both homogenous. then all nodes can be think than they have the same energy , which can simplify the analysis considerably .unfortunately , we have no the data about human mobility and just have deer and sheep moving data to support the hypothesis . from fig .[ empirical ] , we can see that the distributions of consumed energy ( the sum of distances for a sheep or deer in a day ) for both sheep and deer are very narrow , thus the energy distribution can be looked as homogenous .the spatial levy flights network can be constructed as follows .based on a energy constraint uniform nodes cycle ( 1-dimensional lattice ) , for each node , connections are added with power law distance distribution randomly until the energy are exhausted .each realization can generate a specific spatial weighted network . according to levy flights mobility pattern ( ,the weight on the link between node and should be proportional to , and for a given network size , the sum of all should be a constant which denotes the energy constraint .so , we can get an ensemble network model of these spatial weighted networks generated by many times of realization as : solve the model we can get the network as : where , denotes the total energy , is a constant and denotes the expectation of one levy flights distance . obviously , the network is a full connected weighted network and each node is same with the degree far we have constructed the one - dimensional levy flights network .sis epidemiological network model is the standard model for studying epidemic spreading . in sis endemic network ,each node has only two states , susceptive or infected . at each step , the susceptive node is infected with rate if connected to one infected node .hence in the weighted network , the node will be infected with the rate , where is the set of infected nodes . andat the next time , infected nodes are cured and become susceptive with rate .the effective spreading rate , is defined as . in order to keep the model simple and without loss of generality , we always let , which implies that all the infected nodes will be cured in the next step . for many networks ,the epidemic threshold of is a very significant index to measure the dynamic processes of epidemic spreading on the networks .suppose is the epidemic threshold of a network which means that when the infection dies out exponentially and when the infection spreads and will always exist in the network .suppose denotes the fraction of infected node in the network , following the homogeneous mean - field approximation , the dynamical rate equations for the sis model are (1-\rho)\label{rf}\ ] ] the first term in eq .( [ rf ] ) , which is set , is the recovery rate of infected nodes .the second term takes into account the expected probability that a susceptive node to get the infection on this weighted network $ ] . employing taylor expansions and ignoring the infinitesimal value of higher order , when the epidemic threshold when and , . 
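to make the construction and the dynamics concrete, the following sketch builds the ring of nodes with weights proportional to a power of the ring distance, rescaled by a crude reading of the energy constraint, and performs one synchronous sis update in which a susceptible node is infected with probability min(1, λ · Σ_j w_ij over its infected neighbours); both the normalization and that infection probability are plausible readings of the description above rather than the paper's exact definitions:

```python
import numpy as np

def ring_weights(n, alpha, energy=1.0):
    """weights w_ij ∝ d_ij**alpha on a ring of n nodes, rescaled so that the
    total travel cost sum_ij w_ij * d_ij matches a budget of n * energy.
    the normalization is a rough reading of the constraint, not the exact one."""
    idx = np.arange(n)
    d = np.abs(idx[:, None] - idx[None, :])
    d = np.minimum(d, n - d).astype(float)          # distance along the ring
    w = np.zeros_like(d)
    off = d > 0
    w[off] = d[off] ** alpha
    w *= energy * n / np.sum(w * d)
    return w

def sis_step(infected, w, lam=1.0, delta=1.0, rng=None):
    """one synchronous SIS update.  a susceptible node i becomes infected with
    probability min(1, lam * sum_j infected_j * w_ij); infected nodes recover
    with probability delta (delta = 1 cures everyone, as in the text)."""
    rng = rng or np.random.default_rng()
    p_inf = np.minimum(1.0, lam * (w @ infected.astype(float)))
    newly = (~infected) & (rng.random(len(infected)) < p_inf)
    stay = infected & (rng.random(len(infected)) >= delta)
    return newly | stay
```

iterating sis_step with alpha around -2 and recording the infected fraction yields the kind of prevalence curves used to locate the threshold numerically.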
from eq . ( [ rf ] ) we have for any . in order to keep , the must tend to when . in conclusion we obtain where is the riemann zeta function . eq . ( [ th ] ) shows that has a transition at . the simulated and analytical results are shown in fig . [ critical ] and match well . moreover , the results can easily be extended to higher - dimensional spaces . eq . ( [ threshold ] ) shows that the epidemic is liable to disappear ( ) if there are more long - distance connections than short ones ( ) . this is an interesting and counterintuitive conclusion : intuitively , the epidemic should get more chances to spread , and consequently infect a wider range of people , when there are more long - distance connections . however , under the restriction on the total energy , more long - distance connections for a node come at the cost of a sharp decline in the number of short - distance connections . hence , as the interactions between nodes decrease , the epidemic recedes and is liable to disappear .
[ figure caption : the critical threshold obtained from the numerical simulation model and from the reaction equations for different exponents ; the green line shows the analytical critical threshold of eq . ( [ threshold ] ) . as eq . ( [ threshold ] ) shows , there is a phase transition : on one side of it the epidemic threshold jumps up sharply , while on the other side it remains small . because the threshold is very hard to compute numerically in the immediate vicinity of -2 , a straight line is used to represent it in that small region . ]
for some epidemics , such as hiv , an infected individual can not be cured and always has the possibility of infecting others , while for many other extremely rapidly spreading epidemics , such as h1n1 and sars , the outbreak is ignored , or no effective treatment is available , at the very beginning of the spreading . under these conditions the si model is a reasonable model for studying the spreading process . there is only one difference between the sis and si models : in the sis model an infected node has a probability of becoming susceptible again , while in the si model an infected node can not be cured and keeps infecting susceptible nodes . the outbreaks of h1n1 and sars show that an epidemic typically starts from a few infected individuals , and in some cases from only one . accordingly , in the following numerical experiments only one node is infected at the beginning , and the effective spreading rate is always . the spreading processes are simulated on networks of different sizes . first , we investigate the dependence of the infected ratio on the spreading time . as shown in fig . [ spread ratio ] , when , the epidemic spreads fastest for every network size . we then study the dependence of the terminal time ( the time at which all nodes are infected ) on , and find that when is not too large , achieves its lowest value around the point of ( fig . [ optimal ] ) . moreover , for not too large energy , ; when , ( fig . [ speed ] ) .
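the si experiments described above can be sketched in the same setting: starting from a single infected node, iterate the infection step with no recovery and record the first time at which every node is infected (the terminal time); the min(1, λ · pressure) infection probability is the same reading used in the sis sketch above:

```python
import numpy as np

def si_terminal_time(w, lam=1.0, seed_node=0, max_steps=10_000, rng=None):
    """SI dynamics (no recovery) from one infected node; returns the first step
    at which all nodes are infected (the terminal time), or inf if never reached."""
    rng = rng or np.random.default_rng()
    n = w.shape[0]
    infected = np.zeros(n, dtype=bool)
    infected[seed_node] = True
    for t in range(1, max_steps + 1):
        p = np.minimum(1.0, lam * (w @ infected.astype(float)))
        infected |= rng.random(n) < p
        if infected.all():
            return t
    return float("inf")
```

averaging the returned terminal time over realizations, for networks built with different exponents, gives curves of the terminal time versus the exponent of the kind discussed above.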
in the spatial weighted network , the spreading process is most efficient when . can a similar conclusion be obtained in higher dimensions ? this remains an open question : it seems difficult to explain by rigorous mathematical analysis why epidemic spreading is fastest when , and numerical simulations on high - dimensional networks are too expensive to carry out . we nevertheless conjecture that the conclusion extends to the higher - dimensional case . recent studies have shown that , under the constraint , the average shortest path length also achieves its lowest value when . we therefore investigate the relationship between the terminal time and the average shortest path length . it is not convenient , however , to obtain the average shortest path length directly for the spatial weighted network : a large weight means that nodes and are effectively close , so the weights would have to be transformed before shortest paths could be computed . to avoid this transformation , we construct an un - weighted network similar to the spatial weighted one : starting from a uniform cycle network , we randomly add connections that satisfy the above constraint , avoiding duplicated links , until an un - weighted network is obtained . examining all the phenomena detected on the spatial weighted network , we find that the average shortest path length indeed affects the spreading speed almost linearly , as shown in fig . [ hh ] and fig . [ hhh ] , although the slopes on the two sides of are slightly different . this difference implies that factors other than the average shortest path length also affect the spreading speed . we believe that in high - dimensional networks with levy flights spatial structure the epidemic also spreads fastest when ; we intend to address this question in future work .
[ figure caption : ( a ) spread ratio under and , respectively ; ( b ) spread ratio under . when , the spread ratio grows much faster than in the other cases . ]
[ figure caption : terminal time versus . when the energy is not too large , the terminal time achieves its lowest value around , while the optimal value deviates and becomes smaller when is very large . ]
[ figure caption : terminal time versus network size for various exponents and energies . there is a log - log linear dependence between the terminal time and the network size ; moreover , for not too large energy , ; when , . ]
[ figure caption : the average shortest path length versus the levy flights exponent . the lowest point of the curve is close to , and at this point the average shortest path length and the terminal time both attain their minimum values . ]
[ figure caption : terminal time versus average shortest path length , with energy and . the terminal time reaches its minimum when the average shortest path length is lowest ; when , there is a linear relationship between and . ]
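the un - weighted surrogate network used for the shortest - path comparison can be sketched as a ring plus random long - range links whose lengths follow the same power law , added until an energy budget ( the total length of the added links ) is spent ; the minimum shortcut length and the stopping rule below are illustrative guesses rather than the paper's exact prescription :

```python
import numpy as np
import networkx as nx

def levy_ring_graph(n, alpha, energy, rng=None):
    """ring of n nodes plus random long-range links whose lengths follow
    P(d) ∝ d**alpha (for 2 <= d <= n // 2), added until the summed length of
    the added links exhausts the energy budget; duplicate links are skipped."""
    rng = rng or np.random.default_rng()
    g = nx.cycle_graph(n)
    lengths = np.arange(2, n // 2 + 1)
    prob = lengths.astype(float) ** alpha
    prob /= prob.sum()
    spent = 0.0
    while spent < energy:
        d = int(rng.choice(lengths, p=prob))
        i = int(rng.integers(n))
        j = (i + d) % n
        if not g.has_edge(i, j):
            g.add_edge(i, j)
            spent += d
    return g

g = levy_ring_graph(200, alpha=-2.0, energy=500)
print(nx.average_shortest_path_length(g))
```

repeating the construction over a range of exponents and averaging nx.average_shortest_path_length over realizations traces out the dependence of the average shortest path length on the exponent that is compared with the terminal time above .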
acknowledging that the mobility patterns of humans and animals follow levy flights with an exponent , we find that this mobility pattern is very efficient for epidemic spreading when . this result presents a serious challenge for present - day epidemic control . on the other hand , the result is in a sense consistent with evolutionary theory : this special mobility pattern has evolved since animals first appeared on earth , presumably because it is useful for searching for food , spreading advantageous genes , and other functions important for the development of a species . consequently , it also provides an effective path for epidemics to spread , so such a conclusion is perhaps not surprising . ** acknowledgement . ** we wish to thank prof . shlomo havlin and prof . tao zhou for useful discussions . the work is partially supported by nsfc under grant no . 70771011 and 60774085 . y.hu was supported by the bnu excellent ph.d . project .
|
recently , many empirical studies uncovered that animal foraging , migration and human traveling obey levy flights with an exponent around -2 . inspired by the deluge of h1n1 this year , in this paper , the effects of levy flights mobility pattern on epidemic spreading is studied from a network perspective . we construct a spatial weighted network which possesses levy flight spatial property under a restriction of total energy . the energy restriction is represented by the limitation of total travel distance within a certain time period of an individual . we find that the exponent -2 is the epidemic threshold of sis spreading dynamics . moreover , at the threshold the speed of epidemics spreading is highest . the results are helpful for the understanding of the effect of mobility pattern on epidemic spreading .
|
high - dimensional data are increasingly encountered in many applications of statistics such as bioinformatics , information technology , medical imaging , astronomy and financial studies . in recent years , there is a growing body of literature concerning inference on the first and second order properties of high dimensional data ; see among others .the validity of these procedures is generally established under independence amongst the data vectors , which can be quite restrictive for situations that involve temporally observed data .examples include spatial - temporal modeling and financial study of a large number of asset returns .although high dimensional statistics has witnessed unprecedented development , statistical inference for high dimensional time series remains largely untouched so far . in the conventional low dimensional setting , inference for time series data typically involves the direct estimation of the asymptotic covariance matrix , which is known to be difficult in the presence of heteroscedasticity and autocorrelation of unknown forms . in the high dimensionalsetting , where the dimension is comparable or even larger than sample size , the classical inferential procedures designed for the low dimensional case are no longer applicable , e.g. , the asymptotic covariance matrix is singular . along a different line , alternative nonparametric procedures including block bootstrap , subsampling and blockwise empirical likelihood been proposed to avoid the direct estimation of covariance matrices .however , the extension of these procedures ( coupled with suitable testing procedures ) to the high dimensional setting remains unclear .one relevant high dimensional work ( ) we are aware is on the estimation rates of the covariance / precision matrices of time series . 
in this paper, we establish a general framework for conducting bootstrap inference for high dimensional stationary time series under weak dependence. we start from three motivating examples that are mainly concerned with first or second order properties of time series: (1) uniform confidence band for the mean vector; (2) testing for serial correlation; (3) testing the bandedness of the covariance matrix. the proposed bootstrap procedures are rather simple to implement and supported by simulation results. we want to emphasize that neither a gaussian assumption nor strong restrictions on the covariance structure are imposed in these applications. an important by-product of examples (2) and (3) is a covariance structure test for high dimensional time series that does not even rely on the existence of the null limit distribution. this new result is in sharp contrast with the existing literature for i.i.d. data such as. we also remark that the maximum-type testing procedure considered in these examples is expected to be particularly powerful for detecting sparse alternatives (see). a comprehensive investigation along this line is left as a future topic. the underlying theory supporting these high dimensional applications is a general gaussian approximation theory and its bootstrap version. the gaussian approximation theory quantifies the kolmogorov distance between the largest element of a sum of weakly dependent vectors and its gaussian analog that shares the same autocovariance structure. we develop our theory in the general framework of dependency graphs, which leads to delicate bounds on the kolmogorov distance for various types of time series. the approximation error, which is finite sample valid, decreases polynomially in the sample size even when the data dimension is exponentially high. moreover, we study two important dependence structures in more detail: -dependent time series and weakly dependent time series. although the sharpness of the kolmogorov distance is not established in this paper, our theoretical results (also see figure [ fig : interplay ]) strongly indicate an interesting interplay between dependence and dimensionality: the weaker the dependence among the data vectors, the faster the dimension is allowed to diverge while still obtaining an accurate gaussian approximation. we also propose an interesting "dimension free" dependence structure that allows the dimension to diverge at the same rate as if the data were independent. however, in practice, the intrinsic dependence structure of a time series is usually unknown. this motivates us to develop a bootstrap version of the gaussian approximation theory that does not require such knowledge. specifically, we propose a blockwise multiplier bootstrap that is able to capture the dependence amongst and within the data vectors. moreover, it inherits the high quality approximation without relying on the autocovariance information. we also introduce a non-overlapping block bootstrap as a more flexible alternative. the above theoretical results are major building blocks of a general framework for conducting bootstrap inference for high dimensional time series. this general framework assumes that the quantity of interest admits an approximately linear expansion, and thus covers the three examples mentioned above. this quantity of interest can be expressed as a functional of the distribution of the time series with finite or infinite length. hence, our result is also useful in making inference for the spectrum of time series.
our general gaussian approximation theory and its block bootstrap version substantially relax the independence assumption in , and are established using several techniques, including the slepian interpolation, a leave-one-block-out argument (a modification of stein's leave-one-out argument), self-normalization, weak dependence measures, and -dependent approximation. it is worth pointing out that our results are established under the physical / functional dependence measure proposed in . this framework (or its variants) is known to be very general and easy to verify for linear and nonlinear data-generating mechanisms, and it also provides a convenient way of establishing large-sample theories for stationary causal processes. in particular, our work is largely inspired by a recent breakthrough in gaussian approximation for i.i.d. data ( ) that obtained an astounding improvement over the previous results in by allowing the dimension of the data vectors to be exponentially larger than the sample size. the rest of the paper is organized as follows. in section [ sec : statappl ], we describe the three concrete bootstrap inference procedures mentioned above in detail. section [ sec : maxima ] gives the gaussian approximation result that works even when the dimension is exponentially larger than the sample size, and section [ sec : boot ] proposes the blockwise multiplier (wild) bootstrap and also the non-overlapping block bootstrap, which do not depend on the autocovariance structure of the time series. building on the results in sections [ sec : maxima ] and [ sec : boot ], a general framework for conducting bootstrap inference based on approximately linear statistics is established in section [ sec : ts ]. the three examples considered in section [ sec : statappl ] and one spectral testing example are covered by this framework. all the proofs are gathered in the supplementary material. to motivate our general theory, we consider three concrete bootstrap inference procedures for high dimensional time series: uniform confidence band; white noise testing; and bandedness testing for the covariance matrix. these procedures are rather straightforward to implement. the main focus of this section is mostly on the methodological side, and the general theoretical results are deferred to section [ sec : ts ]. an ad-hoc way of choosing the block size in the bootstrap is discussed in section [ sec : ucb ]. consider observations from a sequence of weakly dependent -dimensional time series with . we are interested in constructing a uniform confidence band for the mean vector in the form of where . in the traditional low dimensional regime, a confidence region for the mean of a multivariate time series is typically constructed by inverting a suitable test. a common choice is the wald type test, which is of the form , where and is a consistent estimator of the so-called long run variance matrix. however, obtaining a consistent could be difficult in practice due to the unknown dependence structure. to avoid this hassle, several appealing nonparametric alternatives, e.g., the moving block bootstrap method, the subsampling approach and blockwise empirical likelihood, have been proposed.
in the high dimensional regime, where the dimension of the time series is comparable to or even much larger than the sample size, inverting the wald type test is no longer applicable because the long run variance estimator is singular for . moreover, the direct application of the nonparametric approaches described above to the high dimensional setting is as yet unclear. in this subsection, we propose a bootstrap-assisted method to obtain the critical value in ( [ eg : ucb ] ), whose theoretical validity will be justified in section [ subsec : als ]. specifically, we introduce the following blockwise multiplier (wild) bootstrap. for simplicity, suppose with . define the non-overlapping block sums , and the bootstrap statistic , where is a sequence of i.i.d. random variables independent of . the bootstrap critical value is defined as we next conduct a small simulation study to assess the finite sample coverage probability of the uniform confidence band. consider a -dimensional var(1) (vector autoregressive) process , where . for the error process, we consider three cases: (1) where ; (2) , where are generated independently from (the uniform distribution on [2,3]), and are i.i.d. random variables; (3) is generated from the moving average model in (2) with being i.i.d. centralized gamma random variables. set , , and or in ( [ eq : var ] ). to implement the blockwise multiplier bootstrap, we choose . table [ tab : mean ] reports the coverage probabilities at 90% and 95% nominal levels based on 5000 simulations and 499 bootstrap resamples. we note that the coverage probabilities appear to be low for relatively small block sizes. when increases, a larger block size is generally required to capture the dependence. although the coverage probability is generally sensitive to the choice of the block size, with a proper block size the coverage probability can be reasonably close to the nominal level. for univariate time series, there are two major approaches for selecting the optimal block size: the nonparametric plug-in method (e.g.) and the empirical criteria-based method. however, these selection procedures are derived from the bias-variance tradeoff, and they are not intended to guarantee the best coverage of the confidence interval. moreover, it is still unclear how these selection rules can be extended to the high dimensional context. hence, we provide an ad-hoc way of choosing the block size below. given a set of realizations, we pick an initial block size such that where . conditional on the sample, we let be i.i.d. uniform random variables on and define with and in other words, is a non-overlapping block bootstrap sample with block size . for each (block size for the original sample), we can count the number of times that the sample mean is contained in the uniform confidence band constructed from the bootstrap sample, and then compute the empirical coverage probabilities based on bootstrap samples. this is based on the notion that is the true mean for the bootstrap sample conditional on .
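before returning to the block-size selection, the following sketch (python / numpy) makes the blockwise multiplier bootstrap described above concrete: centred non-overlapping block sums are multiplied by i.i.d. standard normal multipliers and the bootstrap quantile of the max statistic is taken as the critical value. the var(1)-style toy data, the block size, and all function and variable names are illustrative assumptions, not the authors' code or exact simulation design.

```python
import numpy as np

def multiplier_bootstrap_cv(x, b_n, n_boot=499, alpha=0.05, rng=None):
    """Blockwise multiplier bootstrap critical value for the max statistic
    sqrt(n) * max_j |xbar_j - mu_j|  (illustrative sketch only)."""
    rng = np.random.default_rng() if rng is None else rng
    n, p = x.shape
    k = n // b_n                          # number of complete blocks
    x = x[:k * b_n]                       # drop the incomplete tail block
    xbar = x.mean(axis=0)
    blocks = (x - xbar).reshape(k, b_n, p).sum(axis=1)   # centred block sums, (k, p)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        e = rng.standard_normal(k)        # i.i.d. N(0,1) multipliers, one per block
        stats[b] = np.abs(blocks.T @ e).max() / np.sqrt(k * b_n)
    return np.quantile(stats, 1 - alpha)

# toy usage with VAR(1)-type data (illustrative parameters, not the paper's setup)
rng = np.random.default_rng(0)
n, p, rho = 240, 50, 0.3
x = np.zeros((n, p))
for t in range(1, n):
    x[t] = rho * x[t - 1] + rng.standard_normal(p)
cv = multiplier_bootstrap_cv(x, b_n=8, rng=rng)
half_width = cv / np.sqrt(n)              # uniform band: xbar_j +/- half_width for all j
print(half_width)
```

the only design choice worth noting is that the multipliers act on whole blocks, so serial dependence within each block is retained in the bootstrap distribution.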
in this case, the block size which delivers the most accurate coverage for can be viewed as an estimate of the optimal for the original series. we employ the above procedure with and to choose the optimal block size. based on 200 realizations from the original data generating process, the coverage probabilities (given the selected block size) in the different simulation setups are summarized in table [ tab : mean - opt ]. we observe that the coverage probability based on the optimal block size is close to the best coverage presented in table [ tab : mean ]. finally, we point out that it might be possible to iterate the above procedure to further improve the empirical performance. [ table : coverage probabilities of the uniform confidence band for the mean, where the block size and . ] the simulation results demonstrate the usefulness of the proposed method, but they also leave some room for improvement. here we point out two possibilities: (1) it is of interest to study the studentized version of the test statistic, which may be more efficient, as expected from the low dimensional setting (see remark [ rk : student ]); (2) in the sparse situation, the test statistic can be constructed based on a suitable linear transformation of the observations. the linear transformation aims to magnify the signals owing to the dependence within the data vector under alternatives, and hence improves the power of the testing procedure, e.g., . in this subsection, we consider testing the bandedness of the covariance matrix. this problem arises, for example, in econometrics when testing certain economic theories; see and references therein. also see for the independent case. for any integer (which possibly depends on or ), we want to test . our setting significantly generalizes the one considered in , which focuses on independent gaussian vectors. here, we shall allow non-gaussian and dependent random vectors. we define the test statistic as for with , we define the block sums and the bootstrap statistic where is a sequence of i.i.d. random variables independent of . we reject the null if , where . alternatively, one can employ the non-overlapping block bootstrap (to be presented in section [ subsec : block ]) to obtain the critical value. in this section, we derive a gaussian approximation theory that serves as the first step in studying the high dimensional inference procedures in section [ sec : statappl ]. consider a sequence of -dimensional dependent random vectors with . suppose and . the gaussian counterpart is defined as a sequence of gaussian random variables independent of . in addition, preserves the autocovariance structure of in the sense that and (note that this assumption can be weakened, see remark [ rm : cov ]). the gaussian approximation theory quantifies the kolmogorov distance defined as where , , and . chernozhukov et al. (2013) recently showed that for independent data vectors, decays to zero polynomially in the sample size. in section [ subsec : dep - graph ], we substantially relax their independence assumption by first establishing a general proposition, i.e., proposition [ prop1 ], in the framework of a dependency graph. this general result leads to delicate bounds on the kolmogorov distance for various types of weakly dependent time series even when their dimension is exponentially high; see sections [ subsec : m - dep ] and [ subsec : weak - dep ].
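purely as an illustration of this kolmogorov-distance comparison (this is not the authors' simulation design), one can generate a short m-dependent moving-average series, whose scaled sum has an easy-to-simulate gaussian analog, and compare the two maxima empirically. the generator, all parameters, and the crude distance estimate below are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, m, n_rep = 200, 100, 2, 2000

def mdep_sample():
    """One realisation of an m-dependent MA(m) series: x_i = eps_i + ... + eps_{i-m}."""
    eps = rng.standard_normal((n + m, p))
    return sum(eps[j:j + n] for j in range(m + 1))

# for this toy model the scaled sum n^{-1/2} sum_i x_i has (approximately) independent
# components with variance (m+1)^2, so its gaussian analog can be simulated directly
t_x = np.array([(mdep_sample().sum(axis=0) / np.sqrt(n)).max() for _ in range(n_rep)])
t_y = (m + 1) * rng.standard_normal((n_rep, p)).max(axis=1)

# crude estimate of the kolmogorov distance between the distributions of the two maxima
grid = np.linspace(min(t_x.min(), t_y.min()), max(t_x.max(), t_y.max()), 400)
rho_hat = np.max(np.abs((t_x[:, None] <= grid).mean(0) - (t_y[:, None] <= grid).mean(0)))
print(rho_hat)     # small values indicate an accurate gaussian approximation
```

increasing m (stronger dependence) or p (higher dimension) in this sketch lets one observe the interplay between dependence and dimensionality discussed in the paper.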
in this subsection, we introduce a flexible framework for modelling the dependence among a sequence of -dimensional dependent (_not necessarily identically distributed_) random vectors. we call it a dependency graph, where is a set of vertices and is the corresponding set of undirected edges. for any two disjoint subsets of vertices, if there is no edge from any vertex in to any vertex in , the collections and are independent. let be the maximum degree of and denote . throughout the paper, we allow to grow with the sample size. for example, if an array is a dependent sequence (that is, and are independent if ), then we have . within this general framework, we want to understand the largest possible diverging rate of (w.r.t. ) under which the kolmogorov distance between the distributions of and , i.e., defined in ( [ dfn : rhon ] ), converges to zero. recall that , . the problem of comparing distributions of maxima is nontrivial since the maximum function is non-differentiable. to overcome this difficulty, we consider a smooth approximation of the maximum function , where is the smoothing parameter that controls the level of approximation. simple algebra yields that (see). denote by the class of times continuously differentiable functions from to itself, and denote by the class of functions such that for . set with . in proposition [ prop1 ] below, we derive a non-asymptotic upper bound for the quantity | ] and for some . let and . for , let be the set of neighbors of , and let be a constant depending on the threshold parameter such that . an analogous quantity can be defined for the set . define where =\sum^{n}_{i=1}{{\mathbb{e}}}z_i / n ] such that for and for . fix any and define with . for this function , , , and . here, is a smoothing parameter we will choose carefully in the proof. corollary [ corollary : m - dep ] and lemma [ lemma : self ] imply the following result. [ thm1 ] consider a -dependent stationary time series. suppose with , and and for some . further suppose that there exist constants such that uniformly holds for all large enough , and . then for any , we have . we point out that the stationarity assumption is non-essential in the proof of theorem [ thm1 ]. to characterize the dependence of -dependent time series, we adopt the idea of viewing the weakly dependent time series as the outputs of a physical system driven by i.i.d. inputs. this framework is very general and easy to verify for specific (linear or nonlinear) data-generating mechanisms; see . with some abuse of notation, let be a sequence of mean-zero i.i.d. random variables. consider a physical system , where are the inputs and is a (-dimensional) measurable function such that its output is well defined. define the sigma field with suppose the -dependent sequence has the following representation (also see the discussions in the next subsection), :=\mathcal{g}^{(m)}(\epsilon_{i - m},\epsilon_{i - m+1},\dots,\epsilon_i).\end{aligned}\ ] ] for any , let = { { \mathbb{e}}}[\mathcal{g}(\dots,\epsilon_{i-1},\epsilon_{i})|\mathcal{f}_{l-1}(i)] ] be the -dependent approximation sequence for . define in the same way as by replacing with . because and (by the lipschitz property of ), we have , \end{split}\ ] ] where for some depending on . suppose for some . by lemma a.1 of , we have where and is a positive constant depending on .
for any , we obtain \leq & \sum^{p}_{j=1}p(|x_j - x_j^{(m)}|\geq \delta_m ) \leq \sum^{p}_{j=1}\frac{1}{\delta_m^q}{{\mathbb{e}}}|x_j - x_j^{(m)}|^q \\ \leq & \sum^{p}_{j=1}\frac{c^{q/2}_q\theta_{m , j , q}^q(x)}{\delta_m^q}=\sum^{p}_{j=1}\frac{c^{q/2}_q}{\delta_m^q}\left(\sum^{+\infty}_{l = m}\theta_{l , j , q}(x)\right)^q.\end{aligned}\ ] ] optimizing the bound with respect to in ( [ eq : m - approx-1 ] ) , we deduce that which along with ( [ eq : fbeta ] ) implies that with .we give an explicit expression of the approximation error ( [ eq : m - approx ] ) in the following two examples .consider a stationary linear process , where and is a sequence of i.i.d random variables .simple calculation yields that and . for , we have under the assumption that and with , we get |\lesssim ( g_0g_1^q)^{1/(1+q)}p^{1/(1+q)}\rho^{(qm)/(1+q)}.\ ] ] consider a stationary markov chain defined by an iterated random function here s are i.i.d .innovations , and is an -valued and jointly measurable function , which satisfies the following two conditions : ( 1 ) there exists some such that and ( 2 ) where denotes the euclidean norm for a -dimensional vector . then it can be shown that has the geometric moment contraction ( gmc ) condition property and ( see example 2.1 in ). hence we are now ready to present the main result .recall that and are defined in section [ subsec : m - dep ] .[ thm : gaussian - dep ] suppose is a stationary time series which admits the representation ( [ eq : causal ] ) .assume that , and for some constants and .suppose that there exist and such that and assumption [ assum : tail ] is fulfilled .then for , we have where . the approximation parameter will be chosen appropriately to optimize the bound ( [ rhon1 ] ) .the gaussian sequence can be constructed as a causal linear process ( e.g. based on the wold representation theorem ) to capture the second order property of .we note that the conditions in theorem [ thm : gaussian - dep ] can be categorized into two types : tail restriction and weak dependence assumption .assumption [ assum : tail ] and the condition that impose restrictions on the tails of uniformly across , while conditions ( [ eq : h3])-([eq : h4 ] ) essentially require weak dependence uniformly across all the components of .when for , we have suppose for some and .then by choosing with and , and assuming that , condition ( 1 ) in assumption [ assum : tail ] holds with , and the same conclusion holds under condition ( 2 ) in assumption [ assum : tail ] provided that , , and with .below we provide some empirical evidence for two conjectures proposed in remark [ rem : interp ] , in particular the interplay between dependence and dimensionality . to this end, we generate from a multivariate arch model , where with being a sequence of i.i.d random variables , and with being a lower triangular matrix based on the cholesky decomposition of . here with and for notice that are uncorrelated and . to capture the second order property of ,we generate independent gaussian vectors from .figure [ fig : interplay ] illustrates the interplay between dependence and dimensionality using the p - p plots for , , and . for moderate and , the gaussian approximation is reasonably good , which is consistent with our theory .moreover , we also observe the following phenomena . 
on the one hand, as increases, the approximation deteriorates for the same , which controls the strength of dependence; on the other hand, for fixed , the approximation becomes worse in the right tail, which is most relevant for practical applications, as increases. note that our theoretical results are finite sample valid, and thus the sample size is supposed not to play any role here. hence, we believe that the weaker the dependence among the data vectors, the faster the dimension is allowed to diverge while still obtaining an accurate gaussian approximation. [ figure [ fig : interplay ] : p-p plots illustrating the interplay between dependence and dimensionality. ] in the end, we discuss an intriguing question: is there any so-called "dimension free" dependence structure? in other words, what kind of dependence assumption will not affect the rate at which the dimension may increase (as compared to the independence case in )? to address this question, we consider one possibility: the original -dimensional vector can be decomposed into two components, namely one time series component and one independent component, where the former component is asymptotically ignorable compared to the latter as grows. our contribution here is to precisely characterize such a "dimension free" dependence structure. [ prop : dim - free ] consider a -dimensional time series. suppose there exists a permutation such that , where is a -dimensional (possibly nonstationary) time series and is a -dimensional sequence of independent variables. suppose and are independent. when satisfies the assumptions in corollary 2.1 of , we have . recall that is and is defined in a similar manner. then under the additional assumption that and , we have . the additional assumption ( [ ass : add ] ) implies that is of a polynomial order w.r.t. while achieves the exponential order as specified in corollary 2.1 of . therefore, the largest possible diverging rate of allowed in proposition [ prop : dim - free ] remains the same as that in the independence case ( ). the independence assumption between and might be relaxed. here, we assume it mainly for technical simplicity, so that only a single dependence assumption needs to be imposed on . in practice, the intrinsic dependence structure of time series data is usually unknown. hence, the gaussian approximation theory becomes too restrictive to use directly. however, this general theory provides a foundation for developing the bootstrap inference theory that does not require such knowledge. in this section, we consider two types of bootstrap procedures: (i) the blockwise multiplier bootstrap; and (ii) the non-overlapping block bootstrap. the former is employed in section [ sec : statappl ], while the latter is a more flexible alternative. to approximate the quantiles of , we introduce a blockwise multiplier bootstrap procedure for the -dependent and weakly dependent time series considered in sections [ subsec : m - dep ] and [ subsec : weak - dep ]. suppose , where and as . let be a sequence of i.i.d. variables that are independent of . define recall the definitions of and in ( [ eq : ab ] ). conditional on , are mean-zero gaussian random variables such that thus we have conditional on the sample, define the -quantile of as our goal below is to quantify . to this end, consider the estimation errors where . recall that is a nondecreasing convex function with .
define the orlicz norm as we first consider an -dependent stationary sequence where is allowed to grow with the sample size. define the following quantities, which characterize the higher order properties of the time series (e.g., and below characterize the fourth order property of ), where denotes the cumulant (see e.g.) and . the following lemma plays an important role in the subsequent derivations. [ lemma : m - dep - boot ] suppose is an -dependent stationary sequence. then with , alternatively, we have let in the spirit of lemma 3.2 in , we can show that when for some where for some constant depending on . using the arguments in theorem 3.1 of , it is not hard to show that because , we deduce that [ assum : boot - m - dep ] suppose with set and with , and . assume that under condition (1) in assumption [ assum : tail ] with or under condition (2) in assumption [ assum : tail ]. further assume that one of the following two conditions holds. * condition 1 : * , where and satisfy that * condition 2 : * , and satisfy that we are now in a position to present the first main result in this section. [ thm : m - dep - boot ] consider an -dependent stationary time series. under the assumptions in theorem [ thm2 ] and assumption [ assum : boot - m - dep ], . the next theorem extends the above result to weakly dependent stationary time series. [ thm : dep - boot ] consider a weakly dependent stationary time series. suppose for and some . then under the assumptions in theorem [ thm : gaussian - dep ] and assumption [ assum : boot - m - dep ], . remark that the results of theorems [ thm : m - dep - boot ] and [ thm : dep - boot ] are still valid even when is fixed or grows more slowly than the exponential rate required in assumption [ assum : boot - m - dep ]. when has the so-called geometric moment contraction (gmc) property (uniformly across its components), we have (i.e., ) by proposition 2 of and the assumption that . it is known that in the low dimensional setting, the tapered block bootstrap method yields an improvement over the block bootstrap in terms of the bias of variance estimation, and thus provides a better mse rate; see . hence, we may also want to combine the blockwise multiplier bootstrap method proposed here with a data tapering scheme. for example, let : be a data taper with for . one can consider the following modification . more detailed investigation along this direction is left for future study. in this subsection, we propose an alternative bootstrap procedure in the high dimensional setting: the non-overlapping block bootstrap ( ). in general, this bootstrap procedure may avoid estimating the influence function (defined in section [ sec : ts ]), in contrast with the blockwise multiplier bootstrap. we provide theoretical justification for this procedure by establishing its equivalence with the multiplier bootstrap; see ( [ eq : equi ] ). assume for simplicity that , where . conditional on the sample, we let be i.i.d. uniform random variables on and define with and in other words, is a non-overlapping block bootstrap sample with block size . define where , , and and are i.i.d. draws from the empirical distribution of . also define where is a sequence of i.i.d. random variables. throughout the following discussions, we suppose that . the theoretical validity of the multiplier bootstrap based on can be justified using arguments similar to those in the previous subsection, because the same arguments go through when and are replaced by (provided that ).
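a minimal sketch of the non-overlapping block bootstrap resampling step just described, applied here to the max-type mean statistic, is given below; the block size, the choice of statistic, and all names are illustrative assumptions rather than the paper's prescription.

```python
import numpy as np

def nbb_resample(x, b_n, rng):
    """One non-overlapping block bootstrap sample of the rows of x: draw k block
    labels i.i.d. uniformly with replacement and concatenate the chosen blocks."""
    n, p = x.shape
    k = n // b_n
    idx = rng.integers(0, k, size=k)
    rows = (idx[:, None] * b_n + np.arange(b_n)).ravel()
    return x[rows]

def nbb_critical_value(x, b_n, n_boot=499, alpha=0.05, rng=None):
    """Bootstrap (1 - alpha) quantile of sqrt(n) * max_j |mean_j(x*) - mean_j(x)|."""
    rng = np.random.default_rng() if rng is None else rng
    n, p = x.shape
    k = n // b_n
    x = x[:k * b_n]
    xbar = x.mean(axis=0)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        xs = nbb_resample(x, b_n, rng)
        stats[b] = np.sqrt(k * b_n) * np.abs(xs.mean(axis=0) - xbar).max()
    return np.quantile(stats, 1 - alpha)

rng = np.random.default_rng(2)
x = rng.standard_normal((200, 30))        # placeholder data
print(nbb_critical_value(x, b_n=10, rng=rng))
```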
by showing that with probability , we establish the validity of non - overlapping block bootstrap in theorem [ thm : block - boot ] .[ assum : block - boot ] assume that and with , where [ thm : block - boot ] suppose that and for some constants and , where .further assume that the assumptions in theorem [ thm : m - dep - boot ] or theorem [ thm : dep - boot ] hold with and .then ( [ eq : equi ] ) holds with probability for some . moreover , we have where and this section , we establish a general framework of conducting bootstrap inference for high dimensional time series based on the theoretical results in section [ sec : boot ] .this general framework assumes that the -dimensional quantity of interest , denoted as , admits an approximately linear expansion , and thus covers three examples considered in section [ sec : statappl ] . in particular , is expressed as a functional of the distribution of a -dimensional _ weakly dependent _ stationary time series here is different from the dimension of discussed in previous sections . ]motivated by the testing on spectral properties , we further extend the results in section [ subsec : als ] to an infinite dimensional parameter case in section [ subsec : infinite ] . in this subsection, we consider the quantities that can be expressed as functionals of the marginal distribution of a block time series with length : , where and . here, we allow the integer to grow with .define as the empirical distribution for .the distribution function of is denoted as .we are interested in testing the parameter for some functional .the parameter dimension depends on either or , e.g. , or . a natural estimator for then given by .assume admits the following approximately linear expansion in a neighborhood of : where is called influence function " ( see e.g. ) and is a remainder term .examples of approximately linear statistics include various location and scale estimators for the marginal distribution of , von mises statistics and -estimators of time series models ( see ) .we are interested in testing the null hypothesis versus the alternative , where .the test is proposed as we next apply the bootstrap theory in section [ sec : boot ] to obtain the critical value .specifically , we define and , where is some estimate of .suppose , where and as define the estimated block sums where and .let where with being a sequence of i.i.d independent of .the bootstrap critical value is given by [ thm : app-1 ] suppose the assumptions in theorem [ thm : m - dep - boot ] or theorem [ thm : dep - boot ] hold for , where is replaced by . then under assumption [ assum : app-1 ] and , we have theorem [ thm : app-1 ] applies directly to the methods described in sections [ sec : ucb]-[subsec : cov ] for both m - dependent and weakly dependent stationary time series .for example , consider the white noise testing problem in section [ subsec : cov ] .suppose .in this example , with and . then we have and with and .note that the bootstrap procedures considered in section [ sec : statappl ] are in fact simplified versions of the blockwise multiplier bootstrap in section [ sec : boot ] with and .our next theorem covers the problem of testing the bandedness of covariance matrix in section [ sec : bandtest ] . recall that where . 
with some abuse of notation, let with . [ thm : bandtest ] suppose the assumptions in theorem [ thm : m - dep - boot ] or theorem [ thm : dep - boot ] hold for , where is replaced by the cardinality of the set . then under assumption [ assum : band ] in the supplementary material and , we have where is given in section [ sec : bandtest ]. the proof of theorem [ thm : bandtest ] is similar to that of theorem [ thm : app-1 ], and is thus skipped. in section [ subsec : band ], we show that assumption [ assum : band ] can be verified under suitable primitive conditions. to avoid direct estimation of the influence function, we may alternatively apply the non-overlapping block bootstrap procedure in section [ subsec : block ]. assume for simplicity that , where . let be i.i.d. uniform random variables on and define with and compute the block bootstrap estimate based on the bootstrap sample. let be the quantile of the distribution of conditional on the sample. in what follows, we further justify the validity of the non-overlapping block bootstrap in the same framework. [ rk : student ] an alternative way to construct the uniform confidence band or perform hypothesis testing is based on the studentized statistic. for example, let be a consistent estimator of . then the uniform confidence band can be constructed as . the blockwise multiplier bootstrap or the non-overlapping block bootstrap can be modified accordingly to obtain the critical value. to broaden the applicability of our method, we extend the above results to cover infinite dimensional parameters that are functionals of the joint distribution of , denoted as . a typical example is the spectral quantities that depend on the distribution of the whole time series rather than on any finite dimensional distribution; see example [ eg : spe ]. hence, the extension in this section is useful in conducting inference for the spectrum of high dimensional time series. suppose and its estimator is . again, is allowed to grow with or . assume that there exists a sequence of approximating statistics for that is a functional of the -dimensional empirical distribution, and a sequence of approximating (non-random) quantities for . then our bootstrap method as proposed in section [ subsec : als ] still works, provided that these two approximation errors can be well controlled and similar regularity conditions hold for the expansion of the approximating statistics around , i.e., ( [ app : exp ] ). to be more precise, we impose the following assumption. [ assum : app-3 ] for a sequence of positive integers that grow with , let with . assume the expansion , where is a remainder term. denote . suppose that and for some . [ eg : spe ] consider the spectral mean , where denotes the trace of a square matrix, is the spectral density of and \rightarrow \mathbb{r}^{p\times p}. ] for , here can be interpreted as the projection of the spectral density matrix onto directions defined by , with . a sample analogue of is the periodogram , with .
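as a rough illustration of the periodogram just introduced and of the plug-in spectral mean discussed next, the sketch below computes the p x p periodogram matrices at the fourier frequencies and a riemann-sum plug-in estimate; the normalization convention, the weight function, and all names are assumptions and may differ from the paper's exact definitions.

```python
import numpy as np

def periodogram_matrices(x):
    """p x p periodogram matrices I(w_l) at the fourier frequencies w_l = 2*pi*l/n,
    using the convention I(w) = d(w) d(w)^* / (2*pi*n) with d(w) the DFT of x."""
    n, p = x.shape
    d = np.fft.fft(x, axis=0)                                   # (n, p)
    freqs = 2 * np.pi * np.arange(n) / n
    I = np.einsum('lj,lk->ljk', d, d.conj()) / (2 * np.pi * n)  # (n, p, p)
    return freqs, I

def spectral_mean(x, phi):
    """Plug-in estimate of the spectral mean  integral tr(phi(w) f(w)) dw,
    approximated by a riemann sum of tr(phi(w_l) I(w_l)) over fourier frequencies."""
    freqs, I = periodogram_matrices(x)
    vals = [np.trace(phi(w) @ I[l]) for l, w in enumerate(freqs)]
    return (2 * np.pi / len(freqs)) * np.sum(vals)

x = np.random.default_rng(3).standard_normal((256, 4))
# with phi = identity this approximates the trace of the lag-0 autocovariance
print(spectral_mean(x, phi=lambda w: np.eye(4)).real)
```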
then a plug - in estimator for given by .letting , then with consider the approximating quantity with .it is then straightforward to see that where and is the corresponding remainder term .recall that with being the joint distribution of .the statistic for testing the null hypothesis versus the alternative , where , is given by with some abuse of notation , we now define and with being some estimate of ( note that in this case is an array ) .suppose .we can define and in a similar way as before ( see ( [ eq : app - ab ] ) ) , where and .let where with being a sequence of i.i.d independent of . the bootstrap critical value is then given by following the arguments in the proof of theorem [ thm : app-1 ] , we obtain the following result . [ thm : app-3 ] suppose assumption [ assum : app-3 ] holds and the assumptions in theorem [ thm : m - dep - boot ] or theorem [ thm : dep - boot ] are satisfied for , where is replaced by .assume in addition that , where , and with and .then we have for some 2.3em1 cai , t. t. and jiang , t. ( 2011 ) .limiting laws of coherence of random matrices with applications to testing covariance structure and construc tion of compressed sensing matrices .statist . _* 39 * 1496 - 1525 .2.3em1 liu , w. , lin , z.y . and shao , q .- m .the asymptotic distribution and berry - esseen bound of a new test for independence in high dimension with an application to stochastic optimization .* 18 * 2337 - 2366 .define with the slepian interpolation and let define and .write , and for , where .note that \\=&\frac{1}{2}(i_1+i_2+i_3 ) , \end{split}\ ] ] where and , \\&i_2=\sum^{n}_{i=1}\sum^{p}_{k , j=1}\int^{1}_{0}{{\mathbb{e}}}[\partial_k\partial_j m(z^{(i)}(t))\dot{z}_{ij}(t)v_{k}^{(i)}(t)]dt , \\& i_3=\sum^{n}_{i=1}\sum^{p}_{k , l , j=1}\int^{1}_{0}\int^{1}_{0}(1-\tau){{\mathbb{e}}}[\partial_l\partial_k\partial_jm(z^{(i)}(t)+\tau v^{(i)}(t))\dot{z}_{ij}(t)v_{k}^{(i)}(t)v_{l}^{(i)}(t)]dtd\tau . \end{split}\ ] ] using the fact that and are independent , and , we have to bound the second term , define the expanded neighborhood around , and , where with . by taylor expansion , we have \\&+\sum^{n}_{i=1}\sum^{p}_{k , j , l=1}\int^{1}_{0}\int^{1}_{0}{{\mathbb{e}}}[\partial_k\partial_j\partial_l m(\mathcal{z}^{(i)}(t)+\tau \mathcal{v}^{(i)}(t))\dot{z}_{ij}(t)v_{k}^{(i)}(t)\mathcal{v}^{(i)}_l(t)]dtd\tau \\=&\sum^{n}_{i=1}\sum^{p}_{k , j=1}\int^{1}_{0}{{\mathbb{e}}}[\partial_k\partial_j m(\mathcal{z}^{(i)}(t))]{{\mathbb{e}}}[\dot{z}_{ij}(t)v_{k}^{(i)}(t)]dt \\&+\sum^{n}_{i=1}\sum^{p}_{k , j , l=1}\int^{1}_{0}\int^{1}_{0}{{\mathbb{e}}}[\partial_k\partial_j\partial_l m(\mathcal{z}^{(i)}(t)+\tau \mathcal{v}^{(i)}(t))\dot{z}_{ij}(t)v_{k}^{(i)}(t)\mathcal{v}^{(i)}_l(t)]dtd\tau \\= & i_{21}+i_{22},\end{aligned}\ ] ] where we have used the fact that and are independent .let . by the assumption that }(2\sqrt{t}+\sqrt{1-t})m_{xy}/\sqrt{n } \\\leq & \sqrt{5}d_n^2m_{xy}/\sqrt{n } \leq \beta^{-1}/2 \leq \beta^{-1},\end{aligned}\ ] ] where the second inequality comes from the facts that , and . by lemma a.5 in , we have for every and satisfy that with for . 
along with lemma a.6 in , we obtain |{{\mathbb{e}}}[\dot{z}_{ij}(t)v_{k}^{(i)}(t)]|dt \\ \lesssim & \sum^{n}_{i=1}\sum^{p}_{k , j=1}\int^{1}_{0}{{\mathbb{e}}}[u_{jk}(z(t))]|{{\mathbb{e}}}[\dot{z}_{ij}(t)v_{k}^{(i)}(t)]|dt \\ \lesssim & ( g_2+g_1\beta)\int^{1}_{0}\max_{1\leq j , k\leq p}\sum^{n}_{i=1}|{{\mathbb{e}}}[\dot{z}_{ij}(t)v_{k}^{(i)}(t)]|dt.\end{aligned}\ ] ] since , we have \tau \notag \\ \leq & \sum^{n}_{i=1}\sum^{p}_{k , j , l=1}\int^{1}_{0}\int^{1}_{0}{{\mathbb{e}}}[u_{kjl}(\mathcal{z}^{(i)}(t)+\tau \mathcal{v}^{(i)}(t))|\dot{z}_{ij}(t)v_{k}^{(i)}(t)\mathcal{v}^{(i)}_l(t)|]dtd\tau \notag \\ \lesssim & \sum^{n}_{i=1}\sum^{p}_{k , j , l=1}\int^{1}_{0}{{\mathbb{e}}}[u_{kjl}(z(t))|\dot{z}_{ij}(t)v_{k}^{(i)}(t)\mathcal{v}^{(i)}_l(t)|]dtd\tau \notag \\ \leq & \int^{1}_{0}{{\mathbb{e}}}\left[\sum^{p}_{k , j , l=1}u_{kjl}(z(t))\max_{1\leq k , j , l\leq p}\sum^{n}_{i=1}|\dot{z}_{ij}(t)v_{k}^{(i)}(t)\mathcal{v}^{(i)}_l(t)|\right]dtd\tau \notag \\ \lesssim & ( g_3+g_2\beta+g_1\beta^2)\int^{1}_{0}{{\mathbb{e}}}\max_{1\leq k , j , l\leq p}\sum^{n}_{i=1}|\dot{z}_{ij}(t)v_{k}^{(i)}(t)\mathcal{v}^{(i)}_l(t)|dtd\tau.\end{aligned}\ ] ] to bound the integration on ( [ eq : i22 ] ) , we let and note that as for , by the assumption that ( in fact , we only need to require that for all ) , we have |=\max_{1\leq j , k\leq p}\frac{1}{n}\sum^{n}_{i=1}\left|\sum_{l\in \widetilde{n}_i}({{\mathbb{e}}}\widetilde{x}_{ij}\widetilde{x}_{lk}-{{\mathbb{e}}}\widetilde{y}_{ij}\widetilde{y}_{lk})\right| \\=&\max_{1\leq j , k\leq p}\frac{1}{n}\sum^{n}_{i=1}\left|\sum_{l\in \widetilde{n}_i}({{\mathbb{e}}}\widetilde{x}_{ij}\widetilde{x}_{lk}-{{\mathbb{e}}}x_{ij}x_{lk})+\sum_{l\in \widetilde{n}_i}({{\mathbb{e}}}y_{ij}y_{lk}-{{\mathbb{e}}}\widetilde{y}_{ij}\widetilde{y}_{lk})\right| \\ \leq & \max_{1\leq j , k\leq p}\frac{1}{n}\sum^{n}_{i=1}\left|\sum_{l\in \widetilde{n}_i}\left\{{{\mathbb{e}}}y_{lk}(y_{ij}-\widetilde{y}_{ij})+{{\mathbb{e}}}\widetilde{y}_{ij}(y_{lk}-\widetilde{y}_{lk})\right\}\right| \\&+\max_{1\leq j , k\leq p}\frac{1}{n}\sum^{n}_{i=1}\left|\sum_{l\in \widetilde{n}_i}\left\{{{\mathbb{e}}}x_{lk}(x_{ij}-\widetilde{x}_{ij})+{{\mathbb{e}}}\widetilde{x}_{ij}(x_{lk}-\widetilde{x}_{lk})\right\}\right| \\ \leq & \phi(m_{x},m_y ) .\end{split}\ ] ] using similar arguments as above , we have with we first consider the term .using the fact that we get on the other hand , notice that similarly , we have note that summarizing the above results , we have alternatively , we can bound in the following way . 
by lemmas a.5 and a.6 in , we have \tau \\ \lesssim & \sum^{n}_{i=1}\sum^{p}_{k , j , l=1}\int^{1}_{0}{{\mathbb{e}}}[u_{kjl}(\mathcal{z}^{(i)}(t))]{{\mathbb{e}}}|\dot{z}_{ij}(t)v_{k}^{(i)}(t)v_{l}^{(i)}(t)|dt \\ \lesssim & \sum^{n}_{i=1}\sum^{p}_{k , j , l=1}\int^{1}_{0}{{\mathbb{e}}}[u_{kjl}(z(t))]{{\mathbb{e}}}|\dot{z}_{ij}(t)v_{k}^{(i)}(t)v_{l}^{(i)}(t)|dt \\ \leq & n(g_3+g_2\beta+g_1\beta^2)\int^{1}_{0}w(t)\max_{1\leq j , k , l\leq p } ( \bar{{{\mathbb{e}}}}|\dot{z}_{ij}(t)/w(t)|^3)^{1/3 } ( \bar{{{\mathbb{e}}}}|v_{k}^{(i)}(t)|^3)^{1/3 } ( \bar{{{\mathbb{e}}}}|v_{l}^{(i)}(t)|^3)^{1/3}dt.\end{aligned}\ ] ] notice that it is not hard to see that thus we derive that notice that , and .define the then following the arguments in the proof of proposition [ prop1 ] , we can show that which implies that the conclusion follows from the proof of proposition [ prop1 ] .we only need to prove the result for as the inequality holds trivially for .suppose that the distributions of and are both symmetric , then we have where we have used theorem 2.15 in .let be an independent copy of in the sense that have the same joint distribution as that for , and define ( and ) in the same way as ( and ) by replacing with .following the arguments in the proof of theorem 2.16 in , we deduce that for , where we have used the fact that note that \leq & p(\max_{1\leq j\leqp}|x_j-\widetilde{x}_j|>\delta)+p(\max_{1\leq j\leq p}|y_j-\widetilde{y}_j|>\delta ) \\ \leq&\sum^{p}_{j=1}\left\{p(|x_j-\widetilde{x}_j|>\delta)+p(|y_j-\widetilde{y}_j|>\delta)\right\}.\end{aligned}\ ] ]let where applying lemma [ lemma : self ] and using the union bound , we have with probability at least , by the assumption , therefore with probability at least where we have used the fact that and the cauchy - schwarz inequality .the same argument applies to the gaussian sequence . summarizing the above results and along with ( [ eq : m - dep1 ] ), we deduce that which also implies that for -dependent sequence , provided that . consider a `` smooth '' indicator function ] .then we have -{{\mathbb{e}}}[\mathcal{g}_k(\dots,\epsilon_{l},\epsilon_{l+1})|\mathcal{f}_{l-1}(l+1 ) ] = \sum^{m}_{j = l}\mathcal{p}_jx_{(l+1)k}.\end{aligned}\ ] ] note that -{{\mathbb{e}}}[x_{ik}|\mathcal{f}_{j-1}(i ) ] \\=&{{\mathbb{e}}}[\mathcal{g}_k(\dots,\epsilon_{i-1},\epsilon_i)-\mathcal{g}_k(\dots,\epsilon_{i - j}',\epsilon_{i - j+1},\dots,\epsilon_{i-1},\epsilon_i)|\mathcal{f}_j(i ) ] \\=&{{\mathbb{e}}}[\mathcal{g}_k(\dots,\epsilon_{j-1},\epsilon_j)-\mathcal{g}_k(\dots,\epsilon_{0}',\epsilon_{1},\dots,\epsilon_{j-1},\epsilon_j)|\mathcal{f}_j(j)].\end{aligned}\ ] ] jensen s inequality yields that which implies that therefore , we obtain we need to verify that the -dependent approximation satisfies the assumptions in theorem [ thm2 ] . using the convexity of and jensen s inequality we have under condition ( 1 ) in assumption [ assum : tail ] , and under condition ( 2 ) in assumption [ assum : tail ] .we claim that as which implies that and with and . thus under the assumptions in theorem [ thm : gaussian - dep ], we have for some constants uniformly for all large enough .to show ( [ eq : thm : gaussian - dep ] ) , we note that for the first term , we have where we have used the fact that and . 
under the assumption that , we have as on the other hand , note that for thus we have which implies that as lemma [ lemma : m - dep ] verifies the first condition in ( [ eq : condition ] ) .the same arguments apply to .the triangle inequality and ( [ eq : m - approx ] ) imply that with being the -dependent approximation for .the conclusion thus follows from theorem [ thm1 ] and theorem [ thm2 ] .next we analyze and . under the assumptions in corollary 2.1 of , we have notice that in this case , we allow with ( assuming that in corollary 2.1 of ) . by ( [ eq : dim - free1 ] ) , and the independence between and , we obtain \lesssim \sum^{q}_{j=1}{{\mathbb{e}}}\left[p_y\left(\max_{q+1\leq i\leq p}y_i < x_j\right)\right]+qn^{-c},\end{aligned}\ ] ] where denotes the probability measure with respect to .let . using the concentration inequality( see e.g. ( 7.3 ) of and theorem a.2.1 of ) , for , we have where . under the assumption that we can choose such that and .then we have \leq \sum^{q}_{j=1}{{\mathbb{e}}}\exp\left(-\frac{1}{2\bar{\sigma}}\left({{\mathbb{e}}}\max_{q+1\leq i\leq p}y_i - x_j\right)^2_+\right ) \\\leq & \sum^{q}_{j=1}{{\mathbb{e}}}\exp\left(-\frac{1}{2\bar{\sigma}}\left({{\mathbb{e}}}\max_{q+1\leq i\leq p}y_i - x_j\right)^2_+\right)\mathbf{i}\{x_j\leq \widetilde{q}\}+ \sum^{q}_{j=1}{{\mathbb{e}}}\mathbf{i}\{x_j>\widetilde{q}\ } \\\leq & \exp\left(\log q-\frac{1}{2\bar{\sigma}}\left({{\mathbb{e}}}\max_{q+1\leq i\leq p}y_i-\widetilde{q}\right)^2_+\right)+q\max_{1\leq j\leq q}{{\mathbb{e}}}|x_j|/\widetilde{q}=o(1).\end{aligned}\ ] ] moreover , if for , we can replace by for some thus we get +qn^{-c } \lesssim n^{-c''}.\end{aligned}\ ] ] similar argument applies to and the conclusion follows from ( [ eq : dim - free1 ] ) .note that for any is a sequence of i.i.d random variables .let and . then by lemma a.1 in ,we have cauchy - schwarz inequality yields that by theorem [ thm2 ] , choosing for some we have . pick with then it is easy to verify that the terms and are both of order with . finally by ( [ eq : boot ] ) ,we have the result under condition 2 can be proved in a similar manner .let be the -dependent approximation sequence for .define , , and in a similar way as , , and by replacing with .notice that by lemma a.1 of , we have for some .it follows that similarly we have using similar arguments in the proof of theorem [ thm : gaussian - dep ] , we have thus by ( [ eq : boot ] ) , we have then by lemma a.1 in , we have where the first two terms can be bounded using similar arguments in the proof of lemma [ lemma : m - dep - boot ] , and the last two terms decay exponentially .the same arguments apply to the terms associated with . by theorem [ thm : gaussian - dep ], we have the assumption that for , and with implies that decays exponentially .the rest of the proof is similar to those in the proof of theorem [ lemma : m - dep - boot ] .our arguments below apply to -dependent time series , and can be easily extended to weakly dependent time series by employing the -approximation techniques ( that incurs only an asymptotically ignorable error ) .let , , and be some generic constants which can be different from line to line .define following the arguments in the proof of lemma [ lemma : m - dep - boot ] , we have similarly we can show that where we have used the fact that by markov s inequality , we have with probability , uniformly for it implies that with probability , . 
by ( [ eq : block - boot ] ) , we have with probability with , for some small because , we can apply corollary 2.1 in to conclude that with probability , next , notice that with probability , we have . using the tail property of standard normal distribution , we can choose such that with probability , and for some properly chosen and .therefore by lemma 2.1 in , we obtain that with probability , by ( [ eq : block1 ] ) and ( [ eq : block2 ] ) , ( [ eq : equi ] ) holds with probability .the second part of the theorem follows from theorem [ thm : m - dep - boot ] and theorem [ thm : dep - boot ] .define where and since we have let and for some large enough and small enough ( e.g. ) such that we show that because and conditional on , we have \leq c'\sqrt{\mathcal{e}_{ab}\log(2q_0)} ] .note that in this case , is allowed to grow arbitrarily .
|
this article studies bootstrap inference for high dimensional weakly dependent time series in a general framework of approximately linear statistics . the following high dimensional applications are covered : ( 1 ) uniform confidence band for mean vector ; ( 2 ) specification testing on the second order property of time series such as white noise testing and bandedness testing of covariance matrix ; ( 3 ) specification testing on the spectral property of time series . in theory , we first derive a gaussian approximation result for the maximum of a sum of weakly dependent vectors , where the dimension of the vectors is allowed to be exponentially larger than the sample size . in particular , we illustrate an interesting interplay between dependence and dimensionality , and also discuss one type of dimension free " dependence structure . we further propose a blockwise multiplier ( wild ) bootstrap that works for time series with unknown autocovariance structure . these distributional approximation errors , which are finite sample valid , decrease polynomially in sample size . a non - overlapping block bootstrap is also studied as a more flexible alternative . the above results are established under the general physical / functional dependence framework proposed in wu ( 2005 ) . our work can be viewed as a substantive extension of chernozhukov et al . ( 2013 ) to time series based on a variant of stein s method developed therein . and blockwise bootstrap , gaussian approximation , high dimensionality , physical dependence measure , slepian interpolation , stein s method , time series .
|
in recent years, the study of quantum entanglement has provided us with novel perspectives and tools to address many-body problems. progress in our understanding of many-body entanglement has resulted both in the development of efficient tensor network descriptions of many-body wavefunctions and in the identification of diagnoses for quantum criticality and topological order. measures of entanglement have played a key role in the above accomplishments. the most popular measure is the entanglement entropy, namely the von neumann entropy of the reduced density matrix of region , which is used to quantify the amount of entanglement between region and its complement when the whole system is in the pure state . in order to go beyond bipartite entanglement for pure states, another measure of entanglement was introduced, namely the entanglement negativity. the entanglement negativity is used to quantify the amount of entanglement between parts and when these are in a (possibly mixed) state . one can always think of regions and as being parts of a larger system in a pure state such that , and therefore use the entanglement negativity to also characterize tripartite entanglement. recently, calabrese, cardy, and tonni have sparked renewed interest in the entanglement negativity through a remarkable exact calculation of its scaling in conformal field theory. similarly, an exact calculation is possible for certain systems with topological order. moreover, the negativity is easily accessible from tensor network representations and through quantum monte carlo calculations. in this paper we add to the above recent contributions by presenting a disentangling theorem for the entanglement negativity. this technical theorem allows us to use the negativity to learn about the structure of a many-body wave-function. the theorem states that if and only if the negativity between parts and of a system in a pure state does not decrease when we trace out part , that is, if and only if , see fig. [ fig : abc ], then it is possible to factorize the vector space of part as (direct sum with an irrelevant subspace) in such a way that the state itself factorizes as . this is a remarkable result. notice that one can come up with many different measures of entanglement for mixed states (most of which may be very hard to compute). these measures of entanglement will in general differ from each other and from the negativity for particular states, and it is often hard to attach a physical meaning to the concrete value an entanglement measure takes. [ for instance, in the case of the logarithmic negativity, we only know that it is an upper bound to how much pure-state entanglement one can distill from the mixed state ]. however, the disentangling theorem tells us that through a calculation of the entanglement negativity we can learn whether the wave-function factorizes as in eq. [ eq : factorization ]. that is, we are not just able to use the entanglement negativity to attach a number to the amount of mixed-state entanglement, but we are also able to learn about the intricate structure of the many-body wave-function. in some sense, this theorem is analogous to hayden et al.'s necessary and sufficient conditions for the saturation of the strong subadditivity inequality for the von neumann entropy, a beautiful result with deep implications in quantum information theory.
we also present a numerical study of monogamy that puts the above disentangling theorem in a broader perspective. it follows from eq. [ eq : factorization ] that state has no entanglement between parts and , so that . that is, the disentangling theorem refers to a setting where the entanglement negativity fulfills the monogamy relation , (for the particular case ). we have seen numerically that the entanglement negativity does not fulfill the monogamy condition of eq. [ eq : monogamy ]. however, we have also found that the square of the entanglement negativity satisfies the monogamy relation : for the particular case of a three-qubit system, this result had previously been proved analytically by ou and fan. the rest of the paper is divided into sections as follows. first, in sect. ii we review the entanglement negativity. then in sect. iii we present and prove the disentangling theorem and discuss two simple corollaries. finally, in sect. iv we analyze a monogamy relation for the entanglement negativity, and sect. v contains our conclusions. the negativity (now known as entanglement negativity) was first introduced in ref. and later shown in ref. to be an entanglement monotone and therefore a suitable candidate to quantify entanglement (see also refs. ). the entanglement negativity of is defined as the absolute value of the sum of negative eigenvalues of , where the symbol means partial transpose with respect to subsystem . equivalently, where . a related quantity, the logarithmic negativity , is an upper bound to the amount of pure state entanglement that can be distilled from . ref. contains a long list of properties of the entanglement negativity. in this paper, we will need the following two results (see lemma 2 in ref. ): * _ lemma 1 _ * : for any hermitian matrix a there is a decomposition of the form , where are density matrices and . we say that a specific decomposition of the form is optimal if is minimal over all possible decompositions of the same form. * _ lemma 2 _ * : the following four statements about a decomposition of the form in lemma 1 are equivalent: 1. the decomposition is optimal (that is, is minimal). 2. . 3. (that is, is the absolute value of the sum of negative eigenvalues of , or its negativity). 4. and have orthogonal support, so that (we say and are orthogonal). [ figure [ fig : abc ] : if , then there exists a factorization of such that the whole state can be factorized as . ] let be a pure state of a system made of three parts , , and , with vector space . let be the reduced density matrix for parts , and let and be the entanglement negativity between and for state , and between parts and for state , respectively; see fig. [ fig : abc ]. * _ theorem 3 ( disentangling theorem ) : _ * the entanglement negativities and are equal if and only if there exists a decomposition of as , such that the state decomposes as . that is, _ proof : _ proving one direction, namely that if , then , is simple: one just needs to explicitly compute the negativities and following their definition. we thus focus on proving the opposite direction.
for the sake of clarity, the proof is divided into three steps: (a) preparation; (b) orthogonality condition; (c) factorization. _ step (a). preparation: _ let us start by writing in each schmidt decomposition according to the bipartition , where , , , and . further, each state can be decomposed using an orthonormal set of states and in parts and respectively, as where the coefficients fulfill the unitary constraints: let . we note that decomposes as the direct sum of two subspaces, , where subspace contains the support of and subspace is its orthogonal complement. in particular, the states above form an orthonormal basis in . similar decompositions of course also apply to and , but we will not need them here. the operators and can be expressed as: it is then easy to verify that all eigenvalues and corresponding eigenvectors of are given by: * , * , * . let us denote the above three types of eigenvectors simply as , , and , respectively, so that where and where and are non-negative, , and orthogonal, (see lemma 2). on the other hand, the matrix reads where we have introduced with . by construction, and are still non-negative, , but they are no longer necessarily orthogonal; in other words, the decomposition in eq. [ eq : newdeco ] may not be optimal and, consequently, the negativity might be smaller than . _ step (b). orthogonality condition: _ lemma 2 above tells us that, in order to preserve the negativity of after tracing out part c, the positive operators and have to be orthogonal, that is . it is not difficult to show that this amounts to requiring that for all valid values of . let us analyze these conditions carefully. first, let us consider . there are four particular cases to be considered: * if and , then eq. [ eq.typei ] is already zero due to the and ; * if and , then we find the condition ; * if and , then we recover the same condition as for ; * the case and does not exist because we demanded from the start. in summary, we have obtained the condition: second, let us consider . there are again four particular cases to be considered: * if and , then eq. [ eq.typeii ] is already zero due to the , , and ; * if (or if ) and , then we again reach eq. [ eq : condi ]; * if or , because of , then we must have ; * the case (or ) does not exist because we demanded from the start. in summary, we have obtained the new condition which says that the sum does not depend on the index i. we can now combine conditions [ eq : condi ] and [ eq : condii ] into: and from the unitary constraints of eq. [ eq : unitary ] we see that . _ step (c). factorization: _ in this part, we will finally show the factorization of the wave-function. first, we compute , here we are free to choose the orthonormal basis of part such that is diagonal, i.e., the matrix . let us now consider the set of states of . the scalar products reveal that they form an orthogonal basis in , which we can normalize by defining . the wave-function can now be written as . further, we can introduce a product basis which defines a factorization of into a tensor product . with respect to this decomposition, the state factorizes as where . this completes the proof.
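as a quick numerical sanity check of the simple direction of the theorem (a factorized state gives equal negativities), the sketch below builds a random four-qubit state of the form |psi_{A B1}> tensor |psi_{B2 C}> and compares the two negativities computed from the partial-transpose definition; the qubit dimensions, helper functions, and variable names are illustrative assumptions, not part of the paper.

```python
import numpy as np

def negativity(rho, dims, sys=0):
    """N = (||rho^{T_sys}||_1 - 1) / 2 for a bipartite density matrix with
    subsystem dimensions dims = (dA, dB); sys selects which part is transposed."""
    dA, dB = dims
    r = rho.reshape(dA, dB, dA, dB)
    r = r.transpose(2, 1, 0, 3) if sys == 0 else r.transpose(0, 3, 2, 1)
    eigs = np.linalg.eigvalsh(r.reshape(dA * dB, dA * dB))
    return 0.5 * (np.abs(eigs).sum() - 1.0)

def random_pure(d, rng):
    v = rng.standard_normal(d) + 1j * rng.standard_normal(d)
    return v / np.linalg.norm(v)

rng = np.random.default_rng(0)
dA, d1, d2, dC = 2, 2, 2, 2                  # B = B1 (x) B2, so dim(B) = d1 * d2
psi = np.kron(random_pure(dA * d1, rng),     # entangles A with B1
              random_pure(d2 * dC, rng))     # entangles B2 with C
rho_abc = np.outer(psi, psi.conj())          # factor ordering: A, B1, B2, C

# negativity between A and (BC) for the pure state
n_a_bc = negativity(rho_abc, (dA, d1 * d2 * dC), sys=0)
# trace out C, then negativity between A and B = B1 B2
rho_ab = np.trace(rho_abc.reshape(dA * d1 * d2, dC, dA * d1 * d2, dC), axis1=1, axis2=3)
n_ab = negativity(rho_ab, (dA, d1 * d2), sys=0)
print(n_a_bc, n_ab)                          # the two values agree up to rounding
```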
we end this section with two simple corollaries .* _ corollary 4 _ : * if a tri - partite pure state is such that , then .this can be seen from eq .[ eq : fact2 ] , which implies that the density matrix decomposes as the product .the second corollary , below , can be proved by applying the disentangling theorem iteratively .let be a pure state of a system that decomposes into parts , , , .let denote the negativity between parts and ; and let denote the negativity between parts and .* _ corollary 5 _ : * the above entanglement negativities fulfill for all if , and only if , the state factorizes as where for each , the vector space decomposes as 4 implies that , in the specific context of the tripartite pure states addressed by the disentangling theorem , the entanglement negativity ( somewhat trivially ) satisfies the relation , for the particular case , which saturates the inequality .if it was correct , then this inequality would tell us the following : when measuring entanglement by means of the squared negativity , if part is very entangled with part , then part can not be at the same time very entangled with part .[ eq : monogamy ] is reminiscent of ( and motivated by ) the famous coffman - kundu - wootters monogamy inequality for another measure of entanglement , the concurrence , which satisfies according to this inequality , if approaches , then is necessarily small , which is used to say that entanglement is monogamous : if is very entangled with , then it can not be simultaneously very entangled with .however , the concurrence is only defined on qubits , and therefore of rather limited use . thus in this sectionwe explore whether the entanglement negativity , which is defined for arbitrary systems , can be used as a replacement of eq .[ eq : c ] for general systems .we first note that for the simplest possible system , made of three qubits , y. c. ou and h. fan showed analytically that eq .[ eq : monogamy2b ] holds .here we address the validity of eq .[ eq : monogamy2b ] numerically . for and ,we have randomly generated hundreds of states in and computed both sides of eq .[ eq : monogamy2b ] .the results for and are shown in fig .[ fig : monogamy ] . for numerical results are consistent with the analytical proof presented in ref .for we again see consistency with the monogamy relation [ eq : monogamy2b ] , but a tendency already observed in becomes more accute : most randomly generated states concentrate away from the saturation line .this is a concern , because it means that we are not properly exploring the states near the saturation line , which are the ones that could violate the inequality . for this purpose, we used a monte carlo sampling whereby a new tripartite pure state similar to a previous one is accepted with certain probability that depends on the distance of its negativities to the saturation line .the second panel in fig .[ fig : monogamy ] shows that this method indeed allows us to explore the neighborhood of the saturation line , and that there are again no violations of the monogamy relation [ eq : monogamy2b ] . 
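A minimal version of the numerical experiment just described can be set up as follows. The sketch is only meant to show the structure of the test and is not the code used for the figures: the sampling of pure states, the number of trials and the tolerance are arbitrary, and this naive scan does not explore the neighborhood of the saturation line the way the Monte Carlo sampling discussed above does.

```python
import numpy as np

rng = np.random.default_rng(0)

def negativity(rho, dA, dB):
    r = rho.reshape(dA, dB, dA, dB).transpose(2, 1, 0, 3).reshape(dA * dB, dA * dB)
    ev = np.linalg.eigvalsh(r)
    return -ev[ev < 0].sum()

def random_pure_state(dim):
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

d = 2                                     # qubits; set d = 3 or 4 for qutrits / ququarts
violations = 0
for _ in range(1000):
    psi = random_pure_state(d**3)
    rho = np.outer(psi, psi.conj())
    n_A_BC = negativity(rho, d, d**2)     # A | BC split of the pure state
    r6 = rho.reshape(d, d, d, d, d, d)    # indices (a, b, c, a', b', c')
    rho_AB = np.trace(r6, axis1=2, axis2=5).reshape(d * d, d * d)
    rho_AC = np.trace(r6, axis1=1, axis2=4).reshape(d * d, d * d)
    n_AB = negativity(rho_AB, d, d)
    n_AC = negativity(rho_AC, d, d)
    if n_AB**2 + n_AC**2 > n_A_BC**2 + 1e-10:
        violations += 1
print("violations of the squared-negativity monogamy:", violations)
```

For d = 2 no violations are expected, since the relation is proven in that case; for larger d the same loop gives a crude version of the random sampling described above.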
for (not displayed ) similar results are obtained .we take these results as strong evidence of the validity of eq .[ eq : monogamy2b ] in the systems we have analyzed , and conjecture that eq .[ eq : monogamy2b ] should be valid for arbitrary tri - partite systems .finally , we have also considered the generalized monogamy relation in a system of parts , , , , .specifically , we have numerically checked the validity of eq .[ eq : monogamy3 ] for the case of four qubits ( ) .( y - axis ) versus ( x - axis ) for randomly generated pure states of three qubits ( left ) and three qutrits ( right ) .the saturation line is is represented in blue , while each green dot corresponds to a randomly generated pure states .these results are consistent with the general validity of eq .[ eq : monogamy2b ] .the concentration of points near the saturation line in the three - qutrit case is due to the sampling algorithm employed , which favors exploring the neighborhood of this saturation line.,title="fig:",scaledwidth=50.0% ] +in this paper we have provided two new results for the entanglement negativity .first , a disentangling theorem , eq .[ eq : disentangling ] , that allow us to use the negativity as a criterion to factorize a wave - function of a system made of three parts , , and into the product of two parts , namely and , where we have also explained how to break the vector space of into those of and .the second result is a conjectured monogamy relation , eq .[ eq : monogamy2b ] , which is known to hold for a system of three qubits and that we have numerically confirmed for systems made of three -level systems , for and 4 .these results are intended to add to our current understanding of entanglement negativity , at a time when this measure of entanglement is being consolidated as a useful tool to investigate and characterize many - body phenomena , including quantum criticality and topological order .research at perimeter institute is supported by the government of canada through industry canada and by the province of ontario through the ministry of research and innovation .g.v . acknowledges support from the templeton foundation .thanks the australian research council centre of excellence for engineered quantum systems . c. holzhey , f. larsen , and f.wilczek , nucl .b 424 , 443 ( 1994 ) ; c. g. callan and f. wilczek , phys . lett .b 333 , 55 ( 1994 ) ; g. vidal , j.i .latorre , e. rico , a. kitaev , phys .90 , 227902 ( 2003 ) ; p. calabrese and j. cardy , j. stat .p06002 ( 2004 ) .a. hamma , r.ionicioiu , p.zanardi , phys .lett . a 337 , 22 ( 2005 ) ; a. hamma , r. ionicioiu , and p. zanardi , phys .a 71 , 022315 ( 2005 ) ; a. kitaev , j. preskill , phys .96 110404 ( 2006 ) ; m. levin , x .-wen , phys .lett . , 96 , 110405 ( 2006 ) .

|
Entanglement negativity is a measure of mixed-state entanglement increasingly used to investigate and characterize emerging quantum many-body phenomena, including quantum criticality and topological order. We present two results for the entanglement negativity: a disentangling theorem, which allows the use of this entanglement measure as a means to detect whether a wave-function of three subsystems A, B, and C factorizes into a product state for parts AB_1 and B_2C, with B = B_1 B_2; and a monogamy relation, which states that if A is very entangled with B, then A cannot be simultaneously very entangled also with C.
|
we consider a model of _ copolymer at a selective interface _ introduced in , which has attracted much attention among both theoretical physicists and probabilists ( we refer to for general references and motivations ) .let be the symmetric simple random walk on started at , with law such that the increments are iid and .the partition function of the model of size is given by \end{aligned}\ ] ] where , and is a sequence of iid standard gaussian random variables ( the quenched disorder ) .we adopt the convention that , if , then .one interprets as the inverse temperature ( or coupling strength ) and as an `` asymmetry parameter '' : if , since the s are centered , the random walk overall prefers to be in the upper half - plane ( ) .it is known that the model undergoes a delocalization transition : if the asymmetry parameter exceeds a critical value then the fraction of `` monomers '' , , which are in the upper half - plane tends to in the thermodynamic limit ( delocalized phase ) , while if then a non - zero fraction of them is in the lower half - plane ( localized phase ) .what mostly attracts attention is the slope , call it , of the curve in the limit : is expected to be a universal quantity , i.e. , independent of the details of the law and of the disorder distribution ( see next section for a more extended discussion on this point ) .already the fact that the limit slope is well - defined and positive is highly non - trivial . until now , all what was known rigorously about is that , but numerically the true value seems to be rather around .the upper bound comes simply from annealing , i.e. , from jensen s inequality , as explained in next section .our main new result is that is strictly smaller than .the proof works through a coarse - graining procedure in which one looks at the system on the length - scale , given by the inverse of the annealed free energy .the other essential ingredient is a change - of - measure idea to estimate fractional moments of the partition function ( this idea was developed in and , and used in the context of copolymers in ) .coarse - graining schemes , implemented in a way very different from ours , have already played an important role in this and related polymer models ; we mention in particular , and .as in , we consider a more general copolymer model which includes as a particular case .since the critical slope is not proven to exist in this general setting , theorem [ th : slope ] will involve a instead of a limit .consider a renewal process of law , where and is an iid sequence of integer - valued random variables .we call and we assume that ( the renewal is recurrent ) and that has a power - law tail : with and . as usual , the notation is understood to mean that .the copolymer model we are going to define depends on two parameters and , and on a sequence of iid standard gaussian random variables ( the quenched disorder ) , whose law is denoted by . for a given system size and disorder realization , we define the partition function as ,\end{aligned}\ ] ] where is the indicator function of the event . to see that the `` standard copolymer model '' is a particular case of ,let and as a consequence .it is known that in this case satisfies with , see ( * ? ? ?iii ) ( the fact that in this case holds only for , while for due to the periodicity of the simple random walk , entails only elementary modifications in the arguments below ) . 
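No closed form is available for the quenched partition function, but for moderate sizes it can be estimated by brute force over random-walk samples. The sketch below assumes the common normalization in which a monomer below the interface contributes a factor exp(-2λ(ω_n + h)); that convention, the treatment of the boundary case S_n = 0, the parameter values and the sample sizes are illustrative choices, not those of the references.

```python
import numpy as np

rng = np.random.default_rng(1)

def partition_function(omega, lam, h, n_walks=20000):
    """Monte-Carlo estimate of Z_N = E_S[ exp(-2*lam * sum_n (omega_n + h) * 1_{S_n < 0}) ]
    for one fixed disorder realization omega; S is the simple random walk, S_0 = 0."""
    N = len(omega)
    steps = rng.choice([-1, 1], size=(n_walks, N))
    S = np.cumsum(steps, axis=1)                   # positions S_1 .. S_N
    below = (S < 0).astype(float)                  # monomers in the lower half-plane
    energies = -2.0 * lam * (below * (omega + h)).sum(axis=1)
    return np.exp(energies).mean()

N, lam, h = 200, 0.5, 0.2
free_energies = []
for _ in range(20):                                # average over disorder realizations
    omega = rng.normal(size=N)
    free_energies.append(np.log(partition_function(omega, lam, h)) / N)
print("quenched free-energy estimate:", np.mean(free_energies))
```

The estimator is crude (the variance of the exponential weight is large), but it shows how the quenched average over the Gaussian disorder is organized on top of the random-walk expectation.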
next , observe that if denotes the sign of the excursion of between the successive returns to zero and , under the sequence is iid and symmetric ( and independent of the sequence ) .therefore , performing the average on the in one immediately gets .the infinite - volume free energy is defined as where existence of the limit is a consequence of superadditivity of the sequence and the inequality is immediate from which is easily seen inserting in the expectation in right - hand side of the indicator function .one usually defines the critical line in the plane as from jensen s inequality ( the `` annealed bound '' ) one obtains the immediate inequality indeed , one has ,\end{aligned}\ ] ] from which it is not difficult to prove that and therefore the claim . for , follows from the fact that the right - hand side of is bounded above by , while the left - hand side of is always non - negative . for , just observe that ={{\ensuremath{\mathbf p } } } ( n\in\tau)e^{2{\lambda}({\lambda}-h)n}\end{aligned}\ ] ] and that =k(n)\frac{1+e^{2{\lambda}({\lambda}-h)n}}2.\end{aligned}\ ] ] the limit is called _ annealed free energy_. the critical point is known to satisfy the bounds the upper bound , proven recently in ( * ? ? ?2.10 ) , says that the annealed inequality is strict for every .the lower bound was proven in for the model and in the general situation in , and is based on an idea by c. monthus .we mention that ( the analog of ) the lower bound in was recently proven in and to become optimal in the limit for the _ `` reduced copolymer model '' _ introduced in ( * ? ? ?4 ) ( this is a copolymer model where the disorder law depends on the coupling parameter ) .as already mentioned , much attention has been devoted to the slope of the critical curve at the origin , in short the `` critical slope '' , existence of such limit is not at all obvious ( and indeed was proven only in the case of the `` standard copolymer model '' ) , but is expected to hold in general .while the proof in was given in the case , it was shown in ( by a much softer argument ) that the results of imply ( always for the model ) that the slope exists and is the same in the gaussian case we are considering here . moreover , the critical slope is expected to be a function only of and not of the full , at least for , and to be independent of the choice of the disorder law , as long as the s are iid , with finite exponential moments , centered and of variance .in contrast , it is known that the critical curve _ does _ in general depend on the details of ( this follows from ( * ? ? ? * prop .2.11 ) ) and of course on the disorder law .the belief in the _ universality of the critical slope _ is supported by the result of which , beyond proving that the limit exists , identifies it with the critical slope of a continuous copolymer model , where the simple random walk is replaced by a brownian motion , and the s by a white noise . until recently , nothing was known about the value of the critical slope , except for which follows from and from the lower bound in ( note that the strict upper bound does not imply a strict upper bound on the slope ) .none of these bounds is believed to be optimal .in particular , as we mentioned in the introduction , for the standard copolymer model numerical simulations suggest a value around for the slope . this situation was much improved in : if , then ( * ? ? ? 
* ths .2.9 and 2.10 ) note that and are profoundly different situations : the inter - arrival times of the renewal process have finite mean in the former case and infinite mean in the latter .moreover , it was proven in ( * ? ? ?2.10 ) that there exists ( which can be estimated to be around ) , such that if note that this does not cover the case of the standard copolymer model , for which .our main result is that the upper bound in is always strict : [ th : slope ] for every there exists such that , whenever satisfies , it is interesting to note that the upper bound depends only on the exponent and not the details of .this is coherent with the mentioned belief in universality of the slope .the new idea which allows to go beyond the results of ( * ? ? ?2.10 ) is to bound above the fractional moments of in two steps : 1 .first we chop the system into blocks of size , the correlation length of the annealed model , and we decompose according to which of the blocks contain points of 2 . only at that point we apply the inequality , where each of the corresponds to one of the pieces into which the partition function has been decomposed .theorem [ th : slope ] holds in the more general situation where is a sequence of iid random variables with finite exponential moments and normalized so that .we state the result and give the proof only in the gaussian case simply to keep technicalities at a minimum . the extension to the general disorder lawcan be obtained following the lines of ( * ? ? ?fix , and define the reason why we restrict to will be clear after . from now on we take , where the value of will be chosen close to later .let and note that , irrespective of how is chosen , can be made arbitrarily large choosing small ( which is no restriction since in theorem [ th : slope ] we are interested in the limit ) .one sees from that , apart from an inessential factor , is just the inverse of the annealed free energy , i.e. , we will show that , if is sufficiently close to , there exists such that for there exists such that ^\gamma\end{aligned}\ ] ] for every . in particular , by jensen s inequality and the fact that the sequence has a non - negative limit , for .this implies with from now on we assume that is integer and we divide the interval into blocks set ( with the convention ) , where is the left shift operator : for .we have then the identity ( see fig . [fig : decompos ] ) = 4 cm [ c] [ c] [ c] [ c] [ c] [ c] [ c] [ c ] [ c ] [ c] [ c] [ c] [ c] [ c] where \right)k(n_1)z_{n_1,j_1}\varphi((j_1,n_2])k(n_2-j_1 ) z_{n_2,j_2}\times\ldots \\\nonumber & & \times \varphi((j_{\ell-1},n_\ell])k(n_\ell - j_{\ell-1})z_{n_\ell , n}\end{aligned}\ ] ] and , for , we have then , using the inequality which holds for and , .\end{aligned}\ ] ] define and note that . with the conventions of fig .[ fig : decompos ] , is the union of the blocks which either contain a big black dot or such that contains a big black dot .note also that , for every , the interval ] .thanks to the definition of and to , equals in conclusion , we proved ^\gamma\end{aligned}\ ] ] where , since $ ] is a subset of as observed above , with the convention that . in we used independence of for different s ( recall that is a product measure ) to factorize the expectation .the heart of the proof of theorem [ th : slope ] is the following : [ th : lemmagep ] there exists such that the following holds for . if , for some , and then the quantity in square brackets in is bounded above by where . 
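Before entering the proof of the lemma, it may help to spell out how a uniform bound on the fractional moments yields delocalization; this is just the Jensen step invoked above, written here for γ in (0,1) and an assumed uniform bound E[Z_{N,ω}^γ] ≤ C (the constant C is introduced only for this remark):

\[
\frac{1}{N}\,\mathbb{E}\log Z_{N,\omega}
=\frac{1}{\gamma N}\,\mathbb{E}\log Z_{N,\omega}^{\gamma}
\le \frac{1}{\gamma N}\log \mathbb{E}\,Z_{N,\omega}^{\gamma}
\le \frac{\log C}{\gamma N}\ \longrightarrow\ 0
\qquad (N\to\infty).
\]

Since the free energy is non-negative, it must then vanish, so that the pair (λ, h) lies in the delocalized region; the rest of the argument is devoted to establishing the bound on the fractional moments themselves.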
here and in the following , the positive and finite constants depend only on the arguments which are explicitly indicated , while is the same constant which appears in .assume that lemma [ th : lemmagep ] is true and that - are satisfied .then , if moreover satisfies then it follows from ( * ? ? ?a.4 ) that for every .indeed , the sum in the right - hand side of is nothing but the partition function of a homogeneous pinning model ( * ? ? ?2 ) of length with pinning parameter such that the system is in the delocalized phase ( this is encoded in ) .more precisely : to obtain it is sufficient to apply proposition [ th : a4 ] below , with replaced by and replaced by ( * ? ? ?a.4 ) [ th : a4 ] if is a probability on which satisfies for some , then for every there exists such that for every inequality is then proven ; note that depends on through .the condition which we required since the beginning guarantees that the sum in converges . note that depends on only through ; this is important since we want in to depend only on and not on the whole . to conclude the proof of theorem [ th : slope ] , we still have to prove lemma [ th : lemmagep ] and to show that - can be satisfied , with satisfying , for all and , if is sufficiently small and is close to ( see also remark [ rem : flow ] below ). _ proof of lemma [ th : lemmagep ] ._ first of all , we get rid of and effectively we replace by : the quantity in square brackets in is upper bounded by explicitly , one may take one has ( recall and ) , while the supremum in is easily seen from to depend only on and .recall that by convention , and let also from now on and .we do the following : * for every such that ( which guarantees that between and there is at least one full block ) , we use this is true under the assumption that with large enough , i.e. , with small , since and . * for every such that , we leave as it is. then , is bounded above by where now we can sum over , using the two assumptions - .we do this in three steps : * first , for every we sum over the allowed values of ( provided that , otherwise there is no to sum over ) using and the constraint .the sum over all such gives at most where takes care of the fact that possibly . at this pointwe are left with the task of summing \end{aligned}\ ] ] ( observe that ) over all the allowed values of and of . *secondly , using , we see that if then the sum of over the allowed values of and gives at most ( or if ) .the contribution from all the with is therefore at most this is best seen if one starts to sum on for the largest value of and then proceeds to the second largest , and so on .* finally , the sum over all the s with is trivial ( the summand does not depend on the s ) and gives at most . in conclusion ,we have upper bounded by ^{|j|}.\end{aligned}\ ] ] if it is clear from the definition of that the last factor equals and is proven .for , can be made as small as wished with large ( i.e. choosing small ) , so that we can again assume that the last factor in does not exceed .lemma [ th : lemmagep ] is proven .finally , we have : [ th : cond12 ] let be sufficiently small .there exists satisfying and such that conditions - are satisfied for all and . 
_ proof of proposition [ th : cond12 ] ._ we have by direct computation \\\nonumber & \le & { { \ensuremath{\mathbf e } } } \left[\prod_{1\le n:\tau_n\le j } \frac{1+e^{-\frac1{k\sqrt{1-\rho}}(\tau_n-\tau_{n-1})}}2 { \mathbf{1}}_{\{j\in\tau\}}\right]\\ \label{eq : condiz } & = & { { \ensuremath{\mathbf p } } } ( j\in\tau){{\ensuremath{\mathbf e } } } \left[\left.\prod_{1\le n:\tau_n\le j } \frac{1+e^{-\frac1{k\sqrt{1-\rho}}(\tau_n-\tau_{n-1})}}2 \right|j\in\tau\right]\end{aligned}\ ] ] where in the inequality we assume that ( it is important that this condition does not depend on ) .of course , from we see that for every moreover , we know from ( * ? ? ? * th .b ) that , if , while for ( cf . for instance (a.6 ) ) . for every onehas then for sufficiently large , i.e. , for small , ( we recall that was defined in ) .we need also the following fact : [ th : lemma43 ] for every there exists such that the following holds .if then , say , for \le c_7({\alpha})\frac{(\log q)^2}{q^{\alpha}}. \end{aligned}\ ] ] if then , for every , \le c_7({\alpha } ) e^{-q/2}. \end{aligned}\ ] ] lemma [ th : lemma43 ] was proven in ( * ? ? ?* lemma 4.3 ) .we add a few side remarks about its proof in appendix [ sec : lemma43 ] .the upper bound is certainly not optimal , but it gives us an estimate which vanishes for and which depends only on and , which is all we need in the following .let us mention that in the case of the standard copolymer model , using the property that for every and every =\frac1{(n/2)+1}\end{aligned}\ ] ] ( see ( * ? ? ?iii.9 ) ) we obtain for every =\frac{1-e^{-q}}q.\end{aligned}\ ] ] fix which satisfies and choose via one finds ( for sufficiently large ) next we observe that , for , choosing small and sufficiently close to ( how close , depending on only through the exponent ) we have . this just follows from lemma [ th : lemma43 ] above ( applied with and ) and from , since small implies large . as a consequence , and follows .[ rem : flow ] it is probably useful to summarize the logic of the proof of ( similar observations hold for the proof of below ) .given , one first fixes , then which satisfies , then as in and such that the right - hand side of eqs .- is smaller than when is replaced by .once all these parameters are fixed , one chooses sufficiently small ( i.e. sufficiently large ) so that for all the estimates - hold . if we choose small , implies .therefore , ( if is sufficiently large and is suitably small ) . as for the rest of the sum : again from lemma [ th : lemma43 ] and , one has for every .then , in the last equality , we used the fact that is the -probability that the first point of which does not precede equals .the sum over of then clearly equals , since is recurrent .the proof of lemma [ th : lemma43 ] given in ( * ? ? ?* lemma 4.3 ) works as follows .let , i.e. , the last point of up to .first of all one shows that \\ \label{eq : nocondiz2 } & & \label{eq : c9 } \le c_{9}({\alpha } ) \limsup_{n\to\infty } { { \ensuremath{\mathbf e } } } \left [ \frac{1+e^{-(q / n)(n - x_n)}}2 \prod_{1\le j:\tau_j\le n}\frac{1+e^{-(q / n)(\tau_j-\tau_{j-1})}}2 \right].\end{aligned}\ ] ] we detail below the proof of this inequality in order to leave no doubts on the fact that the constant depends only on .this was not emphasized in the proof of ( * ? ? ?* lemma 4.3 ) since it was not needed there . for , it follows from ( * ? ? ?( 4.23 ) and ( 4.49 ) ) that the in the right - hand side of is actually a limit , and equals ( the which appears in ( * ? ? ?* eq . 
( 4.49 ) ) can be immediately improved into ) . as a side remark ,the expectation in , irrespective of the value of and , is not smaller than ; this just follows from the convexity of the exponential function : for , the in does not exceed , as was proven in ( * ? ? ?* eq . ( 4.43 ) ) .finally we prove , which is quite standard .the expectation in is bounded above by \\ & & = \sum_{i=0}^{n/2 } { { \ensuremath{\mathbf e } } } \left[\left .\frac{1+e^{-(q / n)(n/2-i)}}2 \prod_{1\le j:\tau_j\le n/2}\frac{1+e^{-(q / n)(\tau_j-\tau_{j-1})}}2 \right|x_{n/2}=i \right]\\&&\times{{\ensuremath{\mathbf p } } } ( x_{n/2}=i|n\in\tau)\end{aligned}\ ] ] and follows if we can prove that to show this , we use repeatedly and .we start from the identity the denominator is lower bounded , uniformly in , by where the last inequality holds for sufficiently large . as for the numerator : always for sufficiently large , and , uniformly on , work was partially supported by anr , grant polintbio and grant lhmshe .i wish to thank the anonymous referees for the careful reading of the manuscript and for several useful comments .
|
For a much-studied model of random copolymer at a selective interface we prove that the slope of the critical curve in the weak-disorder limit is strictly smaller than 1, which is the value given by the annealed inequality. The proof is based on a coarse-graining procedure, combined with upper bounds on the fractional moments of the partition function.

_2000 Mathematics Subject Classification: 60K35, 82B44, 60K37_

_Keywords: copolymers at selective interfaces, fractional moment estimates, coarse-graining_

_Submitted to EJP on June 10, 2008, final version accepted on January 30, 2009_
|
autonomous dynamical systems are defined by equations of the form , . when a bounded attractor exists , its general structure is partly described by the location of the fixed points . typically the fixed points are isolated .more information about the structure of an attracting set can be determined from the stability of each of the fixed points .the stability is determined from the eigenvalues of the jacobian at each fixed point , and the eigenvectors associated with each eigenvalue . when a dynamical system generates a strange attractor , certain parts of the attractor `` swirl '' around one - dimensional invariant sets called connecting curves .in such cases , connecting curves organize the global structure of the flow , and therefore provide more information than just the location and stability of the fixed points .connecting curves defined by the eigenvalue - like condition have been studied in a number of three - dimensional dynamical systems .they are recognized as a kind of skeleton that helps to define the structure of a strange attractor . in this workwe explore the properties of connecting curves in dynamical systems of dimension .this is done in a restricted class of dynamical systems called differential dynamical systems .we describe the basic connectivity between fixed points and outline conditions that determine whether or not they lie on the connecting curve .we also describe the systematics of connecting curve attachment to or detachment from a fixed point and the recombinations or reconnections that can take place between different connecting curves .fixed points and their eigenvalue spectrum are used to predict changes in the local stability of a connecting curve as the control parameters are varied . under certain conditions ,we show that the global stability of a connecting curve can also be predicted .when the stability is such that the flow undergoes a swirling motion around the connecting curve , an integer index is used to describe the swirling in terms of vortex or hypervortex structures .the formation of these structures plays an important role in determining the topology of strange attractors in higher dimensions . in sec .[ sec : connectingcurves ] we review the definition of connecting curves and their properties .the particular class of dynamical systems studied is introduced in sec .[ sec : diffdynsys ] . in that sectionwe also describe how the distribution of fixed points in such systems is systematically organized by cuspoid catastrophes , where is the maximum number of fixed points allowed by variation of the control parameters .we also describe how the stability properties of each fixed point are determined by another cuspoid catastrophe , where is the dimension of the dynamical system . in secs .[ sec:3d ] , [ sec:4d ] , and [ sec:5d ] we study cases of three- , four- , and five - dimensional dynamical systems with one , two , and three fixed points .these illustrate many of the properties of connecting curves described in secs . [sec : connectingcurves ] and [ sec : diffdynsys ] .results are summarized in sec .[ sec : conclusions ] .special points exist in the phase space of autonomous dynamical systems where the acceleration is proportional to the velocity .these points satisfy the eigenvalue equation , where is the jacobian . 
for an -dimensional dynamical system ,the equations represented by provide constraints in an dimensional space consisting of phase space coordinates and one `` eigenvalue '' : .the intersections of the manifolds defined by these equations are one - dimensional sets in whose projection into the coordinate subspace is called a connecting curve .much like fixed points , connecting curves provide constraints on the behavior of local phase space trajectories .this behavior is determined by examining the eigenvalue spectrum of the jacobian matrix along the length of the connecting curve .the eigenvalues can occur in a variety of ways depending on the dimension , , of the phase space : real and complex conjugate pairs , with $ ] .subsets of a connecting curve along which are called strain curves because small volumes in their vicinity undergo irrotational deformation under the action of the flow .deformation takes place along the principle stable and unstable directions indicated by the eigenvectors of the real eigenvalues . in three dimensions , subsets of the connecting curve along which are known as vortex core curves .vortex core curves were originally developed to identify vortices in complex hydrodynamic flows .they were later shown to organize the large - scale structure of strange attractors produced by three - dimensional dynamical systems .a vortex can be decomposed into its rotational and non - rotational components by linearizing the flow around its core curve .rotation takes place on a plane spanned by the eigenvectors of the complex eigenvalues . the swirling flow is then transported along the eigenvector associated with the real eigenvalue . under this combined action, trajectories are expected to undergo a tornado - like motion that spirals around the core in the direction of the flow .the concept of a vortex can also be extended to higher dimensions in the phase space of dynamical systems .the basic idea is the same as in three - dimensions .linearized flow around the core line is resolved into orthogonal planes of rotation that can be transported along directions .we refer to higher dimensional vortices with as hypervortices .hypervortices and their associated core curves play an important role in organizing the large - scale structure of strange attractors in higher dimensions . at a fixed pointthe initial conditions for a core curve are the eigenvectors with real eigenvalues .this observation has a number of important consequences . at a fixed point with real eigenvalues , connecting curves pass through the fixed point .this means that if at a fixed point , no connecting curves attach to the fixed point .if , under control parameter variation , the stability properties of a fixed point change when a pair of real eigenvalues become degenerate and transform to a complex conjugate pair , then two connecting curves must disconnect from the fixed point . 
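The defining condition can be tested pointwise with a few lines of linear algebra. The helpers below are a generic sketch, not code from the original studies: a point lies on a connecting curve when the acceleration J(x)F(x) has no component orthogonal to the velocity F(x), and the number of complex-conjugate eigenvalue pairs of J at that point separates strain, vortex core and hypervortex core behaviour. The Rössler system with its usual parameters (a = b = 0.2, c = 5.7) appears only as a stand-in vector field to exercise the helpers.

```python
import numpy as np

def on_connecting_curve(F, J, x, tol=1e-8):
    """True when J(x) F(x) is parallel to F(x), i.e. the acceleration has no
    component orthogonal to the velocity (the eigenvalue-like condition)."""
    v = F(x)
    a = J(x) @ v
    ortho = a - (a @ v) / (v @ v) * v        # acceleration component orthogonal to v
    return np.linalg.norm(ortho) <= tol * max(1.0, np.linalg.norm(a))

def curve_index(J, x):
    """Number of complex-conjugate eigenvalue pairs of the Jacobian:
    0 -> strain curve, 1 -> vortex core, 2 or more -> hypervortex core."""
    lam = np.linalg.eigvals(J(x))
    return int(np.sum(lam.imag > 1e-9))      # one count per conjugate pair

# stand-in vector field: the Rossler system, a = b = 0.2, c = 5.7
def F(x):
    return np.array([-x[1] - x[2], x[0] + 0.2 * x[1], 0.2 + x[2] * (x[0] - 5.7)])

def J(x):
    return np.array([[0.0, -1.0, -1.0],
                     [1.0,  0.2,  0.0],
                     [x[2], 0.0,  x[0] - 5.7]])

x = np.array([0.1, -0.2, 0.05])              # a generic point, not expected to lie on the curve
print(on_connecting_curve(F, J, x), curve_index(J, x))
```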
for the remainder of this work , we assign each of the connecting curve subsets a unique color .strain curves ( ) are plotted in red , vortex core curves ( ) are plotted in black and hypervortex core curves ( ) are plotted in blue .we use differential dynamical systems to study connecting curves in higher dimensions for several reasons : ( a ) their canonical form has the same structure in all dimensions , ( b ) the jacobian matrix is simple and can be put into a jordan - arnold canonical form when evaluated at the fixed points , and ( c ) only a single forcing function , , needs to be modeled .each of these properties is described in detail below .differential dynamical systems assume a canonical form in which each phase space coordinate except the first is the time derivative of the previous : this form is encountered when analyzing experimental data embedded using a differential embedding .such embeddings are equivalent to takens time - delay embeddings using a minimum time delay .the fixed points all lie along the axis because for .the stability properties of a fixed point at are determined by the eigenvalues of the jacobian matrix {{\bf x}_{f } } \label{eq : jacobian}\ ] ] evaluated at the fixed point , where as usual .the secular equation for the eigenvalues at the fixed point is setting converts the jacobian ( [ eq : jacobian ] ) into the jordan - arnold canonical form . in this form , the jacobian provides a standard unfolding for the cuspoid catastrophes which can be exploited to provide information about the stability properties of fixed points as the control parameters are varied . the canonical form ( [ eq : differentialform ] ) can be further simplified by making two assumptions about the nature of the single forcing function .we write the forcing function as the sum of two functions and split the control parameters for the original source function into two subsets , and . the functions and satisfy the following properties . is a function of only one variable , , and control parameter set .the fixed points are given by .their number and location is controlled by the polynomial and the control parameters . the term in the jacobian in eq .( [ eq : jacobian ] ) .we choose to be a polynomial of degree with a maximum of real fixed points . in this workwe take with a single control parameter .the functions used in this work are outlined below . for a maximum of one fixed point , we choose .the fixed point is located at the origin and has slope . for a maximum of two fixed points , we chose .there are no real fixed points for , a doubly - degenerate fixed point for and two fixed points for .the two real fixed points are created in a saddle - node bifurcation .they are symmetric and located at and with slopes , or at and at . for a maximum of three fixed points , we choose .there is one real fixed point for and three for .the three fixed points are created in a pitchfork bifurcation .one fixed point is located at the origin .the other two are at .the critical points and their slopes are . along the axis the sign of alternates as successive fixed points are encountered . for fixed points with focal stability, this means an alternation of ( stable focus - unstable outset ) with ( unstable focus - stable inset ) along the axis .if at a fixed point then one eigenvalue is positive and the other two are negative , or have negative real part . 
if the reverse is true : one eigenvalue is negative and the other two are either positive or have positive real part .the net result is that if there are more than two fixed points , for an interior fixed point that is a stable focus with unstable outsets , the outsets can flow to the neighboring fixed points on its left and right , which are unstable foci with stable insets . ) with .a period doubling route to chaos is observed after the saddle - node bifurcation at .parameter values : .,height=226 ] is a function of the remaining variables , .since the volume growth rate under the flow is determined by the divergence of the vector field , and the divergence is , we impose the condition that is negative semi - definite throughout the phase space .under such an assumption eq .( [ eq : differentialform ] ) describes a dissipative dynamical system .the constant term in the taylor series expansion of is zero .the linear terms define the terms that appear in the jacobian eq .( [ eq : jacobian ] ) when evaluated along the axis , specifically at fixed points : , .thus , the stability of the fixed points is determined by the roots of the characteristic equation , given in eq .( [ eq : sec ] ) .the center of gravity of the eigenvalues at a fixed point is .it is convenient to choose for two reasons . in this caseall fixed points are unstable .further , eq . ( [ eq : sec ] ) assumes the form of the catastrophe function .in this section we set in order to study three - dimensional differential dynamical systems that take the form the jacobian at any fixed point is {{\bf x}_{f } } \label{eq : jcn3}\ ] ] with a characteristic polynomial whose roots determine the stability properties ( focus or saddle ) of the fixed point . by making the substitution , ( [ eq : cpn3 ] )becomes the canonical unfolding of the cusp catastrophe whose bifurcation set is shown in fig .[ fig : a3](a ) . when projected down into the plane , the bifurcation set forms a well known cusp shape that divides the control parameter space into two regions . inside the cusp , and the three eigenvalues of ( [ eq : jcn3 ] )are real . for control parameters in this regionthe fixed points of ( [ eq:3dode ] ) have the stability of a saddle . outside the cusp , and and the eigenvalues of ( [ eq : jcn3 ] ) consist of one complex conjugate pair and one real . in this region ,the fixed points have focal stability .if , the fixed point stability is determined along symmetry axis . outside the cusp ( ) , the fixed point is a center that has two complex conjugate eigenvalues with zero real part and a real eigenvalue that takes on a value of zero . inside the cusp ( ), the fixed point is a saddle and has three real eigenvalues .two of the eigenvalues differ only by a sign .the third is equal to zero .connecting curves satisfy the following two constraints in the three - dimensional space ( c.f . ,appendix 1 ) where .we use polynomials of degree in the sections below to explore the spectrum of changes that can occur in the fixed points and connecting curves as the parameters and are varied .we begin by fixing and varying for . 
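Before turning to the bifurcation sequence, the two constraints quoted above can be written out explicitly for the jerk form, where the velocity field is (x2, x3, f): the parallelism condition gives λ = x3/x2 together with x2 f - x3^2 = 0 and x2 (∇f·F) - x3 f = 0. The sketch below traces the resulting curve by a naive continuation in x1; the quadratic f1, which produces the pair of fixed points born in the saddle-node bifurcation discussed here, and the linear dissipative f2 are stand-ins chosen for illustration, not the parameter values used to produce the figures.

```python
import numpy as np
from scipy.optimize import fsolve

R, eps = 1.0, 0.5                            # stand-in parameters

def f(x1, x2, x3):
    return (R - x1**2) - eps * x2 - x3       # f1 = R - x1**2 plus a linear dissipative f2

def grad_f(x1, x2, x3):
    return np.array([-2.0 * x1, -eps, -1.0])

def constraints(yz, x1):
    """Connecting-curve conditions for the jerk form (valid where x2 != 0):
    x2*f - x3**2 = 0  and  x2*(grad f . F) - x3*f = 0."""
    x2, x3 = yz
    Fvec = np.array([x2, x3, f(x1, x2, x3)])
    g = grad_f(x1, x2, x3) @ Fvec
    return [x2 * Fvec[2] - x3**2, x2 * g - x3 * Fvec[2]]

def full_residual(x1, x2, x3):
    """Norm of (J F) x F; zero only on the true connecting curve.  The axis
    x2 = x3 = 0 solves the two reduced constraints trivially, so it is filtered here."""
    Fvec = np.array([x2, x3, f(x1, x2, x3)])
    JF = np.array([x3, Fvec[2], grad_f(x1, x2, x3) @ Fvec])
    return np.linalg.norm(np.cross(JF, Fvec))

points = []
guess = np.array([0.5, 0.1])
for x1 in np.linspace(-2.0, 2.0, 161):
    sol, info, ok, _ = fsolve(constraints, guess, args=(x1,), full_output=True)
    if ok == 1 and full_residual(x1, *sol) < 1e-6:
        points.append((x1, *sol))
        guess = sol

print(len(points), "points accepted on the connecting curve")
if points:
    x1, x2, x3 = points[len(points) // 2]
    Jac = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], list(grad_f(x1, x2, x3))])
    lam = np.linalg.eigvals(Jac)
    print("vortex core" if np.iscomplex(lam).any() else "strain", lam)
```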
a standard period - doubling route to chaosis observed in the bifurcation diagram shown in fig .[ fig:3dbif_r ] .a pair of symmetry related fixed points are created in a saddle - node bifurcation as passes through zero .their stability can be determined directly by evaluating the zeros of ( [ eq : cpn3 ] ) .a more convenient method is to examine the evolution of the fixed points in the catastrophe control parameter space .this method allows us to predict the fixed point stability as the control parameters are changed .the fixed point stability for and is plotted in fig .[ fig : a3](b ) using green dots .their evolution as a function of is indicated by the green arrows .no real fixed points exist for .figure [ fig : saddlenode](a ) shows that vortex and strain curves exist in the phase space despite the lack of fixed points .phase space trajectories initialized near the core curve spiral along its length towards the right .these trajectories are unbounded .a real degenerate fixed point is created when .it has coordinates in the catastrophe control parameter plane .figure [ fig : a3](b ) shows that it is located directly above the cusp along the symmetry axis .because it is both outside the cusp and on the symmetry axis , the fixed point is a center with a stable inset on the left and a unstable outset on the right .it appears on the vortex core curve in fig .[ fig : saddlenode](b ) . phase space trajectories spiral along the core curve towards the right and remain unbounded .two real fixed points are created as is increased past zero . as seen in fig .[ fig : a3](b ) , they move horizontally along the axis as is increased .since the fixed point stability evolves outside of the cusp region for all , the fixed points always have focal stability for the current value of .when , the alternation of ( stable focus - unstable outset ) with ( unstable focus - stable inset ) builds up the strange attractor shown in fig .[ fig : saddlenode](c ) .a reconnection between two connecting curves takes place as is increased past one .the sequence is shown in fig .[ fig : reconnect ] .the vortex core curve running between the two fixed points breaks and re - attaches itself with the strain curve approaching from the bottom .this sequence is important because it indicates that reconnections can be made between vortex and strain curves despite their different stability properties .we end this sub - section by fixing the control parameters and varying .the fixed points now move along the axis and transition from outside to inside the cusp region .this transition is illustrated in fig .[ fig : a3](c ) for three different values of . the change from focal to saddle stability forces a corresponding change in stability along the connecting curve near the fixed point .when , all the eigenvalues become real and only strain curves can pass through the fixed points . the sequence in fig .[ fig : vortstrain ] shows the connecting curves during this transition . for , the jacobian is given by eq .( [ eq : jcn3 ] ) with .the description of the fixed point stability proceeds as in the case with .the fixed point at the origin has coordinates in the catastrophe control parameter plane .because it sits on the symmetry axis , the fixed point is a center when . 
for ,it is a saddle .the symmetry related fixed points have the same coordinate in the catastrophe control parameter plane .they have three real eigenvalues for .otherwise , the fixed points have one real eigenvalue and a complex conjugate pair .figure [ fig:3d3fpxy ] shows the strange attractor , connecting curves and fixed points generated for parameter values .the fixed point at the origin has two stable insets that attract flow along the vortex core curve from the two outer fixed points . ) with .the flow is organized by the fixed points and the vortex core curve .the fixed point at the origin is a center with stable insets .the symmetric fixed points are foci with unstable outsets .parameter values : .,height=226 ]in this section we set in order to study four - dimensional differential dynamical systems that take the form the jacobian at any fixed point is {{\bf x}_{f } } \label{eq : jcn4}\ ] ] with a characteristic polynomial whose roots determine the stability properties of the fixed point . by making the substitution , ( [ eq : cpn4 ] )becomes the canonical unfolding of the swallowtail catastrophe whose bifurcation set is shown in fig .[ fig : a4](a ) .the control parameter space is divided into three disjoint open regions that describe fixed points with four , two , or zero real eigenvalues and zero , one , or two pairs of complex conjugate eigenvalues .the open regions are connected and simply connected .the bifurcation sets separating these open regions satisfy and .we use polynomials of degree in the sections below to study the properties of connecting curves in higher dimensions . for , we set and vary the parameter such that the single fixed point traverses the three regions of stability shown in fig . [fig : a4](b ) .we describe the effects on the connecting curves as the fixed point passes through each of the regions .we also determine the global stability of the connecting curve as a function of the single phase space variable . for we set such that the control parameter space is divided into two regions where ( [ eq : jcn4 ] ) produces one or two pairs of complex conjugate eigenvalues .under certain conditions , the fixed points can assume different values of the index .we describe these effects on the structure of strange attractors and their connecting curves . to investigating the global stability of connecting curves change in higher dimensions , we fix and vary for .the stability of the single fixed point at the origin depends on its coordinates in the catastrophe control parameter space . as is varied , the fixed point moves along the symmetry axis and passes through the three regions of stability shown in fig .[ fig : a4](b ) . along the symmetry axis ,the fixed point has the following eigenvalue spectrum : one degenerate complex conjugate eigenvalue pair when ; none for ; and a doubly degenerate pair for . although useful , the fixed points only provide limited ( local ) information about the stability of a connecting curve .the stability of an arbitrary point in phase space is determined by the characteristic equation of the jacobian evaluated at that point . for the control parameters listed above, the characteristic equation is which is a function of the single phase space variable .equation ( [ eq : cpglobal ] ) is used to create a simple partition of the phase space that determines the stability along the entire length of the connecting curve .we start by choosing such that . 
for this casethe fixed point has a degenerate complex conjugate eigenvalue pair .the index .the vortex core curves that pass through the fixed point are shown in fig . [ fig:4d1fp](a ) .equation ( [ eq : cpglobal ] ) produces a single pair of complex conjugate eigenvalues for all . as a result ,the global stability properties of the connecting curve remain unchanged throughout the phase space .next , we set .this moves the fixed point into the region where no complex conjugate eigenvalue pairs are produced . since , we expect strain curves to connect to the fixed point through all possible stable insets and unstable outsets .the strain curves for this value of are shown in fig . [ fig:4d1fp](b ) .( [ eq : cpglobal ] ) , the phase space is split into two parts .they are separated by the blue dotted line .the first part of the phase space ( center ) contains only strain curves because ( [ eq : cpglobal ] ) produces all real eigenvalues within this range of .the second part of the phase space ( top and bottom ) contains only vortex core curves because ( [ eq : cpglobal ] ) produces a single pair of complex conjugate eigenvalues . the abrupt change in stability that is observed to take place along the connecting curvesis explained by a transition between the two phase space partitions . setting moves the fixed point into the final region of the catastrophe control parameter space where and . in this region , connecting curves connect to the fixed point .the phase space is again divided into two parts using eq .( [ eq : cpglobal ] ) . the first part ( center ) produces two pairs of complex conjugate eigenvalues .no vortex core lines are able to enter this region .the second part ( outer top and bottom ) produces a single pair and supports the creation of vortex core lines as shown in fig .[ fig:4d1fp](c ) .for we fix the control parameters and vary for . a period doubling route to chaos is observed in fig .[ fig:4dbif_r ] .a stable limit cycle is produced for .the fixed points are located at and .they have coordinates in the catastrophe control parameter space shown in fig .[ fig : a4](c ) .both fixed points reside in a region where they have a single pair of complex conjugate eigenvalues and two real eigenvalues ( ) .the connecting curves and limit cycle for this case are shown in fig . [ fig:4d2fp](a ) .the vortex core curves are observed to pass through each of the fixed points . as increased , the two fixed points scatter other along the axis .they retain the same value of until the upper fixed point crosses the bifurcation boundary .the fixed point transition for is shown in fig .[ fig : a4](c ) .this value of generates the strange attractor shown in fig .[ fig:4d2fp](b ) .the fixed points are located at and with and .the different values plays an important role in the structure of strange attractor .specifically , an asymmetry is created because connecting curves pass through and connecting curves pass through . swirling flow generated near is transported along these two core curves towards .the process creates a funnel in the center of the strange attractor .as the flow approaches the neighborhood of , it is pushed back towards and the process is repeated .the two vortex core curves that connect to pass through the funnel of the strange attractor , but diverge to wrap around the hypervortex near since they are unable to attach to it . 
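The partition of the phase space along the connecting curve can be reproduced with a few lines, because for the differential form with a linear f2 the Jacobian depends on position only through x1. The sketch below is illustrative: the cubic f1 and the coefficients of f2 are stand-ins, not the parameter values behind the figures, so the boundaries it prints are not those quoted above; only the procedure is the same.

```python
import numpy as np

R = 1.0
a1, a2, a3 = -2.0, -1.5, -0.7            # stand-in linear f2 = a1*x2 + a2*x3 + a3*x4

def index_I(x1):
    """Number of complex-conjugate eigenvalue pairs of the Jacobian on the x1 axis.
    I = 0: strain, I = 1: vortex, I = 2: hypervortex behaviour of nearby trajectories."""
    J = np.array([[0, 1, 0, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1],
                  [R - 3 * x1**2, a1, a2, a3]], dtype=float)   # last row: (f1', a1, a2, a3)
    lam = np.linalg.eigvals(J)
    return int(np.sum(lam.imag > 1e-9))

xs = np.linspace(-3.0, 3.0, 601)
I = np.array([index_I(x) for x in xs])
boundaries = xs[1:][np.diff(I) != 0]
print("index I changes near x1 =", np.round(boundaries, 2))
```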
) with .a period doubling route to chaos is observed .parameter values : .,height=226 ] figure [ fig:4d2fp](c ) shows an example of a strange attractor that forms in the hypervortex region around for a different set of control parameters .the vortex core curves in this case have less influence on the structure of the strange attractor .for we use with control parameter values .the coordinates of the fixed points in the catastrophe control parameter space are given by .since , the bifurcation surface splits the control parameter space into two regions .the symmetric fixed points share a single coordinate located in the region that produces a single pair of complex conjugate eigenvalues ( ) and two real eigenvalues .the fixed point at the origin is located in the region that produces two pairs of complex conjugate eigenvalues ( ) .it is located in the hypervortex near the origin that plays an important role in the structure of the strange attractor shown in fig .[ fig:4d3fp ] .the vortex core curves pass through the outer fixed points but are unable to enter the hypervortex that produces a hole in the center of the attractor . ) with .the hypervortex around the origin creates a hole in the attractor and forces the vortex core curves to wrap around it .parameter values : ,height=226 ]in this section we set in order to study five - dimensional differential dynamical systems that take the form the characteristic polynomial is where the substitution creates the canonical unfolding of the next catastrophe in the series of cuspoids . hereagain the control parameter space is divided into three disjoint open regions that describe fixed points with zero , one , or two pairs of complex conjugate eigenvalues and a complementary number of real eigenvalues .the open regions are connected and simply connected and can be studied as in the case of .we use in this section .parameters generate the strange attractor shown in fig .[ fig:5d2fp2 - 2 ] .both fixed points have two pairs of complex conjugate eigenvalues ( ) and hypervortex core curves connected to them . a vortex core curve ( )is also observed to form away from the attractor .parameters generate the strange attractor shown in fig .[ fig:5d2fp2 - 1 ]. the fixed point indices of the two fixed points differ .a single hypervortex core curve runs through the fixed point with ( ) while vortex core curves run through the fixed point with ( ) . )using . a hypervortex core curve ( )connects the two fixed points .parameter values : .,height=226 ] ) using .a single hypervortex core curve ( ) runs through the fixed point on the left . after a change in stability ,three vortex core curves run through the fixed point on the right with .parameter values : ,height=226 ]the present work was motivated by an attempt to determine how invariant sets of greater dimension than fixed points can be used to determine the structure of flows that result from integrating sets of nonlinear ordinary different equations .these sets , which have been called core curves , obey the eigenvalue - like equation and have previously been used in studies of three - dimensional flows . 
in extending these results to higher dimensions , we have focused on a special class of dynamical systems differential dynamical systems as defined in eq .( [ eq : differentialform ] ) , which have the same form in all dimensions .only the driving function varies from system to system .since all fixed points occur on the axis , it was convenient to write the driving function as a sum of two functions , depending only on the coordinate and a set of control parameters , and another function depending on complementary variables .the locations of the fixed points can be put into canonical form by expressing in terms of a cuspoid catastrophe , where is the maximum number of fixed points that occurs under control parameter variation .the stability at a fixed point is determined by the first derivatives , evaluated at , , and .that is , the stability is governed by the linear terms in and the slope of .variations in the stability of the fixed points is most conveniently studied by identifying the linearization of with another cuspoid catastrophe , where is the dimension of the dynamical system .there is a weak coupling between these two catastrophes given by the term , which appears as one of the unfolding parameters for the function .the slope alternates along the axis from fixed point to fixed point . at a fixed point the core curves are tangent to the eigenvectors of with real eigenvalues .there are real eigenvalues , where is the number of complex conjugate pairs of eigenvalues .if and then a core curve emanating from the fixed point may pass through the `` center '' of a strange attractor and act to organize the flow around it . if the curve is called a vortex core curve and if it is called a hypervortex core curve . if then only strain curves ( of them ) originate at the fixed point .if then the fixed point is disconnected from the network of curves satisfying the defining eigenvalue - like equation .the eigenvalues of the jacobian at a fixed point vary as the control parameters vary .if a real ( complex conjugate ) pair becomes degenerate and scatters to become a complex conjugate ( real ) pair , then two distinct core curves gradually approach degeneracy and then detach from ( attach to ) the fixed point .the eigenvalues of the jacobian also vary along core curves .changes in stability and/or the number of real and complex conjugate pairs are closely involved in rearrangements or recombinations of the core curves . for differentiable dynamical systemsthe eigenvalue equation defining the core curve can be projected to a pair of constraints acting in a three - dimensional space .this pair of constraints is given in eq .( [ eq : collapsed1 ] ) .core curves are not heteroclinic connections in the dynamical system .rather , they satisfy a closely related set of nonlinear equations that are effectively nonautonomous .this dynamical system is given in eq .( [ eq : coreequation ] ) .these ideas have been illustrated for dynamical systems with , and .table [ tab : summary ] summarizes the cases presented , the dynamical system equations studied , and the figures that illustrate the results .supplementary material from this work can be found online .we thank fernando mut for useful discussion .core curves for differential dynamical systems obey the condition , so that for . 
as a resultthere are three independent variables and two constraints : the functions that appear in this collapsed constraint equation are obtained by replacing .the general form for the vortex core curve is independent of the dimension of the differential dynamical system and is described by a curve in the space with coordinates .
|
Connecting curves have been shown to organize the rotational structure of strange attractors in three-dimensional dynamical systems. We extend the description of connecting curves and their properties to higher dimensions within the special class of differential dynamical systems. The general properties of connecting curves are derived and selection rules stated. Examples are presented to illustrate these properties for dynamical systems of dimension three, four, and five.
|
as any dissipative system driven far from equilibrium , turbulent flows require a permanent supply of energy to remain in their non - equilibrium state . in the case of confined von krmn swirling flows ( see figure [ fig_1 ] ) , the fluctuations in the injected power may have a non - gaussian statistics , characterized by a probability density function ( pdf ) strongly asymmetric , with a stretched left side .at least this is the case in experiments in which air is used as the working fluid , and the counter - rotating stirrers are driven at the same _ constant _ angular speed. in addition , it has been shown that the shape of these pdfs remains similar when the reynolds number of the flow is changed , or at most depends marginally on this parameter in the range where the experiments are typically realized . in a new experiment performed with air , in which the stirrers were driven at constant torque ,fluctuations of injected power having a strongly non symmetric pdf were found , this time with the right side stretched towards the high power end. the reason for this left - right reversal is simple : the constant torque applied by the motors increases the angular speed of the stirrers when the drag exerted by the flow drops , so that the instantaneous power rises. contrarily , these events appear as power drops when the speed is held constant .these two types of events have in common the sudden drop in the torque exerted by the flow , and what makes the difference is the stirrers driving mode .it follows that , in an experiment where the stirrers are driven at constant angular speed , torque and power fluctuations are proportional , and their pdfs are related by as there is no change in the kinetic energy of the stirrers , this is just the pdf of the power injected _ into the flow_. thus , when the goal of the experiment is to study the statistics of the power injected into the flow , using constant angular speed for the stirrers is the correct choice . here , and in what follows , we assume , , and , of course , .over bars and tildes indicate the time average and the fluctuating parts of a quantity , respectively .when the experiment is performed at constant torque , rises or drops in the injected power increase or decrease both , stirrers and flow kinetic energies. consequently , the pdf of the total injected power is shaped by both , fluctuations of the power injected into the flow and fluctuations in the kinetic energy of the stirrers .thus , there is a fundamental difference between an experiment at constant angular speed versus one at constant torque : when a steady regime is reached , in the former there is no net transfer of energy to the stirrers , whereas in the latter the stirrer s changing kinetic energy has a role that can not be neglected . in an experiment performed by titon and cadot using water as the working fluid, both driving modes were used for the stirrers .interestingly , they found that the pdf of the injected power is nearly gaussian at constant torque , and gaussian at constant angular speed .this last result was confirmed recently by burnishev and steinberg in experiments performed at constant angular speed using pure water and solutions of sugar in water using several concentrations .these results seem to contradict the results obtained in air , because one would expect that in geometrically similar systems the flow must be similar at equal reynolds numbers . 
for turbulent flowsthis statement can be translated into a weaker one : _ turbulent flows in geometrically similar systems , and having equal reynolds numbers , must display similar statistical properties_. given that the sole parameter of the dimensionless navier - stokes equation is the reynolds number , re , this seems completely reasonable . now , re is related to some characteristic size and speed of the solid boundaries that shape the flow . in von krmn flows , the reynolds number is customarily defined as , where is the angular speed of the stirrers , is their radius , so that is the tangential speed of the disks edges , and is the kinematic viscosity of the fluid . when is not tightly controlled , for each stirrer we should expect small fluctuations : , . in the symmetric case, we have . in addition , when , we can consider that is well defined , even when no much attention is payed to the shape and size of the vanes .a problem arises when a close examination of the meaning of , in an experimental context , is made . in other words , when a specific experiment is being considered , one could ask two questions : * for small fluctuations in , would still be a valid definition ?* is the flow dynamics affected by these fluctuations ? in incompressible fluids ,including air at low mach numbers , the flow dynamics is governed by the navier - stokes equation . in the symmetric case , with identical and constant angular speeds ,the reynolds number definition given above is perfectly adequate .now , let us allow small fluctuations in the angular speed of the stirrer 1 by running its driving motor at constant torque , so that , while the stirrer 2 rotates in the opposite direction with constant angular speed : .as in a previous work, given that the signs of the angular velocities never change , we will work with their magnitudes .dropping the indices for clarity , we obtain the following governing equations for the flow and stirrer 1 motion : [ dyneq ] where is the fluid density , is the coefficient of `` viscous '' electromagnetic losses in the motor, is the torque exerted by the turbulent flow , is the ( constant ) torque exerted on the armature of the electric motor , is the stirrer moment of inertia , and is the union of the leading and trailing surfaces of the vanes .the primed variables in the preceding equations are dimensionless ; their relations with dimensional variables are displayed in the equations ( [ dyneq:4 ] ) .the use of dimensional variables in ( [ dyneq:2 ] ) makes clear the dependence of on the system parameters .the _ varying _ reynolds number in equation ( [ dyneq:1 ] ) , defined as , is introduced as a simplified mechanism of coupling between stirrer s dynamics and flow dynamics .this premise may be considered as a sort of toy model , and we stress that the results of the analysis that follows do not depend on it .nevertheless , it effectively closes the loop of the input - output system defined by equations ( [ dyneq:1 ] ) and ( [ dyneq:3 ] ) .equation ( [ dyneq:3 ] ) governs the motion of the stirrer , which is driven by the joint action of the motor torque , , and the flow torque , . in this equation ,the time is dimensionless , which makes it compatible with the equation ( [ dyneq:1 ] ) . 
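to make the closed - loop structure of equations ( [ dyneq:1 ] ) and ( [ dyneq:3 ] ) more concrete , the following minimal numerical sketch integrates a toy version of the stirrer equation at constant motor torque , with the flow torque replaced by a mean quadratic drag plus a random fluctuation . this is only an illustration of the feedback mechanism described above ; the drag law , the noise model and all parameter values are our own assumptions , not those of the experiment .

```python
import numpy as np

# Toy sketch (not the authors' solver): a stirrer of moment of inertia I_s driven at
# constant motor torque G_m, with electromagnetic losses lam*Omega and a fluctuating
# flow torque modeled as mean quadratic drag plus per-sample random noise.
rng = np.random.default_rng(0)

I_s = 1.5e-2      # stirrer moment of inertia [kg m^2] (assumed)
G_m = 0.5         # constant motor torque [N m] (assumed)
lam = 1.0e-3      # motor "viscous" loss coefficient [N m s] (assumed)
k_d = 4.0e-3      # effective drag coefficient, flow torque ~ k_d*Omega^2 (assumed)
sig = 5.0e-2      # rms amplitude of the per-sample torque fluctuation [N m] (assumed)
dt  = 1.0e-3      # time step [s]
n   = 200_000     # number of steps (200 s of simulated time)

omega = np.empty(n)
omega[0] = np.sqrt(G_m / k_d)          # start near the steady state
for i in range(1, n):
    g_flow = k_d * omega[i-1]**2 + sig * rng.standard_normal()
    domega = (G_m - lam * omega[i-1] - g_flow) / I_s
    omega[i] = omega[i-1] + dt * domega

print("mean rotation rate  [rev/s] :", omega.mean() / (2*np.pi))
print("relative rms fluct. of omega:", omega.std() / omega.mean())
```

in this simplified picture the stirrer behaves , for small fluctuations , as a first - order low - pass filter of the fluctuating flow torque , which is precisely the behaviour exploited in the linearization that follows .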
finally ,if the fluctuations are small enough as compared to mean values , and terms up to first order in the fluctuating variables are held , then the equation of motion for the stirrer is reduced to [ dynflu ] + the solution of the equation ( [ dyneq:1 ] ) , with proper initial and boundary conditions ( the stirrer 2 enters here as a moving boundary ) provides , through the velocity field , the fluctuating torque that determines in the equation ( [ langeq:1 ] ) . given a velocity field which is a solution of the equation ( [ dyneq:1 ] ) , the dimensionless pressure which appears in the equation ( [ dyneq:2 ] ) can be obtained , in principle , from the poisson equation for the pressure obtained by taking the divergence of the equation ( [ dyneq:1]). to evaluate the required integrals , a rotating coordinate system attached to the stirrer 1 can be used .note that this reference frame follows the angular acceleration of the stirrer , so that if we write the equation ( [ dyneq:1 ] ) in such frame , terms related to coriolis and euler forces will appear .nonetheless , the velocity field seen from any material point should look the same in both , laboratory and rotating reference frames .if and are the ( dimensionless ) coordinates and velocity components in the new reference frame , then the poisson equation for the pressure is , in its dimensionless form , from which the pressure on the surface can be calculated . in principle, the problem with constant can be solved numerically using a specific method to solve ( [ dyneq:1 ] ) and ( [ poisson ] ) .there exist commercial software packages to undertake this problem allowing a number of prescriptions for the eddy viscosity .not surprisingly , each choice gives a different solution , so that experimental data is required to select the best model .of course , when is not merely a constant parameter , but fluctuates as a result of the flow stresses on the stirrer , the preceding approach alone in no longer useful .from now on , we will use ` a ' and ` w ' as subscripts to denote air and water , respectively . in the next two subsections of this introduction , numerical estimates for constant and varying be based on parameter values similar to those used in the experiments , namely : , , moment of inertia , , and in both experiments .we want to know the effect of a change in the fluid density on some of the system s kinematic and dynamic magnitudes while keeping constant the reynolds number .denoting by and the densities of air and water , respectively , and considering the geometries and parameter values used in the experiments , we find that the ratio between the required ( constant ) angular speeds must be from the equation ( [ langeq:3 ] ) we have that the relative torque fluctuation is as we have the same re in both cases , the dimensionless velocity field that solves the equation ( [ dyneq:1 ] ) is the same for air and water , so that the integral giving in equation ( [ dyneq:2 ] ) will be also the same in each case . 
thus , the ratio in equation ( [ trel ] ) is the same in both cases .then , the relative torque fluctuation ratio between water and air is with independence of the ratio .thus , in two given von krmn flows having similar geometries , using air in one of them and water in the other , and having the same reynolds number , the relative rms torque fluctuations produced by the flow are the same .if we consider two systems with similar geometries except by a scale factor , this result will remain valid if the reynolds number is the same in both devices .note that the ratio of plain rms torque fluctuations is not independent of some of the systems parameters .this ratio , which of course is equal to the ratio of the mean values , is given by where the numerical value is what would result from an experiment done at constant angular speeds .+ the previous results are a direct consequence of the principle of dynamic similarity , which implies the condition of constant angular speed for the stirrers .equivalently , these are the implications of assuming that the fluctuating part of the hydrodynamic forces have no effect on the stirrers motion .the problem becomes far less simple when fluctuates , because in this case the hydrodynamics must be coupled , through the solution of equation ( [ poisson ] ) and ( [ dyneq:2 ] ) , to the motion equation ( [ dyneq:3 ] ) of the stirrer . in addition , we expect changes in the flow structure when the vanes perform an accelerated motion in response to the flow action .in fact , the equations ( [ dyneq:1 ] ) and ( [ dyneq:3 ] ) conform a closed - loop dynamical system with parametric feedback : the input of the equation ( [ dyneq:3 ] ) is a functional of the velocity field that solves the equation ( [ dyneq:1 ] ) , and the latter is parametrically coupled to the output of the former through the coefficient of the laplacian term .this means that the effect of upon the velocity field depends in a complicated manner on the value of ( ) , and its time derivatives .inserted in the loop , we have the flow with its own dynamics , which we can attempt to understand through its effects on the stirrers motion .although this problem can not be easily solved by numerical methods , the rather obvious role of the fluid density is made clear by the equation ( [ dyneq:2 ] ) : the torque exerted by the flow , and its fluctuating part , , are proportional to the density , so that if we consider a system with constant in which we only replace the air with water , the rms amplitude of torque fluctuations must change by a factor equal to the ratio of the densities : . + for the device filled with water , the stirrer s motion equation in dimensional variables is in the limit of vanishing moment of inertia , the ratio between the rms amplitudes of angular speed and torque is simply at this point , it is necessary to assume that the weak similarity principle stated before is valid in this context .if the reynold number ( ) has the same value for the flows using water and air , then their statistical properties should be similar .therefore , for fluctuations in a very low frequency band or , equivalently , for a vanishing moment of inertia , the ratio of the preceding fraction between air and water should be where we neglected the motor losses . at higher frequenciesthe moment of inertia of the stirrers becomes important , because of the increasing loss of coherence between the stirrer rotation and the spatially averaged rotation of the flow . 
still neglecting the motor losses ,the equation ( [ meqw ] ) implies that there is a cutoff frequency for the angular speed fluctuations given , in general , by now , for two devices running at equal reynolds numbers , one with water and the other with air , we obtain the following ratio for the cutoff frequencies : where the parameter values are those used in our experiments . for the dimensionless cutoff frequencies , we obtain the ratio these numerical values are a direct consequence of assuming that the weak similarity principle valid when can be extended to systems where the characteristic velocity have small fluctuations .of course , the same kind of analysis can be carried out when some geometric parameter undergoes small fluctuations , or even when some of the fluid parameters , like density or viscosity , undergoes global fluctuations of small amplitude .+ given its overall complexity and implications , it seems worth to design an experiment to gather data allowing some further understanding on this subject .it would provide some specific results to compare with the estimates obtained above , and possibly some insight on the way in which the energy injected by the stirrers is transferred to the flow . in what follows, we describe the experimental setup in section [ setup ] , and give the results of the spectral and statistical analysis , which will show that there are substantial differences between air and water statistics .next , in section [ etd ] the results of the cross correlation study of the energy transfer dynamics are reported .they will make clear that the system dynamics , as well as the energy transfer dynamics , are markedly different when the working fluid is water instead air . in section [ conc ]we compare the scaling of some additional dynamic magnitudes with the experimental results and draw our conclusions . in the appendixa we give details about the signal processing used in the experiment with water .finally , in the appendix b we develop a simplified analysis about the dynamics of experiments performed at constant torque vs constant angular speed .to answer the questions i ) and ii ) in the previous section , an experiment was designed in which two geometrically similar devices running at constant torque produce von krmn swirling flows at nearly the same reynolds number : in one of them the working fluid is air while in the other it is water . in each devicethe power injected to the system can be derived simply from the product between the sum of the measured angular speed of the stirrers and the torque applied to them by the electric motors , which in this case is held constant: = 2\tau \overline{\omega } + \tau[\widetilde{\omega}_1(t)+\widetilde{\omega}_2(t)]=\overline{p}+\widetilde{p}(t ) .\label{pow1}\ ] ] here , we will be focused in the results of the measurements of , and , in water and air . in air , the data were obtained using the experimental setup described in a previous work. for the experiment in water , the apparatus sketched in fig .[ fig_1 ] was designed and built .it is basically a half - scale version of the system used with air .let us assume that the mean values are not strongly affected by the fluctuations discussed in the previous section . 
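as a brief illustration of how equation ( [ pow1 ] ) is used in practice , the sketch below forms the total injected power from two angular - speed records at constant torque and splits it into mean and fluctuating parts . the speed records , the sampling rate and the torque value are synthetic placeholders , not measured data .

```python
import numpy as np

# Sketch of how the total injected power (eq. pow1) is formed from measured angular
# speeds when the external torque tau is held constant.  The speed records below are
# synthetic stand-ins for the encoder signals.
rng = np.random.default_rng(1)
fs  = 500.0                      # sampling rate [Hz] (assumed)
t   = np.arange(0, 600, 1/fs)    # 10 minutes of data
tau = 0.5                        # constant torque per stirrer [N m] (assumed)

omega_bar = 2*np.pi*4.0          # mean angular speed, ~4 rev/s (assumed)
w1 = omega_bar + 0.05*omega_bar*rng.standard_normal(t.size)
w2 = omega_bar + 0.05*omega_bar*rng.standard_normal(t.size)

p_tot   = tau*(w1 + w2)          # P(t) = tau*(omega_1 + omega_2)
p_mean  = p_tot.mean()           # ~ 2*tau*omega_bar
p_fluct = p_tot - p_mean

print("mean injected power [W]:", p_mean)
print("relative rms of P   [-]:", p_fluct.std()/p_mean)

# empirical pdf of the normalized power fluctuations
hist, edges = np.histogram(p_fluct/p_fluct.std(), bins=80, density=True)
```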
by using dimensional analysis , it is easy to estimate the power required by this device . what we want is a device whose power consumption is of the same order of magnitude as that of the device used for the experiments with air . therefore , we need to calculate a geometric scale factor which , applied to the dimensions of the device used with air , gives the right lengths and ratios for the device used with water . given that both reynolds numbers must be equal , the radii of the two devices are related accordingly . in experiments performed with water or air , we typically find that the angular speed ratio between water and air is , so that the scale factor between the devices for water and air follows . with this scale factor , the ratio between the mean power consumption in water and air , and the ratio between the mean torques , can be estimated . [ figure caption : in subplots ( a ) and ( c ) , spectra showing roll - off regions with slope can be seen . in the frequency range from hz to hz a less pronounced roll - off of the spectra can be seen , corresponding to a first order , langevin - like dynamics ( see text ) . low pass filters with cutoff frequency hz were used to remove noise ( see text ) . in subplots ( b ) and ( d ) , short records of the corresponding rotation rate signal are displayed . the curve in subplot ( d ) looks somewhat noisier than the curve in ( b ) , which is reflected in the spectrum ( c ) by three peaks close to the cutoff . note that a degree of correlation exists between the curves in plots ( b ) and ( d ) . ] then , when both experiments run at equal reynolds numbers , the experiment with water requires 50% more torque than that required with air , and nearly half the power . in our case , the height of the vanes in the water device was reduced by slightly more than , so that the resulting power consumption is about of that required by the device working with air . we remark that a small reduction in the height of the vanes has no noticeable effect on the statistical properties of the power fluctuations . in fact , the vane height controls the pumping action of the stirrers . roughly speaking , the radial pumping is related to the volume of fluid contained between the vanes , and this volume scales linearly with the vane height . thus , changing the vane height changes the radial mass flow rate proportionally . we illustrate this point at the end of this section with a measurement in air of the effect that a substantial change in the geometry of the vanes has on the average injected power and its fluctuations . + to obtain similar reynolds numbers in both devices , the angular speed of the stirrers in air must be approximately four times greater than the angular speed in water . taking this into account , for measuring angular speeds in water we used optical encoders with twice the resolution of those used for air . this allowed high quality measurement of fluctuations at very low rotation rates . demineralized water was used , and to avoid bubbles it was degassed using a rotary vane vacuum pump . after degassing , no bubbles were visible within the apparatus running in either co - rotating or counter - rotating modes .
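a rough numerical check of these dimensional estimates can be sketched as follows , assuming the standard stirrer scalings torque ~ rho * omega^2 * r^5 and power ~ rho * omega^3 * r^5 , textbook values for the fluid properties , and the illustrative angular - speed ratio of about 1/4 quoted above ; the exact values used in the experiment may differ .

```python
import numpy as np

# Rough check of the dimensional-analysis estimates above (assumed scalings and
# textbook fluid properties; the experimental values may differ).
nu_a,  nu_w  = 1.5e-5, 1.0e-6    # kinematic viscosities of air and water [m^2/s]
rho_a, rho_w = 1.2,    1.0e3     # densities [kg/m^3]

omega_ratio = 1/4                              # Omega_w / Omega_a (assumed)
R_ratio = np.sqrt((nu_w/nu_a) / omega_ratio)   # from equal Re = Omega*R^2/nu

torque_ratio = (rho_w/rho_a) * omega_ratio**2 * R_ratio**5
power_ratio  = (rho_w/rho_a) * omega_ratio**3 * R_ratio**5

print("R_w / R_a          :", R_ratio)       # close to 1/2: roughly a half-scale device
print("torque_w / torque_a:", torque_ratio)  # > 1: more torque needed in water
print("power_w / power_a  :", power_ratio)   # < 1: roughly half the power in water
```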
data acquisition and processing were performed as described in a previous work . the stirrers' mean rotation rates , , were rps ( revolutions per second ) with air and rps with water , giving reynolds numbers and with air and water , respectively . in both experiments the signals were low - pass filtered , using cutoff frequencies hz and hz . this latter cutoff is necessary because the asymmetries and noise of the electric motors used with water produced angular speed fluctuations with an amplitude of about of the amplitude of the fluctuations produced by the turbulent flow . it is important to stress that the only effect of filtering on the pdfs obtained with water is a slight reduction in their width . no noticeable change in their shape after the filtering process is observed . in air , where pancake dc servomotors were used , the noise and the asymmetries of the motors are small , but filtering improves the calculation of the angular acceleration from the angular speed data . a detailed explanation of the signal processing used for the experiment with water is given in appendix [ ap_a ] . + figure [ fig_2 ] displays spectra and signals corresponding to the rotation rate fluctuations of the stirrers for the experiment with water . the upper and lower spectra displayed on the left side , corresponding to the left and right stirrers , respectively , have three clearly different zones : i ) a flat region in the lowest frequency band , spanning a little more than one decade , ii ) a short decay between hz and hz , and iii ) a roll - off region with a scaling for hz . this latter zone is the combined result of the continued roll - off starting in ii ) plus an additional roll - off , possibly related to an averaging process of the normal stresses on the surface of the vanes , associated with flow structures whose characteristic length goes from the height of the vanes down to the kolmogorov scale . the resulting angular speed signals are similar to those obtained in air . although one of the stirrers has some increased noise below the cutoff frequency of the low - pass filter , its amplitude is too small to have a significant effect on the signal statistics . [ figure caption ( fragment ) : ( c ) spectrum of rotation rate fluctuations for the experiment in air . in this case two distinct regions , with slope and , exist ( see text ) . ] on the other hand , the experiment performed in air produces signals like the one displayed in figure [ fig_3](a ) . the relative amplitude of this signal , , is compared with the corresponding signal obtained in water in figure [ fig_3](b ) . we can see that in water the relative amplitude of the angular speed fluctuations is about five times larger than in air . figure [ fig_3](c ) displays the raw spectrum of the signals corresponding to angular speed fluctuations obtained in air . it can be seen that this spectrum is qualitatively different from the spectra displayed in figure [ fig_2](a ) or ( b ) : there is a wide region with a roll - off that scales as , which is barely present in the spectra obtained in water . we will return to this point later . given that in each experiment the torque applied to each stirrer is in principle the same , the total injected power ( tip ) can be calculated in both cases using equation ( [ pow1 ] ) . the pdfs of obtained in both experiments are displayed in figure [ fig_4 ] . the huge difference between these results is apparent . in ( a ) , the pdfs obtained with air have the same shape as those obtained previously on this device , as expected .
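the low - pass filtering and spectral estimates described above can be sketched , for a synthetic angular - speed record , with standard tools ; the cutoff frequency , sampling rate and signal model below are placeholders chosen only for illustration .

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

# Sketch of the spectral processing described in the text: zero-phase low-pass
# filtering of an angular-speed record and estimation of its power spectrum.
rng = np.random.default_rng(2)
fs  = 200.0                          # sampling frequency [Hz] (assumed)
t   = np.arange(0, 300, 1/fs)
# synthetic speed record: slow turbulent-like fluctuations + narrow-band "motor" noise at 30 Hz
omega = (1.0
         + 0.05*np.cumsum(rng.standard_normal(t.size))/np.sqrt(t.size)
         + 0.005*np.sin(2*np.pi*30.0*t))

f_cut = 4.0                          # low-pass cutoff [Hz] (assumed)
b, a  = butter(4, f_cut/(fs/2), btype="low")
omega_f = filtfilt(b, a, omega)      # zero-phase filtering, as needed for pdfs

f, pxx   = welch(omega   - omega.mean(),   fs=fs, nperseg=4096)
f, pxx_f = welch(omega_f - omega_f.mean(), fs=fs, nperseg=4096)
print("variance before / after filtering:", omega.var(), omega_f.var())
```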
in figure [ fig_4 ] ( b ) , the pdfs obtained in water are _ almost _ gaussian , a result closer to those reported in former works . a possible explanation for the difference between air and water begins to arise when we look at the spectra of these fluctuations . in air ( figure [ fig_3 ] ( c ) ) the spectrum is characterized by the presence of three regions : i ) a nearly flat zone , below hz , ii ) a first roll - off scaling as for almost one and a half decades , followed by iii ) a second roll - off with scaling . in a recent work using air , it was shown that the dynamics in regions i ) and ii ) is governed by a langevin equation , obtained by linearizing the equations of motion . deconvolution of the angular speed signals revealed that the fluctuations of the torque exerted by the flow on the stirrers have a flat spectrum , at least in the range of frequencies below the end of region ii ) . this coincides with the spectrum of torque fluctuations measured in air at constant angular speed . probably a flat spectrum still holds at higher frequencies , which implies , somewhat surprisingly , that the spectrum of torque fluctuations resembles that of white noise , despite the fact that it comes from the integral of normal stresses on the surface of the vanes , a sort of weighted sum . the roll - off in region iii ) can not be explained in terms of the stirrer's mechanical response . the interpretation for this behavior is the following : the frequencies belonging to the spectrum zone that scales as are related to flow scales that become comparable to or smaller than the height of the vanes , so that their contribution to the total torque adds up increasingly incoherently in the surface integral at smaller scales or , equivalently , higher frequencies . the effect on the spectrum is an additional roll - off which , combined with the fall related to the stirrers' inertia , gives the region . in water ( figure [ fig_2 ] ( a ) ) the spectrum still has three regions , but the intermediate region , corresponding to region ii ) in the spectrum for air , is nearly nonexistent : the flat , low frequency region extends up to about hz , then the ( collapsed ) middle region goes up to hz , and finally we see the region with roll - off up to the sharp cutoff of the noise filter . thus , the spectrum observed in water is qualitatively different from the spectrum obtained with air : the shapes are clearly different . this implies that the time - domain dynamics of these two systems is not the same . if the dynamics of each system is different , we can not invoke the similarity principle , not even the weak version given in the first paragraph , to state that the power fluctuations in water and air should be similar . in these experiments , using geometrically similar devices and similar reynolds numbers , that is , in conditions where the hydrodynamic similarity principle holds , different results are obtained when the fluid is water instead of air . this being the case , it is not surprising that experiments performed in water give results different from those obtained in air . now , if we look at the pdfs , we note that the amplitude of relative fluctuations for the tip , , is _ smaller _ than the relative amplitude for the individual stirrers . in figure [ fig_4 ] , the pdf of the total power is represented by black circles .
in ( a ), we see that this effect is rather small , whereas in ( b ) it is clearly visible .this marks another difference in the dynamics when water is used instead air : fluctuations in the rotation speed have an anticorrelated component which in water is stronger than in air .this anticorrelation characterizes the global rotation of the flow , a behavior that has its own dynamics and scaling properties , as shown in a previous experiment in air, and belongs to the motion dynamics on the lowest frequency range of the spectrum .as mentioned earlier , it may be shown that a major change in the shape of vanes has a minor effect on the pdf and the spectrum of the injected power .the present test was made with air , but by using water one should extract similar conclusions .figure [ fig_5 ] ( a ) displays the disk with the vanes used in the experiment with air .figure [ fig_5 ] ( b ) displays a disk with segmented vanes .the discontinuities in the vanes greatly affect the radial mass flux .this results in a reduction of the power required to maintain a given mean angular speed .specifically , disks with continuous vanes require a total mean power of w to maintain a rotation rate rps , whereas with segmented vanes , it is enough with w. this makes a reduction of % in the injected power .figure [ fig_5 ] ( c ) displays the normalized spectra of injected power in both cases . as can be seen , the change in the shape is marginal . the upper curve ( red ) was obtained with continuous vanes .there is a small reduction in the first cutoff frequency of the lower ( blue ) curve , and the ratio between this one and the second cutoff is slightly smaller , as compared with the upper curve . the wide curve ( red squares ) in figure [ fig_5 ] ( d ) is the pdf of the tip obtained with continuous vanes , whereas the narrow curve ( blue diamons ) is the pdf of the tip issued from disks with segmented vanes .it has approximately half the width of the former curve , that is , the ratio between the widths is .however , when the height of the later pdf is divided by , and its width is multiplied by , the result is the continuous curve ( black ) which is almost coincident with the pdf obtained with continuous vanes , as shown in figure [ fig_5 ] ( d ) .thus , this result shows that the statistics of the injected power is fairly insensitive to the shape of the vanes .note that the symmetry of the stirrers with continuous vanes belongs to the d symmetry group .this symmetry implies that , for stirrers rotating in opposite directions with equal , constant angular speed , the flow has the following symmetry property under simultaneous rotation reversal of the stirrers : where is the azimuthal component of the flow velocity .this property is preserved by the disks with segmented vanes , which implies that by reversing the rotation of both disk , the only effect on the flow will be , in both cases .previous works give a clue about the effect of modifying the vanes without preserving the stirrers primary symmetry . 
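the rescaling used in figure [ fig_5 ] ( d ) can be reproduced schematically as follows : if two fluctuating signals differ only by a scale factor kappa in their rms amplitude , dividing the height of the narrower pdf by kappa and multiplying its abscissa by kappa collapses it onto the wider one . the sketch below illustrates this with synthetic data and an assumed kappa .

```python
import numpy as np

# Illustration of the pdf rescaling described in the text.  If y = x/kappa
# (same shape, width reduced by kappa), then pdf_y(y) = kappa*pdf_x(kappa*y),
# so evaluating pdf_y on a rescaled grid and dividing its height by kappa
# should reproduce pdf_x.
rng = np.random.default_rng(8)
kappa = 2.2                                         # assumed width ratio
x = rng.gamma(shape=8.0, scale=1.0, size=400_000)   # skewed stand-in signal
x = x - x.mean()                                    # centered fluctuations
y = x / kappa                                       # narrower stand-in ("segmented vanes")

bins = np.linspace(-4*x.std(), 4*x.std(), 101)
pdf_x, _ = np.histogram(x, bins=bins, density=True)
pdf_y, _ = np.histogram(y, bins=bins/kappa, density=True)
pdf_y_rescaled = pdf_y / kappa                      # height / kappa, abscissa * kappa

print("max abs difference between pdf_x and rescaled pdf_y:",
      np.max(np.abs(pdf_x - pdf_y_rescaled)))
```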
in the experiment reported by burnishev and steinberg ,curved vanes were used .the corresponding symmetry group is c , so that in their experimental device the symmetry property ( [ rsymm ] ) is lost .although we might not necessarily expect a change in the statistics of power fluctuations , we should expect a noticeable change in the mean flow .in fact , the radial pumping by the vanes is enhanced , and less angular momentum is injected to the flow by each stirrer .this change in the geometry has been used in experiments to study the dynamo action in von krmn swirling flows , using melted sodium as working fluid , in order to obtain similar poloidal and toroidal velocities. despite the loss of the symmetry property ( [ rsymm ] ) and the change in the ratio between poloidal and toroidal components of the motion , we see that burnishev and steinberg still find a gaussian statistics for the injected power , as in the experiments with straight vanes performed by titon and cadot .this suggest that changing the shape of the vanes , even if a loss in the system s symmetry is involved , has only a marginal effect if any on the statistics of the injected power .however , with regard to the global flow , some experiments have shown that stirrers with curved vanes can produce inverse turbulent cascades and a bistable dynamics in the global flow. a detailed study of different regimes and the supercritical transition to fully developed turbulence , in a von krmn flow driven by stirrers with curved vanes , can be found in the work by ravelet _ _in the previous section we have seen that the pdfs of obtained in air and water are markedly different .given that the fluctuations are finally due to the flow in a neighborhood of the stirrers , the observed difference in the pdfs is a strong indication of differences in the flow itself , although the pdfs do not have explicit information about the flow dynamics , nor its energy transfer dynamics .we can undertake this last aspect by looking at time cross correlations of the power components involved in the energy transfer .let us take a closer look at one stirrer in air .after some straightforward algebra , it can be shown that the total power fluctuation is related to the sum of the power delivered to the stirrer and the flow by where is the fluctuation of the tip , is the stirrer s moment of inertia ( which includes the motor armature ) , and is the fluctuation of the power injected to the flow .the first term on the right hand side is the stirrer power consumption , which indeed represent a ` reactive ' component , because it does not dissipate energy .in addition , the main contribution of this term to comes from , because is about one hundred times smaller .figure [ fig_6 ] ( a ) displays the pdfs of these three quantities .we see that the pdfs of both , the tip and the power transferred to the flow are strongly asymmetric . the reactive power spent in accelerating the stirrer seems nearly gaussian , but is still asymmetric , with positive skewness . 
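the decomposition just described can be written down operationally : at constant torque tau , the power delivered by one motor is tau * omega(t) , the ' reactive ' part is the rate of change of the stirrer kinetic energy , i * omega * domega / dt , and the remainder is the power transferred to the flow . the sketch below applies this to a synthetic speed record ; inertia , torque and the signal itself are assumed values .

```python
import numpy as np

# Sketch of the power decomposition discussed above for one stirrer at constant torque.
# omega(t) here is a synthetic record; a mild smoothing is applied before differentiation,
# since filtering is needed for a usable angular acceleration (see text).
rng = np.random.default_rng(3)
fs, I_s, tau = 200.0, 1.5e-2, 0.5        # sampling rate, inertia, torque (assumed)
t = np.arange(0, 120, 1/fs)
omega = 25.0 + 1.0*np.sin(2*np.pi*0.3*t) + 0.3*rng.standard_normal(t.size)

omega_s   = np.convolve(omega, np.ones(25)/25, mode="same")   # mild smoothing
domega_dt = np.gradient(omega_s, 1/fs)                        # numerical derivative

p_total   = tau * omega_s                 # power delivered by the motor
p_stirrer = I_s * omega_s * domega_dt     # rate of change of the stirrer kinetic energy
p_flow    = p_total - p_stirrer           # power transferred to the flow

for name, p in [("total", p_total), ("stirrer", p_stirrer), ("flow", p_flow)]:
    print(f"{name:8s} mean = {p.mean():8.3f} W   rms fluct = {p.std():7.3f} W")
```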
in figure [ fig_6 ] ( b ) the mean cross - correlation functions between these magnitudes can be seen . curve 1 ( red ) shows the cross - correlation between the flow power consumption and the tip . the retarding action of the stirrer on the energy flow is evidenced by the time lag of its peak . thus , the stirrer operates as a momentary energy storage , as can also be deduced from the anti - correlation dip at zero time lag in curve 3 ( blue ) of the flow - stirrer cross - correlation ( here , the small oscillation is related to electromechanical asymmetries of the stirrer ) . finally , the stirrer - tip cross - correlation , curve 2 ( black ) , is antisymmetric , showing that the same amount of energy that the stirrer takes from the power supply at negative time lags is later released to the flow , at positive time lags . [ figure caption ( fragment ) : ... ms . in water ( c ) , the pdfs of injected power ( black , circles ) and the power transferred to the flow ( red , squares ) are basically coincident . note that the black ( circles ) and red ( squares ) pdfs are about twice as wide as those in ( a ) , whereas the stirrer power pdf ( blue , diamonds ) is narrower . the mean cross - correlation functions ( d ) also differ remarkably from those obtained in air . in water , the time lag between fluctuations of flow power and tip is ms ( see text ) . ] in water , the energy transfer dynamics ( for one stirrer , as before ) is clearly different , as can be seen in the lower plots in figure [ fig_6 ] . in ( c ) , the tip and flow power pdfs are nearly coincident and gaussian , whereas the stirrer power pdf is quite narrow and nearly symmetric , with negative skewness . thus , the stirrer has a minor role in the whole dynamics . this is confirmed by the curves in subplot ( d ) : the stirrer - tip cross - correlation , curve 2 ( black ) , is nearly the opposite of the flow - stirrer cross - correlation ( curve 3 [ blue ] ) , the latter completely lacking the big dip present in the corresponding curve in subplot ( b ) : in this case there is practically no energy storage in the stirrer . moreover , in this case the flow - tip cross - correlation is nearly symmetric , and the time lag of its peak is much shorter . thus , although the stirrers rotate approximately four times slower than in air , at similar reynolds numbers the energy transfer dynamics in water is about three times faster . in terms of dimensionless time , this means that at similar reynolds numbers the energy transfer dynamics is about twelve times faster in water than in air . of special interest is the comparison with the ratios of cutoff frequencies given by equations ( [ fcdm ] ) and ( [ fcdml ] ) . the reciprocal of this frequency is related to the beginning of the time scales at which the interaction between the stirrer and the neighboring turbulent flow undergoes a transition : from a state in which the stirrer simply follows the spatially averaged flow rotation , to a state where it no longer responds to the fluctuating torque . from the spectra in water and air , and a linear fit of the frequency response obtained from equation ( [ meqw ] ) for each case , we obtain the experimental ratio of cutoff frequencies ( equation ( [ fcdme ] ) ) and the corresponding ratio of dimensionless frequencies ( equation ( [ fcdmle ] ) ) . these ratios reveal how much the stirrer response to the faster fluctuations of the torque exerted by the flow is enhanced when the fluid is water .
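the lagged cross - correlations of figure [ fig_6 ] ( b ) and ( d ) can be computed with a simple estimator like the one sketched below ; the two signals used here are synthetic , with a known built - in delay , so that the recovered peak lag can be checked against it .

```python
import numpy as np

# Sketch of the lagged cross-correlation analysis: normalized cross-correlation
# between two fluctuating signals and the lag of its peak.  The signals below are
# synthetic, with a built-in delay of lag_true samples.
rng = np.random.default_rng(4)
fs, n = 200.0, 60_000
x = rng.standard_normal(n)
x = np.convolve(x, np.ones(50)/50, mode="same")   # smooth it a little
lag_true = 30                                      # samples (assumed)
y = np.roll(x, lag_true) + 0.2*rng.standard_normal(n)

def xcorr(a, b, max_lag):
    # mean of a[t]*b[t+k] over t, for lags k in [-max_lag, max_lag], after normalization
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    lags = np.arange(-max_lag, max_lag + 1)
    c = np.array([np.mean(a[max(0, -k):n - max(0, k)] *
                          b[max(0, k):n - max(0, -k)]) for k in lags])
    return lags, c

lags, c = xcorr(x, y, max_lag=200)
k_peak = lags[np.argmax(c)]
print("peak correlation lag:", k_peak / fs * 1e3, "ms")   # close to lag_true/fs
```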
as can be seen , these experimental values are almost six times smaller than the estimates in equations ( [ fcdm ] ) and ( [ fcdml ] ) . the results given in equations ( [ fcdme ] ) and ( [ fcdmle ] ) , compared with the estimates in equations ( [ fcdm ] ) and ( [ fcdml ] ) , reveal a dramatic failure of the assumptions behind their derivation . along with the findings on the probability density functions and the energy transfer dynamics , the latter results tell us that there are deep differences in both the flow and the energy transfer dynamics when water replaces air in turbulent von krmn flows . it is worth stressing that these differences can not be ascribed to small differences in geometry or reynolds number , because many preceding experiments with noticeable differences in geometry and reynolds number have shown that the statistical properties of the injected power , in water and air separately , are fairly invariant . here we see that there are clear discrepancies between the results obtained by assuming the validity of the weak similarity principle , when the angular speed has small fluctuations , and those given by the experiments . of course , we can not claim that our experiments in water and air have exactly the same re , or that the geometries are strictly similar . instead , we can state that the differences that can be found in the experimental setups are very unlikely to explain the discrepancies found in this study when water replaces air in a von krmn swirling flow . summarizing , we have found that , running in constant torque mode , von krmn swirling flows differ when the working fluid is water instead of air . we ascribe this to the huge difference in densities between these two fluids , which in the case of water leads to a stronger coupling between the flow and the stirrer . accordingly , the stirrer follows more closely the fluctuations of the torque exerted by the flow , which in turn leads to a nearly missing region with scaling in the spectrum . therefore , the dynamics arising from the interaction between the flow and the stirrers produces dissimilar flows in each case , which prevents thinking about this problem in terms of the hydrodynamic similarity principle , valid when geometrically similar setups run at equal reynolds numbers . this view is reinforced when we look at the ratio between the rms amplitude of the angular speed fluctuations and its mean value . in water , this ratio is only five times greater than in air , despite the fact that the water / air density ratio is about times larger . the other scales that we need to consider are : the scale factor between the experimental devices for air and water , which is ; and the moments of inertia of the stirrers used in air and water , which are kg m and kg m respectively , giving a ratio . if the similarity principle were valid for these two flows , then a linear scaling for similar flows would give a value for the ratio which is times larger than the value obtained in the experiment . this huge discrepancy clearly indicates that if water is used in a given experimental setup at equal reynolds number , the flow will not be similar to the flow obtained when the fluid is replaced by air , when the stirrers are driven at constant torque . this is not shocking at all : the difference is due to the difference in the response of the stirrers to the flow stresses . note that these discrepancies are related only to fluctuations .
for mean values , the similarity principle seems to work fairly well .for example , the ratio in equation ( [ rwa ] ) , which can be expressed in terms of mean values , differs from the experimental value only by . whether this is always the case when one is dealing with fully turbulent flows , remains an open question . from the outcomes reported here, it appears that high precision measurements would be necessary to obtain a response . +the consequences that these findings could have for studies using scale models will depend on how relevant turbulent fluctuations are to the system of interest .for example , if the air drag on a big truck traveling at km / h is to be studied using a small scale model in a water tunnel , possibly huge discrepancies in the relative rms amplitude of some magnitudes could be found if the similarity principle is naively applied .the reason is that these fluctuations will depend on the response of the scale model to the turbulent component of the flow , as our results suggest .nonetheless , mean values will scale accordingly to the similarity principle fairly well , which may appear contradictory .this happens because when the turbulent component of the flow matters , the full 3d dynamic response of the body the truck in our example must be considered .in other words , in addition to scaling geometric and hydrodynamic parameters by using the similarity principle , an appropriate equivalent of equations ( [ dyneq:3 ] ) or ( [ langeq:1 ] ) must be considered , in order to properly scale the mechanical parameters of the model ; namely mass , supports compliances , damping factors , and main moments of inertia .+ we have to stress that none of the pdfs of total injected power obtained in the experiments reported here is gaussian .the closest ones to a gaussian are those for fluctuation of the angular speed for individual stirrers in water , which nevertheless have positive skewness , and the pdf of the reactive power for a stirrer in air , which has also positive skewness .these results are consistent with the findings by titon and cadot in water at constant torque : figures 6(b ) and 7(b ) of their article display pdfs that clearly have positive skewness .a question remains : in which case the weak version of the similarity principle could be valid in von krmn flows ?the qualitative coincidence between the torque spectrum at constant angular speed and the deconvolved version at frequencies lower than obtained from fluctuations of angular speed in the constant external torque mode gives a clue .if we are able to run geometrically similar setups at equal reynolds numbers using constant angular speed , then the pdfs of torque in air and water should be similar .the problem is that constant speed in water means using servo - controllers capable of keeping the angular speed of the stirrers well below the fluctuations of about % measured at constant torque . in other words ,perhaps constant speeds within % or better would be necessary , along with stirrers having extremely low inertia .finally , in our experiment with air the mach number is , so that we do not expect that air compressibility plays a role in this phenomenon .nevertheless , an experiment is being planned to test this possibility .the authors gratefully acknowledge lautaro vergara and ulrich raff for their careful reading of the manuscript and valuable comments .financial support for this work was provided in part by fondecyt under project no . and dicyt - usach under project no .a. s. 
gratefully acknowledges financial support from conicyt s _ programa de formacin de capital humano avanzado _ and the fellowship from_ direccin general de graduados _ of universidad de santiago de chile .two types of electric motors were used for the experiments reported here .pancake servo motors were used with air , and universal motors with water .pancake motors have no iron in their armatures , so that they have a very low inductance , run smoothly in their rated angular speed range , and can deliver their rated torque almost independently of the angular speed .they use permanent magnets to produce the stator magnetic field .universal motors have winding in the stator to produce the stator field , and a cylindrical core in their armature .the latter is made of laminated iron , with a number of slots to allocate the windings .the slots can be helical or , most often , parallel to the axle . in the latter case, it happens that when the armature is in an angular position that minimizes the reluctance of the magnetic circuit , formed by the stator with its poles and the armature core , there appears a retentive torque which tends to anchor the armature in such angular position .this is because at these angles the magnetic flux density reaches a maximum .this condition is reproduced each time the angle of the armature advances by one slot .thus , with slots there will be positions per turn where the armature will tend to become anchored .note that the armature core can be seen as a cylinder with protuberances and slots in - between : hence the term `` cogging '' for this effect . on the other hand ,the armature of a universal motor is highly inductive , so that its torque is degraded at high angular speeds .the interested reader can find a good introduction to electric motors in ref . .a question that can be raised is : can inexpensive universal motors be used for this type of experiments ?the answer is : it depends on the operating conditions . .in the subplot ( a ) , spectra showing roll - off regions with slope can be seen .subplots ( b ) and ( c ) display short records of the corresponding rotation rate signal . on the right half of subplot ( a ) ,a number of peaks appear , related to motor asymmetries .the largest peak is due to motor cogging ( see text ) .subplot ( d ) displays the spectra of the same signals , after processing them with a low - pass filter with cutoff frequency hz . subplots ( e ) and ( f ) show the effect of the filter on the signals displayed on the left column .the signal components related to the remaining peaks at frequencies below in the red spectrum have an amplitude too small to be seen on the subplot ( f ) .note that , after filtering , still about one decade of the region with slope is preserved .horizontal and vertical scales in plots of similar type are the same.,scaledwidth=100.0% ] in our experiment in water , the rotation rate is rps , which is well below the normal operating speed rps , typical of these motors .thus , the reduction of torque at high speeds is not a concern .in addition , in this experiment the motors work with constant armature current , which at low angular speeds compensates for the armature inductance effects .the stator windings are powered by an independent , constant current ( the field current ) , so that the mean stator magnetic field is constant . 
with respect to cogging, the number of slots in this case is , which gives cogging frequencies in a narrow band centered at hz .this is near the end of the frequency band where the turbulence related fluctuations of the stirrers angular speed take place .thus , by using a low - pass filter the cogging noise can be easily removed .an additional bonus reduction of this noise comes from the necessity of large currents to obtain the high level of torque required by this experiment .the flux density must be increased by utilizing a field current well beyond the rated value for these motors . as a consequence , the working point of the magnetic fluxis located deep in the saturation region , which reduces the changes in the flux density due to the armature motion .normally , operation under these conditions should result in burned windings . to prevent such outcome, we implemented a powerful forced air cooling , which allowed continuous operation without overheating .thus , under the conditions previously described , universal motors can give satisfactory results in this type of experiments when a better option in not available .the cogging effect can be seen in figure [ fig_7 ] .figure [ fig_7 ] ( a ) displays the rotation rate spectra of both disks : left , with few peaks in the high frequency zone ( blue ) ; and right , having a greater number of peaks ( red ) .below , the plots ( b ) and ( c ) display samples of the corresponding signals in the time domain .the rapid oscillations superimposed to the slow variations in both plots clearly illustrate the cogging effect . in the figure [ fig_7 ] ( d ) the same spectraare displayed after applying a low pass filter .notice that in the ( red ) spectrum three peaks before the cutoff frequency still remain .this noise seems to be related to asymmetries of the armature of this motor , and its amplitude is lower than that of the main peak . in the time domain , plots ( e ) and( f ) , we see that the rapid oscillation due to the cogging effect is eliminated .the red signal still contains some noise related to the peaks in the plot ( d ) , but it is not detectable at the resolution level of plot ( f ) . with the previous treatment ,the signals obtained in the experiment with water are appropriate for the calculation of probability density functions .to calculate correlation functions , a different signal treatment was used . to isolate the fundamental component of the cogging noise , a narrow band - pass filter tuned to the main peak frequency was used .then this signal was subtracted from the main signal , adjusting its amplitude in order to minimize the peak height . given that this signal component is modulated in both , amplitude and frequency , it was also necessary to adjust the filter bandwidth .thus , the peak in the spectrum was also minimized with respect to the latter parameter .the cleaned signal allowed the calculation of inverse fourier transforms on a wider interval of frequencies . 
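the subtraction procedure just described can be sketched as follows : a narrow band - pass filter isolates the component around the cogging frequency , and a scaled copy of it is subtracted from the record , with the gain chosen to minimize the residual power in that band . all frequencies , amplitudes and filter parameters below are placeholders .

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Sketch of the cogging-removal procedure: isolate the narrow-band component around
# the cogging frequency, then subtract a scaled copy of it so as to minimize the
# residual variance in that band.
rng = np.random.default_rng(5)
fs = 500.0
t  = np.arange(0, 120, 1/fs)
f_cog = 24.0                                        # cogging frequency [Hz] (assumed)
signal = (0.2*np.cumsum(rng.standard_normal(t.size))/np.sqrt(t.size)
          + 0.05*np.sin(2*np.pi*f_cog*t)            # cogging line
          + 0.01*rng.standard_normal(t.size))

bw = 2.0                                            # filter half-bandwidth [Hz] (assumed)
b, a = butter(3, [(f_cog-bw)/(fs/2), (f_cog+bw)/(fs/2)], btype="band")
cog = filtfilt(b, a, signal)                        # narrow-band cogging estimate

# choose the subtraction gain that minimizes the residual variance in the band
gains = np.linspace(0.0, 2.0, 201)
resid = [np.var(filtfilt(b, a, signal - g*cog)) for g in gains]
g_best = gains[int(np.argmin(resid))]
cleaned = signal - g_best*cog
print("best gain:", g_best, " band power before/after:",
      np.var(cog), np.var(filtfilt(b, a, cleaned)))
```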
given that the spectral components of the whole signal fall a little more than six decades at hz , see figure [ fig_7](a ) , the subtraction procedure previously described allowed a ] is the inverse fourier transform and is a filter designed to manage the divergence of when , and suppress the noise at frequencies above the useful frequency band .as can be seen , the experiment in which the electric motors are driven at constant torque allows a very simple analysis of the results .as the angular speed dynamics is that of a first order system , artifacts affecting the data acquisition / processing and/or the system dynamics could hardly be found . on the other hand, one has to be aware that in this driving mode , there is no independence between the flow dynamics and the stirrers motion . as noted in section [ intro ] , a complex nonlinear dynamics is hidden in the forcing function . nevertheless , from this function, we can obtain valuable information about the structure of the turbulent flow .now let us consider a von krmn swirling flow setup wanted to operate at constant angular speed .as before , we will neglect the motor losses , set the magnitude of all the parameters equal to one , and use dimensionless variables .equation ( [ be:1 ] ) , which governs a stirrer dynamics , becomes where the only change is that the torque provided by the motor , , is no longer constant : it must be determined by a servo controller in order to keep the angular speed constant , that is , make . in other words , ideally the controller must provide a torque such that when the motor is powered by a voltage - controlled voltage source , it is necessary to take into account the inductance of the armature windings , which increases by one the order of the system .so , it is better to use a voltage - controlled current source , to keep the order of the system as low as possible . in this case the current source , which delivers the armature current , directly determines , where is the motor s torque constant .thus , assuming no limit to the current source compliance , we can make the controller output equal to the torque delivered to the stirrer .usually , a proportional - integral - derivative ( pid ) controller is used to set the speed at some reference value . assuming that the angular speed can be measured instantaneously by some device, we can write the angular speed error as . for the moment, we will allow to change .then , the output of the pid controller is where the parameters , , and are the proportional , integral , and derivative gains , respectively . by inserting this torque in ( [ be:9 ] ) , and expanding , we obtain the system s dynamic equation : in the steady state , we can expand the angular speed as .after some algebra , and retaining terms up to first order in , we obtain with constant , , and the term is balanced by , so that this equation gives the time evolution of the fluctuating part of the angular speed , , under the joint action of the torque provided by the pid controller , , and the fluctuating part of the flow torque , . 
from equation ( [ be:14 ] ) we can obtain as a function of the angular speed fluctuations , and the controller parameters : which in the laplace domain reads \widetilde{\omega}(s ) .\label{be:16}\ ] ] therefore , when a pid controller is used , the relationship between fluctuations of angular speed and fluctuation of flow torque is in the laplace domain , the term in equation ( [ be:14 ] ) translates into so that , from equations ( [ be:17 ] ) and ( [ be:18 ] ) from here , it can be shown that the error in the torque measurement , as a function of , is equation ( [ be:17 ] ) tells us that a pid controller can not keep a perfectly constant angular speed , because . in the frequency domainit reads so that in the low frequency band , , the controller can keep .but at finite frequencies the error increases , and could reach a maximum when if non optimized values for , , and are used .when the error decreases again .of course , a sharp resonance at can be avoided with proper parameter adjustment , but even with optimal values , an error of possibly non negligible amplitude may persist in a more or less wide band of frequencies . within this band ,the stirrer angular speed will fluctuate under the action of the torque exerted by the flow , so that the torque measurement will be contaminated by the angular speed fluctuations .this can be derived from the equation ( [ be:19 ] ) : the motor torque set by the controller can not exactly mirror the torque exerted by the flow . from equation ( [ be:20 ] ) we have where the contamination of the torque measurement by angular speed fluctuations is explicitly displayed .note that it gets worse at higher values of .the accurate measurement of power fluctuation at constant angular speed requires a vanishing amplitude in the last term of equation ( [ be:22 ] ) , in which case the desired relation could be achieved .however , when a pid controller is used in this context , there will be always a mismatch between the zeros of the polynomials in the numerator and denominator of the equation ( [ be:19 ] ) , no matter the values of the parameters .on the other hand , the rational function in this equation , which is the steady state transfer function linking and , has the mean angular speed as one of its parameters .thus , a change in changes the overall system response . although this idealized model shows that an approximate matching of the zeros in the numerator and denominator of can be obtained if one chooses and , too big values in these parameters will inevitably generate instability problems arising from bandwidth limitations in one or more of the controller components .in addition , high frequency noise problems related to large values of would arise .we must stress that in no way we are suggesting that a pid controller is useless for this application . with appropriate tuning , the error can in principle be reduced to an acceptable level .moreover , finding the dependence of the parameters , , and on , using as control criteria the minimization for each value of , should allow the design of an optimal auto - tuning pid controller .+ summarizing , both methods have pros and cons when it comes to the measurement of power fluctuations . on the one hand, the constant torque mode may be the simplest option , although the measurement of torque ( via deconvolution ) or angular speed has always a hidden component related to the interplay between the angular acceleration and the changes that it produces in the turbulent flow . 
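to visualize the behaviour of the speed error discussed above , the following sketch evaluates the magnitude of the error transfer function obtained from the linearized loop . this is our own simplified reading of equations ( [ be:14 ] ) ( [ be:19 ] ) , with all parameters set to illustrative values ; with this reading the error vanishes at low frequency because of the integral action , peaks at an intermediate frequency , and decays again at high frequency .

```python
import numpy as np

# Hedged sketch: with the stirrer equation I*domega/dt = torque_flow + torque_pid and
# a PID torque torque_pid = -(kp*e + ki*int(e) + kd*de/dt), e = omega - omega_ref,
# one simplified reading gives  e(s)/torque_flow(s) = s / ((I+kd)*s**2 + kp*s + ki).
# Parameter values and the exact form are assumptions for illustration only.
I_s = 1.0
kp, ki, kd = 5.0, 10.0, 0.1

f = np.logspace(-3, 2, 500)          # dimensionless frequency
s = 2j*np.pi*f
H_err = s / ((I_s + kd)*s**2 + kp*s + ki)

mag = np.abs(H_err)
f_peak = f[int(np.argmax(mag))]
print("peak |e/torque_flow| = %.3g at f = %.3g" % (mag.max(), f_peak))
print("|e/torque_flow| at lowest f:", mag[0])   # -> 0 as f -> 0 (integral action)
```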
yet , in the experiments reported here the constant torque mode revealed how the energy transfer dynamics changes due to a large change in the fluid density . on the other hand , the constant speed mode requires a careful tuning of the servo - controller , in order to avoid the contamination of the torque measurement by fluctuations of the angular speed , along with the effects that the angular acceleration has on the turbulent flow structure . in principle , a well tuned pid controller should minimize such effects , but a rigorous assessment of the overall system error is crucial . of course , there are alternative control strategies that , specifically for studies in von krmn swirling flows , could have better performance than the well known pid controller .
|
here we report experimental results on the fluctuations of injected power in confined turbulence . specifically , we have studied a von krmn swirling flow with constant external torque applied to the stirrers . two experiments were performed at nearly equal reynolds numbers , in geometrically similar experimental setups . air was utilized in one of them and water in the other . with air , it was found that the probability density function of power fluctuations is strongly asymmetric , while with water , it is nearly gaussian . this suggests that the outcome of a big change of the fluid density in the flow - stirrer interaction is not simply a change in the amplitude of stirrers response . in the case of water , with a density roughly times greater than air density , the coupling between the flow and the stirrers is stronger , so that they follow more closely the fluctuations of the average rotation of the nearby flow . when the fluid is air , the coupling is much weaker . the result is not just a smaller response of the stirrers to the torque exerted by the flow ; the pdf of the injected power becomes strongly asymmetric and its spectrum acquires a broad region that scales as . thus , the asymmetry of the probability density functions of torque or angular speed could be related to the inability of the stirrers to respond to flow stresses . this happens , for instance , when the torque exerted by the flow is weak , due to small fluid density , or when the stirrers moment of inertia is large . moreover , a correlation analysis reveals that the features of the energy transfer dynamics with water are qualitatively and quantitatively different to what is observed with air as working fluid .
|
as an emerging candidate for 5 g wireless communication networks , massive multiple - input multiple - output ( mimo ) has drawn a lot of research interests recently . however , there are only a few works on physical layer security in this area . among the very few ,only some of them have studied jamming aspects although jamming exists and has been identified as a critical problem for reliable communications , especially in massive mimo systems , which are sensitive to pilot contamination .for instance , the authors consider security transmission for a downlink massive mimo system with presence of attackers capable of jamming and eavesdropping in . the problem of smart jamming is considered for an uplink massive mimo system in , which shows that a smart jammer can cause pilot contamination that substantially degrades the system performance .most of the above works have been considered from a jammer point of view : study the jamming strategy , which is the most harmful for the legitimate user or for the eavesdropper . in this work ,we motivate our study from the system perspective , in which we develop counter strategies to minimize the effect of jamming attacks . to this end, we first derive an achievable rate of a single user uplink massive mimo with the presence of a jammer .then , by exploiting asymptotic properties of massive mimo systems , we propose two anti - jamming strategies based on pilot retransmission protocols for the cases of random jamming and deterministic jamming attacks .numerical results show that the proposed anti - jamming strategies can significantly improve the system performance .we consider a single user massive mimo uplink with the presence of a jammer as depicted in fig .[ fig : system_model ] .further , we assume that the base station ( bs ) has antennas , the legitimate user and the jammer have a single antenna . [ ] [ ] [ 0.7] [ ] [ ] [ 0.7] [ ] [ ] [ 0.7] [ ] [ ] [ 0.9] [ ] [ ] [ 0.9] let us denote and as the channel vectors from the user and the jammer to the bs , respectively .we assume that the elements of are independent and identically distributed ( i.i.d . )zero mean circularly symmetric complex gaussian ( zmcscg ) random variables , i.e. , , where represents the large - scale fading ( path loss and shadowing ) .similarly , we assume that and is independent of .we consider a block - fading model , in which the channel remains constant during a coherence block of symbols , and varies independently from one coherence block to the next . to communicate with the bs, the legitimate user follows a two - phase tdd transmission protocol : i ) phase 1 : the user sends pilot sequences to the bs for channel estimation , and ii ) phase 2 : the user transmits the payload data to the bs .we assume that the jammer attacks the uplink transmission both in the training and in the data payload transmission phases . duringthe first channel uses ( ) , the user sends to the bs a pilot sequence , where is the transmit pilot power and originates from a pilot codebook containing orthogonal unit - power vectors . at the same time, the jammer sends to interfere the channel estimation , where satisfies and is the transmit power of the jammer during the training phase .accordingly , the received signals at the bs is given by [ ytm ] _t=__^t+__^t+_t , where is the additive noise matrix with unit power i.i.d .zmcscg elements .the bs then performs a de - spreading operation as : [ yt ] _t=_t_^ * = _ + _ _ ^t _ ^ * + _ t , where and . 
the minimum mean squared error ( mmse ) estimate of given is [ guh ] _ = c__t , where .the mmse estimator ( [ guh ] ) requires that the bs has to know , , and .since and are large - scale fading coefficients which change very slowly with time ( some times slower than the small - scale fading coefficients ) , they can be estimated at the bs easily .the quantity includes the jamming sequence which is unknown at the bs .however , by exploiting asymptotic properties of the massive mimo , the bs can estimate from the received pilot signal .we will discuss about this in detail in section [ sec : mp ] .let be the channel estimation error . from the properties of mmse estimation , and are independent .furthermore , we have and , where [ gamu ] _ = c__. during the last channel uses , the user transmits the payload data to the bs and the jammer continues to interfere with its jamming signal . let ( ) and ( ) be the transmitted signals from the user and the jammer , respectively .the bs receives _x_+_d , where and are the transmit powers from the user and jammer in the data transmission phase , respectively .the noise vector is assumed to have i.i.d . elements .to estimate , the bs performs the maximal ratio combining based on the estimated channel as follow [ y ] y=_^h_d = _ ^h_x_+ _ ^h_x_+ _ ^h_d .in order to analyze the impact of jamming attack on the system , we derive a capacity lower bound ( achievable rate ) for the massive mimo channel , described as in ( [ y ] ) . substituting into ( [ y ] ) , we have [ yb ] y=_^2x_+ _ ^h_x_+ _ ^h_x_+ _ ^h_d .since consists of the signals associated with the channel uncertainty and jamming , we derive an achievable rate using the method suggested in . to this end, we decompose the received signal in ( [ yb ] ) as [ yc ] & & y=\{_^2}x_+ + & & _n _ - since and the desired signal are uncorrelated , we can obtain an achievable rate by treating as the worst - case gaussian noise , which can be characterized as follows .[ pro ] an achievable rate of the massive mimo channel with jamming is [ r ] r=(1-/t)_2(1 + ) , where is the effective sinr , given by [ rho ] = .see appendix [ pro1proof ] .two interesting remarks can be made : * if the jammer does not attack during the training phase ( ) or and are orthogonal ( ) , the achievable rate becomes [ r_barj ] r & = & ( 1-)_2(1 + ) ^m .the achievable rate increases without bound as , even when the jammer attacks during the data transmission phase . *if the jammer attacks during the training phase and , we have r ^m & & ( 1-)_2 ( ) .this implies that when the training phase is attacked , the achievable rate is rapidly saturated even when .this is the effect of _ jamming - pilot contamination_.as discussed in section [ sec : ratenjam ] , the jamming attack during the training phase highly affects the system performance .therefore , we focus on the training phase and construct counter strategies to mitigate the effect of jamming - pilot contamination .we propose pilot retransmission schemes where the pilot will be retransmitted when the jamming - pilot contamination is high ( is large ) .note that , some overheads for synchronization are necessary for the pilot retransmission protocols .however , those overheads are negligible compared to the payload data .we show that by exploiting asymptotic properties of the massive mimo system , the bs can estimate and from the received pilot signals and even is unknown . 
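the closed-form sinr of proposition 1 is not reproduced above , so the monte carlo sketch below illustrates the two remarks empirically instead : it estimates an average rate with a simple de-spread channel estimate and mrc , once with the jamming sequence aligned with the user's pilot and once with an orthogonal jamming sequence . this is an empirical sinr , not the paper's worst-case-gaussian lower bound , and the parameter values are assumptions .

```python
import numpy as np

rng = np.random.default_rng(1)

def cn(shape, var=1.0):
    return np.sqrt(var / 2) * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))

def avg_rate(M, jam_pilot, trials=2000, tau=10, T=200, p_t=1.0, q_t=1.0, p_d=1.0, q_d=1.0):
    """Empirical uplink rate with a de-spread pilot estimate and MRC detection."""
    phi = np.ones(tau) / np.sqrt(tau)
    # the jammer either hits the user's pilot or an orthogonal sequence
    phi_j = phi if jam_pilot else np.array([1.0, -1.0] + [0.0] * (tau - 2)) / np.sqrt(2)
    rates = []
    for _ in range(trials):
        g_u, g_j = cn(M), cn(M)
        # de-spread pilot observation (projected noise has unit variance per antenna)
        y_t = (np.sqrt(p_t) * (phi @ phi.conj()) * g_u
               + np.sqrt(q_t) * (phi_j @ phi.conj()) * g_j + cn(M))
        g_hat = y_t                                   # MRC is insensitive to a real scale factor
        sig = p_d * np.abs(g_hat.conj() @ g_u) ** 2
        intf = q_d * np.abs(g_hat.conj() @ g_j) ** 2 + np.linalg.norm(g_hat) ** 2
        rates.append((1 - tau / T) * np.log2(1 + sig / intf))
    return float(np.mean(rates))

for M in (20, 50, 100, 200, 400):
    print(M, round(avg_rate(M, True), 2), round(avg_rate(M, False), 2))
# the jammed-pilot column saturates with M (jamming-pilot contamination),
# while the orthogonal-pilot column keeps growing, as in the two remarks above
```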
by the law of large numbers , [ eq : as1 ] _t^2 & & = p_t + q_t|_^t_^*|^2 + + & & + + & & + _ ^t _ ^ * + & & + + & & ^a.s . p_t_+q_t|_^t _ ^*|^2 _ + 1 , m , where denotes almost sure convergence . from , and under the assumption that the bs knows and , can be estimated as [ eq : as2 ] = _ t^2 - - . from and again from the law of large numbers , as , we have _ t^h_t ^a.s .p_t__^*_^t + q_t__^*_^t + _ .thus , the bs can estimate as [ eq : es11 ] = _ t^h_t - _ ^*_^t - _ . based on the estimates of and , in next sections , we propose two pilot retransmission schemes to deal with two common jamming cases : random and deterministic jamming .in practice , if the jammer does not have the prior knowledge of the pilot sequences used by the user , then it will send a random sequence to attack the system . during the training phase, the user sends a pilot sequence , while the jammer sends a random jamming sequence .the bs estimates and requests the user to retransmit a new pilot sequence until is smaller than a threshold or the number of transmissions exceeds the maximum number .the pilot retransmission algorithm is summarized as follows : [ sec : algo1 ] * initialization : set , choose the values of pilot length , threshold , and ( ) .* user sends a random pilot sequence .* the bs estimates using . if or stopotherwise , go to step 4 . *set , go to step 2 .let and be the pilot and jamming sequences respectively , corresponding to the retransmission , . similar to, the achievable rate of the massive mimo with anti - jamming for random jamming is given by [ r_rj ] r_=(1-)_2 ( 1 + ) , where .note that in order to realize the achievable rate in ( [ r_rj ] ) , the bs has to buffer the received pilot signal then processes with the best one ( with minimal ) after pilot retransmissions .there exists case where is minimum which degrades the system performance since it consumes more training resource without finding a better candidate .however , the pilot is only retransmitted when the first transmission is bad , and hence , there will be a high probability that the retransmission is better than the first one .next , we assume that the jamming sequences are deterministic during the training phase , i.e. , .such scenario can happen , for instance , in case the jammer has the prior knowledge of the pilot length and pilot sequence codebook and tries to attack using a deterministic function of those training sequences .in this case , the massive mimo system can outsmart the jammer by adapting the training sequences based on the knowledge on the current pilot transmission instead of just randomly retransmitting them as in the previous case .we observe that can be decomposed as |_^t_^*|^2=_^t _^*_^t_^*. so , if the bs knows , it can choose to minimize . in section [ sec : estjj ] , we knowthat the bs can estimate from . from this observation, we propose the following pilot retransmission scheme : [ sec : algo2 ] * initialization : choose the values of pilot length and threshold .* user sends a pilot sequence .* the bs estimates using .if stop .otherwise , go to step 4 .* the bs estimates using .then , the bs finds so that is minimal . if , then the user will retransmit this new pilot . 
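a sketch of the random-jamming retransmission protocol ( alg . 1 ) , under the stated assumptions that the noise power and the large-scale coefficient beta_u are known at the bs ; the threshold , the number of attempts and all other values are illustrative choices of ours .

```python
import numpy as np

rng = np.random.default_rng(2)

def cn(shape, var=1.0):
    return np.sqrt(var / 2) * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))

def retransmit_random(M=200, tau=10, p_t=1.0, q_t=1.0, beta_u=1.0, threshold=0.1, K=4):
    """Alg. 1 sketch: retransmit a fresh random pilot until the estimated jamming
    contamination drops below the threshold or K attempts are used; the BS keeps
    the pilot observation with the smallest estimate."""
    g_u, g_j = cn(M, beta_u), cn(M)
    best_est, best_y = np.inf, None
    for _ in range(K):
        phi = cn(tau); phi /= np.linalg.norm(phi)          # fresh random pilot
        phi_j = cn(tau); phi_j /= np.linalg.norm(phi_j)    # jammer's random sequence (unknown to the BS)
        y_t = (np.sqrt(p_t) * (phi @ phi.conj()) * g_u
               + np.sqrt(q_t) * (phi_j @ phi.conj()) * g_j + cn(M))
        # law-of-large-numbers estimate of the contamination term (unit noise power assumed known)
        est = np.linalg.norm(y_t) ** 2 / M - p_t * beta_u - 1.0
        if est < best_est:
            best_est, best_y = est, y_t
        if est < threshold:
            break
    return best_est, best_y

est, _ = retransmit_random()
print("retained contamination estimate:", round(float(est), 3))
```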
since the bs requests the user to retransmit its pilotonly if of the first transmission exceeds the threshold , the achievable rate is [ r_dj ] r_=(1-)_2(1 + ) , where the maximal number of retransmissions for this case is one .in this section , we numerically evaluate the performance of the proposed anti - jamming schemes in term of the average achievable rate .the average is taken over 50000 realizations of .we assume that the transmit powers at the user and jammer satisfy and .we also assume channel uses , maximum number of transmissions .[ vstau ] illustrates the average achievable rates for different anti - jamming schemes according to the training payload ( ) .it shows that in order to achieve the best performances , the training payloads should be selected properly to balance the channel estimation quality ( is large enough ) and the resource allocated for data transmission ( is not too large ) . as expected , the proposed schemes with pilot retransmission outperform the conventional scheme ( without pilot retransmission ) .when the training sequence is very long , i.e. , is large , the proposed schemes are close to the conventional one since the probability of pilot retransmission is very small as the channel estimation quality is often good enough after the first training transmission .note that in this simulation , we choose which is not optimal in general .it is expected that the benefits of the our proposed schemes are even larger with optimal . , , .the solid curves , dotted curves ( with label `` rj '' ) , and dashed curves ( with label `` dj '' ) denote the achievable rates without pilot retransmission ( c.f . proposition 1 ) , with counter strategy for random jamming ( c.f .1 ) , and with counter strategy for deterministic jamming ( c.f . alg .2 ) , respectively.,width=283 ] figure [ vsm ] shows the average achievable rates versus the number of bs antennas . without anti - jamming strategy ,the pilot contamination can severely harm the system performance and obstruct the scaling of achievable rate with .this is consistent with our analysis in section iii .the performance can be remarkably improved by using the pilot retransmission protocols .particularly , for the case of deterministic jamming , the proposed scheme can overcome the pilot contamination bottleneck and allows the achievable rates scale with even when is large .the problem of anti - jamming for a single - user uplink massive mimo has been considered .it showed that jamming attacks could severely degrade the system performance . by exploiting the asymptotic properties of large antenna array , we proposed two pilot retransmission protocols . with our proposed schemes ,the pilot sequences and training payload could flexibly be adjusted to reduce the effect of jamming attack and improve the system performance .future work may study multi - user networks .for instance , our results can be readily extended if a max - min fairness criterion is used .then the pilot retransmission protocol design is considering the worst user who has the smallest achievable rate .by treating as gaussian additive noise , an achievable rate of the channel in ( [ yc ] ) is given by [ eq : proofrate1a ] r&= & ( 1-)_2(1 + ) .let us define [ eq : proofrate1 ] , where , , and .since and are independent zero mean random vectors , we have [ eq : e1 ] e_1 & = & p_d\{_^4}-p_d(\{_^2})^2 + q_d\{|_^h_|^2 } + & = & p_dm(m+1)_^2-p_dm^2_^2 + p_dm_(_- _ ) + & = & m_p_d_. 
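a sketch of the deterministic-jamming counter-strategy ( alg . 2 ) : the bs estimates the jamming outer product from the received pilot block and picks the codebook pilot that minimises the estimated contamination . the dft codebook and the particular jamming sequence below are our own assumptions for illustration .

```python
import numpy as np

rng = np.random.default_rng(3)

def cn(shape, var=1.0):
    return np.sqrt(var / 2) * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))

M, tau = 300, 8
p_t, q_t, beta_u, beta_j = 1.0, 1.0, 1.0, 1.0

# orthonormal pilot codebook (columns of a scaled DFT matrix) and a fixed deterministic jamming sequence
codebook = np.fft.fft(np.eye(tau)) / np.sqrt(tau)
phi = codebook[:, 0]
phi_j = (codebook[:, 0] + codebook[:, 1]) / np.sqrt(2)    # assumed jammer strategy

g_u, g_j = cn(M, beta_u), cn(M, beta_j)
Y_t = np.sqrt(p_t) * np.outer(g_u, phi) + np.sqrt(q_t) * np.outer(g_j, phi_j) + cn((M, tau))

# large-M estimate of the jamming outer product q_t beta_j phi_j^* phi_j^T
J_hat = Y_t.conj().T @ Y_t / M - p_t * beta_u * np.outer(phi.conj(), phi) - np.eye(tau)

# choose the codebook pilot minimising the estimated contamination phi^T J_hat phi^*
scores = [float(np.real(c @ J_hat @ c.conj())) for c in codebook.T]
best = int(np.argmin(scores))
print("retransmit pilot index:", best, " estimated contamination:", round(scores[best], 3))
print("true |phi^T phi_j^*|^2 per pilot:",
      [round(abs(c @ phi_j.conj()) ** 2, 2) for c in codebook.T])
```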
from ( [ guh ] ) , and using the fact that are independent and zero mean random vectors , we have e_2 & = & q_dc_^2\{|_^h_+ _ ^2_^t_^*+ _ t^h_|^2 } + & & q_dc_^2(p_t\{|_^h_|^2}+ q_t|_^t_^*|^2\{_^4 } + \{|_t^h_|^2 } ) + & = & q_dc_^2(p_tm__+q_tm(m+1)_^2|_^t_^*|^2+m_).then by using ( [ gamu ] ) , [ eq : e2 ] e_2 = m q_d_u(_+m_u |_^t_^*|^2 ) . similarly , [ eq : e3 ] e_3=\{|_^h_d|^2}=m_.
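the moment identity used in the evaluation of e_1 , namely that the fourth moment of the norm of an m-dimensional zmcscg vector with per-entry variance gamma equals m ( m + 1 ) gamma^2 , can be checked quickly by monte carlo :

```python
import numpy as np

rng = np.random.default_rng(4)
M, gamma, trials = 8, 0.7, 200_000
g = np.sqrt(gamma / 2) * (rng.standard_normal((trials, M)) + 1j * rng.standard_normal((trials, M)))
print(np.mean(np.sum(np.abs(g) ** 2, axis=1) ** 2))   # Monte Carlo estimate of E{||g||^4}
print(M * (M + 1) * gamma ** 2)                        # closed form M(M+1)gamma^2
```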
|
this letter proposes anti-jamming strategies based on pilot retransmission for a single-user uplink massive mimo system under jamming attack . a jammer is assumed to attack the system in both the training and data transmission phases . we first derive an achievable rate which enables us to analyze the effect of jamming attacks on the system performance . counter-attack strategies are then proposed to mitigate this effect under two different scenarios : random and deterministic jamming attacks . numerical results illustrate our analysis and the benefits of the proposed schemes .
|
non - local applied mathematical models based on the use of fractional derivatives in time and space are actively discussed in the literature .many models , which are used in applied physics , biology , hydrology , and finance , involve both sub - diffusion ( fractional in time ) and super - diffusion ( fractional in space ) operators .super - diffusion problems are treated as problems with a fractional power of an elliptic operator .for example , suppose that in a bounded domain on the set of functions , there is defined the operator : .we seek the solution of the problem for the equation with the fractional power of an elliptic operator : with for a given . to solve problems with the fractional power of an elliptic operator , we can apply finite volume or finite element methods oriented to using arbitrary domains discretized by irregular computational grids .the computational realization is associated with the implementation of the matrix function - vector multiplication . for such problems , different approaches are available .problems of using krylov subspace methods with the lanczos approximation when solving systems of linear equations associated with the fractional elliptic equations are discussed , e.g. , in .a comparative analysis of the contour integral method , the extended krylov subspace method , and the preassigned poles and interpolation nodes method for solving space - fractional reaction - diffusion equations is presented in .the simplest variant is associated with the explicit construction of the solution using the known eigenvalues and eigenfunctions of the elliptic operator with diagonalization of the corresponding matrix .unfortunately , all these approaches demonstrates too high computational complexity for multidimensional problems .we have proposed a computational algorithm for solving an equation with fractional powers of elliptic operators on the basis of a transition to a pseudo - parabolic equation . for the auxiliary cauchy problem ,the standard two - level schemes are applied .the computational algorithm is simple for practical use , robust , and applicable to solving a wide class of problems .a small number of pseudo - time steps is required to reach a steady - state solution .this computational algorithm for solving equations with fractional powers of operators is promising when considering transient problems .the boundary value problem for the fractional power of an elliptic operator is singularly perturbed when . to solve it numerically ,we focus on numerical methods that are designed for classical elliptic problems of convection - diffusion - reaction .in particular , the main features are taken into account via using locally refining grids .the standard strategy of goal - oriented error control for conforming finite element discretizations is applied .in a bounded polygonal domain , with the lipschitz continuous boundary , we search the solution for the problem with a fractional power of an elliptic operator .define the elliptic operator as with coefficient .the operator is defined on the set of functions that satisfy on the boundary the following conditions : in the hilbert space , we define the scalar product and norm in the standard way : for the spectral problem we have and the eigenfunctions form a basis in . therefore , let the operator be defined in the following domain : under these conditions the operator is self - adjoint and positive defined : where is the identity operator in . for , we have . 
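for a one-dimensional stand-in of the elliptic operator , the spectral definition of the fractional power can be made concrete in a few lines of numpy ; the discrete laplacian , the value of the exponent and the right-hand side below are illustrative substitutes for the general operator and data of the paper .

```python
import numpy as np

# 1-D model: A = -d^2/dx^2 with homogeneous Dirichlet conditions, discretised by
# second-order finite differences on n interior points (an illustrative stand-in)
n, alpha = 199, 0.5
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / h ** 2

lam, V = np.linalg.eigh(A)                 # eigenpairs of the SPD matrix
f = np.ones(n)                              # right-hand side f = 1
u = V @ ((lam ** -alpha) * (V.T @ f))       # u = A^{-alpha} f via the spectral definition

print("smallest eigenvalue (lower bound delta):", lam.min())   # approximately pi^2 here
print("u at the midpoint:", u[n // 2])
```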
in applications ,the value of is unknown ( the spectral problem must be solved ) .therefore , we assume that in ( [ 3 ] ) .let us assume for the fractional power of the operator we seek the solution of the problem with the fractional power of the operator .the solution satisfies the equation with for a given .the key issue in the study of the computational algorithm for solving the problem ( [ 4 ] ) is to establish the stability of the approximate solution with respect to small perturbations of the right - hand side in various norms . in view of ([ 3 ] ) , the solution of the problem ( [ 4 ] ) satisfies the a priori estimate which is valid for all .the boundary value problem for the fractional power of the elliptic operator ( [ 4 ] ) demonstrates a reduced smoothness when .for the solution , we have ( see , e.g. , ) the estimate with , is is the norm in . for the limiting solution, we have thus , a singular behavior of the solution of the problem ( [ 4 ] ) appears with and is governed by the right - hand side .to solve numerically the problem ( [ 4 ] ) , we employ finite - element approximations in space . for ( [ 1 ] ) and ( [ 2 ] ) , we define the bilinear form by ( [ 3 ] ) , we have define a subspace of finite elements .let be triangulation points for the domain .define pyramid function , where for , we have where .we have defined lagrangian finite elements of first degree , i.e. , based on the piecewise - linear approximation. we will also use lagrangian finite elements of second degree defined in a similar way .we define the discrete elliptic operator as the fractional power of the operator is defined similarly to . for the spectral problem have the domain of definition for the operator is the operator acts on a finite dimensional space defined on the domain and , similarly to ( [ 3 ] ) , where . for the fractional power of the operator , we suppose for the problem ( [ 4 ] ) , we put into the correspondence the operator equation for : where with denoting -projection onto .for the solution of the problem ( [ 6 ] ) , ( [ 7 ] ) , we obtain ( see ( [ 5 ] ) ) the estimate for all .the object of our study is associated with the development of a computational algorithm for approximate solving the singularly perturbed problem ( [ 4 ] ) . after constructing a finite element approximation, we arrive at equation ( [ 7 ] ) .features of the solution related to a boundary layer are investigated on a model singularly perturbed problem for an equation of diffusion - reaction .the key moment is associated with selecting adaptive computational grids ( triangulations ) . in view of put the problem ( [ 7 ] ) into the correspondence with solving the equation the equation ( [ 9 ] ) corresponds to solving the dirichlet problem ( see the condition ( [ 2 ] ) ) for the diffusion - reaction equation basic computational algorithms for the singularly perturbed boundary problem ( [ 2 ] ) , ( [ 10 ] ) are considered , for example , in . in terms of practical applications ,the most interesting approach is based on an adaptation of a computational grid to peculiarities of the problem solution via a posteriori error estimates . among main approaches ,we highlight the strategy of the goal - oriented error control for conforming finite element discretizations , which is applied to approximate solving boundary value problems for elliptic equations .the strategy of goal - oriented error control is based on choosing a calculated functional .the accuracy of its evaluation is tracked during computations . 
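the reduced smoothness for small fractional powers is the same boundary-layer phenomenon seen in singularly perturbed diffusion-reaction problems . the sketch below solves the standard model problem -eps^2 u'' + u = f with homogeneous dirichlet conditions ( an assumed stand-in for problem ( 10 ) , whose exact coefficients are not reproduced above ) and shows the layer width shrinking with eps .

```python
import numpy as np

def solve_reaction_diffusion(eps, n=2000, f=1.0):
    """Model singularly perturbed problem -eps^2 u'' + u = f on (0,1), u(0)=u(1)=0,
    discretised by central finite differences (illustrative stand-in only)."""
    h = 1.0 / (n + 1)
    main = 2.0 * eps ** 2 / h ** 2 + 1.0
    off = -eps ** 2 / h ** 2
    A = (np.diag(main * np.ones(n))
         + np.diag(off * np.ones(n - 1), 1) + np.diag(off * np.ones(n - 1), -1))
    return np.linalg.solve(A, f * np.ones(n))

for eps in (1e-1, 1e-2, 1e-3):
    u = solve_reaction_diffusion(eps)
    # distance from the boundary at which the solution reaches 95% of its interior value
    layer = np.argmax(u > 0.95 * u.max()) / (len(u) + 1)
    print(f"eps = {eps:7.0e}   boundary-layer width ~ {layer:.4f}")
```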
in our dirichlet problem for the second - order elliptic equation , the solution is varied drastically near the boundary .so , it seems natural to control the accuracy of calculations for the normal derivatives of the solution ( fluxes ) across the boundary or a portion of it .because of this , we put where is the outward normal to the boundary .an adaptation of a finite element mesh is based on an iterative local refinement of the grid in order to evaluate the goal functional with a given accuracy on the deriving approximate solution , i.e. , to conduct our calculations , we used the framework ( see , e.g. , ) developed for general engineering and scientific calculations via finite elements .features of the goal - oriented procedure for local refinement of the computational grid are described in in detain . here , we consider only a key idea of the adaptation strategy of finite element meshes , which is associated with selecting the goal functional .the model problem ( [ 2 ] ) , ( [ 10 ] ) is considered with in the unit square ( ) .the threshold of accuracy for calculating the functional is defined by the value of . as an initial mesh, there is used the uniform grid obtained via division by 8 intervals in each direction ( step 0 128 cells ) .first , lagrangian finite elements of first order have been used in our calculations . for this case ,the improvement of the goal functional during the iterative procedure of adaptation is illustrated by the data presented in table [ tab-1 ] .table [ tab-2 ] demonstrates values of the goal functional calculated on the final computational grid , the number of vertices of this final grid and the number of adaptation steps for solving the problem at various values of the small parameter .these numerical results demonstrate the efficiency of the proposed strategy for goal - oriented error control for conforming finite element discretizations applied to approximate solving singular perturbed problems of diffusion - reaction ( [ 2 ] ) , ( [ 10 ] ) ..calculation of the goal functional during adaptation steps [ cols="^,<,>,<,>,<,>",options="header " , ] + 0 128 cells + 1 140 cells + 2 180 cells + 3 256 cells + 4 388 cells + 5 599 cells + 6 886 cells + 7 1313 cellsthis work was supported by the russian foundation for basic research ( projects 14 - 01 - 00785 , 15 - 01 - 00026 ) .ilic , m. , liu , f. , turner , i. , anh , v. : numerical approximation of a fractional - in - space diffusion equation .ii with nonhomogeneous boundary conditions .fractional calculus and applied analysis 9(4 ) , 333349 ( 2006 ) ili , m. , turner , i.w . ,anh , v. : a numerical solution using an adaptively preconditioned lanczos method for a class of linear systems related with the fractional poisson equation . international journal of stochastic analysis 2008 , 26 pages ( 2008 ) miller , j.j.h . ,oriordan , e. , shishkin , g.i .: fitted numerical methods for singular perturbation problems : error estimates in the maximum norm for linear problems in one and two dimensions .world scientific , new jersey
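a toy one-dimensional version of the goal-oriented refinement loop : the goal functional is the boundary flux , the mesh is refined where a crude gradient-jump indicator is largest ( standing in for the dual-weighted a posteriori estimate actually used ) , and the iteration stops once the goal functional stabilises . this is a simplified assumption in plain numpy , not the api of the finite element framework referenced above .

```python
import numpy as np

def fem_solve(x, eps):
    """Linear FEM for -eps^2 u'' + u = 1, u=0 at both ends, on the (nonuniform) mesh x."""
    n = len(x)
    A = np.zeros((n, n)); b = np.zeros(n)
    for e in range(n - 1):
        h = x[e + 1] - x[e]
        Ke = eps ** 2 / h * np.array([[1, -1], [-1, 1]]) + h / 6 * np.array([[2, 1], [1, 2]])
        A[e:e + 2, e:e + 2] += Ke; b[e:e + 2] += h / 2
    A[0, :] = A[-1, :] = 0; A[0, 0] = A[-1, -1] = 1; b[0] = b[-1] = 0   # Dirichlet conditions
    return np.linalg.solve(A, b)

def goal(x, u, eps):
    # goal functional: (scaled) flux through the left boundary, -eps^2 u'(0)
    return -eps ** 2 * (u[1] - u[0]) / (x[1] - x[0])

eps, tol = 1e-2, 1e-4
x = np.linspace(0.0, 1.0, 9)                 # coarse start: 8 cells
u = fem_solve(x, eps); J = goal(x, u, eps)
for step in range(20):
    jumps = np.abs(np.diff(u))               # crude indicator: refine cells with the largest jumps
    refine = jumps >= 0.5 * jumps.max()
    new_nodes = (x[:-1][refine] + x[1:][refine]) / 2
    x = np.sort(np.concatenate([x, new_nodes]))
    u = fem_solve(x, eps); J_new = goal(x, u, eps)
    if abs(J_new - J) < tol * abs(J_new):
        break
    J = J_new
print("cells:", len(x) - 1, " goal functional:", J_new)
```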
|
a boundary value problem for a fractional power of a second-order elliptic operator is considered . the boundary value problem is singularly perturbed when . it is solved numerically using a time-dependent problem for a pseudo-parabolic equation . for the auxiliary cauchy problem , standard two-level schemes with weights are applied . numerical results are presented for a model two-dimensional boundary value problem with a fractional power of an elliptic operator . our work focuses on the solution of the boundary value problem with .
|
quantum coherence originating from the quantum superposition principle is the most fundamental quantum feature of quantum mechanics .it plays an important role in various fields such as the thermodynamics , the transport theory , the living complexes and so on . with the resource - theoretic understanding of quantum feature in quantum information ,the quantification of coherence has attracted increasing interest in recent years and has also led to the operational resource theory of the coherence .the quantitative theory also makes it possible to understand one type of quantumness ( for example , the coherence ) by the other type of quantumness such as the entanglement and the quantum correlation , _ vice versafor example , for a bipartite pure state , the maximal extra average coherence that one party could gain was shown to be exactly characterized by the concurrence assisted by the local operations and classical communication ( locc ) with the other party . ref . showed that the maximal average coherence was bounded by some type of quantum correlation in some particular reference framework . in the asymptotic regime , showed that the rate of assisted coherence distillation for pure states was equal to the coherence of assistance under the local quantum - incoherent operations and classical communication .quite recently , a unified view of quantum correlation and quantum coherence has been given in ref .in addition , if only the incoherent operations are allowed , the state with certain amount of coherence assisted by an incoherent state can be converted to an entangled state with the same amount of entanglement or a quantum - correlated state with the same amount of quantum correlation . in this paper , _ instead of the quantum correlation _ , we find , it is _ the classical correlation _ of a bipartite quantum state that limits the extra average coherence at one side induced by the unilateral measurement at the other side .we also find the necessary and sufficient condition for the zero maximal average coherence that could be gained with all the possible measurements taken into account . besides , we show , through some examples , that quantum correlation is neither sufficient nor necessary for the extra average coherence subject to a given measurement .we have selected both the basis - dependent and the basis - free coherence measure to study this question and obtain the similar conclusions . in particular, one should note that all our results are valid for the positive - operator - valued measurement ( povm ) , even though we only consider the local projective measurement in the main text .* coherence measure- * to begin with , let s first give a brief review of the measure of the quantum coherence . if a quantum state can be written as is incoherent with respect to the basis .let denotes the set of incoherent states , then the operator is the incoherent operation if it satisfies .thus a good coherence measure of a -dimensional state should be : ( p1 ) nonnegative - i.e . , and if and only if the quantum state is incoherent .( p2 ) monotonic - i.e . , for any incoherent operation ; and strongly monotonic if with .( p3 ) convex - i.e ., .even though there are many good coherence measures such as the coherence measures based on -norm , trace norm , fidelity , the relative entropy and so on , in this paper we will only employ the relative entropy to quantify the quantum coherence , i.e. 
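the relative-entropy coherence and the basis-free ( total ) coherence can both be computed directly from eigenvalues . the explicit total-coherence formula is not reproduced above ; the sketch below takes it as log2 d - s(rho ) , i.e. the maximum of the relative-entropy coherence over all bases , which is the convention we assume here .

```python
import numpy as np

def von_neumann_entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def rel_entropy_coherence(rho):
    """C(rho) = S(rho_diag) - S(rho) in the fixed (computational) basis."""
    rho_diag = np.diag(np.diag(rho))
    return von_neumann_entropy(rho_diag) - von_neumann_entropy(rho)

def total_coherence(rho):
    """Basis-free coherence taken as log2(d) - S(rho), the maximum over all bases."""
    return np.log2(rho.shape[0]) - von_neumann_entropy(rho)

# example: the maximally coherent qubit state |+><+|
plus = np.array([[0.5, 0.5], [0.5, 0.5]])
print(rel_entropy_coherence(plus), total_coherence(plus))   # both equal 1
```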
, where is the relative entropy , is the von neumann entropy and is the diagonal matrix by deleting all the off - diagonal entries of any ( we will use this notation throughout the paper ) . for simplicity, we will restrict ourselves in the computational basis throughout the paper .in contrast , the basis - free coherence ( or the total coherence ) is quantified by note that quantifies the maximal coherence of a state with all the bases taken into account . * the classical correlation as the upper bound- * now let s turn to our game sketched in fig .suppose two players , alice and bob , share a two - particle quantum state and alice performs some projective measurement on her particle and sends her outcomes to bob .bob is nt allowed to do any operation .based on alice s outcomes , bob will obtain the state with the probability .thus in the computational basis , the measurement - induced average coherence ( miac : bob s average coherence induced by alice s measurement ) is given by , the measurement - induced average total coherence ( miatc : bob s average total coherence induced by alice s measurement ) is with denoting the dimension of bob s space .with alice s measurement , the bob s average coherence is usually different from the coherence of .the extra miac and the extra miatc can be defined as it is obvious that which is impied by the convexity of the coherence , that is , with .thus our main results can be given by the following theorems .* theorem 1 * : for a bipartite quantum state , the extra miac is not greater than the extra , i.e. , _ proof ._ based on eq .( [ miac ] ) , we have . substituting the definition of miatc ( eq .( [ miatc ] ) ) into eq.([miatcda ] ) , we can obtain the \notag\\ & = & \delta\mathcal{c}_{\pi}^{t}.\end{aligned}\ ] ] the inequality holds if all bob s states and have the same diagonal entries .the proof is completed .* theorem 2 * : for a bipartite quantum state , the extra miac is upper bounded by the classical correlation of , that is , where the classical correlation is defined by with and defined by and the corresponding probability eq .( [ zong ] ) saturates if induced by the measurement achieves the classical correlation and s are the same for all .an example is the pure state where is unitary , , are the local computational basis .* theorem 3 * : the extra miatc for a bipartite quantum state is upper bounded by the classical correlation of , i.e. , the equality holds for the pure ._ from the classical correlation , we have substituting eq .( [ miatc ] ) into eq .( [ ljy ] ) , one can arrive at \notag\\ & = & \delta\mathcal{c}_{\pi}^{t}.\end{aligned}\ ] ] since both and hold for pure , the inequality ( [ ljy ] ) saturates for the pure quantum state .the proof is finished . all the above three theorems hold for any projective measurement , so if we specify the particular measurement such that the maximal extra miac or miatc can be achieved , the three theorems are also valid , which can be given in a rigorous way as : * corollary 1*. for a bipartite state with the reduced density matrix , the maximal extra miac and the maximal extra miatc satisfy and with if is incoherent , we have _ proof ._ it is obvious from theorem 1 , 2 and 3 . * corollary 2 * : if satisfies , then _ proof ._ if the initial quantum state satisfies the , we have where is the quantum discord defined by with . thus one can easily show which completes the proof . * theorem 4 . 
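a small numerical check of the measurement game : for a two-qubit pure state of the kind appearing in the equality case of theorem 2 , alice's computational-basis measurement raises bob's average coherence by exactly the classical correlation ( equal to 1 here ) . the state and measurement are our own illustrative choices .

```python
import numpy as np

def von_neumann_entropy(rho):
    ev = np.linalg.eigvalsh(rho); ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def coherence(rho):
    return von_neumann_entropy(np.diag(np.diag(rho))) - von_neumann_entropy(rho)

def partial_trace_A(rho_ab, dA, dB):
    return np.trace(rho_ab.reshape(dA, dB, dA, dB), axis1=0, axis2=2)

def miac(rho_ab, projectors_A, dA, dB):
    """Bob's average coherence induced by Alice's projective measurement."""
    avg = 0.0
    for P in projectors_A:
        K = np.kron(P, np.eye(dB))
        p = np.real(np.trace(K @ rho_ab))
        if p > 1e-12:
            rho_b = partial_trace_A(K @ rho_ab @ K, dA, dB) / p
            avg += p * coherence(rho_b)
    return avg

# example: the two-qubit pure state (|0>|+> + |1>|->)/sqrt(2), Alice measures in the Z basis
plus = np.array([1, 1]) / np.sqrt(2); minus = np.array([1, -1]) / np.sqrt(2)
psi = (np.kron([1, 0], plus) + np.kron([0, 1], minus)) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])

rho_b = partial_trace_A(rho, 2, 2)
extra = miac(rho, [P0, P1], 2, 2) - coherence(rho_b)
print("extra MIAC:", round(extra, 4))   # 1, equal to the classical correlation of this state
```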
* taking all alice s possible measurements into account , no extra miac is present if and only if the state is block - diagonal under bob s computational basis or a product state ._ consider the computational basis , the state where is hermitian and positive and .it is obvious that if for all , the states bob obtains are always diagonal subject to .that is , no extra miac can be obtained .if is a product state which implies , it means that the upper bound of the extra miac is zero based on theorem 2 .so no extra miac could be obtained . on the contrary ,no extra miac includes two cases : one is that the final average coherence is zero , and the other is that the final nonzero average coherence is not increased compared with the coherence of .the first case means that alice performs a measurement ( optimal for the maximal average coherence ) such that bob obtains an ensemble where with all diagonal .thus can be written as where is diagonal and has no nonzero diagonal entries .assume there is at least one nonzero matrix among all , then one can always select a projector such that .this means that bob can get a state with some coherence . in other words, is not the optimal measurement , which is a contradiction .so we have . under this condition, one can find from eq .( [ fanz ] ) that is block - diagonal subject to bob s basis .the second case implies that there exists a decomposition ( optimal for the maximal average coherence ) with such that which , however , is only satisfied when all are the same for nonzero , since is a convex function .thus we have which leads to .now we claim that is also optimal for the classical correlation .this can be seen as follows .if there exists another decomposition for the classical correlation , can not be the same , which will lead to the larger average coherence due to the convexity of .this is a contradiction .so is the optimal decomposition for the classical correlation , that is , which implies is a product state .the proof is finished .* theorem 5 .* consider all alice s possible measurements , no extra miatc is present if and only if the state is a product state ._ a product state has no classical correlation , i.e. , which implies that the upper bound of the extra miatc is zero in terms of theorem 3 .thus no extra miatc could be obtained . on the contrary ,no extra miatc implies that , namely .similar to the proof of theorem 4 , one can find that which corresponds to a product state .the proof is finished . * examples- *the above theorems mainly show that , even though the coherence is the quantum feature of a quantum system , in the particular game as sketched in fig .1 , the extra average coherence obtained by bob with the assistance of alice s measurement is well bounded by the classical correlation of their shared state , instead of the quantum correlation .however , one can find that the necessity for all the attainable bounds is to share the pure states which happen to own the equal quantum and classical correlations .therefore , one could think that the classical correlation is trivial in contrast to the quantum correlation ( e.g. , quantum correlation serves as a tight upper bound , but is less than classical correlation ) .the following examples show that it is not the case ._ example.1-the extra average coherence could be induced in classical - classical states . _suppose a bipartite state is given by with , the reduced quantum state is incoherent .so the classical correlation is equal to the total correlation , i.e. 
, if the subsystem is measured by the projective measurements , subsystem b will collapse to the state with the probability .the extra miac and the extra miatc subject to the measurement can be calculated as if the subsystem is measured by the projective measurement , subsystem b will collapse to the state with the equal probability .so there is no extra miac and miatc .this example shows that the extra average coherence is well bounded by the classical correlation .in particular , it also shows that the extra average coherence could exist even though not any quantum correlation is present ._ example 2 .no extra average coherence could be induced in the classical - quantum state ._ set the classical - quantum state as with and .the reduced quantum states are given by since there is no quantum correlation subject to subsystem a , the corresponding classical correlation is directly determined by the total correlation as suppose that the projective measurement is performed on subsystem a while the subsystem b will collapse on the state with the equal probability .it is obvious that there is no extra average coherence ( ) gained by this measurement .however , if the projective measurement is selected as , subsystem b will be at the state and with the equal probability .therefore , the _ nonzero _ extra average coherence can be obtained as with given by eq .( [ jvj ] ) .this example shows that an improper measurement could induce no extra average coherence even though quantum correlation is absent ._ example 3 .no extra average coherence could be induced in the quantum - classical state ._ suppose the quantum - classical state is given by with and .it is easy to see that the reduced quantum state is incoherent , i.e. , .the classical correlation is if the projective measurement is used on subsystem a , subsystem b will be on the states and with the corresponding probability and .thus a simple calculation can show however , if we select another projective measurement where and with , subsystem b will collapse to where and with the probability and .it is easy to demonstrate that for which further leads to .thus there is no extra average coherence can be gained in terms of this measurement constraint , that is , similar to the second example , an improper measurement could induce no extra average coherence even though quantum correlation is present ._ example 4 .the classical correlation can be tighter than the quantum correlation ._ consider a bell - diagonal state where is the pauli matrices . is symmetric under exchanging the subsystems .the classical and the quantum correlations are respectively given by \nonumber\\ & -&\frac{1-c}{2}\log(1-c)-\frac{1+c}{2}\log(1+c),\end{aligned}\ ] ] with .suppose the projective measurement with is performed on subsystem a , subsystem b will collapse , with the equal probability , on the states where and .in addition , it is obvious that the reduced quantum states which implies .so the extra average coherence can be directly given by the miac or miatc as with .( solid line ) , the quantum correlation ( dotted - dashed line ) , the ( extra ) miatc ( dotted line ) and the ( extra ) miac ( dashed line ) versus for the bell - diagonal state . 
] in fig .2 , we plot the quantum and classical correlations and the extra average coherence with the varying .the parameters are chosen as and .the solid line , dotted - dashed line , dotted line and dashed line correspond to the classical correlation , the quantum correlation , the miatc and the miac , respectively .one can find that the classical correlation serves as the good upper bound for both the ( extra ) miatc and the ( extra ) miac and meanwhile , the ( extra ) miatc is always greater than the ( extra ) miatc .however , the quantum correlation crossing the classical correlation , the ( extra ) miatc and the ( extra ) miac with the increasing can not act as a good bound .before the end , we would like to emphasize that all the results in the paper are valid for the povms , since it was shown that the classical correlations always attained by the rank - one povm .in addition , we have claimed that bob is nt allowed to do any operation , which is mainly for the basis - dependent coherence measure .in fact , when we consider the basis - free coherence measure , it is equivalent to allowing bob to select the optimal unitary operations on his particle . in this case ,theorem 3 implies that for pure states the extra miatc is the exact quantum entanglement of their shared state ( von neumann entropy of the reduced density matrix ) .thus the coherence also provides an operational meaning for the pure - state entanglement under locc . to sum up ,we employ the basis - dependent and basis - free coherence measure to study the extra average coherence induced by a unilateral quantum measurement . despite that the coherence is the most fundamental quantum feature , we find that the extra average coherence is limited by the classical correlation instead of the quantum correlation . in addition , we find the necessary and sufficient condition for the zero maximal average coherence . we also show that the quantum correlation is neither sufficient nor necessary for the extra average coherence by some examples .* proof of theorem 2.- * we will give the main proof the theorem 2 .in the main text .following eq .( [ miatcda ] ) , we have the second inequality holds due to the optimal implied in eq .( [ jingdian ] ) .so eq . ( [ zong ] ) is satisfied .in addition , eq .( [ zong ] ) saturates if both eq .( [ miatcda ] ) and eq .( 15 ) saturate .( 15 ) means that induced by the measurement achieves the classical correlation and eq .( [ miatcda ] ) implies s are the same for all . in order to find an explicit example ,suppose with satisfying .it is obvious is incoherent with respect to the basis .it means in order to select a proper measurement , alice first applies a unitary operation such that with denoting the dimension of the subsystem a. thus becomes on , bob will obtain his state as with the probability corresponding to the measurement outcome .bob s miac can be given by \notag\\ & = & \sum_{\omega}p_{\omega}(-\sum_{j}\lambda_{j}^{2}\ln \lambda_{j}^{2 } ) \notag\\ & = & s(\sigma_{b}),\label{phim}\end{aligned}\ ] ] with .for the pure state , it can prove that the classical correlation is exactly given by .( [ rmb ] ) , ( [ phim ] ) and ( [ phid ] ) show that eq .( [ zong ] ) saturates for the pure state given by eq .( [ ps ] ) . the proof is finished . 99 berg , j. catalytic coherence .* 113 * , 150402 ( 2014 ) .narasimhachar , v. & gour , g. low - temperature thermodynamics with quantum coherence .commun . _ * 6 * , 7689 ( 2015 ) .wikliski , p. , studziski , m. , horodecki , m. & oppenheim , j. 
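the bell-diagonal example can also be reproduced numerically without the closed-form expressions ( which are partly elided above ) , by a grid search over alice's measurement directions ; the c-parameters below are assumed values of ours , not those used for fig . 2 .

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

def entropy(rho):
    ev = np.linalg.eigvalsh(rho); ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def coherence(rho):
    return entropy(np.diag(np.diag(rho))) - entropy(rho)

def bell_diagonal(c1, c2, c3):
    return 0.25 * (np.kron(I2, I2) + c1 * np.kron(sx, sx) + c2 * np.kron(sy, sy) + c3 * np.kron(sz, sz))

def bob_states(rho, n):
    """Bob's conditional states and probabilities when Alice measures along the unit vector n."""
    out = []
    for s in (+1, -1):
        P = 0.5 * (I2 + s * (n[0] * sx + n[1] * sy + n[2] * sz))
        K = np.kron(P, I2)
        p = np.real(np.trace(K @ rho))
        rho_b = np.trace((K @ rho @ K).reshape(2, 2, 2, 2), axis1=0, axis2=2) / p
        out.append((p, rho_b))
    return out

c = (0.9, 0.6, 0.3)                 # assumed parameters
rho = bell_diagonal(*c)
rho_b = np.trace(rho.reshape(2, 2, 2, 2), axis1=0, axis2=2)

best_J, best_extra = 0.0, 0.0        # classical correlation and maximal extra MIAC by grid search
for th in np.linspace(0, np.pi, 60):
    for ph in np.linspace(0, 2 * np.pi, 120):
        n = (np.sin(th) * np.cos(ph), np.sin(th) * np.sin(ph), np.cos(th))
        cond = bob_states(rho, n)
        J = entropy(rho_b) - sum(p * entropy(r) for p, r in cond)
        extra = sum(p * coherence(r) for p, r in cond) - coherence(rho_b)
        best_J, best_extra = max(best_J, J), max(best_extra, extra)

print("classical correlation ~", round(best_J, 3), " max extra MIAC ~", round(best_extra, 3))
# consistent with theorem 2: the maximal extra MIAC never exceeds the classical correlation
```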
towards fully quantum second laws of thermodynamics : limitations on the evolution of quantum coherences .lett . _ * 115 * , 210403 ( 2015 ) .lostaglio , m. , jennings , d. & rudolph , t. description of quantum coherence in thermodynamic processes requires constraints beyond free energy .commun . _ * 6 * , 6383 ( 2015 ) .scully , m. o. et al .quantum heat engine power can be increased by noise - induced coherence .u. s. a. _ * 108 * , 15097 ( 2011 ) .scully , m. o. , zubairy , m. s. , agarwal , g. s. & walther , h. extracting work from a single heat bath via vanishing quantum coherence . _science _ * 299 * , 862 ( 2003 ) .levi , f. & mintert , f. a. quantitative theory of coherent delocalization ._ new j. phys ._ 16 , 033007 ( 2014 ) .rebentrost , p. , mohseni , m. & aspuru - guzik , a. role of quantum coherence and environmental fluctuations in chromophoric energy transport ._ j. phys .113 , 9942 ( 2009 ) .witt , b. & mintert , f. stationary quantum coherence and transport in disordered networks ._ new j. phys . _ 15 , 093020 ( 2013 ) .wang , l. & yu , c. s. the roles of a quantum channel on a quantum state .phys . _ * 53 * , 715 ( 2014 ) .plenio , m. b. & huelga , s. f. dephasing - assisted transport : quantum networks and biomolecules ._ new j. phys . _ * 10 * , 113019 ( 2008 ) .lloyd , s. quantum coherence in biological systems . _ j. phys .ser . _ * 302 * , 012037 ( 2011 ) .huelga , s. f. & plenio , m. b. vibrations , quanta and biology ._ contemp .phys . _ * 54 * , 181 ( 2013 ) .baumgratz , t. , cramer , m. & plenio , m. b. quantifying coherence .lett . _ * 113 * , 140401 ( 2014 ) .girolami , d. observable measure of quantum coherence in finite dimensional systems .lett . _ * 113 * , 170401 ( 2014 ) .pires , d. p. , cleri , l. c. & soares - pinto , d. o. geometric lower bound for a quantum coherence measure .rev . a _ * 91 * , 042330 ( 2015 ) . shao , l. h. , xi , z. j. , fan , h. & li , y. m. fidelity and trace - norm distances for quantifying coherence .a _ * 91 * , 042120 ( 2015 ) .rana , s. , parashar , p. & lewenstein , m. trace - distance measure of coherence ._ phys . rev . a _ * 93 * , 012110 ( 2016 ). zhang , y. r. , shao , l. h. , li , y. m. & fan , h .quantifying coherence in infinite - dimensional systems .* 93 * , 012334 ( 2016 ) .winter , a. & yang , d .operational resoures theory of coherence .lett . _ * 116 * , 120404 ( 2016 ) .yu , c. s. & song , h. s. bipartite concurrence and localized coherence .a _ * 80 * , 022324 ( 2009 ) .hu , x. y. & fan , h. extracting quantum coherence via steering ._ * 6 * , 34380 ( 2016 ) .chitambar , e. et al .assisted distillation of quantum coherence .lett . _ * 116 * , 070402 ( 2016 ) .tan , k. c. , kwon , h. , park , c .- y . &jeong , h. unified view of quantum correlations and quantum coherence .rev . a _ * 94 * , 022329 ( 2016 ) .yu , c. s. , zhang , y. & zhao , h. q. quantum correlation via quantum coherence .proc . _ * 13 * , 1437 ( 2014 ) .yao , y. , xiao , x. , ge , l. & sun , c. p. quantum coherence in multipartite systems .a _ * 92 * , 022112 ( 2015 ) .xi , z. j. , li , y. m. & fan , h. quantum coherence and correlation in quantum system .rep . _ * 5 * , 10922 ( 2015 ) .cheng , s. m. & hall , m. j. w. complementarity relations for quantum coherence .a _ * 92 * , 042101 ( 2015 ) .singh , u. , bera , m. n. , dhar , h. s. & pati , a. k. maximally coherent mixed states : complementarity between maximal coherence and mixedness .rev . a _ * 91 * , 052115 ( 2015 ) .singh , u. , zhang , l. & pati , a. k. 
average coherence and its typicality for random pure state .rev . a _ * 93 * , 032125 ( 2016 ). du , s. p. , bai , z. f. & guo , y. conditions for coherence transformations under incohernet operations ._ phys . rev . a _ * 91 * , 052120 ( 2015 ) . streltsov , a. et al .measuring quantum coherence with entanglement .lett . _ * 115 * , 020403 ( 2015 ) .ma , j. j. et al .coverting coherence to quantum correlation .lett . _ * 116 * , 160407 ( 2016 ) .yu , c. s. , yang s. r. & guo , b. q. total quantum coherence and its applications ._ quantm inf ._ * 15 * , 3773 ( 2016 ) .henderson , l. & vedral , v. classical , quantum and total correlations ._ j. phys .* 34 * , 6899 ( 2001 ) .xi , z. j. , lu , x. m. , wang , x. g. & li , y. m. necessary and sufficient condition for saturating the upper bound of quantum discord .rev . a _ * 85 * , 032109 ( 2012 ) .luo , s. l. quantum discord for two - qubit systems .rev . a _ * 77 * , 042303 ( 2008 ) . datta , a. studies on the role of entanglement in mixed - state quantum computation ._ arxiv : _ 0807.4490 [ quant - ph ] .this work was supported by the national natural science foundation of china , under grant no.11375036 , the xinghai scholar cultivation plan and the fundamental research funds for the central universities under grant no .j.z . and s .- r.y . and y.z . and c .- s.y . analyzed the results and wrote the main manuscript text .all authors reviewed the manuscript .competing financial interests : the authors declare no competing financial interests .
|
coherence is the most fundamental quantum feature in quantum mechanics . for a bipartite quantum state , if a measurement is performed on one party , the other party , based on the measurement outcomes , will collapse to a corresponding state with some probability and hence gain the average coherence . it is shown that the average coherence is not less than the coherence of its reduced density matrix . in particular , it is very surprising that the extra average coherence ( and the maximal extra average coherence with all the possible measurements taken into account ) is upper bounded by the classical correlation of the bipartite state instead of the quantum correlation . we also find the sufficient and necessary condition for the null maximal extra average coherence . some examples demonstrate the relation and , moreover , show that quantum correlation is neither sufficient nor necessary for the nonzero extra average coherence within a given measurement . in addition , the similar conclusions are drawn for both the basis - dependent and the basis - free coherence measure .
|
debt analysis has received recently a lot of attention from the research community in an effort to explain the `` nature '' of consumer indebtness that has emerged recently in the developed countries . among the three fundamental research questions posed in the analysis of this social problem the identification of factors that affect the level of consumer debt . answering the latter, ongoing research revealed a series of diverse factors , economic , demographic and psychological , that are related to how deep a consumers goes in debt providing a deep insight in the `` nature '' of this problem .the discovery of these factors was mainly carried out by traditional statistical models like linear regression which has the ability to reveal linear associations between variables . however , as common as the utilisation of these models in the field of economics might be , so is their limited ability to deal with characteristics that data from real world applications possess .their difficulty to handle non - linearity in the data makes them unable to solve non - linear classification problems , while the colinearity between the independent variables can lead to incorrect identifications of most predictors .these limitations make them inappropriate to model successfully consumer indebtness since socio - economic datasets exhibit strong non - linearity among several other inconsistencies .it also raises questions regarding the validity of the relationships uncovered by these models as their small predictive accuracy can not guarantee the identification of the correct predictors .in addition to this , most of the research has been conducted on a limited number of observations making hard to consider the findings as representative . as the need to develop fairly accurate quantitative prediction models becomes apparent , we argue that the field of economics can benefit from the variety of techniques and models computational intelligence has to offer .such a computational model is the neural networks , a system of interconnected `` neurons '' , inspired by the functioning of the central nervous system .neural networks are capable of machine learning and not only they manage to achieve remarkable prediction accuracy by successfully handling non - linearity in the data but their flexibility in the design of their topology also offers a way to incorporate important steps of the data mining process into a regression model .the potential of data mining is evident in the numerous ways to pre - process the data in order to tackle any inconsistencies they may contain and to explore the relationships in the data , that be can combined in an elaborate process for knowledge discovery in any difficult real world problem like consumer indebtness . therefore in order to evaluatethe impact neural networks can make on modelling the consumer debt in a large socio - economic dataset in this work , we compare their performance against random forests and linear regression . 
in the same experimental setupwe also evaluate the contribution on the performance of these models of a series of data mining techniques like the transformations performed on the data in order to deal with the inconsistencies they contain , such noise , high dimensionality and the presence of outliers and the a classification of debtors identified by clustering .finally we take advantage of the ability to design the topology of neural networks and we introduce a novel way to incorporate into the topology meaningful information that derives from explanatory techniques applied on data , like clustering and factor analysis , and we assess its performance .our results show that the transformations on the data improve in a great extend the accuracy of all three regression models and that neural networks achieve the best performance . the contribution of the classifications provided by clustering remains argumentative when it is used as an extra variable but proves to be very useful when it is incorporated in an appropriate way in the topology of the neural networks which leads to a further improvement in the performance of the model .therefore , we believe that this work not only serves as a comparison between neural networks and other regression models but it also verifies the great of potential of neural networks that can be strong predictors and take advantage of significant results from data mining methods at the same time , sketching a complete framework for the consumer debt analysis including necessary transformations of data , exploratory models and reliable regression model that it may extend to any real world application problem that contains a dataset with similar inconsistencies and characteristics as this one .the rest of the paper is organised as following . in the 2nd sectionwe discuss the related work on the level of debt predictions and on the models we use for our purposes . in the 3rd sectionwe introduce briefly the cccs dataset together with transformations performed on its attributes and the clustering approach that identified classes of debtors .we then present the models in the 4th section whereas in the 5th we proceed with the details of the experimental set up .finally in the 6th section we analyse the results of our experiments and we conclude our work in the 7th section .statistical models and linear regression are primarily used for the level of debt prediction in the literature .a significant amount of the work is summarised in where they also provide a model for separating debtors from non - debtors .however , their suggested logit model suffers from a low ( 33% ) . in a similar way , in the proposed models that take into account psychological factors as predictors , exhibit even lower in their probit models ( around 10% ) .surprisingly enough the linear regression model presented in achieves a remarkable 66% but as it is explained in , this big proportion of variance explained , is due to the small number of respondents . a linear regression model built for estimating the outstanding credit card balance in exhibits 30% . 
based on these results and the fact that the models are built on a limited number of observations , we are unsure whether to regard these findings as reliablesince the suggested models fail to explain the variance that exists in the data and the small number of instances can not be considered representative enough .this is further enhanced by the criticism statistical techniques receive in , where it is argued that they have reached their limitations in applications with datasets that contain non - linearity in the data , like an indebtness dataset . on the other hand , random forests ,a popular machine learning algorithm for data mining , has been shown to be able to handle non - linearities in the data .they have received a lot of attention in biostatistics and other fields due to their ability to handle a large number of variables with a relatively small number of observations and because they provide a way to identify variable importance .they manage to demonstrate exceptional performance with only one parameter and their regression has been proven not to overfit the data . an interesting application of random forests is in where a model measuring the impact of the reviews of products in sales and perceived usefulness was constructed .similarly , neural networks exhibit better generalisation than linear regression models , allow for extrapolation and can handle non - linearity posing as strong predictors .their huge learning capacity has led many of researchers to believe that they are able to approximate any function that is encountered in applications .they have been shown to outperform linear regression models and in economics they have been successfully used for stock performance modelling and for credit risk assessment . a very interesting ability they possess is the ability to fully parametrise the topology of the network introducing a concept of logical structure among the neurons that consist the network .this has been exploited in where factor analysis is utilised in order to define the topology of the network and although their result has shown not to actually improve the precision of the existing neural network , it manages to speed up the convergence of the algorithm .the same idea has been adopted by us in this work for further experimentation in our dataset and has been extended in order to include further information that derives from clustering the data . as neural networks have not been used so far for the purposes of consumer debt analysis , in this work we exploit the many advantages they offer in order to achieve a better modelling of consumer indebtness than the existing ones , supporting their utilisation in the field of economics , in applications of which they already have replaced traditional econometric models . 
.the cccs dataset , introduced in , is a socioeconomic crossectional dataset based on the data provided by the consumer credit counseling service .its 58 attributes contain information about approximately 70000 clients who contacted the service between the years 2004 and 2008 in order to require advice about how they can overcome their debts .the information was gathered through interviews when each client first contacted the service and it varies from standard demographics to financial details , aggregated spending in categories and debt details .the attributes of interest for the purpose of consumer debt analysis are limited to demographics , expenditure and financial attributes as they can be seen in table i together with their description .ll & + pid & individual identifier + * demographics * + age & age of person + mstat & marital status + empstat & employment status + male & sex of person + hstatus & housing status + ndep & number of dependants in household + nadults & number of adults in household + * financial attributes * + udebt & total value of unsecured debt + mortdebt & total value of mortgage debt + hvalue & total value all housing owned + finasset & total value of financial assets + carvalue & resale value of car + income & total monthly income + * expenditure * + clothing & total monthly spending on clothing + travel & total monthly spending on travel + food & total monthly spending on food + services & total monthly spending on utilities + housing & total monthly spending on housing + motoring & total monthly spending on motoring + leisure & total monthly spending on leisure + priority & total monthly spending on priority debt + sundries & total monthly spending on sundries + sempspend & total monthly self - employed spending + other & total other spending + * debt details * + ndebtitems & number of debt items + like other real world dataset , cccs contains noise and outliers , while at the same time it suffers from high dimensionality . 
in order to tackle the aforementioned difficulties a series of transformations stepswere performed in an earlier work that proved to be beneficial for the unsupervised approach of this dataset .more precisely , homogeneity analysis ( homals ) was utilised in order to map the categorical demographic data , significant attributes concerning the consumer debt analysis , into two - dimensional coordinates together with a factor analysis on the financial attributes and a clustering on the correlation of the spending items .these transformations reduced the dimensionality to more compact attributes , removed noise and outliers , provided a sense of interpretability and improved the quality of the clustering .a summary of the transformations can be seen in fig.1 whereas the new nine transformed attributes include two spatial coordinates that discriminate the demographic variables , three financial factors that summarise all the informations that lies in financial attributes and four behavioural spending clusters that characterise spending in necessity , household , excessive and leisure .finally , in these transformations were proved to be useful for the clustering of a random sample of 10000 debtors from the cccs dataset that managed to classify 8370 debtors in seven classes with distinct characteristics .the characteristics of these classes can be seen in the table ii , which also includes the 1630 debtors that remained unclassified .further information regarding the dataset itself , the suggested transformations and the clustering results can be found in as it is not the subject of this work .our objective is to use the information that derives from the exploratory research that was conducted in , meaning the transformed attributes and classifications , in order to evaluate their contribution in the level of debt prediction .ll < p3 cm & & + 1 & 2301 & young single unemployed debtors with low income , debt and spending + 2 & 1440 & average income - spending- debt debtors usually p / t employed and cohabiting with high spending in clothing and food + 3 & 1033 & high income - debt - spending debtors , usually self - employed and with expensive houses + 4 & 948 & older and retired debtors with average income - spending and low levels of debt + 5 & 507 & high income - debt - spending debtors with cheap houses + 6 & 1588 & average income - spending - debt debtors usually p / t employed but single , divorced or separated + 7 & 553 & old and retired debtors with low income , debt and spending , other marital status + 8 & 1630 & unclassified +linear regression is the simplest of the statistical models and it tries to model the relationship between a dependant variable and one or more explanatory variables .as someone can refer from the name , linear regression assumes a linear relationship between the dependant variable and the explanatory variables and tries to fit a straight line in the data . more formallylinear regression is defined as : where , , .... , are the coefficients and ,j=1, .... p denote p regressor variables . finally denotes the error term which is assumed to be uncorrelated to the regressors and have mean and variance equal to 0 .the model takes as input the observations and tries to fit the straight line by estimating the parameters ( coefficients and error term ) . 
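a sketch of the pre-processing pipeline on a hypothetical frame with the cccs-style columns : a factor analysis compresses the financial attributes into three factors , and the spending items are grouped by the correlation of their columns . homals on the demographic variables is omitted here , and scikit-learn / scipy stand in for the tools actually used in the cited work ; the synthetic data is a placeholder .

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

financial = ["udebt", "mortdebt", "hvalue", "finasset", "carvalue", "income"]
spending = ["clothing", "travel", "food", "services", "housing",
            "motoring", "leisure", "priority", "sundries", "sempspend", "other"]
df = pd.DataFrame(np.abs(np.random.default_rng(0).normal(size=(1000, len(financial + spending)))),
                  columns=financial + spending)

# three financial factors summarising the monetary attributes
X_fin = StandardScaler().fit_transform(np.log1p(df[financial]))
factors = FactorAnalysis(n_components=3, random_state=0).fit_transform(X_fin)

# group spending items by the correlation of their columns, then average within each group
corr = df[spending].corr().values
Z = linkage(squareform(1 - corr, checks=False), method="average")
groups = fcluster(Z, t=4, criterion="maxclust")
# assumes each of the four groups is non-empty
spend_scores = np.column_stack([df[np.array(spending)[groups == g]].mean(axis=1) for g in range(1, 5)])

transformed = np.hstack([factors, spend_scores])   # the two HOMALS coordinates would be appended here
print(transformed.shape)
```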
a widely used algorithm for estimating the parameters is ordinary least squares (ols), which minimises the sum of squared residuals. random forest is an example of ensemble learning: it generates many predictors and aggregates their results. the random forest method builds a large number of decision trees (for classification) or regression trees (for regression) from different random samples of the data. the samples are drawn with bootstrap techniques that allow resampling of instances; each tree is grown on its own sample and its accuracy is assessed on the instances left out of that sample. the difference from a single decision tree is that, when a split on a node is decided, only a randomly chosen subset of the attributes participates as candidates, not all of them. once the forest is built, a prediction is made by aggregating the votes of all the trees in the case of classification, and by averaging the outputs of all the trees in the case of regression. the method requires the specification of only two parameters, the size of the forest and the number of predictors that are candidates at each node split, and much of its success is due to this simplicity. the randomness it injects into the process makes the model robust against overfitting. a neural network is a directed graph consisting of nodes and edges that are organised in layers. as it models a relationship between the predictors and the response variables, the input layer consists of nodes that represent the predictors, and the output layer of nodes that represent the response variables if there is more than one. one or more hidden layers with an arbitrary number of nodes connect these two layers. each layer is fully connected to the next, and each edge applies a weight to the value it receives and passes it on to the next node. thus, in each node the weighted sum of all the nodes of the previous layer is computed, an intercept is added, and the result is fed into an activation function and passed to the next layer. the activation function is usually non-linear, such as the sigmoid function or the hyperbolic tangent. the simplest neural network (the perceptron) has n inputs and one output and is identical to logistic regression, as it is a non-linear function of a linear aggregation of the input. with this in mind, a neural network with more than one node in the hidden layer can be seen as an extension of generalised linear models. a neural network takes as parameters the starting weights of the edges, which are usually initialised randomly, and the network topology, that is, the organisation of the nodes in the hidden layers.
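as a concrete illustration of the two baseline regressors just described, the following is a minimal sketch, assuming hypothetical column names for the transformed cccs attributes and the udebt response; it is not the authors' original r / caret implementation.

```python
# illustrative sketch only: fitting the two baseline regressors discussed above.
# the file name and column names (e.g. "udebt") are placeholders, not taken from
# the actual dataset files.
import pandas as pd
from sklearn.linear_model import LinearRegression   # fitted by ordinary least squares
from sklearn.ensemble import RandomForestRegressor

df = pd.read_csv("cccs_transformed.csv")             # hypothetical input file
X = df.drop(columns=["udebt"])                        # predictors (transformed attributes)
y = df["udebt"]                                       # response: total unsecured debt

# ols fits y = b0 + sum_j b_j * x_j + eps, with the error term eps having zero mean
ols = LinearRegression().fit(X, y)

# random forest with 500 trees; max_features limits the candidate attributes
# considered at each split (the text uses m/3 of the predictors for regression)
rf = RandomForestRegressor(n_estimators=500, max_features=0.33,
                           random_state=0).fit(X, y)

print("ols coefficients:", dict(zip(X.columns, ols.coef_)))
print("rf feature importances:", dict(zip(X.columns, rf.feature_importances_)))
```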
then the model tries to find the optimal weights of the edges by applying a learning algorithm, such as backpropagation, to the data. backpropagation tries to minimise the difference between the value predicted by the model and the actual value. it does so by computing this difference and then, following the chain rule, moving from the output towards the input and adapting all the relevant weights according to a specific learning rate. resilient backpropagation, which is argued to be more suitable for regression purposes, is similar to backpropagation, but instead of subtracting a fraction of the gradient of the error function as backpropagation does, it increases a weight if the corresponding gradient is negative and decreases it if the gradient is positive. it updates the weights using only the sign of the gradient and some predefined update values. the update value is reduced if the gradient changes sign with respect to the previous update (which indicates that the last step jumped over a minimum) and is increased if the sign stays the same. this way it ensures that a local minimum will not be missed. neural networks tend to overfit the data, a fact that raises the question of how they can be used properly. a common technique for avoiding overfitting is to train the model on a subset of the data and validate it on the remaining data. a very popular technique in supervised learning for this purpose is 10-fold cross validation, where the data is divided into ten folds and, for each fold, a model is trained on the other nine folds and validated on the held-out fold. this is how the accuracy of the model is evaluated and thus how the appropriate number of hidden layers and hidden nodes is chosen, since this is not known beforehand. usually several topologies are tested and the one that minimises the error between the predicted and the actual values on the validation set is selected.
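the topology-selection procedure described above (testing 1 to 10 hidden nodes and keeping the one with the lowest cross-validated error) can be sketched as follows; note that sklearn's MLPRegressor does not provide resilient backpropagation, so a standard gradient-based solver stands in for it here purely for illustration.

```python
# illustrative sketch only: choosing the number of hidden nodes by 10-fold
# cross validation, minimising RMSE, as described above.
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score, KFold

def best_hidden_size(X, y, max_nodes=10):
    cv = KFold(n_splits=10, shuffle=True, random_state=0)
    scores = {}
    for h in range(1, max_nodes + 1):
        net = MLPRegressor(hidden_layer_sizes=(h,), activation="logistic",
                           max_iter=2000, random_state=0)
        # cross_val_score returns negative RMSE, so negate it back
        rmse = -cross_val_score(net, X, y, cv=cv,
                                scoring="neg_root_mean_squared_error").mean()
        scores[h] = rmse
    best = min(scores, key=scores.get)    # topology with the smallest RMSE
    return best, scores
```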
the flexibility that neural networks provide in designing the topologycan be exploited to incorporate knowledge extracted by unsupervised learning performed on the data .thus , in this work we tried to organise the neurons in the hidden layers based on the knowledge extracted by factor analysis and clustering .the idea behind this was based on the striking resemblance neural networks have with latent factor models , like factor analysis , and on the assumption that the classes of debtors identified by clustering define different relationships between the response variable and the predictors .factor analysis is a common latent factor model that organises the variables of a dataset into a smaller number of hidden factors that would still contain most of the information from the initial variables .this way neurons in the first hidden layer can be depicted as latent factors that summarise the input .the only difference with factor analysis , a widely used latent factor model , is that the relationship between the input variables and the factors is non - linear .this non - linear relationship would also be able to model the linear relationships between the input variables and the neurons identified by factor analysis .this idea has been incorporated with the algorithm proposed in .clustering on the other hand divides the debtors into classes with distinct characteristics .as these classes may model different relationships between the response variables and the explanatory variables this could be introduced in the neural network as an extra hidden layer with as many neurons as the classes .this would create different functions for each class that will be combined in a more complex relationship in order to produce the final modelling .the intuition is something similar to clusterwise regression but the combination of different functions for each class is more fuzzy since they are included in a neural network and not hard .these two ideas form this novel method to use neural networks that we named topology defined neural network ( topdnn ) .our aim is to test topdnn in the socio - economic context but its disciplines can be extended in creating neural networks models for any real world application .the aim of this work is to evaluate the performance of neural networks as a regression model that can predict the amount of unsecured debts ( _ udebt _ ) a debtor in the cccs has by using the rest of the variables as predictors .for this reason we compare its performance against different regression models with different characteristics , like linear regression , random forest regression .furthermore we check whether a series of transformations we performed in and the classification of debtors we provided in the same work can improve the performance of the regression so that they be incorporated in the final neural network we aim to develop .since these models try to optimise different criteria and they are internally validated on different measures when they are fitted into data , we needed to test all these models under a common framework .so we use the 10-fold cross validation as the method to compare the different models and we selected rmse and as the evaluation criteria .10-fold cross validation is a standard method for evaluating models in unsupervised learning and it also allows neural networks to avoid data overfitting providing more representative results for their case . 
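a minimal sketch of how the topology idea behind topdnn might be instantiated is given below; the layer widths (three latent factors, seven debtor classes) are assumptions taken from the description above, and the sketch is an illustration rather than the authors' implementation.

```python
# illustrative sketch only, not the authors' topdnn code: the hidden layer
# widths are fixed by earlier unsupervised results, assuming one neuron per
# latent factor found by factor analysis and one neuron per debtor class
# found by clustering.
from sklearn.neural_network import MLPRegressor

def topology_defined_net(n_factors, n_classes):
    """first hidden layer: one neuron per latent factor;
    second hidden layer: one neuron per debtor class."""
    return MLPRegressor(hidden_layer_sizes=(n_factors, n_classes),
                        activation="logistic", max_iter=2000, random_state=0)

# e.g. three financial factors and seven debtor classes (assumed from the text):
net = topology_defined_net(n_factors=3, n_classes=7)
```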
measures the percentage of variance that is explained by the model and it a standardised measure taking values from 0 to 1 with 1 being a perfect fit .the root mean square error ( rmse ) measures the difference between the predicted values from the model and the actual values .it is defined as : where n is the number of observations , is the observed value of the observation i and is the calculated value of the observation i. the best model will minimise the rmse .for model training we use a random sample of 10000 debtors from the cccs dataset , a subset of dataset that contains no missing values and we already had performed the transformations on and divided in classes .all the models are built in r using the _ caret _ package and for linear regression we calculate the weights using the ols algorithm , for random forests we create 500 trees and initialise the number of potential candidates for a node split as m/3 where m equals the number of predictors . for neural networksthe initial weights are randomly assigned and a hidden layer is chosen . in order to choose the optimal number of hidden nodes , we produce ten neural networks for each case with the number of neurons varying from 1 to 10 .10-fold cross validation is used to evaluate all of them and the one with that minimised rmse is selected as the best model .we also use both backpropagation and resilient backpropagation for making the appropriate comparisons .all models are built using both the actual data and the transformed and the classification is introduced as an additional categorical variable . for all of the above we had to create four different datasets that all the regression models will be build upon .these necessary datasets in order test the contribution of the transformation and the classification provided by clustering together with the performance of the regression models are summarised in table iii ..description of datasets [ cols= " < , < " , ] [ table 8 ] the plot of the neural network build with the topdnn approach can be seen in fig .the weights of the edges have been omitted for classification reasons but the lines have modified accordingly to depict the magnitude of the weights with thinner line representing small or negative weights and thicker lines large weights .we can notice that the interpretation of neural network is not a trivial task , especially when the network is complicated .that is their main drawback comparing to linear regression and random forest which have mechanism to assess the variable importance of their models .however tracing the very thick black lines of the plot we can immediately detect the strong influence _financialfactor1 _ has on the final outcome as it influences heavily the first neuron of the first hidden which influenced strongly the sixth neuron of the 2nd hidden layer which belongs to the four neurons of the 2nd layer that affect moderately the final outcome .this relationship between the _ financialfactor1 _ and _ udebt _ can not be quantified or defined but it can be signified .there are techniques to assess variable importance in neural networks , like sensitivity analysis that can provide the desired interpretabily that is valuable for the analysis of real world applications but we leave this for the future part of our research .in this work we tried to construct an accurate regression model for the level of debt prediction , a significant task for consumer debt analysis utilising a widely used computational model , neural networks .for this reason we compared their 
performance against linear regression and random forests .our results show that neural networks clearly outperform linear regression .random forests achieve comparable performance but their only one parameter does not allow for more improvements .they also proved that all the regression models can benefit from the necessary data transformations and from the unsupervised learning approaches on the data , if these are incorporated properly in the data . trying the latter we devised a novel method for designing the topology of the neural networks utilising information that stems from the factor analysis and clustering performed on the data .topdnn as our method was named , improved the performance of the models even more and signified the ability neural networks offer in adopting in their design results from previous steps of explanatory research conducted on the dataset .our work forms a complete computational intelligence framework with the pre - processing of data , clustering to uncover important relationships and the regression model that is suitable for the purposes of consumer data analysis .this framework exhibits much better performance than the existing statistical methods that dominate the field of economics and it highlights a more sophisticated way to model consumer indebtness that it can extend to any real world application .22 atiya , amir f. `` bankruptcy prediction for credit risk using neural networks : a survey and new results . '' neural networks , ieee transactions on 12.4 ( 2001 ) : 929 - 935 .bernadette kamleitner and erich kirchler .consumer credit use : a process model and literature review ._ revue europeenne de psychologie appliquee / european re- view of applied psychology _ , 57(4):267 - 283 , 2007 .bernadette kamleitner , erik hoelzl , and erich kirchler .credit use : psychological perspectives on a multifaceted phenomenon ._ international journal of psychology _ , 47(1):1 - 27 , 2012 .breiman , leo .`` random forests '' _ machine learning _ 45.1 ( 2001 ) : 5 - 32 . brice stone and rosalinda vasquez maury .indicators of personal financial debt using a multi - disciplinary behavioral model . _ journal of economic psychology _ , 27 ( 4):543 - 556 , 2006 . de leeuw , jan , and patrick mair .`` gifi methods for optimal scaling in r : the package homals . ''_ journal of statistical software _ , forthcoming ( 2009 ) : 1 - 30 .ding , shifei and jia , weikuan and xu , xinzheng and zhu , hong .`` neural networks algorithm based on factor analysis''_advances in neural networks _ ( 2010 ) : 319 - 324 disney r. , and gathergood j.,``understanding consumer over - indebtedness using counselling sector data : scoping study . '' , _ report to the department for business , innovation and skills ( bis ) _ , university of nottingham , 2009 .fabrigar , leandre r. , et al .`` evaluating the use of exploratory factor analysis in psychological research . ''psychological methods 4.3 ( 1999 ) : 272 .gathergood , john .`` self - control , financial literacy and consumer over - indebtedness . '' _ journal of economic psychology _ 33.3 ( 2012 ) : 590 - 602 .ghose , anindya , and panagiotis g. ipeirotis .`` estimating the helpfulness and economic impact of product reviews : mining text and reviewer characteristics . '' _ knowledge and data engineering _ , ieee transactions on 23.10 ( 2011 ) : 1498 - 1512 . grmping , ulrike . `` variable importance assessment in regression : linear regression versus random forest . ''_ the american statistician _ 63.4 ( 2009 ) .gunther , f. , and fritsch s. 
, `` neuralnet : training of neural networks'',_the r journal_,vol 2/1 , ( 2010):30 - 37 hornik , kurt , maxwell stinchcombe , and halbert white . `` multilayer feedforward networks are universal approximators . ''_ neural networks _ 2.5 ( 1989 ) : 359 - 366 .kim , haejeong , and sharon a. devaney .`` the determinants of outstanding balances among credit card revolvers . ''_ financial counseling and planning 12.1 _ ( 2001 ) : 67 - 77 .ladas a. , aickelin u. , garibaldi j. , scarpel r. , and ferguson e. `` the impact of preprocessing on clustering socio - economic data : a step towards consumer debt analysis '' , under review .livingstone , sonia m. , and peter k. lunt . `` predicting personal debt and debt repayment : psychological , social and economic determinants . ''_ journal of economic psychology _ 13.1 ( 1992 ) : 111 - 134 .lili wang , wei lu , and naresh k malhotra .demographics , attitude , personality and credit card features correlate with credit card debt : a view from china ._ journal of economic psychology _, 32(1):179 - 193 , 2011 .nicholas refenes , apostolos , achileas zapranis , and gavin francis .`` stock performance modeling using neural networks : a comparative study with regression models . ''_ neural networks _ 7.2 ( 1994 ) : 375 - 388 .ottaviani , cristina , and daniela vandone .`` impulsivity and household indebtedness : evidence from real life . ''_ journal of economic psychology 32.5 _ ( 2011 ) : 754 - 761 .segal , mark r. `` machine learning benchmarks and random forest regression . ''sousa , s. i. v. , et al .`` multiple linear regression and artificial neural networks based on principal components to predict ozone concentrations . '' _ environmental modelling & software _ 22.1 ( 2007 ) : 97 - 103 .
|
consumer debt has risen to be an important problem of modern societies, generating a lot of research aimed at understanding the nature of consumer indebtedness, whose modelling has so far been carried out with statistical models. in this work we show that computational intelligence can offer a more holistic approach that is better suited to the complex relationships present in an indebtedness dataset, relationships that linear regression cannot uncover. in particular, as our results show, neural networks achieve the best performance in modelling consumer indebtedness, especially when they incorporate the significant and experimentally verified results of the data mining process into the model, exploiting the flexibility neural networks offer in designing their topology. this novel method forms an elaborate framework for modelling consumer indebtedness that can be extended to any other real world application. knowledge discovery, neural networks, regression, consumer debt analysis
|
managing database inconsistency has received a lot of attention in the past two decades .inconsistency arises for different reasons and in different applications .for example , in common applications of big data , information is obtained from imprecise sources ( e.g. , social encyclopedias or social networks ) via imprecise procedures ( e.g. , natural - language processing ) .it may also arise when integrating conflicting data from different sources ( each of which can be consistent ) .arenas , bertossi and chomicki introduced a principled approach to managing of inconsistency , via the notions of and .informally , a _ repair _ of an inconsistent database is a consistent database that differs from in a `` minimal '' way , where refers to the . in the case of anti - symmetric integrity constraints ( e.g. , denial constraints andthe special case of functional dependencies ) , such a repair is a ( i.e. , is a consistent subinstance of that is not properly contained in any consistent subinstance of ) .various computational problems around database repairs have been extensively investigated .most studied is the problem of computing the _ consistent answers _ of a query on an inconsistent database ; these are the tuples in the intersection , in this approach inconsistency is handled at query time by returning the tuples that are guaranteed to be in the result no matter which repair is selected .another well studied question is that of : given instances and , determine whether is a repair of . depending on the type of repairs and the type of integrity constraints , these problems may vary from tractable to highly intractable complexity classes . see for an overview of results . in the above framework ,all repairs of a given database instance are taken into account , and they are treated on a par with each other .there are situations , however , in which it is natural to prefer one repair over another .for example , this is the case if one source is regarded to be more reliable than another ( e.g. , enterprise data vs. internet harvesting , precise vs. imprecise sensing equipment , etc . ) or if available timestamp information implies that a more recent fact should be preferred over an earlier fact .recency may be implied not only by timestamps , but also by evolution semantics ; for example , `` divorced '' is likely to be more updated than `` single , '' and similarly is `` sergeant '' compared to `` private . ''motivated by these considerations , staworko , chomicki and marcinkowski introduced the framework of repairs .the main characteristic of this framework is that it uses a relation between conflicting facts of an inconsistent database to define a notion of repairs .specifically , the notion of and that of are based on two different notions of the property of one consistent subinstance being preferred to another .improvements are basically lifting of the priority relation from facts to consistent subinstances ; is an improvement of if contains a fact that is better than all those in ( in the pareto semantics ) , or if for every fact in there exists a better fact in ( in the global semantics ) . in each of the two semantics ,an is a repair that can not be improved .a third semantics proposed by staworko et al . is that of a repair , which is a globally optimal repair under some extension of the priority relation into a relation . in this paper, we refer to these preferred repairs as , and , respectively .fagin et al . 
have built on the concept of preferred repairs ( in conjunction with the framework of ) to devise a language for declaring in text information - extraction systems .they have shown there that preferred repairs capture ad - hoc cleaning operations and strategies of some prominent existing systems for text analytics .staworko et al . have proved several results about preferred repairs .for example , every c - repair is also a g - repair , and every g - repair is also a p - repair .they also showed that p - repair and c - repair checking are solvable in polynomial time ( under data complexity ) when constraints are given as denial constraints , and that there is a set of functional dependencies ( fds ) for which g - repair checking is conp - complete .later , fagin et al . extended that hardness result to a full dichotomy in complexity over all sets of fds : g - repair checking is solvable in polynomial time whenever the set of fds is equivalent to a single fd or two key constraints per relation ; in every other case , the problem is conp - complete . while the classic complexity problems studied in the theory of repairs include repair checking and consistent query answering , the presence of repairs gives rise to the , which staworko et al . refer to as : determine whether the provided priority relation suffices to clean the database unambiguously , or in other words , decide whether there is exactly one optimal repair .the problem of repairing uniqueness ( in a different repair semantics ) is also referred to as by fan et al . . in this paper, we study the three variants of this computational problem , under the three optimality semantics pareto , global and completion , and denote them as , and , respectively .it is known that under each of the three semantics there is always at least one preferred repair , and staworko et al . present a polynomial - time algorithm for finding such a repair .( we recall this algorithm in section [ sec : categoricity ] . ) hence , the categoricity problem is that of deciding whether the output of this algorithm is the only possible preferred repair . as we explain next , it turns out that each of the three variants of the problem entails quite a unique picture of complexity . for the problem of p - categoricity ,we focus on integrity constraints that are fds , and establish the following dichotomy in data complexity , assuming that . for a relational schema with a set of fds : * if associates ( up to equivalence ) a single fd with every relation symbol , then p - categoricity is solvable in polynomial time . * in , p - categoricity is conp - complete .for example , with the relation symbol and the fd , p - categoricity is solvable in polynomial time ; but if we add the dependency then it becomes conp - complete .our proof uses a reduction technique from past dichotomies that involve fds , but requires some highly nontrivial additions .we then turn to investigating c - categoricity , and establish a far more positive picture than the one for p - categoricity .in particular , the problem is solvable in polynomial time for every set of fds .in fact , we present an algorithm for solving c - categoricity in polynomial time , assuming that constraints are given as an input .( in particular , we establish polynomial - time data complexity for other types of integrity constraints , such as and . )the algorithm is extremely simple , yet its proof of correctness is quite intricate . finally , we explore g - categoricity , and focus first on fds . 
we show that in the tractable case of p - categoricity ( equivalence to a single fd per relation ) , g - categoricity is likewise solvable in polynomial time .for example , with the dependency has polynomial - time g - categoricity .nevertheless , we prove that if the we add the dependency ( that is , the attribute should have the same value across all tuples ) , then g - categoricity becomes -complete .we do not complete a dichotomy as in p - categoricity , and leave that open for future work .lastly , we observe that in our proof of -hardness , our reduction constructs a non - transitive priority relation , and we ask whether transitivity makes a difference .the three semantics of repairs remain different in the presence of transitivity . in particular, we show such a case where there are globally - optimal repairs that are not completion optimal repairs .nevertheless , quite interestingly , we are able to prove that g - categoricity and c - categoricity are actually if transitivity is assumed .in particular , we establish that in the presence of transitivity , g - categoricity is solvable in polynomial time , even when constraints are given as a conflict hypergraph .we now present some general terminology and notation that we use throughout the paper .a ( ) is a finite set of , each with a designated positive integer as its , denoted .we assume an infinite set of , used as database values .an over a signature consists of finite relations , where .we write to denote the set , and we refer to the members of as of . if is an instance over and is a tuple in , then we say that is a . by a slight abuse of notation , we identify an instance with the set of its facts .for example , denotes that is a fact of . as another example, means that for every ; in this case , we say that is of . in our examples ,we often name the attributes and refer to them by their names .for instance , in figure [ fig : companyceo - instance ] we refer to the relation symbol as where and refer to attributes 1 and 2 , respectively . in the case of generic relation symbols ,we implicitly name their attributes by capital english letters with the corresponding numeric values ; for instance , we may refer to attributes , and of by , and , respectively .we stress that attribute names are not part of our formal model , but are rather used for readability .let be a signature , and an instance over . in this paperwe consider two representation systems for integrity constraints .the first is and the second is .let be a signature .a ( for short ) over is an expression of the form , where is a relation symbol of , and and are subsets of .when is clear from the context , we may omit it and write simply .a special case of an fd is a , which is an fd of the form where .an fd is if ; otherwise , it is . when we are using the alphabetic attribute notation , we may write and by simply concatenating the attribute symbols .for example , if we have a relation symbol , then denotes the fd .an instance over an fd if for every two facts and over , if and agree on ( i.e. , have the same values for ) the attributes of , then they also agree on the attributes of .we say that satisfies a set of fds if satisfies every fd in ; otherwise , we say that .two sets and of fds are if for every instance over it holds that satisfies if and only if it satisfies .for example , for the sets and are equivalent . 
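a small sketch may help fix the notion of fd satisfaction used throughout; facts are encoded here as plain tuples, and the relation values are illustrative only, not taken from the paper's figures.

```python
# illustrative sketch: checking whether a relation (a list of tuples) satisfies
# an fd X -> Y, where X and Y are given as lists of attribute positions.
def satisfies_fd(facts, lhs, rhs):
    seen = {}
    for t in facts:
        key = tuple(t[i] for i in lhs)
        val = tuple(t[i] for i in rhs)
        if key in seen and seen[key] != val:
            return False          # two facts agree on X but disagree on Y
        seen[key] = val
    return True

# a company/ceo-style relation with made-up values, encoded as (company, ceo) pairs:
r = [("acme", "alice"), ("acme", "bob"), ("initech", "bob")]
print(satisfies_fd(r, lhs=[0], rhs=[1]))   # company -> ceo: False (acme has two ceos)
```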
in this work , a is a pair , where is a signature and is a set of fds over .if is a schema and , then we denote by the restriction of to the fds over .r|c|c| & + & & + & & + & & + & & + & & + & & + [ example : ceo ] in our first running example , we use the schema , defined as follows .the signature consists of a single relation , which associates companies with their chief executive officers ( ceo ) .figure [ fig : companyceo - instance ] depicts an instance over .we define as the following set of fds over . hence , states that in , each company has a single ceo and each ceo manages a single company .observe that violates .for example , has three ceos , has two ceos , and each of and is the ceo of two companies .while fds define integrity logically , at the level of the signature , a provides a direct specification of inconsistencies at the instance level , by explicitly stating sets of tuples that can not co - exist . in the case of fds ,the conflict hypergraph is a graph that has an edge between every two facts that violate an fd .formally , for an instance over a signature , a ( ) is a hypergraph that has the facts of as its node set .a subinstance of is with respect to ( w.r.t . ) if is an of ; that is , no hyperedge of is a subset of .we say that is if is inconsistent for every .when all the edges of a conflict hypergraph are of size two , we may call it a . recallthat conflict hypergraphs can represent inconsistencies for various types of integrity constraints , including fds , the more general , and the more general .in fact , every constraint that is anti - monotonic ( i.e. , where subsets of consistent sets are always consistent ) can be represented as a conflict hypergraph . in the case of denial constraints , the translation from the logical constraints to the conflict hypergraph can be done in polynomial time under ( i.e. , when the signature and constraints are assumed to be fixed ) .let be a schema , and let be an instance over . recall that is assumed to have only fds .we denote by the conflict graph for that has an edge between every two facts that violate some fd of .note that a subinstance of satisfies if and only if is consistent w.r.t . . as an example, the left graph of figure [ fig : ceo - completions ] depicts the graph for our running example ; for now , the reader should ignore the directions on the edges , and view the graph as an undirected one .the following example involves a conflict hypergraph that is not a graph .[ example : followers - instance ] in our second running example , we use the toy scenario where the signature has a single relation symbol , where means that person follows person ( e.g. , in a social network ) .we have two sets of people : for , and for .all the facts have the form ; we denote such a fact by .the instance has the following facts : , , , , , , , , , the hypergraph for encodes the following rules : each can follow at most people .each can be followed by at most people .specifically , contains the following hyperedges : , , , + , , , , , an example of a consistent subinstance is the reader can verify that is maximal .we now recall the framework of preferred repairs by staworko et al .let be an instance over a signature .a relation over is an acyclic binary relation over the facts in . 
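for fds, the conflict hypergraph degenerates to a graph, and building it amounts to pairwise violation checks; the following is a minimal sketch under the same tuple encoding as above, and it is only an illustration of the definition.

```python
# illustrative sketch: building the conflict graph induced by a set of fds, with
# an edge between every two facts that jointly violate some fd. each fd is a
# (lhs, rhs) pair of attribute-position tuples; facts are plain tuples.
from itertools import combinations

def conflict_graph(facts, fds):
    edges = set()
    for f, g in combinations(facts, 2):
        for lhs, rhs in fds:
            agree_lhs = all(f[i] == g[i] for i in lhs)
            agree_rhs = all(f[i] == g[i] for i in rhs)
            if agree_lhs and not agree_rhs:
                edges.add(frozenset((f, g)))   # f and g are neighbours
                break
    return edges

# with both fds of a company/ceo-style schema (company -> ceo and ceo -> company),
# facts sharing either a company or a ceo value become neighbours:
fds = [((0,), (1,)), ((1,), (0,))]
```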
bywe mean that does not contain any sequence of facts such that for all and .if is a priority relation over and is a subinstance of , then denotes the set of tuples such that no satisfies .an over is a triple , where is an instance over , is a conflict hypergraph for , and a priority relation over with the following property : for every two facts and in , if then and are neighbors in ( that is , and co - occur in some hyperedge ) . , holds as well without this requirement .we defer to future work the thorough investigation of the impact of relaxing this requirement . ] for example , if ( where all the constraints in are fds ) , then implies that violates at least one fd .[ example : ceo - priority ] we continue our running company - ceo example .we define a priority relation by , and .we denote by corresponding arrows on the left graph of figure [ fig : ceo - completions ] .( therefore , some of the edges are directed and some are undirected . ) we then get the inconsistent prioritizing instance over .observe that the graph does not contain directed cycles , as required from a priority relation .[ example : followers - priority ] recall that the instance of our followers example is defined in example [ example : followers - instance ] .the priority relation is given by if one of the following holds : and , and .for example , we have and .but we do not have ( hence , is not transitive ) .let be an inconsistent prioritizing instance over a signature .we say that is if for every two facts and in , if and are neighbors then either or .a priority over is a of ( w.r.t . ) if is a subset of and is total .as an example , the middle and right graphs of figure [ fig : ceo - completions ] are two completions of the priority relation depicted on the left side .a of is an inconsistent prioritizing instance where is a completion of .let be an inconsistent prioritizing instance over .as defined by arenas et al . , is a of if is a maximal consistent subinstance of .staworko et al . define three different notions of repairs : , , and .the first two notions are based on checking whether a repair of can be improved by replacing a set of facts in with a more preferred set of facts from .they differ by the way they define when one set of facts is considered more preferred than another one .the last notion is based on the notion of completion .next we give the formal definitions .let be an inconsistent prioritizing instance over a signature , and and two distinct consistent subinstances of .* is a of if there exists a fact such that for all facts .* is a of if for every fact there exists a fact such that .that is , is a pareto improvement of if , in order to obtain from , we insert and delete facts , and one of the inserted facts is preferred to all deleted facts . and is a global improvement of if , in order to obtain from , we insert and delete facts , and every deleted fact is preferred to by some inserted fact .[ example : ceo - improvement ] we continue the company - ceo running example .we define three consistent subinstances of . note the following .first , is a pareto improvement of , since and for every fact in ( where in this case there is only one such an , namely ) .second , is a global improvement of because and .( we refer to in later examples . )we then get the following variants of .let be an inconsistent prioritizing instance , and let be a consistent subinstance of .then is a : * if there is no pareto improvement of . * if there is no global improvement of . 
* if there exists a completion of such that is a globally - optimal repair of .we abbreviate `` pareto - optimal repair , '' `` globally - optimal repair , '' and `` completion - optimal repair '' by , and , respectively .we remark that in the definition of a completion - optimal repair , we could replace `` globally - optimal '' with `` pareto - optimal '' and obtain an equivalent definition .let be an inconsistent prioritizing instance over a signature .we denote the set of all the repairs , p - repairs , g - repairs and c - repairs of by , , and , respectively .the following was shown by staworko et al . .dblp : journals / amai / staworkocm12[prop : containments ] for all inconsistent prioritizing instances we have , and moreover , [ example : ceo - repairs ] we continue our company - ceo example .recall the instances defined in example [ example : ceo - improvement ] .we have shown that has a pareto improvement , and therefore , is a p - repair ( although it is a repair in the ordinary sense ) .the reader can verify that has no pareto improvements , and therefore , it is a p - repair .but is not a g - repair , since is a global improvement of .the reader can verify that is a g - repair ( hence , a p - repair ) . finally , observe that is a g - repair w.r.t. the left completion of in figure [ fig : ceo - completions ] ( and also w.r.t . the right one ) .hence , is a c - repair ( hence , a g - repair and a p - repair ) . in constrast , observe that has a global improvement ( and a pareto improvement ) in both completions ; but it does not prove that is not a c - repair ( since , conceptually , one needs to consider all possible completions of ) .[ example : follows - repair ] we now continue the follower example .the inconsistent prioritizing instance is defined in examples [ example : followers - instance ] and [ example : followers - priority ] .consider the following instance . the reader can verify that is a c - repair ( e.g. , by completing through the lexicographic order ) .the subinstance is a repair but not a p - repair , since we can add and remove both and , and thus obtain a pareto improvement .in this section we define the computational problem of , which is the main problem that we study in this paper .proposition [ prop : containments ] states that , under each of the semantics of preferred repairs , at least one such a repair exists . in general , there can be many possible preferred repairs .the problem of is that of testing whether there is one such a repair ; that is , there do not exist two distinct preferred repairs , and therefore , the priority relation contains enough information to clean the inconsistent instance unambiguously .problems , , and are those of testing whether , and , respectively , given a signature and an inconsistent prioritizing instance over .as defined , categoricity takes as input both the signature and the inconsistent prioritizing instance , where constraints are represented by a conflict hypergraph .we also study this problem from the perspective of , where we fix a schema , where is a set of fds . 
in that case, the input consists of an instance over and a priority relation over .the conflict hypergraph is then implicitly assumed to be .we denote the corresponding variants of the problem by p - categoricity , g - categoricity and c - categoricity , respectively .[ example : ceo - categoricity ] continuing our company - ceo example , we showed in example [ example : ceo - repairs ] that there are at least two g - repairs and at least three p - repairs .hence , a solver for g - categoricity should return false on , and so is a solver for p - categoricity . in contrast, we will later show that there is precisely one c - repair ( example [ example : ccat - ceo ] ) ; hence , a solver for c - categoricity should return true on . if , on the other hand , we replaced with any of the completions in figure [ fig : ceo - completions ] , then there would be precisely one p - repair and one g - repair ( namely , the current single c - repair ) .this follows from a result of staworko et al . , stating that categoricity holds in the case of total priority relations .we begin with some basic insights into the different variants of the categoricity problem .we recall an algorithm by staworko et al . for greedily constructing a c - repair .this is the algorithm of figure [ alg : ccat - opt - alg ] .the algorithm takes as input an inconsistent prioritizing instance and returns a c - repair .it begins with an empty , and incrementally inserts tuples to , as follows . in each iteration of lines 36 ,the algorithm selects a fact from and removes it from .then , is added to if it does not violate consistency , that is , if does not contain any hyperedge such that .the specific way of choosing the fact among all those in is ( deliberately ) left unspecified , and hence , different executions may result in different c - repairs . in that sense , the algorithm is nondeterministic .staworko et al . proved that the possible results of these different executions are the c - repairs .dblp : journals / amai / staworkocm12[thm : cgreedy ]let be an inconsistent prioritizing instance over .let be a consistent subinstance of .then is a c - repair if and only if there exists an execution of that returns .t[alg : ccat - opt - alg]finding a c - repair findcrep choose a fact in * return * due to theorem [ thm : cgreedy ] , we often refer to a c - repair as a repair .this theorem , combined with proposition [ prop : containments ] , has several implications for us .first , we can obtain an x - repair ( where x is either p , g or c ) in polynomial time .hence , if a solver for x - categoricity determines that there is a single x - repair , then we can actually generate that x - repair in polynomial time .second , c - categoricity is the problem of testing whether returns the same instance on every execution .moreover , due to proposition [ prop : containments ] , p - categoricity ( resp .g - categoricity ) is the problem of testing whether every p - repair ( resp .g - repair ) is equal to the one that is obtained by some execution of the algorithm .we consider the application of the algorithm to the instance of our company - ceo example ( where ) .the following are two different executions .we denote inclusion in ( i.e. , the condition of line 5 is true ) by plus and exclusion from by minus . , , , , . 
, , , , .observe that both executions return .this is in par with the statement in example [ example : ceo - categoricity ] that in this running example there is a single c - repair .our goal is to study the complexity of x - categoricity ( where x is g , p and c ) .this problem is related to that of , namely , given and , determine whether is an x - repair of .the following is known about this problem .dblp : journals / amai / staworkocm12,dblp : conf / pods / faginkk15[thm : repairchecking ] the following hold .* p - repair checking and c - repair checking are solvable in polynomial time ; g - repair checking is in conp .* let be a fixed schema . if is equivalent to either a single fd or two key constraints for every , then g - repair checking is solvable in polynomial time ; otherwise , g - repair checking is conp - complete .recall from proposition [ prop : containments ] that there is always at least one x - repair .therefore , given we can solve the problem using a conp algorithm with an oracle to x - repair checking : for all two distinct subinstances and , either or is not an x - repair .therefore , from theorem [ thm : repairchecking ] we conclude the following .[ cor : upperbounds ] the following hold .* p - categoricity and c - categoricity are in conp .* for all fixed schemas , g - categoricity is in , and moreover , if is equivalent to either a single fd or two key constraints for every then g - categoricity is in conp .we stress here that if x - categoricity is solvable in polynomial time , then x - categoricity is solvable in polynomial time for schemas ; this is true since for every fixed schema the hypergraph can be constructed in polynomial time , given .similarly , if x - categoricity is conp - hard ( resp .-hard ) for , then x - categoricity is conp - hard ( resp .-hard ) .when we are considering x - categoricity , we assume that all the integrity constraints are fds .therefore , unlike the general problem of x - categoricity , in x - categoricity conflicting facts always belong to the same relation .it thus follows that our analysis for x - categoricity can restrict to single - relation schemas .formally , we have the following . [ prop : single - relation ] let be a schema and x be one of p , g and c. for each relation , let be the schema .if x - categoricity is solvable in polynomial time for every , then x - categoricity is solvable in polynomial time . if x - categoricity is conp - hard ( resp .-hard ) for at least one , then x - categoricity is conp - hard ( resp .-hard ) .observe that the phenomenon of proposition [ prop : single - relation ] hold for x - categoricity , since the given conflict hypergraph may include hyperedges that cross relations . in the following sections we investigate each of the three variants of categoricity : p - categoricity ( section [ sec : p ] ) , c - categoricity ( section [ sec : c ] ) and g - categoricity ( section [ sec : g ] ) .in this section we prove a dichotomy in the complexity of p - categoricity all schemas ( where consists of fds ) .this dichotomy states that the only tractable case is where the schema associates a single fd ( which can be trivial ) to each relation symbol , up to equivalence . in all other cases , p - categoricity conp - complete .formally , we prove the following .[ thm : pareto ] let be a schema .the problem p - categoricity be solved in polynomial time if is equivalent to a single fd for every . 
in every other case ,p - categoricity conp - complete .the proof of theorem [ thm : pareto ] is involved , and we outline it in the rest of this section .the tractability side is fairly simple ( as we show in the next section ) , and the challenge is in the hardness side . due to proposition [ prop : single - relation ] , it suffices to consider schemas with a single relation .hence , in the remainder of this section we consider only such schemas . in this sectionwe fix a schema , such that consist of a single relational symbol . we will prove that p - categoricity solvable in polynomial time if is a singleton .we denote the single fd in as .we fix the input for p - categoricity . for a fact , we denote by ] the restriction of the tuple of to the attributes in and , respectively . adopting the terminology of koutris and wijsen , a of is a maximal collection of facts of that agree on all the attributes of ( i.e. , facts that have the same ] , and by the subblock of facts with ={\mathbf{a}} ] .consider again the instance of figure [ fig : companyceo - instance ] , and suppose that consists of only ( i.e. , each company has a single ceo , but a person can be the ceo of several companies ) .then for and the block is and the subblock is the singleton .tractability for is based on the following lemma .[ lemma : key - for - pareto-1fd ] we then get the following lemma .[ lemma : single - factorized ] a polynomial - time algorithm then follows directly from lemma [ lemma : single - factorized ] and the fact that p - repair checking is solvable in polynomial time ( theorem [ thm : repairchecking ] ) .the hardness side of the dichotomy is more involved than its tractability side .our proof is based on the concept of a , which has also been used by fagin et al . in the context of g - repair checking .let and be two schemas .a from to is a function that maps facts over to facts over .we naturally extend a mapping to map instances over to instances over by defining to be .a from to is a mapping from to with the following properties . 1 . is injective ; that is , for all facts and over , if then .2 . preserves consistency and inconsistency ; that is , for every instance over , the instance satisfies if and only if satisfies . is computable in polynomial time .let and be two schemas , and let be a fact - wise reduction from to . given an inconsistent instance over and a priority relation over , we denote by the priority relation over where if and only if .if is the inconsistent prioritizing instance , then we denote by the triple , which is also an inconsistent prioritizing instance .the usefulness of fact - wise reductions is due to the following proposition , which is straightforward .let and be two schemas , and suppose that is a fact - wise reduction from to .let be an inconsistent instance over , a priority relation over , and the inconsistent prioritizing instance .then there is a bijection between and .we then conclude the following corollary .[ cor : fact - wise ] if there is a fact - wise reduction from to , then there is a polynomial - time reduction from p - categoricity to p - categoricity . in the proofwe consider seven specific schemas .the importance of these schemas will later become apparent .we denote these schemas by , for , where each is the schema , and is the singleton .the specification of the is as follows .1 . and 2 . and 3 . and 4 . and 5 . and 6 . and 7 . 
and ( in the definition of , recall that denotes the fd , meaning that all tuples should have the same value for their first attribute . ) in the proof we use fact - wise reductions from the , as we explain in the next section .our proof boils down to proving conp - hardness for two specific schemas , namely and , and then using ( known and new ) fact - wise reductions in order to cover all the other schemas .for the proof is fairly simple . buthardness for turns out to be quite challenging to prove , and in fact , this part is the hardest in the proof of theorem [ thm : pareto ] .note that is the schema of our company - ceo running example ( introduced in example [ example : ceo ] ) .[ thm : hardness - specific - pareto ] the proof ( as well as all the other proofs for the results in this paper ) can be found in the appendix .the following has been proved by fagin et al . .dblp : conf / pods / faginkk15[lemma :fw - from - s1 - 6 ] let be a schema such that consists of a single relation symbol .suppose that is equivalent to neither any single fd nor any pair of keys .then there is a fact - wise reduction from some to , where . in the appendixwe prove the following two lemmas , giving additional fact - wise reductions .[ lemma : fw - from - s0 ] [ lemma : fw - from - s0-to - s1 - 5 ] the structure of our fact - wise reductions is depicted in figure [ fig : pcategoricitystrategy ] .dashed edges are known fact - wise reductions , while solid edges are novel .observe that each single - relation schema on the hardness side of theorem [ thm : pareto ] has an ingoing path from either or , both shown to have conp - hard p - categoricity ( theorem [ thm : hardness - specific - pareto ] ) .we now investigate the complexity of c - categoricity .our main result is that this problem is tractable .[ thm : c - categoricity - ptime ] the c - categoricity problem is solvable in polynomial time . in the remainder of this section we establish theorem [ thm : c - categoricity - ptime ] by presenting a polynomial - time algorithm for solving c - categoricity .the algorithm is very simple , but its proof of correctness ( given in the appendix ) is intricate . to present our algorithm ,some notation is required .let be an inconsistent prioritizing instance .the of , denoted , is the priority relation over the facts of where for every two facts and in it holds that if and only if there exists a sequence of facts , where , such that , , and for all .obviously , is acyclic ( since is acyclic ) .but unlike , the relation may compare between facts that are not necessarily neighbors in .let be an inconsistent prioritizing instance , let be a set of facts of , and let be a fact of .by we denote the fact that for fact .t[alg : ccat - alg]algorithm for c - categoricity ccategoricity + * return * true iff is consistent figure [ alg : ccat - alg ] depicts a polynomial - time algorithm for solving c - categoricity .we next explain how it works , and later discuss its correctness .as required , the input for the algorithm is an inconsistent prioritizing instance .( the signature is not needed by the algorithm . )the algorithm incrementally constructs a subinstance of , starting with an empty .later we will prove that there is a single c - repair if and only if is consistent ; and in that case , is the single c - repair .the loop in the algorithm constructs fact sets and .each is called a and each is called a . both and constructed in the iteration . 
on that iterationwe add all the facts of to and remove from all the facts of and all the facts of .the sets and are defined as follows . consists of the maximal facts in the current , according to . consists of all the facts that , together with , complete a hyperedge of preferred facts ; that is , contains a hyperedge that contains , is contained in , and satisfies for every incident .the algorithm continues to iterate until gets empty . as said above , in the end the algorithm returns true if is consistent , and otherwise false .next , we give some examples of executions of the algorithm .[ example : ccat - ceo ] consider the inconsistent prioritizing instance from our company - ceo running example , illustrated on the left side of figure [ fig : ceo - completions ] .the algorithm makes a single iteration on this instance , where and .both and are in since both are maximal .also , each of , and is in conflict with , and we have , , and . now consider the inconsistent prioritizing instance from our followers running example .figure [ fig : follows - exec ] illustrates the execution of the algorithm , where each column describes or , from left to right in the order of their construction . for convenience , the priority relation ,as defined in example [ example : followers - priority ] , is depicted in figure [ fig : follows - exec ] using corresponding edges between the facts .on iteration 1 , for instance , we have , since and are the facts without incoming edges on figure [ fig : follows - exec ] .moreover , we have . the reason why contains , for example ,is that is a hyperedge , the fact is in , and ( hence , ) . for a similar reason . is in as is a hyperedge , and though , we have . as another example, contains since has the hyperedge , the set is contained in , and . in the end , , which is also the subinstance of example [ example : follows - repair ] .since is consistent , the algorithm will determine that there is a single c - repair , and that c - repair is .[ example : ccat - fail ] we now give an example of an execution on a negative instance of c - categoricity .( in section [ sec : g ] we refer to this example for a different reason . )figure [ fig : transitive - example ] shows an instance over the schema , which is defined in section [ sec : spec - schemas ] .recall that in this schema every two attributes form a key .each fact in is depicted by a tuple that consists of the three values .for example , contains the ( conflicting ) facts and .hereon , we write instead of . the priority relation is given by the directed edges between the facts ; for example , .undirected edges are between conflicting facts that are incomparable by ( e.g. , and ) .the execution of the algorithm on is as follows .on the first iteration , and .in particular , note that does not contain since it conflicts only with in , but the two are incomparable . similarly , does not contain since it is incomparable with .consequently , in the second iteration we have and . in the end , is inconsistent , and therefore , the algorithm will return false .indeed , the reader can easily verify that each of the following is a c - repair : , , and .correctness of is stated in the following theorem .[ thm : ccategoricity - correct ] theorem [ thm : ccategoricity - correct ] , combined with the observation that the algorithm terminates in polynomial time , imply theorem [ thm : c - categoricity - ptime ] . 
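before turning to the correctness argument, our reading of the stratification loop just illustrated can be sketched as follows; this is an interpretation of the description and examples above, under the assumptions stated in the comments, and not the authors' code.

```python
# illustrative sketch of the stratification procedure: positive strata of maximal
# facts and negative strata of facts dominated through a hyperedge. hyperedges are
# frozensets of facts, and prec is assumed to be the transitive closure of the
# priority relation, given as a set of (worse, better) pairs.
def c_categoricity(facts, hyperedges, prec):
    facts = set(facts)
    j = set()                                   # union of the positive strata
    while facts:
        # positive stratum: facts of the current set that are maximal w.r.t. prec
        p = {f for f in facts if not any((f, g) in prec for g in facts)}
        # negative stratum: facts completing a hyperedge whose other members are
        # already in j or in the current positive stratum and are all preferred
        n = set()
        for f in facts - p:
            for e in hyperedges:
                rest = e - {f}
                if f in e and rest <= (j | p) and all((f, g) in prec for g in rest):
                    n.add(f)
                    break
        j |= p
        facts -= (p | n)
    # a single c-repair exists iff j is consistent, i.e., contains no hyperedge
    unique = not any(e <= j for e in hyperedges)
    return unique, j
```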
as previously said ,the proof of theorem [ thm : ccategoricity - correct ] is quite involved .the direction is that of if the algorithm returns true then there is precisely one c - repair .the direction is that of if there is precisely one c - repair then the algorithm returns true .soundness is the easier direction to prove .we assume , by way of contradiction , that there is a c - repair different from the subinstance returned by the algorithm .such must include a fact from some negative stratum .we consider an execution of the algorithm that returns , and establish a contradiction by considering the first time such an is being added to the constructed solution. proving completeness is more involved .we assume , by way of contradiction , that the constructed is inconsistent .we are looking at the first positive stratum such that contains a hyperedge .then , the crux of the proof is in showing that we can then construct two c - repairs using the algorithm : one contains some fact from and another one does not contain that fact .we then establish that there are at least two c - repairs , hence a contradiction .in this section , we investigate the complexity of the g - categoricity .we first show a tractability result for the case of a schema with a single fd .then , we show -completeness for a specific schema .finally , we discuss the implication of assuming in the priority relation , and show a general positive result therein .recall from theorem [ thm : pareto ] that , assuming , the problem p - categoricity is solvable in polynomial time if and only if consists ( up to equivalence ) of a single fd per relation .the reader can verify that the same proof works for g - categoricity .hence , our first result is that the tractable schemas of p - categoricity remain tractable for g - categoricity .[ thm : global-1fd ] it is left open whether there is any schema that is not as in theorem [ thm : global-1fd ] where g - categoricity solvable in polynomial time . in the next sectionwe give an insight into this open problem ( theorem [ thm : gcat - s0-hypothetical ] ) .our next result shows that g - categoricity a harder complexity class than p - categoricity .in particular , while p - categoricity always in conp ( due to theorem [ cor : upperbounds ] ) , we will show a schema where g - categoricity -complete .this schema is the schema from section [ sec : spec - schemas ] .[ thm : piptwo ] the proof of theorem [ thm : piptwo ] is by a reduction from the -complete problem : given a cnf formula , determine whether it is the case that for every truth assignment to there exists a truth assignment to such that the two assignments satisfy .we can generalize theorem [ thm : piptwo ] to a broad set of schemas , by using fact - wise reductions from .this is done in the following theorem .[ thm : piptwoextend ] as an example , recall that in we have .this schema is a special case of theorem [ thm : piptwoextend ] , since we can use as and as ; and indeed , each of and contains an attribute ( namely and , respectively ) that is not in any of the other three sets .additional examples of sets of fds that satisfy the conditions of theorem [ thm : piptwoextend ] ( and hence the corresponding g - categoricity -complete ) follow .all of these sets are over a relation symbol .( and in each of these sets , the first fd corresponds to and the second to . 
) , , , , unlike , to this day we do not know what is the complexity of g - categoricity for any of the other ( defined in section [ sec : spec - schemas ] ) .this includes , for which all we know is membership in conp ( as stated in theorem [ cor : upperbounds ] ) . however , except for this open problem , the proof technique of theorem [ thm : pareto ] is valid for g - categoricity .consequently , we can show the following .[ thm : gcat - s0-hypothetical ] let be an inconsistent prioritizing instance .we say that is if for every two facts and in , if and are neighbors in and , then .transitivity is a natural assumption when is interpreted as a partial order such as `` is of better quality than '' or `` is more current than . '' in this section we consider g - categoricity in the presence of this assumption .the following example shows that a g - repair is not necessarily a c - repair , even if is transitive .this example provides an important context for the results that follow .[ example : x - repair - transitive ] consider again and from example [ example : ccat - fail ] ( depicted in figure [ fig : transitive - example ] ) .observe that is transitive .in particular , there is no priority between and , even though , because and are not in conflict ( or put differently , they are not neighbors in ) .consider the following subinstance of . the reader can verify that is a g - repair , but not a c - repair ( since no execution of can generate ) .example [ example : x - repair - transitive ] shows that the notion global optimality is different from completion optimality , even if the priority relation is transitive . yet , quite remarkably , the two notions behave the same when it comes to categoricity .[ thm : g - c - same - transitive ] let be an inconsistent prioritizing instance such that is transitive . if and only if .the `` if '' direction follows from proposition [ prop : containments ] , since every c - repair is also a g - repair .the proof of the `` only if '' direction is based on the special structure of the c - repair , as established in section [ sec : c ] , in the case where only one c - repair exists .specifically , suppose that there is a single c - repair and let be a consistent subinstance of .we need to show that has a global improvement .we claim that is a global improvement of .this is clearly the case if .so suppose that .let be a fact in .we need to show that there is a fact such that .we complete the proof by finding such an .recall from theorem [ thm : ccategoricity - correct ] that is the result of executing .consider the positive strata and the negative strata constructed in that execution . since is the union of the positive strata , we get that necessarily belongs to a negative stratum , say . from the definition of it follows that has a hyperedge such that , , and .let be such a hyperedge .since is consistent , it can not be the case that contains all the facts in .choose a fact such that . then , and since is transitive ( and and are neighbors ) , we have .so and , as required .combining theorems [ thm : c - categoricity - ptime ] and [ thm : g - c - same - transitive ] , we get the following . [ cor : g - categoricity - transitive - ptime ] for transitive priority relations , problems g - categoricity and c - categoricity coincide , and in particular, g - categoricity is solvable in polynomial time. 
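as a small illustration of corollary [ cor : g - categoricity - transitive - ptime ] , the hypothetical routine sketched after the c - categoricity algorithm above can be reused once transitivity ( in the sense defined in this section , i.e. restricted to conflicting neighbors ) has been checked . the helpers below are an assumption - laden sketch ; ` find_c_repair_candidate ` , ` hyperedges ` and ` prefers ` refer to the illustrative structures introduced earlier , not to anything in the original text .

def is_transitive(prefers, hyperedges):
    # f > g and g > h with f, h neighbors (co-occurring in a hyperedge) must imply f > h
    for (f, g) in prefers:
        for (g2, h) in prefers:
            if g == g2 and f != h and (f, h) not in prefers:
                if any(f in e and h in e for e in hyperedges):
                    return False
    return True

def g_categorical_when_transitive(facts, hyperedges, prefers):
    assert is_transitive(prefers, hyperedges), "corollary applies only to transitive priorities"
    _, unique = find_c_repair_candidate(facts, hyperedges, prefers)
    return unique   # a single g-repair exists iff a single c-repair exists, by the theorem above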
the reader may wonder whether theorem [ thm : g - c - same - transitive ] and corollary [ cor : g - categoricity - transitive - ptime ] hold for p - categoricity as well .this is not the case .the hardness of p - categoricity ( theorem [ thm : hardness - specific - pareto ] ) is proved by constructing a reduction where the priority relation is transitive ( and in fact , it has no chains of length larger than one ) . in their analysis ,fagin et al . have constructed various reductions for proving conp - hardness of g - repair checking . in several of these ,the priority relation is transitive .we conclude that there are schemas such that , on transitive priority relations , g - repair checking is conp - complete whereas g - categoricity is solvable in polynomial time .[ sec : related ] we now discuss the relationship between our work and past work on data cleaning .specifically , we focus on relating and contrasting our complexity results with ones established in past research . to the best of our knowledge , there has not been any work on the complexity of categoricity within the prioritized repairing of staworko et al .fagin et al . investigated a static version of categoricity in the context of text extraction , but the settings and problems are very different , and so are the complexity results ( e.g. , fagin et al . establish undecidability results ) .bohannon et al . have studied a repairing framework where repairing operations involve attribute updates and tuple insertions , and where the quality of a repair is determined by a cost function ( aggregating the costs of individual operations ) .they have shown that finding an optimal repair is np - hard , in data complexity , even when integrity constraints consist of only fds .this result could be generalized to hardness of categoricity in their model ( e.g. , by a reduction from the problem ) .the source of hardness in their model is the cost minimization , and it is not clear how any of our hardness results could derive from those , as the framework of preferred repairs ( adopted here ) does not involve any cost - based quality ; in particular , as echoed in this paper , an optimal repair can be found in polynomial time under each of the three semantics . in the framework of (* chapter 6) , relations consist of entities with attributes , where each entity may appear in different tuples , every time with possibly different ( conflicting ) attribute values . a partial order on each attributeis provided , where `` greater than '' stands for `` more current . ''a of an instance is obtained by completing the partial order on an attribute of every entity , and it defines a where each attribute takes its most recent value .in addition , a completion needs to satisfy given ( denial ) constraints , which may introduce interdependencies among completions of different attributes .fan et al . have studied the problem of determining whether such a specification induces a single current instance ( i.e. , the corresponding version of categoricity ) , and showed that this problem is conp - complete under data complexity .it is again not clear how to simulate their hardness in our p - categoricity and g - categoricity , since their hardness is due to the constrains on completions , and these constraints do not have correspondents in our case ( beyond the partial orders ) .a similar argument relates our lower bounds to those in the framework of conflict resolution by fan geerts ( * ? ? 
?* chapter 7.3 ) , where the focus is on establishing a unique tuple from a collection of conflicting tuples .fan et al . show that in the absence of constraints , their categoricity problem can be solved in polynomial time ( even in the presence of `` copy functions '' ) .this tractability result can be used for establishing the tractability side of theorem [ thm : pareto ] in the special case where the single fd is a key constraint . in the general case of a single fd , we need to argue about relationships among sets , and in particular , the differences among the three x - categoricity problems matter .cao et al . have studied the problem of entity record cleaning , where again the attributes of an entity are represented as a relation ( with missing values ) , and a partial order is defined on each attribute .the goal is to increase the accuracy of values from the partial orders and an external source of reliable ( `` master '' ) data .the specification now gives update steps that have the form of logical rules that specify when one value should replace a null , when new preferences are to be derived , and when data should be copied from the master data .hence , cleaning is established by these rules .they study a problem related to categoricity , namely the property : is it the case that every application of the chase ( in any rule - selection and grounding order ) results in the same instance ? they show that this property is testable in polynomial time by giving an algorithm that tests whether some invalid step in the end of the execution has been valid sometime during the execution .we do not see any clear way of deriving any of our upper bounds from this result , due to the difference in the update model ( updating nulls and preferences vs. tuple deletion ) , and the optimality model ( chase termination vs. x - repair ) .the works on ( * ? ? ?* chapters 7.17.2 ) consider models that are substantially different from the one adopted here , where repairs are obtained by chasing update rules ( rather than tuple deletion ) , and uniqueness applies to chase outcomes ( rather than maximal subinstances w.r.t. 
preference lifting ) .the problems relevant to our categoricity are the ( w.r.t .guarantees on the consistency of some attributes following certain patterns ) , and the problem .they are shown to be intractable ( conp - complete and pspace - complete ) under combined complexity ( while we focus here on data complexity ) .finally , we remark that there have several dichotomy results on the complexity of problems associated with inconsistent data , but to the best of our knowledge this paper is the first to establish a dichotomy result for any variant of repair uniqueness identification .we investigated the complexity of the categoricity problem , which is that of determining whether the provided priority relation suffices to clean the database unambiguously in the framework of preferred repairs .following the three semantics of optimal repairs , we investigated the three variants of this problem : p - categoricity , g - categoricity and c - categoricity .we established a dichotomy in the data complexity of p - categoricity for the case where constraints are fds , partitioning the cases into polynomial time and conp - completeness .we further showed that the tractable side of p - categoricity extends to g - categoricity , but the latter can reach -completeness already for two fds .finally , we showed that c - categoricity is solvable in polynomial time in the general case where integrity constraints are given as a conflict hypergraph .we complete this paper by discussing directions for future research . in this workwe did not address any qualitative discrimination among the three notions of x - repairs .rather , we continue the line of work that explores the impact of the choice on the entailed computational complexity .it has been established that , as far as repair checking is concerned , the pareto and the completion semantics behave much better than the global one , since g - repair checking is tractable only in a very restricted class of schemas . in this workwe have shown that from the viewpoint of categoricity , the pareto semantics departs from the completion one by being likewise intractable ( while the global semantics hits an even higher complexity class ) , hence the completion semantics outstands so far as the most efficient option to adopt .it would be interesting to further understand the complexity of g - categoricity , towards a dichotomy ( at least for fds ) .we have left open the question of whether there exists a schema with a single relation and a set of fds , equivalent to a single fd , such that g - categoricity is solvable in polynomial time . beyond that , for both p - categoricity and g - categoricity it is important to detect islands of tractability based on properties of the data and/or the priority relation ( as schema constraints do not get us far in terms of efficient algorithms , at least by our dichotomy for p - repairs ) , beyond transitivity in the case of g - categoricity ( corollary [ cor : g - categoricity - transitive - ptime ] ) .another interesting direction would be the generalization of categoricity to the problems of and the preferred repairs .for classical repairs ( without a priority relation ) , maslowski and wijsen established dichotomies ( fp vs. # p - completeness ) in the complexity of counting in the case where constraints are primary keys . for the general case of denial constraints , counting the classical repairs reduces to the enumeration of independent sets of a hypergraph with a bounded edge size , a problem shown by boros et al . 
to be solvable in incremental polynomial time ( and in particular polynomial input - output complexity ) . for a general given conflict hypergraph ,repair enumeration is the well known problem of enumerating the ( also known as the hypergraph problem ) ; whether this problem is solvable in polynomial total time is a long standing open problem . in this work we focused on cleaning within the framework of preferred repairs , where integrity constraints are anti - monotonic and cleaning operations are tuple deletions ( i.e. , ) .however , the problem of categoricity arises in every cleaning framework that is based on defining a set of repairs with a preference between repairs , including different types of integrity constraints , different cleaning operations ( e.g. , tuple addition and cell update ) , and different priority specifications among repairs .this includes preferences by means of general scoring functions , aggregation of scores on the individual cleaning operations , priorities among resolution policies and preferences based on soft rules .this also includes the llunatic system where priorities are defined by lifting partial orders among `` cell groups , '' representing either semantic preferences ( e.g. , timestamps ) or level of completeness ( e.g. , null vs. non - null ) .a valuable future direction would be to investigate the complexity of categoricity in the above frameworks , and in particular , to see whether ideas or proof techniques from this work can be used to analyze their categoricity .motivated by the tractability of c - categoricity , we plan to pursue an implementation of an interactive and declarative system for database cleaning , where rules are of two kinds : integrity constraints and priority specifications ( e.g. , based on the semantics of of fagin et al . ) . to make such a system applicable to a wide range of practical use cases, we will need to extend beyond subset repairs , and consequently , investigate the fundamental direction of extending the framework of preferred repairs towards such repairs .in this section we provide proofs for section [ sec : p ] . 
in the following section , we say that a block ( respectively , ) is the block ( respectively , subblock ) of a fact if ( respectively ) .note that each fact has a unique block and subblock .recall that is the set .we start by proving the second part of the lemma .that is , we show that each p - repair of a block is a subblock .let be a p - repair of .then is contained in a single subblock of , since is consistent .moreover , contains all the facts in , or otherwise has a pareto improvement .next , we prove the first part of the lemma .let be a p - repair of .we need to show that is a union of p - repairs over all the blocks of .observe that is consistent , and so , for each block it contains facts from at most one subblock .moreover , since is maximal , it contains at least one representative from each block , and furthermore , it contains the entire subblock of each such a representative .we conclude that is the union of subblocks of .it is left to show that if a subblock is contained in , then is a p - repair of .let be a subblock contained in and assume , by way of contradiction , that is a pareto - improvement of in .let be the instance that is obtained from by replacing with .observe that is consistent , since no facts in other than those in conflict with facts from .then clearly , is a pareto improvement of , which contradicts the fact that is a p - repair .let be a union of p - repairs over all the blocks of .we need to show that is a p - repair . by the second part of the lemma, is a union of subblocks .since each subblock is consistent and facts from different blocks are consistent , we get that is consistent .it is left to show that does not have a pareto improvement .assume , by way of contradiction , that has a pareto improvement . by the definition of a pareto improvement , contains a fact such that for all .let be such a fact .let be the subblock of .then , by our assumption the subinstance contains a p - repair of , and from the second part of the lemma this p - repair is a subblock of , say .but then , is not in ( since ) , and therefore , does not contain any fact from ( since is consistent ) . we conclude that for all , and hence , has a pareto improvement ( namely ) , in contradiction to the fact that is a p - repair of .we construct a reduction from the exact - cover problem ( ) to the complement of p - categoricity .the input to is a set of elements and a collection of subsets of , such that their union is .the goal is to identify whether there is an exact cover of by .an of by is a collection of pairwise disjoint sets from whose union is . in the sequel, we relate to these facts by according to their roman number . for example , facts of the form , where , and , will be referred to as facts of type .our construction is partly illustrated in figure [ fig : paretohardness ] for the following input to the problem : , where , and .note that we denote the fact by . in this case , there is an exact cover of by that consists of the sets and .the gray facts represent a p - repair .we start by finding a c - repair . in the the remainder of the proof, we relate to the the c - repair from lemma [ pareto : l : greedy ] by .note that every c - repair is also a p - repair and therefore is a p - repair of . 
to complete the proofwe show that there is a solution to if and only if has a p - repair different from .we construct a p - repair of , namely , different from based on a solution to .let the collection of sets be a solution to .let consist of the following facts for all , and .note that since is different from ( see lemma [ pareto : l : greedy ] ) , it is left to show that is a p - repair of . to do so, we show that is consistent and that it does not have a pareto improvement .it suffices to show that for all in , there exists in such that is inconsistent ( w.r.t . ) and . for each choose such that the conditions hold .we divide to different cases according to the type of . *that is , there exists an element in such that .since the collection is a cover of , there exists in such that .hence , and we choose .* this is impossible , since contains all the facts of this type .* that is , there exists and such that .we choose . *that is , there exists a set and such that . since , it holds that .hence and we choose .* that is , there exists a set and such that . since , it holds that .hence and we choose .* that is , there exists a set where and such that .since the collection is a cover of , there exists some in such that .hence , we get that and we choose .we show that if has a p - repair different from ( from lemma [ pareto : l : greedy ] ) then there is a solution to .the proof of this direction consists of several lemmas , and the dependencies between them are described in figure [ fig : lemmas - onlyif ] .for example ,the proof of lemma [ pareto : l : all ( u , u ) together ] is based on lemmas [ pareto :l : notmiddle r2(xx , f_u ) ] , [ pareto : l : ( u , u ) not in k , ( u , fu ) in k ] and [ pareto :l : ( u , f_u ) in k , ( xu , u ) in k ] . let , and .assume , by way of contradiction , that .since is consistent and is inconsistent with , we obtain that .since , it holds that must contain a fact that is inconsistent with ( but is consistent with ) .therefore , must agree with on .that leads to a contradiction since there is no such fact .let , and where .assume that .since for all , and , we have that , the p - repair must contain a fact that is inconsistent with ( but is consistent with ) .since is consistent , must agree with on .thus , must be the fact . replacing both facts and with results in a pareto improvement of which leads to a contradiction since is a p - repair .assume . assume , by way of contradiction , that .since is maximal , it must contain a fact that is inconsistent with .if agrees with on then it can only be of type .that is , the only possibility is that which leads to a contradiction . if agrees with on , it can only be of type which leads to a contradiction since by lemma [ pareto : l : notmiddle r2(xx , f_u ) ] , such a fact can not be in a p - repair .assume . since is consistent , .it follows from that must contain a fact that is inconsistent with .since is consistent , is consistent with .therefore , must agree with on . the possible types for are and . by lemme [ pareto :l : notmiddle r2(xx , u ) ] , is not of type .thus , must be of type .therefore , for such that ( there exists such since the union of sets of is ) .assume for all .assume , by way of contradiction , that .by lemma [ pareto : l : ( u , u ) not in k , ( u , fu ) in k ] , . by lemma [ pareto :l : ( u , f_u ) in k , ( xu , u ) in k ] , there exists such that and .this is a contradiction .let and assume .assume , by way of contradiction , there exists such that and . 
by lemma [ pareto :l : ( u , u ) not in k , ( u , fu ) in k ] , .thus , by lemma [ pareto : l : ( u , f_u ) in k , ( xu , u ) in k ] we have that for some such that . note that since , there must be a fact in that is inconsistent with and is consistent with .the only such a fact is ( i.e , ) .this is a contradiction since and is consistent .let and assume , by way of contradiction , that .by lemma [ pareto :l : all ( u , u ) together ] , for all . since is consistent , it does not contain facts of the form for all ( since the fact is inconsistent with ) .moreover , does not contain facts of the form where and ( for a similar reason ) . by lemma [ pareto :l : notmiddle r2(xx , f_u ) ] ( respectively , [ pareto : l : notmiddle r2(xx , u ) ] ) , does not contain facts of the form ( respectively , ) where and .it holds that is maximal and thus it contains all of the facts of type .thus , we conclude that is exactly which leads to a contradiction .let us denote and assume without loss of generality that .we prove by induction on that for all .assume .since , there must be a fact that is inconsistent with .since is consistent , must agree with on . by lemma [ pareto :l : notmiddle r2(xx , f_u ) ] , is not of the form for and by lemma [ pareto : l : notmiddle r2(xx , u ) ] is also not of the form for .therefore , must be .next , we show that encodes a solution for . specifically , we contend that there is an exact cover of by , namely , that is defined as follows : if and only if for some . let and assume , by way of contradiction , that for all , it holds that . by our assumption , for all such that .note that since the union of the sets in is , there exists a set such that .thus , the definition of the set implies that for all such that , we have that . by lemma [ pareto :l : ( x_x , x ) not in k , ( x , x ) in k ] , . by lemma [ pareto :l : ( u , u ) not in k ] , this is a contradiction .let where .assume , by way of contradiction , that there exists .by s definition , for some .lemma [ pareto :l : whole set left column ] implies that for all , it holds that .similarly , for all , we have that . since , both facts and are in .this is a contradiction to s consistency .we construct a reduction from cnf satisfiability to p - categoricity .the input to cnf is a formula with the free variables , such that has the form where each is a clause .each clause is a conjunction of variables from the set .the goal is to determine whether there is a true assignment that satisfies .given such an input , we will construct the input for p - categoricity .for each and , contains the following facts : * if appears in clause , * if appears in clause , and * , where , if neither nor appear in clause .our construction is illustrated in figure [ fig : paretohard6 ] for the formula : .observe that the subinstance that consists of the facts for all , is the only c - repair . to complete the proof, we will show that is satisfiable if and only if has a p - repair different from .assume is satisfiable .that is , there exists an assignment that satisfies .we claim that the subinstance that consists of the facts for all is a p - repair ( that is different from ) . is consistent since is an assignment ( i.e. 
, each has exactly one value and the constraint is satisfied ) .for the same reason , is maximal ( facts of the form can not be added to because of the constraint ) .it is left to show that does not have a pareto improvement .assume , by way of contradiction , that it does .that is , there exists a fact such that for every .note that follows from s definition , must be of the form .nevertheless , this implies the clause is not satisfied by which leads us to the conclusion that is not satisfied by .assume there is a p - repair different from .since is different from , it must contain a fact for some .since is consistent and the fd is in , it holds that for all facts in , we have that =\odot ] . by s definition , we have that is a global improvement of which is in contradiction to being a g - repair. 2 . for each fact , it holds that = 0 ] , there is no such fact and as a consequence no such .hence for all , we have that .moreover , since is maximal , it must contain a fact for each .note that contains the fd .this insures that can not contain both of the facts and .this implies that encodes an assignment for the variables .this assignment is given by since is a `` yes '' instance , there exists an assignment for the variables that together with satisfies .let be the subinstance of that consists of the facts for all and for all .note that is consistent .we contend that is a global improvement of .since and are disjoint , it suffices to show that for every fact there is a fact such that .let .if is of the form then we choose .if is of the form , then we choose where is the variable of a literal that satisfies under the union of and .we conclude that is a global improvement of , that is , is not a g - repair in contradiction to our assumption .assume that is the only g - repair of .we contend that for every assignment to there exists an assignment to such that the two satisfy .let be an assignment for and let be a consistent subinstance of that consists of the facts for and where .since is the only g - repair , it holds that has a global improvement .let us denote such a global improvement by .assume , without loss of generality , that is maximal ( if not , we can extend with additional facts ) .by s definition , must consist of facts of the form .since the fd is in , for each fact , it holds that =1 $ ] . since is maximal andsince is in , we have that encodes an assignment for and .this assignment is given by since is a global improvement of , it must contain the fact whenever is in ( this is true since no other fact has a priority over ) .therefore extends .finally we observe that every is satisfied by . 
a satisfying literal is one that corresponds to a fact that satisfies .we conclude that is a `` yes '' instance as claimed .recall the schema where consists of a single ternary relation and .we define a fact - wise reduction , using the constants .let .we define by where for all it is left to show that is a fact - wise reduction .to do so , we prove that is well defined , is injective and preserves consistency and inconsistency .it suffices to show that each is well - defined .we show that the sets in the definition of are pairwise disjoint .indeed , is disjoint from the sets , and .moreover , is a subset of and therefore it is disjoint from and .clearly , and are disjoint .hence , each is well - defined .let where and .assume that .let us denote and .note that is not empty since the fd is nontrivial .moreover , and are not empty since each of and contains an attribute that is in none of the other three sets. therefore , there are , and such that , and .hence , implies that , and .therefore we obtain , and which implies .assume is consistent w.r.t . we prove that is consistent w.r.t .note that and agree on since for each we have that , regardless of the input . since is consistentw.r.t , it holds that . by the definition of and since , we have that and agree on .hence , satisfies the constraint .assume that and agree on . by the definition of ,since is not empty , it holds that . since is consistent w.r.t , the fact that implies that also ( due to the constraint ) .hence , .this implies and that is consistent w.r.t .* and do not agree on .it holds , by s definition , that and agree on . nevertheless ,since and do not agree on , we have that .hence and do agree on .that is , the constraint is not satisfied , which leads us to the conclusion that is inconsistent w.r.t . * and agree on .since is inconsistent w.r.t , we have that and agree on ( i.e. , ) but disagree on ( i.e. , ) .note that and agree on since and .nevertheless , they do not agree on since and the set is not empty .that is , the constraint is not satisfied which leads us to the conclusion that is inconsistent w.r.t .
|
in its traditional definition , a repair of an inconsistent database is a consistent database that differs from the inconsistent one in a `` minimal way . '' often , repairs are not equally legitimate , as it is desired to prefer one over another ; for example , one fact is regarded more reliable than another , or a more recent fact should be preferred to an earlier one . motivated by these considerations , researchers have introduced and investigated the framework of preferred repairs , in the context of denial constraints and subset repairs . there , a priority relation between facts is lifted towards a priority relation between consistent databases , and repairs are restricted to the ones that are optimal in the lifted sense . three notions of lifting ( and optimal repairs ) have been proposed : pareto , global , and completion . in this paper we investigate the complexity of deciding whether the priority relation suffices to clean the database unambiguously , or in other words , whether there is exactly one optimal repair . we show that the different lifting semantics entail highly different complexities . under pareto optimality , the problem is conp - complete , in data complexity , for every set of functional dependencies ( fds ) , except for the tractable case of ( equivalence to ) one fd per relation . under global optimality , one fd per relation is still tractable , but we establish -completeness for a relation with two fds . in contrast , under completion optimality the problem is solvable in polynomial time for every set of fds . in fact , we present a polynomial - time algorithm for arbitrary conflict hypergraphs . we further show that under a general assumption of transitivity , this algorithm solves the problem even for global optimality . the algorithm is extremely simple , but its proof of correctness is quite intricate .
|
solar faculae , usually called plages in the photosphere or calcium and hydrogen flocculi in the chromosphere , are often observed in the solar atmosphere . they may play a significant role in energy exchange processes between layers of the solar atmosphere due to their prevalence . one of the possible ways to transport energy is waves propagating in faculae . the facular oscillations have been actively studied since the 1960s . found no evidence of a difference between the velocity amplitudes underlying weak plages or the chromospheric network , and regions free of calcium flocculi . there was only a slight indication that the periods are a little longer in photospheric regions that underlie the network . found that the amplitudes of five - minute oscillations are about 25% weaker in regions with a significant magnetic field ( g ) . compared power spectra of velocities in plages with velocity power spectra of the quiet photosphere and suggested that photospheric oscillations are not gravity waves . found that the power of five - minute range oscillations can both increase and decrease in faculae . the analysis of line - of - sight ( los ) velocity oscillations in the fei 15648 and 15652 spectral lines revealed that the dominant oscillatory signal in the velocity data is due to five - minute range oscillations . also temporal variations of the magnetic - field strength were observed which exhibit a possible five - minute and a nine - to - ten - minute oscillation . and found that the time delay between photospheric ( sii 10827 ) and chromospheric ( hei 10830 ) five - minute oscillations in faculae is 300 - 500 seconds . in their opinion , this is evidence of linear vertical propagation of acoustic waves . noticed that the five - minute los - velocity oscillations observed in h and fei 6569 show ambiguous phase lags in faculae . this article is the third in a series dealing with the investigation of features of oscillations in faculae . it was preceded by and , hereafter paper i and paper ii , respectively . in this article we focus on the determination of the time lag between signals of the los velocity measured in the hei 10830 and sii 10827 spectral lines . direct measurements of the time lag between chromospheric and photospheric signals of the los velocity are of interest to test the assumption that oscillations propagate upwards . the observational spectral data analyzed here were obtained with the horizontal solar telescope of the sayan solar observatory . the photoelectric guider of the telescope is capable of tracking the solar image with an accuracy of 1 arcsecond for several hours of observations and compensates for the rotation of the sun . we used a princeton instruments rte / ccd 256h camera 256 pixels with a pixel size of 24 microns . one pixel corresponds to 0.24 arcsecond along the entrance slit of the spectrograph and about 12 m along the dispersion . thus , the spectral snapshot contains information about a spatial region of about 60 arcseconds . the real resolution of the telescope is 1 - 1.5 arcseconds because of the earth's atmosphere . seeing and stray light were assessed according to the contrast of granulation and the limb in the visible . the intensity of stray light in the infrared range at the altitude of the observatory ( 2000 m ) is low .
for 10830its intensity is roughly 16 times less than for 5000 .we made observations in the hei 10830 and sii 10827 spectral lines .the los velocity in the hei and sii lines was measured by the method that is known as the doppler compensation method .a pair of virtual slits were placed symmetrically on the spectrogram on both sides of the line centre ( .2 for hei and .18 for sii ) in such a way that the corresponding intensities and could be the same .intensity in hei was calculated as , where continuum intensity .when the spectral line was shifted , the virtual slits were displaced in the same direction , equalizing the intensities .the displacement is proportional to the doppler velocity .the initial position of the virtual slits ( zero point ) was determined relative to the telluric line 10832 , that was used to eliminate the spectrograph noise . with a decrease in distance between the virtual slits ( up to .1 ) , no significant changes in hei signal were noted . only insignificant changes in signal / noise ratio , associated with a decrease in the steepness of the working part of the line profile ,were noted . for sii 10827the picture was similar .for the both spectral lines behaviour of the los - velocity signal did not change appreciably .the width of virtual slits was 0.06 for both lines .compared to undisturbed regions , the depth of the hei 10830 line profile in faculae sharply increases ( figure [ f - faculafeature]a ) . according to our observations , this increase in differentfaculae varies from 2 to 5 .this feature of the hei line is very useful for precise pointing at faculae near the disk centre . at the beginning , we chose an object with the use of images in the caiih line and determined its coordinates .then we slowly scanned the chosen region and corrected its position in the spectrograph slit , using the working camera and taking the maximum depth of the hei 10830 line into consideration .we also should take into account the fact that the depth of this line increases in filaments .that is why we used h filtergrams for additional control . in most cases ,the facular region was rather inhomogeneous and occupied the major part of the entrance aperture .compact faculae were observed more seldom . in the present analysiswe have used faculae in which the residual intensity in the core of hei 10830 did not exceed 0.8 for the space of 20 .it is noteworthy that space time diagrams of the hei 10830 intensity can be used to control position of the observed object in the spectrograph slit throughout a time series ( figure [ f - faculafeature]b ) . in the summer 2010, we obtained 33 time series for 24 faculae near the disk centre ; this allowed us to avoid the influence of projection effects when comparing photospheric and chromospheric signals . the most complete and multiple - factor analysis given in the articlewas carried out on basis of nine time series .we selected those series that corresponded to better seeing and minimal blurring .in addition , we took into account the following criteria : the remoteness from sunspot no less than 60 , the facular size no less than 20 , and homogeneity of facula ( the residual intensity in the core of hei 10830 no more than 0.8 ) , as well as the duration of the series no less than 50 minutes .preprocessing of spectrograms consisted of standard procedures : removal of the dusteffect and determination of the flat field . 
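the doppler compensation measurement described above lends itself to a short numerical sketch . the routine below is a hedged illustration , not the observatory software : the rest wavelength , the trial - shift grid and the interpolation - free slit averaging are assumptions introduced here , while the slit half - separation ( 0.2 for hei , 0.18 for sii ) and the slit width ( 0.06 ) follow the values quoted in the text , taken here to be in angstroms .

import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def doppler_compensation(wavelength, intensity, line_centre, half_sep, slit_width):
    # slide a pair of virtual slits, placed symmetrically at line_centre +/- half_sep,
    # until the intensities measured through them are equal; the common displacement
    # is proportional to the line-of-sight velocity
    def slit_intensity(centre):
        mask = np.abs(wavelength - centre) <= slit_width / 2.0
        return intensity[mask].mean()

    shifts = np.linspace(-0.5, 0.5, 201)          # trial shifts in angstrom (assumed range)
    imbalance = [slit_intensity(line_centre + s - half_sep)
                 - slit_intensity(line_centre + s + half_sep) for s in shifts]
    best = shifts[np.argmin(np.abs(imbalance))]   # shift that equalises the two slits
    return C_KMS * best / line_centre             # doppler velocity in km/s

# example with the hei 10830 parameters quoted above (half separation 0.2, slit width 0.06);
# `wavelength` and `intensity` would come from one row of a spectrogram, and 10830.3 is an
# assumed rest wavelength for the line:
# v = doppler_compensation(wavelength, intensity, 10830.3, 0.2, 0.06)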
using special programmes , we then plotted space time diagrams of the hei 10830 intensity and diagrams showing the spatial distribution of different - frequency modes in the 1 - 8 mhz range for photospheric and chromospheric signals of the los velocity . with the use of these diagrams , we determined facular fragments in which five - minute oscillations are present both in the photosphere and chromosphere . we detected five - minute oscillations in the 2.5 - 4.5 mhz band . the significance level for the power of los - velocity oscillations in this band was 95% . the los - velocity signals in the facular fragments of 2 arcseconds located at a distance of 5 arcseconds from the boundaries were then subjected to frequency filtering in the 1 mhz band centred at 3.5 mhz . our chosen parameters of frequency filtering corresponded to those of in order to compare results properly . the average amplitude of the photospheric signal was then increased up to the average amplitude of the chromospheric signal , and the average lag of the chromospheric signal relative to the photospheric one was determined using cross - correlation . table [ t - timeseries ] presents the obtained lag values and the amplitude multiplication factor _ k _ of the photospheric los - velocity signal for each time series ( columns : no . , date , time , disk , cadence , duration , _ k _ , lag ) . the coefficient relates only to the los velocities filtered in the 3 - 4 mhz band . the values of _ k _ for different faculae are in the interval from 1.2 to 8 and are presented in table [ t - timeseries ] . one could suppose that this spread of _ k _ is not random and is probably connected with a feature of magnetic - field structure ( magnetic - field strength , filling factor , inclination to vertical , magnetic shadow , and the presence of an admixture of opposite polarity ) . apparently more attention should be paid to this problem in future . to determine the lag more accurately ( figure [ f - phaserelation ] ) , we shifted the photospheric signal to later times in steps of five seconds up to a maximum of 400 seconds and found the cross - correlation maximum that corresponded to the average time lag for each analysed time series . as would be expected , the chromospheric signal of five - minute oscillations of the los velocity lags behind the photospheric one in the majority of faculae under investigation . in figures [ f - phaserelation ] and [ f - signalsexemple ] , graphs of unshifted signals of los velocity are given . the time lag differs for different series and sometimes can vary within one time series ( figure [ f - phaserelation ] ) . however , it never reaches values presented in . according to our measurements , the average lag is about 50 seconds . in table [ t - timeseries ] , the spread of lag values obtained for different parts of each facula is given . the spread depicts the maximum and minimum of the lag values calculated for several spatial elements of the facula . only those elements were taken into account in which five - minute oscillations are well detected in both the photosphere and the chromosphere . it is worthy of note that the wavetrain structure of los velocity excludes possible error in a value divisible by period when making quantitative estimation of the lag ( _ e.g.
_ see figure [ f - phaserelation ] ) . we also tried to narrow the frequency filtering band of five - minute oscillations from 1 mhz down to 0.2 mhz . no significant difference was observed afterwards , and the lag did not change ( figure [ f - signalsexemple ] ) . we calculated the phase - difference spectra for each facula studied ( figure [ f - phasedifference ] ) . each mark on the figure [ f - phasedifference ] means the phase difference of los - velocity oscillations ( hei - sii ) for each 2 - arcsecond element of the facula along the entrance slit and particular frequency . each phase - difference spectrum was plotted over the whole facula ( from 10 to 30 points ) . as was to be expected , phase - difference spectra seem ambiguous as a result of the smallness of the average lag measured . only in four spectra ( 1 , 4 , 7 , 9 ) weak indications of propagating oscillations can be seen . we do not exclude the possibility that our measurements of the photospheric los velocity contain a contribution from small non - magnetic inclusions located along the entrance aperture . thus , our investigation of facular oscillations in the hei 10830 and sii 10827 lines enables two propositions to be made . the first proposition : we are in full agreement with and about the fact that spectra of the los - velocity oscillations observed in the hei and sii lines look very similar . the second proposition : in contradiction with the results obtained by these authors , the temporal lag of the chromospheric signal does not exceed a period and is about 50 seconds on average . the extreme similarity between spectra can be interpreted as a result of the fact that all oscillations from the upper photosphere successfully reach the upper chromosphere . at the same time , this similarity can be caused by closer heights of the hei and sii line formation in facular regions . possible decrease in the height of the hei 10830 line formation in faculae was noticed by . in a previous paper of this series we showed how similar oscillation spectra in faculae can be when the difference in heights is only 300 - 400 km , using observations in the baii 4554 and fei 4551 lines . noteworthy is that the spatial averaging over the entire facula in this case does not make the spectra dissimilar . if the velocity of the upward wave propagation is 4 - 6 ( as in ) , the difference in heights of the hei and sii line formation in faculae will be about 250 km , considering our measured time lag of about 50 seconds . at the same time , if the difference in formation heights of these lines is 1500 km ( as in ) , the velocity of the upward wave propagation will be about 30 . this velocity is six times greater than that mentioned above . earlier the same problems occurred while researching oscillations observed at two levels of the undisturbed atmosphere . and explained a high phase velocity of about 30 in the chromosphere by restoring forces in the presence of magnetic fields . , analyzing observations in hei 10830 and mgi 8807 , discussed the two alternatives that we cited above for faculae . in addition , considered a variant in which it is supposed that the measured phase - difference forms in the range between the temperature minimum and 1000 km . they suggested that the oscillations become standing at heights of more than 1000 km . found that variations of intensity in several uv spectral lines , observed with the sumer onboard soho , demonstrated features of evanescent or standing waves .
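the lag determination and the height - versus - velocity arithmetic discussed above can be summarised in a short sketch . the band limits ( 3.5 mhz centre , 1 mhz width ) , the five - second shift step and the 400 - second maximum shift are taken from the text ; the fft - based filter , the use of a pearson correlation coefficient and the km / s units assumed for the propagation speed are illustrative choices made here , not the authors' processing chain .

import numpy as np

def bandpass(signal, dt, f0=3.5e-3, half_width=0.5e-3):
    # keep fourier components within f0 +/- half_width (hz); a simple stand-in
    # for the frequency filtering described above, not the authors' filter
    freqs = np.fft.rfftfreq(signal.size, d=dt)
    spec = np.fft.rfft(signal - signal.mean())
    spec[(freqs < f0 - half_width) | (freqs > f0 + half_width)] = 0.0
    return np.fft.irfft(spec, n=signal.size)

def lag_by_cross_correlation(v_phot, v_chrom, dt, step=5.0, max_lag=400.0):
    # shift the photospheric signal towards later times in `step`-second increments
    # (up to `max_lag`) and return the shift maximising the correlation with the
    # chromospheric signal; dt is the sampling cadence in seconds
    best_lag, best_corr = 0.0, -np.inf
    for lag in np.arange(0.0, max_lag + step, step):
        k = int(round(lag / dt))
        if k >= v_phot.size - 1:
            break
        corr = np.corrcoef(v_phot[:v_phot.size - k], v_chrom[k:])[0, 1]
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag, best_corr

# height/velocity arithmetic quoted in the text (km/s units assumed here):
# a 50 s lag at 4-6 km/s gives a 200-300 km height difference (about 250 km on average),
# while a 1500 km formation-height difference over 50 s would require about 30 km/s.
for v in (4.0, 6.0):
    print(f"height difference for {v} km/s: {v * 50.0:.0f} km")
print(f"speed for 1500 km height difference: {1500.0 / 50.0:.0f} km/s")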
if it is assumed that , within the cavity observed , both standing and propagating waves exist at the same time ( quite an ordinary situation for any real resonator being the source of oscillations ) , then the phase lag being measured will depend on the contribution of each of the components . their rate is determined by the degree of penetrability of the boundaries of the resonator , which in real conditions changes following local changes in temperature , pressure and the magnetic field . previously , indicated such a possibility when explaining the inadequacy of phase lags obtained from observations in h and fei 6569 . our preliminary measurements of the phase difference between velocity and intensity oscillations showed that a lag of 90 degrees is more characteristic of the signals measured in hei 10830 than of those measured in sii 10827 . in the case of acoustic waves , this may imply that standing waves make a major contribution to oscillations observed at the level of hei 10830 formation . there is a contradiction that can be resolved with supplementary information specifying both the formation depths of these lines and a possible propagation velocity of oscillations . in the future it is essential to obtain useful information on the structure and dynamics of the magnetic field in the faculae being studied . it would be useful to investigate the differences in spectral oscillations of faculae in active regions ( ar ) and faculae outside ar ( for example , in polar regions ) in more detail . it is also desirable to carry out a cycle of observations of limb faculae simultaneously with spectroscopic and filter methods . in any case , one should consider the results of our analysis carried out for nine faculae . we carried out an investigation of features of periodic oscillations in nine facular regions for two heights ( hei 10830 and sii 10827 ) . power spectra of photospheric and chromospheric oscillations of the los velocity were found to be very similar . according to our measurements , the lag of the chromospheric signal relative to the photospheric one varies from -12 to 100 seconds and is , on average , about 50 seconds for five - minute oscillations . the features that we have revealed can be interpreted in two ways : either the difference in formation heights of the hei and sii lines in faculae is substantially smaller than the generally accepted estimate ( 1500 km ) , or the propagation velocity of five - minute oscillations above faculae is several times greater than the commonly used value of 4 - 6 . another possible explanation supposes that within the cavity being observed the standing and propagating waves exist at one and the same time . in this instance the phase lag measured will depend on the contribution of each of the components . the work is supported by rfbr grants 08 - 02 - 91860-rs - a and 10 - 02 - 00153-a , the federal agency for science and innovation ( state contract 02.740.11.0576 ) and basic research program no . 16 ( part 3 ) of the presidium of the russian academy of sciences .
|
an analysis of line - of - sight velocity oscillations in nine solar faculae was undertaken with the aim of studying the phase relations between chromospheric ( hei 10830 line ) and photospheric ( sii 10827 line ) five - minute oscillations . we found that the time lag of the chromospheric signal relative to the photospheric one varies from -12 to 100 seconds and is about 50 seconds on average . we assume that the small observed lag can have three possible explanations : _ i _ ) convergence of the formation levels of hei 10830 and sii 10827 in faculae ; _ ii _ ) a significant increase of the five - minute oscillation propagation velocity above faculae ; _ iii _ ) the simultaneous presence of standing and travelling waves .
|
with the recent advancement in computer vision and in natural language processing ( nlp ) , image question answering ( qa ) becomes one of the most active research areas . unlike pure language based qa systems that have been studied extensively in the nlp community ,image qa systems are designed to automatically answer natural language questions according to the content of a reference image .1.0 1.0 most of the recently proposed image qa models are based on neural networks .a commonly used approach was to extract a global image feature vector using a convolution neural network ( cnn ) and encode the corresponding question as a feature vector using a long short - term memory network ( lstm ) and then combine them to infer the answer .though impressive results have been reported , these models often fail to give precise answers when such answers are related to a set of _ fine - grained _ regions in an image . by examining the image qa data sets , we find that it is often that case that answering a question from an image requires multi - step reasoning .take the question and image in fig .[ fig : model_example ] as an example .there are several objects in the .the san consists of three major components : ( 1 ) the image model , which uses a cnn to extract high level image representations , e.g. one vector for each region of the image ; ( 2 ) the question model , which uses a cnn or a lstm to extract a semantic vector of the question and ( 3 ) the stacked attention model , which locates , via multi - step reasoning , the image regions that are relevant to the question for answer prediction . as illustrated in fig .[ fig : vqa_attention ] , the san first uses the question vector to query the image vectors in the first visual attention layer , then combine the question vector and the retrieved image vectors to form a refined query vector to query the image vectors again in the second attention layer .the higher - level attention layer gives a sharper attention distribution focusing on the regions that are more relevant to the answer .finally , we combine the image features from the highest attention layer with the last query vector to predict the answer .the main contributions of our work are three - fold .first , we propose a stacked attention network for image qa tasks .second , we perform comprehensive evaluations on four image qa benchmarks , demonstrating that the proposed multiple - layer san outperforms previous state - of - the - art approaches by a substantial margin .third , we perform a detailed analysis where we visualize the outputs of different attention layers of the san and demonstrate the process that the san takes multiple steps to progressively focus the attention on the relevant visual clues that lead to the answer .image qa is closely related to image captioning . in ,the system first extracted a high level image feature vector from googlenet and then fed it into a lstm to generate captions .the method proposed in went one step further to use an attention mechanism in the caption generation process .different from , the approach proposed in first used a cnn to detect words given the images , then used a maximum entropy language model to generate a list of caption candidates , and finally used a deep multimodal similarity model ( dmsm ) to re - rank the candidates .instead of using a rnn or a lstm , the dmsm uses a cnn to model the semantics of captions . 
unlike image captioning , in image qa, the question is given and the task is to learn the relevant visual and text representation to infer the answer . in order to facilitate the research of image qa , several data sets have been constructed in either through automatic generation based on image caption data or by human labeling of questions and answers given images . among them , the image qa data set in is generated based on the coco caption data set . given a sentence that describes an image , the authors first used a parser to parse the sentence , then replaced the key word in the sentence using question words and the key word became the answer . created an image qa data set through human labeling .the initial version was in chinese and then was translated to english . also created an image qa data set through human labeling .they collected questions and answers not only for real images , but also for abstract scenes .several image qa models were proposed in the literature . used semantic parsers and image segmentation methods to predict answers based on images and questions . both used encoder - decoder framework to generate answers given images and questions .they first used a lstm to encoder the images and questions and then used another lstm to decode the answers .they both fed the image feature to every lstm cell . proposed several neural network based models , including the encoder - decoder based models that use single direction lstms and bi - direction lstms , respectively .however , the authors found the concatenation of image features and bag of words features worked the best . first encoded questions with lstms and then combined question vectors with image vectors by element wise multiplication . used a cnn for question modeling and used convolution operations to combine question vectors and image feature vectors .we compare the san with these models in sec .[ sec : experiments ] .to the best of our knowledge , the attention mechanism , which has been proved very successful in image captioning , has not been explored for image qa .the san adapt the attention mechanism to image qa , and can be viewed as a significant extension to previous models in that multiple attention layers are used to support multi - step reasoning for the image qa task .the overall architecture of the san is shown in fig . [fig : vqa_attention ] . we describe the three major components of san in this section : the image model , the question model , and the stacked attention model .the image model uses a cnn to get the representation of images .specifically , the vggnet is used to extract the image feature map from a raw image : unlike previous studies that use features from the last inner product layer , we choose the features from the last pooling layer , which retains spatial information of the original images .we first rescale the images to be pixels , and then take the features from the last pooling layer , which therefore have a dimension of , as shown in fig .[ fig : cnn_img ] . is the number of regions in the image and is the dimension of the feature vector for each region .accordingly , each feature vector in corresponds to a pixel region of the input images .we denote by ] , where is the one hot vector representation of word at position , we first embed the words to a vector space through an embedding matrix .then for every time step , we feed the embedding vector of words in the question to lstm : as shown in fig . 
[fig : lstm ] , the question ` what are sitting in the basket on a bicycle ` is fed into the lstm .then the final hidden layer is taken as the representation vector for the question , i.e. , . in this study, we also explore to use a cnn similar to for question representation .similar to the lstm - based question model , we first embed words to vectors and get the question vector by concatenating the word vectors : . \vspace{-0.5cm}\end{aligned}\ ] ] then we apply convolution operation on the word embedding vectors .we use three convolution filters , which have the size of one ( unigram ) , two ( bigram ) and three ( trigram ) respectively .the -th convolution output using window size is given by : the filter is applied only to window of size . is the convolution weight and is the bias .the feature map of the filter with convolution size is given by : .\end{aligned}\ ] ] then we apply max - pooling over the feature maps of the convolution size and denote it as .\end{aligned}\ ] ] the max - pooling over these vectors is a coordinate - wise max operation . for convolution feature maps of different sizes , we concatenate them to form the feature representation vector of the whole question sentence : ,\end{aligned}\ ] ] hence is the cnn based question vector .the diagram of cnn model for question is shown in fig .[ fig : cnn ] .the convolutional and pooling layers for unigrams , bigrams and trigrams are drawn in red , blue and orange , respectively .given the image feature matrix and the question feature vector , san predicts the answer via multi - step reasoning . in many cases , an answer only related to a small region of an image .for example , in fig .[ fig : example ] , although there are multiple objects in the and the question vector , we first feed them through a single layer neural network and then a softmax function to generate the attention distribution over the regions of the ] where , is the image representation dimension and is the number of image regions , is a dimensional vector. suppose and , then is an dimensional vector , which corresponds to the attention probability of each image region given .note that we denote by the addition of a matrix and a vector .since and both are vectors , the addition between a matrix and a vector is performed by adding each column of the matrix by the vector . based on the attention distribution , we calculate the weighted sum of the image vectors , each from a region , as in eq .[ eq : weighted_sum ] .we then combine with the question vector to form a refined query vector as in eq .[ eq : query ] . is regarded as a refined query since it encodes both question information and the visual information that is relevant to the potential answer : compared to models that simply combine the question vector and the global image vector , attention models construct a more informative since higher weights are put on the visual regions that are more relevant to the question .however , for complicated questions , a single attention layer is not sufficient to locate the correct region for answer prediction .for example , the question in fig .[ fig : model_example ] ` what are sitting in the basket on a bicycle ` refers to some subtle relationships among multiple objects in an image .therefore , we iterate the above query - attention process using multiple attention layers , each extracting more fine - grained visual attention information for answer prediction . 
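before the formal recurrence below , the convolutional question encoder described earlier in this section ( unigram , bigram and trigram filters followed by max - pooling and concatenation ) can be sketched in a few lines . all sizes ( embedding dimension , number of filters , question length ) are made - up placeholders , since the paper's actual values are not visible in this excerpt , and biases are omitted for brevity .

import numpy as np

rng = np.random.default_rng(0)

def cnn_question_encoder(word_vectors, filter_banks):
    # word_vectors: (T, d) array of word embeddings for a T-word question;
    # filter_banks[n]: (n*d, m_n) weight matrix for window size n (1, 2, 3).
    # for each window size: convolve, apply tanh, max-pool over positions,
    # then concatenate the pooled feature maps into the question vector.
    pooled = []
    T, d = word_vectors.shape
    for n, W in filter_banks.items():
        windows = np.stack([word_vectors[t:t + n].reshape(-1)
                            for t in range(T - n + 1)])      # (T-n+1, n*d)
        feature_map = np.tanh(windows @ W)                    # (T-n+1, m_n)
        pooled.append(feature_map.max(axis=0))                # max over positions
    return np.concatenate(pooled)                             # question vector

# illustrative usage with made-up sizes: an 8-word question, 64-d embeddings,
# 32 filters per window size, giving a 96-dimensional question vector here
words = rng.standard_normal((8, 64))
banks = {n: rng.standard_normal((n * 64, 32)) for n in (1, 2, 3)}
v_q = cnn_question_encoder(words, banks)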
formally , the sans take the following formula : for the -th attention layer , we compute : where is initialized to be .then the aggregated image feature vector is added to the previous query vector to form a new query vector : that is , in every layer , we use the combined question and image vector as the query for the image .after the image region is picked , we update the new query vector as .we repeat this times and then use the final to infer the answer : fig .[ fig : example ] illustrates the reasoning process by an example . in the first attention layer ,the model identifies roughly the area that are relevant to ` basket , bicycle ` , and ` sitting in ` . in the second attention layer, the model focuses more sharply on the region that corresponds to the answer ` dogs ` .more examples can be found in sec .[ sec : experiments ] .we evaluate the san on four image qa data sets .* daquar - all * is proposed in .there are training questions and test questions .these questions are generated on and images respectively .the images are mainly indoor scenes .the questions are categorized into three types including _ object _ , _ color _ and _number_. most of the answers are single words . following the setting in , we exclude data samples that have multiple words answers . the remaining data set covers of the original data set . * daquar - reduced * is a reduced version of daquar - all .there are training samples and test samples .this data set is constrained to object categories and uses only test images .the single word answers data set covers of the original data set .* coco - qa * is proposed in .based on the microsoft coco data set , the authors first parse the caption of the image with an off - the - shelf parser , then replace the key components in the caption with question words for form questions .there are training samples and test samples in the data set .these questions are based on and images respectively .there are four types of questions including _ object _ , _ number _ , _ color _ , and _location_. each type takes , and of the whole data set , respectively .all answers in this data set are single word .* vqa * is created through human labeling .the data set uses images in the coco image caption data set . unlike the other data sets , for each image , there are three questions and for each question , there are ten answers labeled by human annotators .there are training questions and validation questions in the data set .following , we use the top most frequent answer as possible outputs and this set of answers covers of all answers .we first studied the performance of the proposed model on the validation set .following , we split the validation data set into two halves , val1 and val2 .we use training set and val1 to train and validate and val2 to test locally .the results on the val2 set are reported in table .[ tab : vqa ] .we also evaluated the best model , san(2 , cnn ) , on the standard test server as provided in and report the results in table .[ tab : vqa_server ] .we compare our models with a set of baselines proposed recently on image qa . since the results of these baselines are reported on different data sets in different literature , we present the experimental results on different data sets in different tables . 
for all four data sets ,we formulate image qa as a classification problem since most of answers are single words .we evaluate the model using classification accuracy as reported in .the reference models also report the wu - palmer similarity ( wups ) measure .the wups measure calculates the similarity between two words based on their longest common subsequence in the taxonomy tree .we can set a for wups , if the similarity is less than the threshold , then it is zeroed out . following the reference models, we use wups0.9 and wups0.0 as evaluation metrics besides the classification accuracy .the evaluation on the vqa data set is different from other three data sets , since for each question there are ten answer labels that may or may not be the same .we follow to use the following metric : , which basically gives full credit to the answer when three or more of the ten human labels match the answer and gives partial credit if there are less matches . for the image model, we use the vggnet to extract features . when training the san , the parameter set of the cnn of the vggnet is fixed .we take the output from the last pooling layer as our image feature which has a dimension of . for daquar and coco - qa , we set the word embedding dimension and lstm s dimension to be in the question model . for the cnn based question model, we set the unigram , bigram and trigram convolution filter size to be , , respectively .the combination of these filters makes the question vector size to be . for vqa dataset ,since it is larger than other data sets , we double the model size of the lstm and the cnn to accommodate the large data set and the large number of classes . in evaluation, we experiment with san with one and two attention layers .we find that using three or more attention layers does not further improve the performance . 
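for clarity, the vqa consensus metric described above can be written in a few lines. this is only a sketch of the scoring rule min(#matching human answers / 3, 1); the official evaluation additionally normalizes answer strings and averages over subsets of the ten annotations, which is omitted here.

```python
def vqa_accuracy(predicted, human_answers):
    """full credit if at least three of the ten human answers match, partial credit otherwise"""
    matches = sum(1 for a in human_answers if a == predicted)
    return min(matches / 3.0, 1.0)

# toy example with ten human labels for one question
labels = ["dog"] * 4 + ["puppy"] * 3 + ["animal"] * 3
print(vqa_accuracy("dog", labels))     # 1.0
print(vqa_accuracy("animal", labels))  # 1.0
print(vqa_accuracy("cat", labels))     # 0.0
```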
in our experiments, all the models are trained using stochastic gradient descent with momentum. the batch size is fixed to be . the best learning rate is picked using grid search. gradient clipping and dropout are used. the experimental results on daquar-all, daquar-reduced, coco-qa and vqa are presented in tables [tab : daquar_all_results] to [tab : vqa], respectively. our model names explain their settings: san is short for the proposed stacked attention networks, the value 1 or 2 in the brackets refers to using one or two attention layers, respectively, and the keyword lstm or cnn refers to the question model that the sans use.

daquar-all results, in percentage:

methods           accuracy   wups0.9   wups0.0
multi-world       7.9        11.9      38.8
language          19.1       25.2      65.1
language + img    21.7       28.0      65.0
cnn:
img-cnn           23.4       29.6      63.0
san(1, lstm)      28.9       34.7      68.5
san(1, cnn)       29.2       35.1      67.8
san(2, lstm)      *29.3*     34.9      68.1
san(2, cnn)       *29.3*     *35.1*    *68.6*
human             50.2       50.8      67.3

daquar-reduced results, in percentage:

methods           accuracy   wups0.9   wups0.0
multi-world       12.7       18.2      51.5
language          31.7       38.4      80.1
language + img    34.7       40.8      79.5
guess             18.2       29.7      77.6
bow               32.7       43.2      81.3
lstm              32.7       43.5      81.6
img+bow           34.2       45.0      81.5
vis+lstm          34.4       46.1      82.2
2-vis+blstm       35.8       46.8      82.2
img-cnn           39.7       44.9      83.1
san(1, lstm)      45.2       49.6      84.0
san(1, cnn)       45.2       49.6      83.7
san(2, lstm)      *46.2*     *51.2*    *85.1*
san(2, cnn)       45.5       50.2      83.6
human             60.3       61.0      79.0

coco-qa results, in percentage.

the experimental results in tables [tab : daquar_all_results] to [tab : vqa] show that the two-layer san gives the best results across all data sets and that the two kinds of question models in the san, lstm and cnn, give similar performance. for example, on daquar-all (table [tab : daquar_all_results]), both of the proposed two-layer sans outperform the two best baselines, the img-cnn in and the ask-your-neuron in, by and absolute in accuracy, respectively. a similar range of improvements is observed in the wups0.9 and wups0.0 metrics. we also observe significant improvements on daquar-reduced (table [tab : daquar_reduced_results]), i.e., our san(2, lstm) outperforms the img-cnn, the 2-vis+blstm, the ask-your-neurons approach and the multi-world by , , and absolute in accuracy, respectively. on the larger coco-qa data set, the proposed two-layer sans significantly outperform the best baselines from (img-cnn) and (img+bow and 2-vis+blstm) by 5.1% and 6.6% in accuracy (table [tab : coco_results]). table [tab : vqa_server] summarizes the performance of various models on vqa, which is the largest among the four data sets. the overall results show that our best model, san(2, cnn), outperforms the lstm q+i model, the best baseline from, by 4.8% absolute. the superior performance of the sans across all four benchmarks demonstrates the effectiveness of using multiple layers of attention. in order to study the strengths and weaknesses of the san in detail, we report performance at the question-type level on the two large data sets, coco-qa and vqa, in tables
[tab : coco_perclass] and [tab : vqa_server], respectively. we observe that on coco-qa, compared to the two best baselines, img+bow and 2-vis+blstm, our best model san(2, cnn) improves by 7.2% in the question type of _ color _, followed by 6.1% in _ objects _, 5.7% in _ location _ and 4.2% in _ number _. we observe a similar trend of improvements on vqa. as shown in table [tab : vqa_server], compared to the best baseline lstm q+i, the biggest improvement of san(2, cnn) is in the _ other _ type, 9.7%, followed by the 1.4% improvement in _ number _ and the 0.4% improvement in _ yes / no _. note that the _ other _ type in vqa refers to questions that usually have the form of ``what color, what kind, what are, what type, where'' etc., which are similar to the question types of _ color _, _ objects _ and _ location _ in coco-qa. the vqa data set has a special _ yes / no _ type of questions. the san only improves the performance on this type of questions slightly. this could be because the answer to a _ yes / no _ question is largely determined by the question itself, so better modeling of the visual information does not provide much additional gain. this also confirms a similar observation reported in, e.g., that using additional image information only slightly improves the performance in _ yes / no _, as shown in table [tab : vqa_server], q+i vs question, and lstm q+i vs lstm q. our results demonstrate clearly the positive impact of using multiple attention layers. in all four data sets, two-layer sans always perform better than the one-layer san. specifically, on coco-qa, on average the two-layer sans outperform the one-layer sans by 2.2% in the type of _ color _, followed by 1.3% and 1.0% in the _ location _ and _ objects _ categories, and then 0.4% in _ number _. this aligns with the order of the improvements of the san over the baselines. similar trends are observed on vqa (table [tab : vqa]), e.g., the two-layer san improves over the one-layer san by 1.4% for the _ other _ type of question, followed by a 0.2% improvement for _ number _, and is flat for _ yes / no _. in this section, we present an analysis to demonstrate that using multiple attention layers to perform multi-step reasoning leads to more fine-grained attention, layer by layer, in locating the regions that are relevant to the potential answers. we do so by visualizing the outputs of the attention layers for a sample set of images from the coco-qa test set. note that the attention probability distribution is of size and the original image is ; we up-sample the attention probability distribution and apply a gaussian filter to make it the same size as the original image. fig. [fig : vqa_more_examples] presents six examples. more examples are presented in the appendix. they cover types as broad as _ object _, _ numbers _, _ color _ and _ location _. for each example, the three images from left to right are the original image, the output of the first attention layer and the output of the second attention layer, respectively. the bright part of the image is the detected attention.
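the visualization step just described (up-sampling the region-level attention and smoothing it to image resolution) can be sketched as follows; the 14 x 14 grid, the 448 x 448 target size and the filter width are illustrative values, not necessarily the exact ones used for the figures.

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def attention_to_heatmap(p, grid=(14, 14), image_size=(448, 448), sigma=8.0):
    """turn a region-level attention distribution into an image-sized heat map"""
    att = p.reshape(grid)
    factors = (image_size[0] / grid[0], image_size[1] / grid[1])
    up = zoom(att, factors, order=1)             # up-sample to image resolution
    return gaussian_filter(up, sigma=sigma)      # smooth; bright areas mark the attended regions

p = np.random.default_rng(1).random(14 * 14)     # stand-in for a softmax attention output
p /= p.sum()
heatmap = attention_to_heatmap(p)                # overlay this on the original image
```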
across all those examples, we see that in the first attention layer , the attention is scattered on many objects in the image , largely corresponds to the objects and concepts referred in the question , whereas in the second layer , the attention is far more focused on the regions that lead to the correct answer .for example , consider the question ` what is the color of the horns ` , which asks the color of the horn on the woman s head in fig .[ fig : vqa_more_examples](f ) . in the output of the first attention layer, the model first recognizes a woman in the image . in the output of the second attention layer ,the attention is focused on the head of the woman , which leads to the answer of the question : the color of the horn is ` red ` .we randomly sample 100 images from the coco - qa test set that the san make mistakes .we group the errors into four categories : ( i ) the sans focus the attention on the wrong regions ( 22% ) , e.g. , the example in fig . [fig : vqa_wrong_examples](a ) ; ( ii ) the sans focus on the right region but predict a wrong answer ( 42% ) , e.g. , the examples in fig .[ fig : vqa_wrong_examples](b)(c)(d ) ; ( iii ) the answer is ambiguous , the sans give answers that are different from labels , but might be acceptable ( 31% ) .e.g. , in fig .[ fig : vqa_wrong_examples](e ) , the answer label is ` pot ` , but out model predicts ` vase ` , which is also visually reasonable ; ( iv ) the labels are clearly wrong ( 5% ) .e.g. , in fig .[ fig : vqa_wrong_examples](f ) , our model gives the correct answer ` trains ` while the label ` cars ` is wrong .in this paper , we propose a new stacked attention network ( san ) for image qa .san uses a multiple - layer attention mechanism that queries an image multiple times to locate the relevant visual region and to infer the answer progressively .experimental results demonstrate that the proposed san significantly outperforms previous state - of - the - art approaches by a substantial margin on all four image qa data sets .the visualization of the attention layers further illustrates the process that the san focuses the attention to the relevant visual clues that lead to the answer of the question layer - by - layer .
|
this paper presents stacked attention networks ( sans ) that learn to answer natural language questions from images . sans use semantic representation of a question as query to search for the regions in an image that are related to the answer . we argue that image question answering ( qa ) often requires multiple steps of reasoning . thus , we develop a multiple - layer san in which we query an image multiple times to infer the answer progressively . experiments conducted on four image qa data sets demonstrate that the proposed sans significantly outperform previous state - of - the - art approaches . the visualization of the attention layers illustrates the progress that the san locates the relevant visual clues that lead to the answer of the question layer - by - layer .
|
in 1811 , sir humphrey davy ( davy 1811 ) was the first to report the existence of clathrate , a variety of compounds in which water forms a continuous and known crystal structure with small cages .these cages trap guests , such as methane or ethane , needed to stabilize the water lattice .the two most common clathrate structures found in nature are known as structures i and ii , which differ in the type of water cages present in the crystal lattice ( sloan & koh 2008 ) .structure i has two types of cages , a small pentagonal dodecahedral cage , denoted 5 ( 12 pentagonal faces in the cage ) and a large tetrakaidecahedral cage , denoted 5 ( 12 pentagonal faces and 2 hexagonal faces in the cage ) .structure ii also has two types of cages , a small 5 cage and a large hexakaidecahedral cage , denoted 5 ( 12 pentagonal faces and 4 hexagonal faces in the cage ) .the type of structure that forms depends largely on the size of the guest molecule .for example , methane and ethane induce water to form structure i clathrate and propane structure ii clathrate ( sloan & koh 2008 ) . on titan , the temperature and atmospheric pressure conditions prevailing at the ground level permit clathrates formation when liquid hydrocarbons enter in contact with the exposed water ice ( mousis & schmitt 2008 ) . assuming an open porosity for titan s upper crust , clathrates made from hydrocarbons are even expected to be stable down to several kilometers from the surface ( mousis & schmitt 2008 ). an interesting feature of clathrates is that their formation induces a fractionation of the trapped molecules ( van der waals & platteuw 1959 ; lunine & stevenson 1985 ; mousis et al .this property has been used to suggest that the noble gas depletion observed in titan s atmosphere could result from their efficient sequestration in a global clathrate layer located in the near subsurface ( thomas et al .2007 ; mousis et al . 2011 ) .it has also recently been used to propose that the satellite s polar radius , which is smaller by several hundred meters than the value predicted by the flattening due to its spin rate , would result from the substitution of methane by ethane percolating in liquid form in clathrate layers potentially existing in the polar regions ( choukroun & sotin 2012 ) . in this paper, we investigate the interplay that may happen between an alkanofer , namely a reservoir of liquid hydrocarbons located in titan s subsurface , and a hypothetical clathrate reservoir that progressively forms when the liquid mixture diffuses throughout a preexisting porous icy layer .this porous layer might have been generated by cryovolcanic events resulting from the ascent of liquid from subsurface ocean ( mitri et al .2006 ) or from the destabilization of clathrates in the ice shell of titan ( tobie et al .2006 ) . in both cases , a highly porous icy material in contact with the atmosphereis generated , probably similar to basaltic lava flows ( mousis & schmitt 2008 ) .the cooling of cryolava is expected to take less than 1 yr to decrease down to titan s surface temperature ( lorenz 1996 ) , implying that it should be fast enough to allow preservation of the created porosity .hundreds of lakes and a few seas are observed to cover the polar regions of titan ( stofan et al .kraken mare and ligeia mare , namely the two largest seas of titan , have surface areas estimated to be ,000 km ( jaumann et al .2010 ) and 126,000 km ( mastrogiuseppe et al . 2014 ) , respectively . 
with an average depth of m, ligeia mare contains 10 kg of hydrocarbons , about 100 times the known terrestrial oil and gas reserves , but still only .4% of titan s atmospheric methane ( mastrogiuseppe et al .2014 ) . while a significant number of these lakes and seas should be regularly filled by hydrocarbon rainfalls ( turtle et al .2011 ) , some of them could be also renewed via their interconnection with alkanofers . using porous media properties inferred from huygens probe observations , hayes et al .( 2008 ) found that the timescales for flow into and out of observed lakes via subsurface transport are order of tens of years . because the porosity is not expected to evolve significantly over myr within the subsurface of titan ( kossacki & lorenz 1996 ), clathrates might form and equilibrate with liquid hydrocarbons well prior that the porosity reaches its close - off value .a fraction of these lakes may then result from the interaction between alkanofers and clathrate reservoirs through the ice porosity and possess a composition differing from that of lakes and rivers sourced by precipitation .as a liquid reservoir occupies a finite volume , the progressive transfer and fractionation of the molecules in the forming clathrate reservoir could alter the lakes chemical composition . in order to explore this possibility , we use a statistical thermodynamic model derived from the approach of van der waals & platteuw ( 1959 ) to compute the composition of the clathrate reservoir that forms as a result of the progressive entrapping of the species present in the liquid mixture .the major ingredient of our model is the description of the guest clathrate interaction by a spherically averaged kihara potential with a set of potential parameters based on the literature .this allows us to track the evolution of the mole fractions of species present in the liquid reservoir as a function of their progressive entrapment in the clathrate layer .section 2 is devoted to the description of our computational approach and the physical ingredients of our model .we also discuss the underlying assumptions of our approach in this section .the results concerning the composition of lakes interacting with clathrate reservoirs at polar or equatorial zones are presented in section 3 .section 4 is devoted to discussion and conclusions .we assume that the liquid reservoir is in contact with porous ice and that clathrates form at the liquid / ice interface .we consider an isolated system composed of a clathrate reservoir that progressively forms and replaces the h crustal material with time and a liquid reservoir that correspondingly empties due to the net transfer of molecules to the clathrate reservoir ( see concept pictured in fig . [ draw ] ) .based on this approach , we have elaborated a computational procedure with the intent to determine the mole fractions of each species present in the liquid reservoir and trapped in the forming clathrate reservoir , as a function of the fractions of the initial liquid volume ( before volatile migration ) remaining in lake and present in clathrates , respectively . 
at the beginning of our computations ,the liquid reservoir s composition is derived from those computed by cordier et al .( 2009 , 2013 ) for lakes at polar and equator temperatures , which result from models assuming thermodynamic equilibrium between the atmosphere and the lakes ( see table [ lake0 ] ) .because the clathration kinetics of hydrocarbons is poorly constrained at the titan s surface temperatures considered , the present calculations use an iterative process for which the number of molecules in the liquid phase being trapped in clathrates between each iteration is equal to 10 the total number of moles .initially , all molecules are in the liquid phase .the mole fraction of species in lake , is given in table [ lake0 ] .the corresponding number of moles of species is defined by = , with the number of moles of liquid available in the lake at this time .the mole fraction of the enclathrated species and the number of moles of liquid trapped in clathrate are set to zero . at iteration ,the mole fraction of each enclathrated guest is calculated by using the statistical - thermodynamic model described in section [ stat ] and the relative abundances in the liquid phase of the previous iteration .the new numbers of moles in the lake and in clathrate are calculated for each species , with = - 10 and = + 10 .the mole fraction of each species present at iteration in the lake and clathrate are defined by and , respectively , with ( ) = ( ) and ( ) = ( ) . at any iteration , = ( ) + ( ) .the new values of and are introduced in the next loop and the process is run until eventually gets to zero . to calculate the relative abundances of guest species incorporated in the clathrate phase at given temperature and pressure, we use a model applying classical statistical mechanics that relates the macroscopic thermodynamic properties of clathrates to the molecular structure and interaction energies ( van der waals & platteuw 1959 ; lunine & stevenson 1985 ; mousis et al .it is based on the original ideas of van der waals and platteeuw for clathrate formation , which assume that trapping of guest molecules into cages corresponds to the three - dimensional generalization of ideal localized adsorption ( see sloan & koh ( 2008 ) for an exhaustive description of the statistical thermodynamics of clathrate equilibra ) . in this formalism , the fractional occupancy of a guest molecule for a given type ( = small or large ) of cage can be written as where the sum in the denominator includes all the species which are present in the liquid phase . is the langmuir constant of species in the cage of type , and the fugacity of species in the mixture . using the redlich - kwong equation of state ( redlich and kwong 1949 ) in the case of a mixture dominated by c , we find that the coefficient of fugacity of the mixture ( defined as the ratio of the mixture s fugacity to its vapor pressure ) converges towards 1 at titan s surface temperatures and corresponding c vapor pressures . in our approach , the value of each species is calculated via the raoult s law , which states with the vapor pressure of species in the mixture and the vapor pressure of pure component . 
is defined via the antoine equation with the parameters a, b and c listed in table [abc] (is expressed in bar and in k). the langmuir constant depends on the strength of the interaction between each guest species and each type of cage, and can be determined by integrating the molecular potential within the cavity, where represents the radius of the cavity assumed to be spherical, the boltzmann constant, and is the spherically averaged kihara potential representing the interactions between the guest molecules and the h molecules forming the surrounding cage. this potential can be written for a spherical guest molecule as in mckoy & sinanoglu (1963). in eq. 5, is the coordination number of the cell. this parameter depends on the structure of the clathrate (i or ii; see sloan & koh 2008) and on the type of the cage (small or large). the kihara parameters, and for the molecule-water interactions, given in table [kihara], have been taken from the recent compilation of sloan & koh (2008) when available and from parrish & prausnitz (1972) for the remaining species. finally, the mole fraction of a guest molecule in a clathrate can be calculated with respect to the whole set of species considered in the system, where and are the numbers of small and large cages per unit cell, respectively, for the clathrate structure under consideration. values of , , and are taken from parrish & prausnitz (1972). among the different species considered in the present study, c is the only molecule whose size is too large to be trapped either in small or large cages of structure i clathrate. because c can only be trapped in the large cages of structure ii clathrate (sloan & koh 2008), we assume that this molecule remains in the liquid phase in the case of structure i clathrate formation. is the lennard-jones diameter, is the depth of the potential well, and is the radius of the impenetrable core, for the guest-water pairs. the scenario we propose is based on some underlying assumptions, indicated below: * _ statistical thermodynamic model _. the predictive capabilities of our model, which is derived from the approach of van der waals & platteeuw, rely on four key assumptions: (i) the host molecules' contribution to the free energy is independent of the clathrate occupancy (the guest species do not distort the cages); (ii) the cages are singly occupied; (iii) guest molecules rotate freely within the cage and do not interact with each other; (iv) classical statistics is valid, i.e., quantum effects are negligible. however, these assumptions are subject to caution since encaged molecules with larger dimensions may distort the cages. also, for certain small-sized molecules (like h) multiple gas occupancy can occur, and non-spherical molecules may not be free to rotate in the entire cavity. molecular dynamics simulations (erfan-niya et al. 2011; fleischer & genda 2013) are typically used to investigate these effects but, due to the amount of time they require, these computations do not easily provide quantitative estimates of the fractionation of the different species encaged in clathrates at the macroscopic level, in particular in systems considering a large number of species.
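the occupancy and composition bookkeeping described above is simple enough to sketch numerically. the code below is an illustration only: the langmuir constants, fugacities and the raoult's-law stand-in are placeholder numbers rather than values computed from the kihara potential and the antoine parameters, and the cage counts correspond to a structure i unit cell (2 small and 6 large cages per 46 water molecules).

```python
import numpy as np

def occupancies(C, f):
    """fractional occupancies y[K, i] = C[K, i] f[i] / (1 + sum_j C[K, j] f[j])
    C : (2, n) langmuir constants for the (small, large) cages and n guests
    f : (n,)   guest fugacities in the liquid mixture
    """
    denom = 1.0 + (C * f).sum(axis=1, keepdims=True)
    return C * f / denom

def clathrate_composition(C, f, nu=(2.0 / 46.0, 6.0 / 46.0)):
    """mole fractions of the guests in the clathrate phase (structure i cage counts)"""
    y = occupancies(C, f)
    w = nu[0] * y[0] + nu[1] * y[1]
    return w / w.sum()

def drain_liquid(x0, C, fugacity_of, step=1e-3, n_steps=1000):
    """move a small, fixed fraction of the initial moles into clathrate at each step"""
    n_liq = x0.copy()                         # moles in the liquid, total normalised to 1
    n_cla = np.zeros_like(x0)
    for _ in range(n_steps):
        if n_liq.sum() <= step:
            break
        x_liq = n_liq / n_liq.sum()
        x_cla = clathrate_composition(C, fugacity_of(x_liq))
        dn = np.minimum(step * x_cla, n_liq)  # cannot remove more than is left
        n_liq -= dn
        n_cla += dn
    return n_liq, n_cla

# toy run with three guests; raoult's law with made-up pure-component vapour
# pressures stands in for the fugacities (all numbers illustrative)
p_sat = np.array([1e-6, 1e-1, 1.0])
C = np.array([[5e4, 1e3, 5e2],                # small cages
              [5e5, 5e3, 1e3]])               # large cages
x0 = np.array([0.70, 0.25, 0.05])
n_liq, n_cla = drain_liquid(x0, C, lambda x: x * p_sat)
```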
for these reasons , and because it is often based on interaction parameters fitted on laboratory measurements of phase equilibria , allowing accurate prediction when compared to experiments, the approach of van der waals & platteeuw remains the main tool employed in industry and research to determine clathrate composition ( sloan & koh 2008 ) . * _ kinetics of clathrate formation ._ kinetics data concerning clathrate formation are scarce and mostly concern gas / ice interaction ( see sloan & koh for a review of measurements ) . in the present case , clathrates form from the interaction between liquid hydrocarbons and ice .to the best of our knowledge , the kinetics measurements of the closest system reported so far are those concerning clathrate formation from a mixture of liquid methane and ethane at temperatures ranging from 260 to 280 k ( murshed et al .because of the large temperature difference between titan s surface and these experiments , the uncertainty is too large to make use of these kinetics data .kinetics measurements remain to be done at titan s conditions .* _ assumption of equilibrium ._ our model is restricted to equilibrium calculations between subsurface alkanofers and coexisting clathrate layers .hence it should be applied with caution to the case where lakes and rivers located on titan s surface directly equilibrate with a clathrate layer located beneath .to do so , we would need to compute the simultaneous equilibrium between the clathrate reservoir , lake and atmosphere .however , our computations are a good approximation if the reequilibration timescale between the lake and the atmosphere is short compared to the timescale of clathrate formation .figure [ lake ] represents the evolution of the mole fractions of species present in subsurface alkanofers of titan , starting with the one given in table [ lake0 ] , as a function of their progressive entrapping in structures i and ii clathrate reservoirs located at the poles ( k ) and at the equator ( .6 k ) .the evolution of the liquid reservoir s composition varies significantly if one assumes the formation of a structure i or a structure ii clathrate reservoir .in particular , the change of clathrate structure in our model alters the number of dominant species present in the liquid phase at high mole fractions of entrapped liquid .it also drastically affects the evolution of the abundances of secondary species during the progressive liquid entrapping .when considering the formation of a structure i clathrate , and irrespective of the liquid reservoir s temperature , the dominating species is c until that a liquid mole fraction of .85 has been trapped into clathrate . above this value , and because of the entrapping of the other molecules in structure i clathrate , c becomes the only remaining species in the liquid reservoir . 
at both temperatures considered ,the initial abundance of ch in the liquid phase is close to that of c ( see table [ lake0 ] ) .however , as the liquid progressively forms clathrate with ice , the mole fraction of ch rapidly decreases and finally converges towards zero after a mole fraction of .30.5 of the initial liquid reservoir has been entrapped .meanwhile , the mole fractions of n , ar and co form plateaus and finally drop towards zero at the same mole fraction of entrapped liquid that corresponds to the disappearance of ch .when considering the formation of a structure ii clathrate , the dominant species remains c , irrespective of the mole fraction of liquid entrapped in clathrate and the temperature of the reservoirs .instead of increasing with the progressive liquid entrapment in clathrate as in the previous case , the abundance of c decreases and suddenly drops when the mole fraction of entrapped liquid is .150.23 .the abundances of ch , n , ar and co also decrease with the progressive formation of structure ii clathrate and suddenly drop at mole fractions of entrapped liquid in the .110.18 range .the temperature of the liquid and clathrate reservoirs also plays a role in the determination of their composition , but in a less important manner than the modification of the clathrate structure .figure [ lake ] shows that the rise of temperature decreases by several per cents the mole fraction of entrapped liquid at which the abundances of minor species drop in the solution .compared to the change of clathrate structure , a temperature variation also affects ( to a lower extent ) the mole fractions of secondary species in the liquid reservoir but does not influence those of major compounds .figure [ clat ] represents the evolution of the composition of structures i and ii clathrate reservoirs on titan as a function of the progressive entrapping of subsurface alkanofers located at the poles and at the equator .as noted for the composition of the liquid reservoirs , the structure change affects the number of dominant species in clathrate .it also significantly affects the evolution of the mole fractions of secondary species during their progressive trapping .the mole fractions of the different entrapped species strongly differ from those in solution at the beginning of their entrapping , as a result of the fractionation occurring during the clathration .however , irrespective of the structure considered and because of the conservation of the total number of moles in our system , the mole fraction of each species trapped in clathrate converges towards its initial abundance in solution when the fraction of entrapped liquid approaches 1 . in the case of structurei clathrate formation , c remains the dominant volatile . on the other hand , with a mole fraction ranging between 0.11 and 0.25 ,ch is the second most abundant volatile present in clathrate , except at fractions of entrapped liquid close to 1 and at equator temperatures . with a mole fraction in the 10 range ,n is the third most abundant volatile trapped in clathrate at fractions of entrapped liquid lower than .9 .the mole fractions of ar and co are in the 10 and 10 ranges , respectively , making them the less abundant species present in the forming structure i clathrate . as mentioned in sec .[ mode ] , c is not trapped in structure i clathrate , due to its large size compared to those of small and large cages . 
when considering the formation of structure ii clathrate , the two most abundant species present in clathrate are ch and c for mole fractions of entrapped liquid lower than .20.3 .above this range of values , c becomes again the most abundant volatile present in clathrate . on the other hand , the mole fractions of n , ar and co are in ranges close to those computed in the case of structure i clathrate formation . in both clathrate structures, the temperature plays the same role as the one noted for liquid reservoirs . at a similar species abundance , an increase of temperature decreases by several percent the corresponding mole fraction of entrapped liquid .because of the large uncertainties on the kinetics of clathrate formation ( see sec. 2.3 ) , the reliability of our conceptual model requires future laboratory experiments at conditions close to those encountered on titan in order to be assessed . if alkanofers equilibrated with clathrate layers , then our computations should allow disentangling in situ measurements of lakes and rivers flowing from alkanofers from those of liquid areas directly sourced by precipitation . in the case of structurei clathrate formation , and irrespective of the temperature considered , the solution is dominated by ethane at mole fractions of the initial liquid reservoir trapped in clathrate lower than .9 . at higher mole fractions , propane becomes the only species remaining in the liquid phase . in the case of structure ii clathrate formation , ethane is the only dominant species in solution , irrespective of the temperature considered and mole fraction of entrapped liquid .these trends can be explained by the fact that ethane naturally enters the large cages of a structure i clathrate , while the large size of propane only allows this molecule to enter the large cages of a structure ii clathrate .as both guests are very strong clathrate formers ( e.g. , sloan & koh 2008 ) , they compete for the same site .therefore , in the case of formation of a structure i clathrate , propane remains in the liquid phase .conversely , in the case of formation of a structure ii clathrate , propane dominates in the large cages , forcing ethane to remain in the liquid phase .our model then suggests that ethane and propane should be the discriminating markers of the clathrate structure forming from the solution when a significant fraction ( 0.9 ) of the initial liquid reservoir has been entrapped. in our model , any river or lake emanating from alkanofers possessing these characteristics should present a similar composition . on the other hand , lakes and rivers sourced by precipitation should contain substantial fractions of ch and n , as well as minor traces of ar and co ( see table [ lake0 ] ) . here ,the lake compositions have been computed in the cases of liquid trapping in structures i and ii clathrate reservoirs .however , it has been shown that a mixture dominated by ethane in presence of methane leads to the formation of a structure i clathrate ( takeya et al .we then believe that clathrate reservoirs formed from the lakes on titan should be essentially of structure i. interestingly , if one postulates that the pores are fully filled by liquid hydrocarbons , it is possible to estimate the maximum porosity of the alkanofer that is consistent with a full enclathration of the solution . 
assuming that the composition of the liquid reservoir is dominated by c in the case of structure i clathrate , the number of molecules present in the liquid is , with the porosity , the density of the liquid , the molecular mass of c and the volume of the alkanofer . on the other hand , the maximum number of available clathrate cages in the porous matrix is , with the density of solid water ice , the molecular mass of h , and 7.66 the number of h molecules per enclathrated c molecule ( sloan & koh 2008 ) . the value of that satisfies the conditions / = 1 is the maximum porosity for which the number of trapped molecules is lower than or equal to the number of available clathrate cages .a larger value implies that all the liquid can not be trapped as clathrates and a fraction of the liquid remains in the alkanofer . assuming that the volume of ice remains constant during clathrate formation ( erfan - niya et al .2011 ) , we find a maximum porosity value of .23 .this value is larger than some estimates of 10% to 15% for the porosity of titan s subsurface ( kossacki & lorenz 1996 ) .the reservoir would be filled up until the ice is fully transformed into clathrates .liquids filling the reservoir would then react with the clathrate matrix with exchange mechanisms such as those described in choukroun & sotin ( 2012 ) . as mentioned in sec .2.3 , our model can apply to the case where lakes and rivers located on titan s surface directly equilibrate with a clathrate layer located beneath if the reequilibration timescale between the liquid and the atmosphere is short compared to the clathration timescale .indeed , the massive atmosphere would serve only to buffer methane and n ( and the minor species co and ar ) , but not ethane and propane . because the vapor pressures of ethane and propane are so small , the atmosphere is not a reservoir of those species .hence methane and other minor gaseous compounds would draw into the atmosphere when the ethane / propane go into the clathrate , and would be again introduced into the lake / sea when ethane / propane are added .so , the methane abundance in the seas would adjust so as to be in a thermodynamically correct proportion to the ethane and propane in the lakes / seas for the given methane atmospheric humidity . in these conditions ,the abundances of volatiles trapped in clathrate would correspond to the values computed when the mole fraction of entrapped liquid is very low and the compositions of coexisting lakes / seas would be very close to those given in table [ lake0 ] when the clathrate reservoir is absent . in any case, it must be borne in mind that the possible range of initial compositions of the liquid in equilibrium with the atmosphere ( the one used in the starting composition of our liquid reservoirs ) remains poorly constrained at present .mole fractions predictions of current lake models vary by tens of percents ( e.g. , cordier et al .2012 ; tan et al .2013 ; glein & shock 2013 ) . our predictions of clathrate / liquid equilibrium compositions are valid in any case where the mole fraction of c is prominent in the initial liquid reservoir .for example , similar conclusions are found when using the lake composition determined by tan et al .( 2013 ) at 93.7 k as the starting one of the alkanofer ( % of c , 32% of ch , 7% of c , and 7% of n ) . 
on the other hand, our results strongly vary when using the liquid composition calculated by tan et al .( 2013 ) at 90 k ( % of c , 69% of ch , 1% of c , and 22% of n ) , which is very different from the one they obtained at higher temperature . in this case ,irrespective of the clathrate structure , the solution is dominated by n and ch at high mole fractions of entrapped liquid .the results of our model are wrong since the predicted mole fraction of remaining n , which can be as high as 90% , exceeds its solubility limit in hydrocarbons ( % ; hibbard & evans 1968 ) . to derive the correct composition of the solution , we would need to compute the simultaneous equilibrium between the clathrate reservoir , liquid and the generated n atmosphere .new experimental data obtained on the atmosphere liquid equilibrium composition at titan s conditions are needed to refine the expected composition of the lakes .another limitation of the model , and of all models of the composition and stability of mixed clathrates on titan s surface or subsurface , is related to the paucity of experimental data available to constrain both the kihara potential parameters for many clathrate formers ( e.g. choukroun et al . 2013 ) and the fugacity of these gases at conditions relevant to titan s surface .o. m. acknowledges support from cnes . m.c . and c.s .acknowledge support from the nasa outer planets research program .part of this work has been conducted at the jet propulsion laboratory , california institute of technology , under contract to nasa .government sponsorship acknowledged .cordier , d. , mousis , o. , lunine , j. i. , lebonnois , s. , rannou , p. , lavvas , p. , lobo , l. q. , ferreira , a. g. m. 2012 .titan s lakes chemical composition : sources of uncertainties and variability .planetary and space science 61 , 99 - 107 .davy , h. 1811 . the bakerian lecture : on some of the combinations of oxymuriatic acid and oxygen , and on the chemical relations to these principles to inflammable bodies .philosophical transactions of the royal society of london , 101 , 1 .mousis , o. , lunine , j. i. , picaud , s. , cordier , d. , waite , j. h. , jr . ,mandt , k. e. 2011 .removal of titan s atmospheric noble gases by their sequestration in surface clathrates .the astrophysical journal 740 , l9 . murshed , m. m. , schmidt , b. c. , kuhs , w. f. 2010 .kinetics of methane - ethane gas replacement in clathrate - hydrates studied by time - resolved neutron diffraction and raman spectroscopy .j. phys . chem .a , 114 , 247 - 255 .parrish , w. r. , prausnitz , j. m. , 1972 .dissociation pressures of gas hydrates formed by gas mixtures .industrial and engineering chemistry : process design and development , 11 ( 1 ) , 26 - 35 .erratum : parrish , w. r. , prausnitz , j. m. , 1972 .industrial and engineering chemistry : process design and development 11 ( 3 ) , 462 .takeya , s. , kamata , y. , uchida , t. , nagao , j. , ebinuma , t. , narita , h. , hori , a. , and hondoh , t. , 2003 .coexistence of structure i and ii hydrates formed from a mixture of methane and ethane gases .81 , 479 - 484 .yaws , c. l. , yang , h. c. 1989 . to estimate vapor pressure easily .antoine coefficients relate vapor pressure totemperature for almost 700 major organic compounds . hydrocarbon processing .68(10 ) , 65 - 68 .
|
hundreds of lakes and a few seas of liquid hydrocarbons have been observed by the cassini spacecraft to cover the polar regions of titan . a significant fraction of these lakes or seas could possibly be interconnected with subsurface liquid reservoirs of alkanes . in this paper , we investigate the interplay that would happen between a reservoir of liquid hydrocarbons located in titan s subsurface and a hypothetical clathrate reservoir that progressively forms if the liquid mixture diffuses throughout a preexisting porous icy layer . to do so , we use a statistical thermodynamic model in order to compute the composition of the clathrate reservoir that forms as a result of the progressive entrapping of the liquid mixture . this study shows that clathrate formation strongly fractionates the molecules between the liquid and the solid phases . depending on whether the structure i or structure ii clathrate forms , the present model predicts that the liquid reservoirs would be mainly composed of either propane or ethane , respectively . the other molecules present in the liquid are trapped in clathrates . any river or lake emanating from subsurface liquid reservoirs that significantly interacted with clathrate reservoirs should present such composition . on the other hand , lakes and rivers sourced by precipitation should contain higher fractions of methane and nitrogen , as well as minor traces of argon and carbon monoxide . titan titan , hydrology titan , surface titan , atmosphere
|
each day , millions of conversations , emails , sms , blog posts and comments , instant messages , tweets or web pages containing various types of information are exchanged between people .humans natural inclination to share information with others in a `` viral '' fashion stems from the need of socializing and seeks to gain reputation , influence , trustworthiness or popularity .such viral dissemination of information through social networks , commonly known as `` word - of - mouth '' ( wom ) , is of paramount importance in our everyday life .in fact , it is known to influence purchasing decisions to the extent that 2/3 of the united states economy is driven by those kind of personal recommendations .wom is also important to understand sales and customer value , opinion formation or rumor spreading in social networks or to determine the influence of each person in its social neighborhood . despite its importance and due to the difficulty ( or inability ) to capture this phenomenon , detailed empirical data on how humans disseminate information are scarce , population aggregated or indirect .moreover , most studies have concentrated on asymptotical stationary properties of information difussion .this has hampered the study of the dynamics of information diffusion and indeed most of its understanding comes from theoretical propagation models running on empirical or synthetic social networks in an approach borrowed from epidemiology . in those models , information diffusion equates to the propagation of virus or diseases that spontaneously pass to others by contagion through the active social connections of the infected ( i.e. informed ) agents .however , information diffusion mechanisms are fundamentally different from those operating in disease spreading . in fact , passing a message along has a perceived transmission cost , its targets are consciously selected among potentially interested individuals , depends on human volition and , ultimately , is executed on the individuals activity schedule . an obvious implication of those peculiarities is that information spreading is bound to depend on the large variability observed both on the volume and frequency of human activities and on the perceived value / cost of transmitting the information .for example , the number of emails sent by individuals per day , the number of telephone calls placed by users , the number of blog entries by user , the number of web page clicks per user , and the number of a person s social relationships or sexual contacts show large demographic stochasticity .in fact these numbers are distributed according to a power - law ( or pareto ) distribution , inconsistent with the mild gaussian or poissonian stochasticity around population - averaged values traditionally assumed in epidemiological models .the same large variability pattern applies to the human activities time dynamics : for example , email response delays , market trading frequencies or inter - event time of web page visits , telephone calls , etc .are well described by power - law or log - normal distributions .recent research has shown that such high variability in human behavior alters substantially the temporal dynamics of information diffusion and does not merely introduce some stochasticity in population - averaged models .thus , it is important to incorporate this human behavior into the models . 
besides , information diffusion travels through social connections thereby depending on the properties of the social networks where it spreads .for example , simulations on synthetic scale - free networks showed that if information flowed through every social connection the epidemic threshold would be significantly lowered to the extent that it could disappear , so that any rumor , virus or innovation might reach a large fraction of individuals in the population no matter how small the probability of being infected .given the fact that social networks are scale - free those results predict that there is a strong interplay between network structure and the spreading process .however such is not the case for information spreading processes .our daily experience indicates that most rumors , innovations or marketing messages do not reach a significant part of the population . as mentioned earlier , the information transmission perceived cost prevents it from traveling inexpensively through all possible network paths .therefore when participants assess the value of the information being passed , the impact of their social network structure on the diffusion process might be diminished .unfortunately the true extent of such influence remains unknown in general .moreover , the reach of information can be affected by the dynamics of human communication and thus it is important to understand the interplay between the static and dynamical properties of information diffusion . finally , there is an important shortcoming in the data currently available to investigate those questions .the vast majority of the large amount of data collected on information exchanges , for example email , sms , calls or tweets , lacks the details required to follow the dynamics of a specific content item at the individual s level ( see however ) .thus , the behavioral stochasticity of the individuals caused by the message content is masked and observations are limited to people s stochasticity due to the transmission media .a representative example of this difficulty is the study of communication patterns in mobile phone calls in which every communication , regardless of the message , is used to partially discover the social relationships network through which potential messages will spread but is not capable of revealing the specific dynamics of a particular piece of information . in other cases , data is not available at the individual participant level but just as population averaged metrics thereby hiding that different content items elicit diverse task prioritization in a given person or social segment .the situation is clearly unsatisfactory since , to our knowledge and possibly because of privacy concerns or data proprietorship , there are not very many data sets tracing the propagation of a specific piece of content throughout the social network ( see however ) . to overcome those limitations in the understanding of electronic information diffusion , we present here the results of a series of controlled viral marketing campaigns , the commercial form of wom , that we conducted in eleven european countries . 
in themsubscribers of a business online newsletter received incentives for recommending the newsletter subscription to their acquaintances .the detailed tracking of those recommendations revealed the factors impacting the diffusion dynamics of that particular piece of information at every step and suggested a branching process as the mechanism driving the dynamics of information diffusion .thus the bellman - harris branching model , a generalization of the static percolation model introduced by newman for contagion propagation in networks , accurately describes our viral marketing campaigns . in particular, this branching model explains information diffusion of information in random networks and constitutes the simplest approach incorporating the human behavior high variability patterns both in activity volume and in response time .the rest of this paper is organized as follows : section [ sec : experiments ] introduces our viral marketing campaigns and the information viral diffusion mechanism used in them , while subsections [ sec : data ] and [ sec : results ] , respectively , present the campaigns propagation results data set and analyze the observed diffusion dynamics patterns and social connectivity found in such propagation . section [ sec : model ] follows with the analytical formulation of the bellman - harris branching model which includes detailed discussion of its phase transitions , asymptotic properties and time dynamics while section [ sec : examples ] studies several examples of its application to several scenarios of the response time distribution in the information propagation .we present our conclusions in section [ sec : conclusions ] .finally , appendix a discusses aspects of the substrate social network structure that can be gleaned through the information propagation process .we tracked and measured the `` word - of - mouth '' diffusion of viral marketing campaigns ran in eleven european markets that invited subscribers of an it company online newsletter to promote new subscriptions among friends and colleagues .campaign participants received incentives for spreading the offering through recommendation emails .the campaigns were fully web based .banner ads , emails , search engines and the company web page drove participants to the campaign offering site .there , participants could fill in a referral form with names and email addresses of those to whom they recommended subscribing the newsletter .the submission of this form launched recommendation emails including a link to the campaign main page whose automatically generated url was appended with codes allowing the web server to uniquely assign clicks on it to the sender and receiver of the corresponding email .the form , allowing up to four referrals per submission , checked destination email addresses for syntax correctness and to avoid self - recommendations .cookies prevented multiple recommendations to the same address and improved usability by automatically filling - in sender s data in subsequent visits to the submission form .additionally , the campaign server logged the time stamp of each step of the process ( subscription , recommendation submission ) and removed from records undeliverable recommendations . 
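as an illustration of how such per-recommendation tracking can work, the snippet below builds a campaign url carrying opaque sender and recipient tokens, so that a later click can be attributed to one specific recommendation email. the naming and the hashing scheme are assumptions made for the sake of the example, not the campaigns' actual implementation.

```python
import hashlib

def tracking_url(base, campaign_id, sender_email, recipient_email):
    """append sender / recipient codes to the campaign url"""
    def token(value):
        return hashlib.sha1(f"{campaign_id}:{value}".encode()).hexdigest()[:10]
    return (f"{base}?c={campaign_id}"
            f"&s={token(sender_email)}&r={token(recipient_email)}")

print(tracking_url("https://example.org/campaign", "market-07",
                   "sender@example.com", "friend@example.com"))
```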
[ cols="<,^,^,^,^,^,^,^,^,^,^",options="header " , ] on the other hand, it makes sense to assume that the number of recommendations sent by _secondary spreaders _ ( including not sending any ) results from a decision by each message recipient that involves a trade - off between the message forwarding cost and its perceived value . for our campaigns lottery prize for example , andin a population average approach , a reasonable proxy of the perceived value of winning the prize for residents in a given country could be the fraction of the average income of its citizens represented by the prize cost in that market .granted , there may be many other factors at play in the formation of such perception , but there is a very significant correlation ( ) between the average income and the average number of recommendations sent by _secondary spreaders _ in each market which indicates that the expected gain average relative size may be one of them ( see table [ table2 ] ) .additionally , the human intervention in such decision process is at the root of a very unique property of the dynamics of information diffusion . comparing viral campaigns parameters in different markets ( see table [ table2 ] ) ,we observe a wide range of values in their respective information propagation dynamical parameters .since the campaigns execution was identical in all markets , those variations can only be due to a change in perception of the viral offering value and/or of the message forwarding cost by customers in each market .interestingly , variations of the _ transmissibility _( ) and the _ fanout coefficient _ ( ) present a pearson coefficient as evidence of a very strong dependence between them .we proved in that such dependence has the form which reduces to for .this peculiarity of information diffusion processes , not observed in disease epidemics , arises because the decisions of becoming a spreader and of the number of viral messages to send are simultaneously made by each participant which introduces correlation in their averages . in a first approximationwe could analyze information dynamics by studying the basic reproductive number of epidemiology , the average number of secondary cases generated by each virally informed individual , which results from the definition of the dynamical parameters as . 
however , average quantities like hide the heterogeneous nature of epidemics and also of information diffusion .in fact our campaigns show that most of the observed transmission occurs due to extraordinary events .in particular , we get that the probability distribution function ( pdf ) of the number of recommendations sent is well approximated by the harris discrete distribution where is a normalization constant so that .this function displays a power - law behavior in its tail starting approximately at the cutoff point .table [ table1 ] lists the distribution parameters for _ seeds _ ( ) , _ viral _ ( ) andtotal _ active _ ( ) nodes while fig .[ fig1b ] shows the probability distribution of the recommendations sent by _ active _ nodes in all markets , and the comparison to the probability predicted by a poisson discrete distribution with mean , same as that of the empirical data .the markedly different behavior between both of them indicates the high probability of finding individuals making a large number of recommendations .as noted in the introduction , such high demographic stochasticity , observed in many other human activities , suggests that humans response to a particular task can not be described by close - to - average models where they are all assumed to behave in a similar fashion with some small degree of demographic stochasticity . in sharp contrast with population homogeneous models of information spreading , we found that 2% of the active population in our viral campaigns has suggesting the existence of super - spreading individuals .super - spreading individuals have also been found in non - sexual disease spreading where they significantly increase outbreak sizes . in a similar manner ,the sizes of the information cascades found in our campaigns indicate that super - spreading individuals are responsible for making large viral cascades rarer but more explosive .the probability distribution of the campaigns cascades sizes , represented ( see fig .[ casc ] ) , is also a fat - tailed distribution ( in fact , the tail can be fitted to a power law with ) .in contrast , neglecting the existence of super - spreading individuals but still considering some degree of stochasticity in the number of recommendations by assuming is a poisson distribution with the same average , a cascade like the one in fig . [ cascade ] would have an occurrence probability of approximately once every _ seed _ nodes , a number much larger than the total world population ( see fig . [ casc ] ) .an element to consider in the aforementioned spreading stochasticity is the impact , if any , of the underlying social network heterogeneity in a similar way to that of the connectivity of a computer network on the diffusion of computer viruses .social networks data reveals that humans show large variability in their number of social contacts .thus , the connectivity of email networks whether measured by email traffic or by the users email address books is fat - tailed distributed . 
in some cases it is power - law distributed like the number of recommendations in our campaigns .large variability in the numbers of social contacts has a deep effect on disease spreading .in fact , disease spreading models on networks show that if information flows with the same probability through any link in a social network , its topological properties can significantly lower the `` tipping - point '' .however , while indiscriminate propagation can happen in computer viruses , diseases or other mechanistic processes , the human handling of information diffusion limits the influence of the social network structure : we expect , in general , the number of recommendations to be small compared to the social connectivity ( ) .while in social networks the `` friendship paradox '' implies that ( with the average number of social contacts of an individual s neighbors and the average number of social contacts of an individual ) , our recommendation network features .if , as supposed in most models , information flows through a fraction of the social contacts of an individual , we should have instead .a way to recover our result is to assume that and are largely independent .our tree - like diffusion cascades lead to a low undirected clustering coefficient of the viral _ cascades network _ ( ) compared to the values reported for email social networks ( ) which supports such assumption .assuming and independent , we get ( app . [ appa ] ) is the average number of social contacts of the neighbors of an individual .in social networks is a large number which leads to a very low clustering coefficient even for processes close to the `` tipping - point '' ( ) .this fact explains the unreasonable effectiveness of tree - based theory to explain information diffusion on networks with clustering . in conclusion ,large heterogeneity of recommendations activity is due to the participants behavior rather than consequence of their connectivity degree which is just the activity upper bound .finally , another important aspect to consider in the dynamics of information diffusion is the nodes reaction to receiving a message : shall they decide to spread it , how long do they take to do so ?, for how long do they remain active ?, and , is their responsiveness correlated in any way to the number of contacts they resend the message to ?the answer to these questions lies in the increasing evidence that the timing of many human activities , ranging from communication to entertainment and work patterns , follow non - poisson statistics , characterized by bursts of rapidly occurring events separated by long periods of inactivity .in fact , our campaigns revealed that most of the active nodes turn inactive right after spreading the information once which means that _ viral _ nodes do not remain as spreaders for a long time .the top panel in fig .[ figti ] shows that for most of the _ viral _ nodes ( actually 97% of them ) , the lapse of time between receiving the message and passing it along equals the interval between receiving the message and the last time it has been resent .the fact that for the most part _ viral _ nodes show just one spreading event means , from a modeling perspective , that diffusion follows an almost pure `` birth and death '' model . 
besides , the time dynamics of the viral recommendation process is independent from the number of recommendations sent by_ viral _ nodes as was shown in , that is there is no correlation between such number and the response time as evidenced by the pearson correlation coefficient of the two variables ( ) .as we have shown in , the probability distribution function of the _ viral _ nodes response time is a long tailed log - normal in another evidence of the humans large heterogeneity in wom diffusion . in this sense , participants behave like a sir model in which infection and decay to the recovered state happen at the same time .the study of our experimental data leads to a theoretical framework for the process of information diffusion where the dynamics of information viral spreading is explained by tree - like cascades .each information cascade stems from an initial _ seed _ that starts the viral message propagation with a random number of recommendations distributed as and whose average is .the individuals reached by the message become _ secondary spreaders _ with probability thereby giving birth to a new generation of _ viral _ nodes which , in turn , propagate the message further with recommendations distributed by with average . after sending their recommendations individualsbecome inactive and the process continues stochastically through new individuals in successive generations until none of the members of the latest one spread the message . at that pointthe information cascades die out and the propagation ends .this process corresponds to the well known bellman - harris ( bh ) branching model which is the simplest mathematical framework to study the branching dynamics of information diffusion .it generalizes the static and markovian galton - watson model typically used to model information diffusion or , in general , percolation processes in social networks . in the bh model , those two distributions , and ( ) , represent the number of recommendations sent by _ seed _ and _ viral _ nodes respectively .the introduction of two different distributions for the recommendations sent by _seed _ and _ viral _ nodes is not only due to the difference in the average number of recommendations observed in our campaigns ( see table [ table1 ] ) but also because , in general , in social networks the average connectivity of a node s nearest neighbors is higher than the average connectivity of the network nodes themselves . 
in particular , for completely uncorrelated random networks with distribution of connectivity given by the distribution of the number of connections of the nearest neighbors of a node is .the case in which informed nodes decide not to pass along the information can be incorporated in the recommendations distribution as the case in which the number of messages sent is .thus we can construct a family of probability distributions of the recommendations sent by nodes where from whence one can obtain the average number of recommendations in the new distributions which are related to the primary and secondary reproductive numbers as to formalize the study of the information spreading branching process , we define now the generating functions moments of the distributions can be obtained through derivatives of the generating functions ^ 2\ ] ] where is the variance of the number of recommendations of _ seed _ nodes .we will also assume different cdf of response times ( ) for _ seed _ and _ viral _ nodes which we will denote as and .their means are and respectively .we want to determine the probability distribution of finding nodes active ( i.e. recommending ) at time provided we start with one participant at , i.e. . to do that we use the following self - consistent argument : since the number of recommendations sent by each _viral _ node are random independent processes , the branching process starting from each _ viral _ node after a given recommendation , which we denote ( with ) are independent identically distributed ( iid ) copies of the same process .for example , in fig .[ figbh ] the branching process starting from nodes 1 and 2 are iid copies of the same process . but also , the process starting from 1 and the processes starting from 4 and 5 must be statistically the same .thus we have a self - consistent relationship between the branching process starting at a _ viral _ node and the processes starting from any of its recommendations : where are iid copies of the branching process and assuming that the recommendation event happens at .note that in this self - consistent equation ( the number of recommendations made by a _viral _ node and distributed by ) and the time are both random and independent . to describe the process we use generating functions techniques : we define the generating function for as s^{k} ] to get & = & \sigma_0 ^ 2r_1 ^ 2 + r_0\frac{\sigma_1 ^ 2+r_1 ^ 2}{1-r_1 ^ 2 } \label{vars}\end{aligned}\ ] ] as expected , when we approach the `` tipping - point '' , , the average and variance of the cascade size diverges . with in eq .( [ means ] ) we get the following expression for the average cascade size at infinite time which , using the parameters for all markets in table [ table2 ] , estimates the average cascade size in our campaigns as , very close to the observed value ( ) .not only are average cascade sizes well predicted by the branching model , but their distribution , which can be obtained from the derivatives of is properly replicated as well when the heterogeneity in the number of recommendations is implemented ( see fig .[ casc ] ) .both results show how accurate the model is in predicting the reach of a viral marketing campaign by merely using its dynamic parameters .moreover , since the values of and can be roughly estimated at the campaign early stages , we could have predicted its final reach at the very beginning . 
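the asymptotic reach discussed above can be checked with a direct monte carlo simulation of the two - step branching process. the sketch below uses simple truncated offspring distributions for _ seed _ and _ viral _ nodes ( not the heavy - tailed campaign fits ) and an illustrative log - normal response time, and compares the simulated mean cascade size with the standard branching - process estimate 1 + r_0 / ( 1 - r_1 ) for such a two - step process ; all parameter values are assumptions made for the example.

import random, math

# illustrative offspring distributions for seed and viral nodes (not the
# campaign fits): values and probabilities chosen so that r1 < 1
SEED_VALUES, SEED_PROBS = [1, 2, 3, 4], [0.4, 0.3, 0.2, 0.1]              # r0 = 2.0
VIRAL_VALUES, VIRAL_PROBS = [0, 1, 2, 3, 4], [0.6, 0.2, 0.1, 0.07, 0.03]  # r1 = 0.73

def draw(values, probs):
    return random.choices(values, probs)[0]

def bellman_harris_cascade(t_max=365.0, mu=0.0, sigma=1.5):
    # one cascade: every node waits a log-normal response time, then sends its
    # recommendations and becomes inactive ("birth and death" behaviour)
    pending = [(0.0, True)]            # (time the node was reached, is it the seed?)
    size, last_send = 1, 0.0
    while pending:
        t_reached, is_seed = pending.pop()
        t_send = t_reached + random.lognormvariate(mu, sigma)
        if t_send > t_max:
            continue
        last_send = max(last_send, t_send)
        n = draw(SEED_VALUES, SEED_PROBS) if is_seed else draw(VIRAL_VALUES, VIRAL_PROBS)
        size += n
        pending.extend((t_send, False) for _ in range(n))
    return size, last_send

random.seed(2)
r0 = sum(v * p for v, p in zip(SEED_VALUES, SEED_PROBS))
r1 = sum(v * p for v, p in zip(VIRAL_VALUES, VIRAL_PROBS))
runs = [bellman_harris_cascade() for _ in range(20_000)]
mean_size = sum(s for s, _ in runs) / len(runs)
mean_duration = sum(d for _, d in runs) / len(runs)
print("r0 = %.2f  r1 = %.2f" % (r0, r1))
print("monte carlo mean size       : %.2f" % mean_size)
print("1 + r0/(1 - r1) estimate    : %.2f" % (1.0 + r0 / (1.0 - r1)))
print("mean cascade duration (a.u.): %.1f" % mean_duration)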
in the previous subsection we concentrate in the properties of the cascades in the asymptotic regime .here we come back to the original equations for the dynamics of the nodes ( [ eqf1 ] ) and ( [ eqf0 ] ) to investigate its time dependence . using on them that we get for the dynamics of the average number of infected participants . once again , the equation for depends on the solution of the integral equation for . actually , for we could explicitly write + ( r_0 / r_1 ) [ i_1(t)-1+g(t)]$ ] .however the solution for is not known in general , although we can study its asymptotic behavior using renewal theory .such behavior strongly depends on the existence or not of the so called malthusian parameter ( p.142 ) , i.e. the real solution of the equation if this parameter exists for then behaves asymptotically like for all values of .although always exists above the `` tipping - point '' where , there is a large class of distributions for which does not exist when .this is the so called _ sub - exponential _ class which consists of all distribution functions such that where is the twofold convolution of .all those distributions have tails that decay slower than any exponential , that is , they are heavy - tailed distributions which is the best qualitative description of the sub - exponential class. examples of are power law ( pareto - like ) , stretched exponentials or log - normal distributions . for this class of distributions , the asymptotic behavior of given instead by the tail of the distribution the asymptotic regime is reached for values of such that or , equivalently when .for the cascades size we get from equation ( [ eqs1 ] ) that whose asymptotic behavior , analyzed using renewal theory , gives this section we illustrate two kinds of behavior that we can find in the time dynamics of the viral cascades .specifically we consider the case in which is super - exponential with two significant examples , the poisson process and the gamma process , and the case in which is sub - exponential with application to the log - normal distribution found in section [ subexp ] . when is not sub - exponential the malthusian parameter given by eq .( [ malthusian ] ) always exists and the asymptotic solution is given by equation ( [ solmalthusian ] ) .* poisson process : * most of the literature assumes that is the cdf of the exponential distribution for the response times .thus , if equation ( [ eqf1 ] ) can be derived once to obtain -f_0(s , t ) \}\\\frac{\partial f_1(s , t)}{\partial t}&=&\rho_1 \ { f_1[f_1(s , t)]-f_1(s , t ) \}\end{aligned}\ ] ] and for the moments \\ \frac{di_1}{dt } & = & \rho_1[r_1 - 1]\ , i_1(t)\end{aligned}\ ] ] the solution for the second equation with initial condition is with and then where is the malthusian parameter for .the resonant case can only happen below the `` tipping - point '' where .equations ( [ eqpoisson ] ) are the linear growth markovian models typically used to understand the dynamics of information spreading in social networks . 
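to make the markovian case concrete, the short python sketch below integrates the surviving moment equation di_1 / dt = rho_1 ( r_1 - 1 ) i_1 ( t ) with a forward - euler scheme and compares it with its exponential solution ; the rate rho_1 ( r_1 - 1 ) plays the role of the malthusian parameter in this case. the numerical values are illustrative only.

import math

def euler_i1(rho1, r1, t_end=10.0, dt=1e-3):
    # forward-euler integration of di1/dt = rho1*(r1 - 1)*i1 with i1(0) = 1
    i1, t, out = 1.0, 0.0, []
    while t <= t_end:
        out.append((t, i1))
        i1 += dt * rho1 * (r1 - 1.0) * i1
        t += dt
    return out

rho1, r1 = 1.0, 0.8          # illustrative: below the "tipping-point" (r1 < 1)
alpha = rho1 * (r1 - 1.0)    # rate of the closed-form solution exp(alpha*t)
for t, i1 in euler_i1(rho1, r1)[::2500]:
    print("t=%5.2f  euler=%.4f  exp(alpha*t)=%.4f" % (t, i1, math.exp(alpha * t)))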
in particular if the number of recommendations depends linearly with the substrate social network connectivity then and thus to recover the result by pastor - satorras and vespignani that the malthusian parameter thus , if the social connectivity has a distribution which is fat tailed then and .moreover we recover the result of in which the malthusian parameter , in that case is , and leads to an exploding exponential that grows very fast in a short time .the poisson process is special , since depends linearly on .thus the value of for social networks influences the total reach of the cascades but also the time dynamics .however this is not always the case , as we will see for other time processes .besides , the poissonian case tells us that the time dynamics of viral cascades is markovian and that human dynamics can be described by differential equations like ( [ eqpoisson ] ) . *gamma process : * in the case in which the distribution of response times is not given by an exponential , the behavior below the `` tipping - point '' is given by eq .( [ solmalthusian ] ) for distributions not in the sub - exponential class . above the `` tipping - point ''the malthusian parameter always exists , but the relationship with can be highly non - linear .for example , in many applications it is found that the response time distribution can be fitted to the cdf of the gamma distribution , whose pdf is where and .in fact , in vzquez et al . found that the email response time is distributed as ( [ pdfgamma ] ) with and days . on the other hand , the gamma distribution is used as simple model for the response time or lifetime since it can accommodate different functional behaviors : a delta function when and fixed , a power - law with exponential cutoff when , or the exponential case when . for and the gamma distribution does not belong to the sub - exponential class .thus the malthusian parameter always exists and moreover it can be calculated exactly as this equations shows the non - trivial entanglement in the time dynamics of the recommendation process between the distribution of recommendations ( ) and the response time distribution ( ) .in particular , it shows that the exponential growth depends not only on the mean response time but also on the variance . to show this, we take the case fixed and we vary to control the variance . figure [ figmalth ] shows that above the `` tipping - point '' diverges when grows and thus propagation happens much more rapidly than in the case of the poissonian approximation .the reason for it is that above the `` tipping - point '' the initial exponential growth of the infinite cascade is triggered by those people with response times below the mean , which in the case of long - tailed distributions are also more abundant than those with large response times .below the `` tipping - point '' , the contrary happens : since all cascades die out , their time dynamics is controlled by few nodes who , in the case of long - tailed distributions , can have large response times halting the branching process and slowing down the propagation of the information . in particular , eq .( [ maltgamma ] ) recovers the result in that with and _ below _ the `` tipping - point '' we get , i.e. the time scale is given by the cutoff in the distribution of response times . 
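the same calculation can be done numerically for an arbitrary response - time density : assuming the standard bellman - harris defining relation, in which r_1 times the laplace transform of the density evaluated at the malthusian parameter equals one, the sketch below finds the root by bisection and checks it against the closed - form value implied by that relation for a gamma density. the parameter values are illustrative, not the fitted email response times.

import math

def gamma_pdf(t, shape, rate):
    return rate ** shape * t ** (shape - 1) * math.exp(-rate * t) / math.gamma(shape)

def laplace(pdf, a, t_max=60.0, n=60_000):
    # crude midpoint-rule laplace transform of a response-time density
    dt = t_max / n
    return sum(math.exp(-a * (i + 0.5) * dt) * pdf((i + 0.5) * dt) for i in range(n)) * dt

def malthusian(r1, pdf, tol=1e-6):
    # bisection on r1 * L(alpha) = 1 for the supercritical case r1 > 1
    lo, hi = 0.0, 1.0
    while r1 * laplace(pdf, hi) > 1.0:
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if r1 * laplace(pdf, mid) > 1.0 else (lo, mid)
    return 0.5 * (lo + hi)

r1, shape, rate = 2.0, 2.0, 1.0                       # illustrative values only
alpha_num = malthusian(r1, lambda t: gamma_pdf(t, shape, rate))
alpha_gamma = rate * (r1 ** (1.0 / shape) - 1.0)      # closed form implied for the gamma case
print("numerical alpha = %.4f   gamma closed form = %.4f" % (alpha_num, alpha_gamma))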
however , it is important to note that even in this case , the asymptotic dynamics in the limit is still given by the exponential decay in equation ( [ solmalthusian ] ) which shows that although now depends non - trivially on the moments of the distribution we may describe the dynamics in terms of markovian equations like ( [ eqpoisson ] ) replacing by its actual value . in the case where is sub - exponentialthe malthusian parameter does not exist below the `` tipping - point '' and the process asymptotic dynamics is given by the tail of the distribution as eq .( [ iasym ] ) .in particular , this implies that we can not describe the dynamics of viral cascades by markovian approximations like the differential equations eqs .( [ eqpoisson ] ) a sign for the strong non - markovian character of the process in this situation , which corresponds to our empirical findings. * log - normal process : * we concentrate on the case where is the cdf of the log - normal distribution which we found to be a good model for the response time in our campaigns .specifically assuming its pdf is with mean and variance , then eq . ( [ iasym ] ) tells us that where is the complementary error function . in the large limitwe can replace in eq .( [ decay ] ) by the first term of its asymptotic expansion to obtain which indicates that the decay below the `` tipping - point '' is not exponential and also that it happens in a logarithmic and not in a linear time scale as shown in fig .this in turn implies that the information propagation prevails for much longer times than expected , as was shown in , since the asymptotic dynamics in dying viral cascades can be dominated ( and halted ) by a single individual .however , above the `` tipping - point '' the malthusian parameter always exists and can be calculated . in this casehowever , it can be very different from the poissonian approximation given by eq .( [ malthusianpoissonian ] ) since there is not an analytical solution in closed form for the laplace transform of the log - normal distribution and equation [ malthusian ] must be solved through numerical methods like the one proposed in .finally , an important difference with the super - exponential process is that with sub - exponential cdf s of the response times eq .( [ iasym ] ) shows an asymptotic dynamics for that is always universally given by the cdf of the response ( with a rescaling prefactor dependent on ) .this could be used to measure if no access to individual responses is possible .note however than in the case of sub - exponential distributions , this is not possible since the malthusian parameter in eq .( [ malthusian ] ) depends highly non - trivially on both and .we closely tracked an invariable message propagation in an information diffusion process below the `` tipping - point '' ( i.e. with ) driven by a real viral marketing mechanism run in several european markets .our analysis of the data set of the resulting propagation that reached over 31,000 individuals , reveals the striking diffusion patterns that characterize the dynamics of information diffusion processes as being substantially different from the ones used in the epidemic models traditionally used to explain information propagation .those characteristic patterns affect both the structure of the propagation paths and their dynamics . 
on the structural side, the viral propagation cascades are nearly pure trees, almost completely devoid of closed loops or cycles, and feature a very low clustering coefficient which is almost two orders of magnitude lower than the one typical of the email social networks upon which the viral propagation took place. besides, the recommendation - spreading activity of the campaigns ' active participants is very heterogeneous and its pdf is a long - tailed power - law, which explains why most of the observed propagation was due to extraordinary events caused by _ super - spreading _ individuals. on the other hand, the dynamics side of the propagation process shows that a majority of the spreading individuals become inactive right after sending their recommendations, in what could be considered a `` birth - and - death '' process. finally, the pdf of the forwarding time for the received recommendations is also a very heterogeneous long - tailed distribution, a log - normal in this case, and the spreaders ' forwarding - time distribution and that of the number of recommendations they sent are independent and uncorrelated. while there exist in the literature a number of studies about the static properties of viral information diffusion, none of them explains the peculiar features discovered in the dynamics of our real campaigns. on the one hand, most models concentrate only on the static asymptotic properties of the viral dynamics, like jurvetson 's viral marketing model, the marketing percolation model of goldenberg and libai, or the recommendation propagation model by leskovec et al. which predicts a power - law with exponent for the distribution of the number of recommendations. on the other hand, numerous authors have studied the dynamics of stochastic rumors using the daley - kendall ( dk ) or the susceptible - infective - refractory ( sir ) propagation models with markovian differential equations, or the elaborate branching model of van der lans et al. however, those models assume that the response time can be described by an exponential distribution, which facilitates the theoretical analysis since the process is then markovian and viral information diffusion can thus be explained by differential equations. as we have found, this is not the case for our real experiments, and we have described how to model the dynamics of information diffusion by means of the bellman - harris model, which is the minimal framework to understand the non - markovian spreading of information on social networks. this model generalizes the branching galton - watson scheme typically used both in information diffusion and in general percolation processes in social networks. our main result is that the information diffusion process studied in this work shows a branching dynamics with some striking peculiarities that result a ) from the characteristic patterns of humans when scheduling and prioritizing tasks, b ) from the human decisions on how to select targets for the viral propagation, and c ) from the negligible influence of the substrate social network when the process runs below the `` tipping - point ''.
thus , to explain all of them we propose a concise model that considers the large heterogeneity of human behavior but neglects the impact of the email social network underlying the diffusion process .the mathematical description of this approach is a non - markovian , bellman - harris branching model with a sub - exponential ( log - normal ) distribution of the recommendations response time like the one in section [ subexp ] , and two different power - law distributions for the number of referrals for the classes of _ seed _ and _ viral _ nodes , and respectively .since and in our model are both iid random variables , the overall _ a priori _ probability of transmission of the information between two individuals , the _ transmissibility _ , is the average over the distributions and of the transmission probability between any two individuals .thus , per newman , our branching model is equivalent to uniform bond percolation on the same social network and several magnitudes of interest ( cascades size distribution and `` tipping - point '' ) in the infinite time limit can be obtained by mapping it onto a bond percolation model .given the distributions , , and , this model accurately predicts all the magnitudes of interest of the viral information or wom diffusion processes : the dynamic parameters _ transmissibility _ and _ fanout coefficient _ , the cascades size distribution , its average and variance in the asymptotic limit , the _ cascades network _ clustering coefficient , the message propagation `` tipping - point '' or the precise time dynamics in the asymptotic regime .besides , it allows predictions for processes past , but close to , the `` tipping - point '' provided the substrate network of the propagation is large enough to avoid finite - size effects and maintain the assumption of its negligible influence .the accuracy of those predictions , which can be achieved early in the propagation process , make this model a valuable tool for managing information diffusion . finally since most information transmission , sharing and searching in social networks has limited reach ( thus happening below the `` tipping - point '' ) and given the fact that there seems to exists certain universality on both the heterogeneity in the number of actions and the sub - exponential character of human response times , our theoretical model is thus the most basic and general analytical tool to understand processes like rumor spreading , cooperation , opinion formation , cultural dynamics , diffusion of innovations , etc .assuming independence between the degree of a social network node and the number of messages it sends in a diffusion process , the undirected clustering coefficients of the social network and of the _ cascades network _ are correlated .both are defined as where `` triple '' means a node with two edges running to an unordered pair of others .if connected , such pair forms a triangle . in a mean - field approximationthey can be estimated as with and being the average of triangles or triples by node in the social ( _ soc _ ) or cascades ( _ cas _ ) network .the probability of finding a triangle on a given node is the probability of it having a triple times the linking probability of its end nodes where is the existence probability of a link in the open side of the triad . 
due to the independence of social links and recommendations , the average number of triangles and triples in the _ cascades network _ results which replaced in [ a2 ] , [ a3 ] and combined with [ a4 ] yield since nodes reached by a viral message become active with probability and each resends it in average to of its nearest neighbors ( excluding the ancestor node ) , the probability of closing the triple is whose factor 2 stems from the fact that either of the nodes at the open end of a triple can send the message and close the triangle .replacing in [ a7 ] recovers eq .( [ eq : cvir ] ) which has been verified ( even for ) through simulations on a university email network with .its correlation with the _ cascades network _ clustering coefficient as a function of is shown in fig .[ figclust ] .the low values of explain why our model neglects the substrate network structure in the study of information propagation below the `` tipping - point '' .a. de bruyn and g. l. lilien , intern .j. of research in marketing * 25 * , 151 , ( 2008 ) .r. dye , harvard business rev . * 78 * , 139 ( 2000 ) . v. kumar , j. a. petersen , and r. p. leone , harvard business review r0710j , ( 2007 ) .p. schmitt , b. skiera , and c. van den bulte , journal of marketing * 75 * , 46 , ( 2011 ) .a. barrat , m. barthelemy , and a. vespignani , _ dynamical processes on complex networks _ , ( cambridge university press , 2008 ) . c. castellano , s. fortunato and v. loreto , rev .phys . * 81 * , 591 , ( 2009 ) .d. j. watts , and p. s. dodds , journal of consumer research * 34 * , 441 , ( 2007 ) .d. crandall , d. cosley , d. huttenlocher , j. kleinberg , and s. suri , kdd08,acm ( 2008 ) j. l. iribarren , and e. moro , phys .* 103 * , 038702 , ( 2009 ) .j. phelps , r. lewis , l. mobilio , d. perry , and n. raman , jour . of advertising research* 44 * , 333 , ( 2005 ) c. dellarocas , x. m. zhang , and n. f. awad , jour . of direct marketing* 21 * , 23 , ( 2007 ) .d. liben - nowell , and j. kleinberg , proc .usa * 105 * , 4633 , ( 2008 ) .m. e. j. newman , phys .e * 66 * , 016128 , ( 2002 ) .b. golub , and m. o. jackson , proc .usa * 107 * , 24 , 10833 ( 2010 ) .d. wang , z. wen , h. tong , c. y. lin , c. song , and a .-barabsi , proceedings of the 20th international conference on world wide web , * 735 * , ( 2011 ) .g. pickard , i. rahwan , w. pan , m. cebrian , r. crane , a. madan , and a. pentland , arxiv:1008.3172v1 [ cs.cy ] , ( 2010 ) .y. moreno , m. nekovee , and a. f. pacheco , phys .e * 69 * , 066130 , ( 2004 ) . c. a. hidalgo , a. castro , and c. rodriguez - sickert , new j. phys .* 8 * , 52 , ( 2006 ) .d. j. daley and d. g. kendall , nature * 204 * , 1118 , ( 1964 ) .j. leskovec , l. adamic , and b. huberman , acm transactions on the web , * 1 * , ( 2007 ) .f. wu , b. a. huberman , l.a .adamic , and j. r. tyler , physica a * 337 * , 327 ( 2004 ) .barabsi , nature * 435 * , 207 , ( 2005 ) .w. aiello , f. chung , and l. lu , proc . of the 32nd annual acm symposium of theory of computing , 171 , acm , new york , ( 2000 ) .d. gruhl , r. guha , d. liben - nowell , and a. tomkins , proc .of the 13th int .on www , 491 , ( acm , new york , 2004 ) .j. leskovec , m. mcglohon , and c. faloutsos , proc . of the siam int .conf . on data mining ( sdm07 , 2007 ) .j. e. pitkow , proc . of the 7th www conf .( www7 , 1997 ) . m. gladwell , _ the tipping point _ , ( little , brown and company , new york , 2000 ) . f. liljeros , c.r .edling , l. a. n. amaral , h. e. stanley , and y. aberg , nature * 411 * , 907 ( 2001 ) .r. m. 
anderson , and r. may , _ infectious diseases of humans : dynamics and control _( oxford university press , 1991 ) .a. vzquez , j. g. oliveira , z. dezs , k. i. goh , i. kondor , and a .-barabsi , phys .e * 73 * , 036127 ( 2006 ) .d. b. stouffer , r. d. malmgren , and l. a. n. amaral , nature * 235 * , ( 2005 ) .m. karsai , m. kivel , r. k. pan , k. kaski , j. kertsz , a .-barabsi , and j. saramki , phys .e * 83 * , 025102 , ( 2011 ) .g. miritello , e. moro , and r. lara , phys .e * 83 * , 045102 ( 2011 ) .r. pastor - satorras , and a. vespignani , phys .lett . * 86 * , 3200 , ( 2001 ) .m. e. j. newman , siam review * 45 * , 167 , ( 2003 ) .d. j. watts , and j. peretti , harvard business rev . , * f0705a * , ( 2007 ) .a. de bruyn , and g. l. lillien , int .jour . of research in marketing 25 , 143 , ( 2008 ) .c. kiss , a. scholz , and m. bichler , proc . of the the 8th ieee int .conf . on e - commerce technology and the 3rd ieee int. conf . on enterprise computing , e - commerce , and e - services , ( 2006 ) .s. jurvetson , and r. draper , netscape m - files , ( 1997 ) .m. e. j. newman , and j. park , phys .e * 68 * , 036122 , ( 2003 ) .h. ebel , l .-mielsch , and s. bornholdt , phys .e * 66 * , 035103 , ( 2002 ) . c. dellarocas , management science * 49 * , 1407 , ( 2003 ) .s. feld , american journal of sociology * 96 * , 1464 , ( 1991 ). j. l. iribarren , and e. moro , soc. netw . * 33 * , 134 , ( 2011 ) .j. o. lloyd - smith , s. j. schreiber , p. e. kopp , and w. w.getz , nature * 438 * , 355 , ( 2005 ) .f. m. bass , management science * 15 * , 215 , ( 1969 ) .m. e. j. newman , s. forrest , and j. balthrop , phys .e * 66 * , 035101 ( r ) , ( 2002 ) .barabsi and e. bonabeau , scientific american , * may * , 55 , ( 2003 ) . s. melnik , a. hackett , m. a. porter , p. j. mucha , and j. p. gleeson , phys . rev .e * 83 * , 036112 , ( 2011 ) .t. e. harris , _ the theory of branching processes _ , ( springer verlag , berlin , 2002 ) .k. b. athreya , and p. e. ney , _ branching processes _ , ( springer verlag , berlin , 1972 ) .r. van der lans , g. van bruggen , j. eliashberg , and b. wierenga , marketing science , * 69 * , 348 , ( 2010 ) . s. n. dorogovtsev , and j. f. f. mendes , _ evolution of networks : from biological nets to the internet and www _ , ( oxford university press , oxford , 2003 ). b. karrer , and m. e. j. newman , phys .e * 82 * , 016101 , ( 2010 ) .w. feller , _ an introduction to probability theory and its applications , vol .i , third ed ._ , ( john wiley & sons , new york , 1967 ). m. barthelemy , a. barrat , r. pastor - satorras , and a. vespignani , jour . of theoretical biology* 235 * , 275 , ( 2005 ) . y. m. kalman and s. rafaeli .proceedings of the 38th annual hawaii international conference on system sciences , 108 ( 2005 ) .a. vzquez , b. rcz , a. lukcs , and a .-barabsi , phys .lett . * 98 * , 158702 ( 2007 ) j. f. shortle , m. j. fischer , d. gross , and d. m. b. masi , jour . of probability and statistical science * 1* , 15 , ( 2003 ) .j. goldenberg , b. libai , and s. solomon , physica a * 284 * 335 , ( 2000 ) .j. gani , environmental modelling & software * 15 * , 721 , ( 2000 ) .j. zhou , z. liu , and b. li , physics letters a * 368 * , 458 , ( 2007 ) .r. guimer , l. danon , a. daz - guilera , f. giralt , and a. arenas , phys .e * 68 * , 065103 , ( 2003 ) .
|
despite its importance for rumors or innovations propagation , peer - to - peer collaboration , social networking or marketing , the dynamics of information spreading is not well understood . since the diffusion depends on the heterogeneous patterns of human behavior and is driven by the participants decisions , its propagation dynamics shows surprising properties not explained by traditional epidemic or contagion models . here we present a detailed analysis of our study of real viral marketing campaigns where tracking the propagation of a controlled message allowed us to analyze the structure and dynamics of a diffusion graph involving over 31,000 individuals . we found that information spreading displays a non - markovian branching dynamics that can be modeled by a two - step bellman - harris branching process that generalizes the static models known in the literature and incorporates the high variability of human behavior . it explains accurately all the features of information propagation under the `` tipping - point '' and can be used for prediction and management of viral information spreading processes .
|
for solving non - convex optimization problems , a tool that is becoming more important is * generalized conjugation*. in this paper we introduce a family of coupling functions , the g - coupling functions , which will allow us to see in a different way duality schemes .the usual properties found in the literature ( , ) are related to a fixed coupling function , but here we consider ( for a specified function ) a family of coupling functions .these coupling functions are motivated by gap functions .it is interesting to point out , that many of these ( gap ) functions have similar properties .however , in some cases they are functions of one vector and it is important , since they are linked to specified optimization problems , that those functions have zeros .g - coupling functions will be defined as functions in two variables and they might not have zeros .even more , given a specified proper function , it is shown that a sub - family of this family of coupling functions satisfies many interesting properties . in section 2 , we describe how many gap functions have similar properties , which are useful for the definition of g - coupling functions . in section 3 , it is found the definition of g - coupling function with properties related to generalized conjugation using this family of functions and a fixed proper function . in section 4 , it can be seen how these ideas can be applied in the equilibrium problem .in several works already published , there can be found definitions of gap functions for particular problems .now we present 3 concrete examples . in , it is consider the following variational inequality problem : where is a maximal monotone correspondence ( i.e. , for every with and if there exists , such that , for all and for all , then ) .it is found as a gap function the following one : where and is a non - empty closed convex set .this function happens to be non - negative and convex , and it is equal to zero only in solutions of . the theory of the equilibrium problem begun with the paper written by blum and oettli : where is a non - empty closed convex set and is a function that satisfies : 1 . , for all .2 . is convex and l.s.c . is u.s.c .the gap function is defined as ( , ) : in this case , the function is non - negative , convex and l.s.c . andif it vanishes at , then is a solution of . in ,the extended pre - variational inequality problem is considered : where , and . in this work ,the gap function is ,\ ] ] which is non - positive and it only reaches the value zero in solutions of . in all these examples ,gap functions are used to transform an special equilibrium problem ( for example the vip is a particular class of ep ) into a minimization problem .now our attention is focused in using coupling functions that could be related , at least in some general aspect , to gap functions .therefore these functions must link both primal and dual variables . since these coupling functions must be related to a sense of gap " , we consider these functions as non - negative and with 2 arguments .let us remember that for the minimization problem , the convex conjugation theory allows us to generate a dual problem and there is implicit another concept of gap function ( see ) : consider . 
\qquad( p)\ ] ] define a function , where , satisfying then will be called a perturbation function and the function defined by will be called the marginal function .observe that considering now , the convex bi - conjugate ( see ) of one has : where .\ ] ] then , making , one has is called dual problem of and in general we have .it is said that there is no duality gap whenever .it is easy to prove that , and if we define the function by , then .this analysis is summarized in the following scheme : if is proper and convex , a necessary and sufficient condition for ensuring that there will be no duality gap ( ) is that be l.s.c . at 0( in general l.s.c . does not imply that would be l.s.c . ) .further more , if is convex , l.s.c . and , then and the dual problem has at least one optimal solution , and if is an optimal solution of and , then consider now the function defined by : this function vanishes at if and only if solves the primal problem and solves the dual one .in addition , this function is non - negative and if the first variable is kept fixed , the function is convex and l.s.c .it is clear now , which properties are satisfied for many gap functions .a non - negative function will be called a g - coupling function , if there exists a non - empty closed set such that : 1 . , which means that and for every .2 . .define .not every g - coupling function has zeros : * example : * define on ` ` then ( ) is continuous and it does not have any zeros .[ pr1 ] let and be the non - empty closed set linked to .the following statements hold 1 . if , with and is l.s.c. then 2 . if is lsc on and there exists such that is bounded , then 1 .it is well known ( see ) that is equivalent to the fact that all the level sets are bounded .the statement follows from the lsc of .2 . it follows from the lsc of .let us turn our attention now to how the family of functions will allow us to establish duality schemes for , at least , the minimization problem .it is important to point out that in the following we consider an unusual type of duality : is kept fixed and , for a given , is variable .consider a proper function .for a given take .define and as follows ( for example see , and references therein ) : where is the closed set linked to . in some cases , it would be better to consider a which satisfies : 1 . is convex and is a convex and l.s.c .function for each in . with this, we have the following : let be a proper function and given . if , then which implies furthermore , if satisfies , then is a convex l.s.c function .unless it is mentioned , not every satisfies . it would be interesting to know which condition either a g - coupling function or the function must satisfy in order that the function be proper , because with this one would have a non - trivial function related to .the following lemma ensures the existence of such a function for any , taking as a starting point a natural condition on which must be imposed if is the objective function of a minimization problem .let be as before .then is bounded from below if and only if , for every , there exists such that is proper . 1 .suppose that , then for a fixed , consider as follows : ( ) thus which is clearly a proper function and since was fixed arbitrarily , the result is satisfied for every .2 . take and such that is proper .let us suppose that , from we can see that this implies that . 
then : \right)\geq\ ] ] \right)\geq \sup_{x^*\in c } ( -f^g(x^ * ) ) = -\inf_{x^*\in c}f^g(x^*),\ ] ] which means .then , which implies that is not proper and we have a contradiction .therefore , .notice that this proof also states , in particular , that there exists for every which satisfies and is proper .henceforth , consider only functions such that and for some fixed , will be such that is proper .let it be and + defined by : with , and .define observe that might not be in , since can take the value for some . is non - empty for all .given , define by : it is easy to check that belongs to with ( this example also proves that functions can be found in which satisfy ) .now consider with .taking , define the dual problem related to : since .\ ] ] this means that there is no duality gap between the primal problem and its dual for every .+ + the next theorem states that given , the correspondence defined by is a closed correspondence ( see ) .take and the non - empty closed convex set .if there exist , , sequences of functions ( ) , such that converges uniformly to ( in ) , satisfies for every and converges uniformly to a function ( in ) , then and it satisfies ( extend to by only taking whenever ) .let us prove first that .since converges uniformly to , given , there exists such that if then taking ( remember that for all ) : then . and since is arbitrary , one has that . now we prove that is convex and l.s.c . for all . is convex : let be fixed arbitrarily . since for all , is convex , one has that given and ] & x\in { i\!\!r}^n_+ , \\ \displaystyle\inf_{y\in { i\!\!r}^n } f(y ) & x\notin { i\!\!r}^n_+ , \end{array}\right.\ ] ] \right) ] as before , the functions are convex increasing and l.s.c for every , therefore is a convex increasing and l.s.c . function . with this, one has that is closed , convex and if is non - negative then for every , for all ] is defined by .\ ] ] secondly , if we consider where , ( here also ) , then is a l.s.c .increasing convex - along - rays ( icar ) function ( ) and we can find in chapter 3 , section 3 of several results about this kind of function including a condition which guarantees that in some points , the general sub - differential of the function is nonempty .* remark : * the previous results are valid for every which satisfies . let be a typical minimization problem , where remember that ( see ) the following is the well known dual problem : .moreover , is a solution of and is a solution of if and only if is a saddle point of the lagrangian function , given by which means , define now , as follows : it can be seen that , with .let then .calculating with and which means and thus the classical lagrangian duality is recovered .let , where is a non - empty closed convex set , be such that 1 . is a convex l.s.c .function for every .2 . .the equilibrium problem is defined as follows : pseudo - monotone functions , which are defined as follows , are consider in .a function would be called pseudo - monotone if for every with , we have .let and be the non - empty closed set linked to . if is pseudo - monotone , then . 
since , then and .therefore if is pseudo - monotone , it can be said that and .observe that this proposition affirms that the only pseudo - monotone g - coupling function is the null function .take now and .consider now for every : where for every ( is the non - empty closed set linked to ) and it would be interesting to know if there exist and a which satisfy the _ zero duality gap property _( zdgp ) : where ( notice that if is empty , would have no solutions . )if there exists such a , the following are equivalent : 1 . is a solution of .2 . .there exists such that [ lemmexistzdgp ] there exists which satisfies the zdgp .define by : where , in this case .calculate , with , : it is clear that but for every , one has that , which implies that we always have .therefore finally , there exists which satisfies the zdgp .let us give now another function which also satisfies the zdgp . in this case , this will generate a duality scheme which has been already studied in .let be defined by and , the effective domain of .( since is a concave u.s.c function , then is a closed convex set . )define then : in this case .calculate now for : \\ & = & \displaystyle \sup_{y\in k}[\langle x^*,y\rangle -i_k(x^*)-f(x , y ) ] \\ & = & \displaystyle \sup_{y\in k}[\langle x^*,y\rangle -f(x , y)]-i_k(x^ * ) , \end{array}\ ] ] but =\sup_{y\in k } [ \langle x^*,y\rangle - f(x , y)]$ ] , then the function defined above , satisfies the zdgp .similar to lemma [ lemmexistzdgp ] . in very interesting result can be found .[ jemlws ] is a solution of if and only if , there exists such that .this result does not only say that , but that the dual problem has a solution . in order to prove this , we need first the following lemma . for every and , one has : from fenchel s inequality , we have for every fixed then , taking : where the last inequality occurs , since .this means , . * of theorem [ jemlws ] : * 1 .if is a solution of , then then , there exists ( see ) ( where stands for the normal cone of at ) and thus which means . but thanks to the previous lemma , this implies that .2 . take and suppose that there exists such that , then and thus , is a solution of .a particular case of the equilibrium problem is the complementarity problem , which is defined as follows : where is a closed convex cone and is a continuous function .considering with and in , the solution set of the is equal to the solution set of related to .let us take defined as in the beginning of this section : calculate ( ) : but is equivalent to the statement that and this inclusion is true since is a closed convex cone and .then therefore , calculate now the set : 1 . if is such that then which means .2 . if is such that then there exists satisfying . thus which means .all these imply that .+ + then , given , there exists ( for example ) such that + and : but was chosen arbitrarily , therefore finally , there exists such that in the is considered , when and is an affine operator , in other words , the case of the linear complementarity problem . for studying this problem ,they propose the following : + + is a solution of , if and only if satisfies : it is immediate to see that this proposal is identical to ( [ nec&suf cond for ( cp ) ] ) , therefore by using this ( the one used at the beginning of this section ) we generate a dual problem of which has been treated in other works .this work gives a basis for a new theory , which we called g - coupling functions .logically there are many things to explore , by example : 1 . 
using g - coupling functions for a perturbation theory. 2 . using g - coupling functions for generating primal, dual and primal - dual algorithms. 3 . analyzing, by means of g - coupling functions, the variational inequality problem in the case of non - monotone operators. 4 . given , finding a g - coupling function such that is abstract convex with respect to the class of elementary functions induced by the g - coupling function ( see ).
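as a small self - contained numerical illustration of the classical lagrangian duality recovered in the example above, the following python sketch checks that the primal and dual optimal values coincide on a toy convex problem ; the problem, the grids and the numbers are illustrative choices and are not taken from the paper.

# toy convex problem: minimize x**2 subject to 1 - x <= 0 (optimum 1 at x = 1);
# its classical lagrangian dual maximizes g(u) = min_x [x**2 + u*(1 - x)] over u >= 0
xs = [-5.0 + 0.005 * k for k in range(2001)]
us = [0.025 * k for k in range(401)]
primal = min(x * x for x in xs if 1.0 - x <= 0.0)
dual = max(min(x * x + u * (1.0 - x) for x in xs) for u in us)
print("primal optimum ~ %.4f   dual optimum ~ %.4f   (no duality gap)" % (primal, dual))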
|
gap functions are useful for solving optimization problems , but the literature contains a variety of different concepts of gap functions . it is interesting to point out that these concepts have many similarities . here we introduce g - coupling functions , thus presenting a way to take advantage of these common properties . _ keywords _ : general conjugation theory ; non convex optimization ; equilibrium problems ; gap functions . _ 2000 mathematics subject classifications _ :
|
a crucial challenge towards a safe and efficient operation of iter consists in reducing the dangerous effects of runaway electrons ( re ) during disruptions. re are considered to be potentially intolerable for iter when exhibiting currents larger than 2ma. one of the most popular strategies to address this task is based mainly on re suppression by means of massive gas injection ( mgi ) of high - z noble gas before the thermal quench ( tq ), which possesses the additional advantage of reducing the localized heat load. however, mgi leads to a long recovery time, requires effective disruption predictors, and may lead to hot tail re generation or high mechanical loads if the current quench ( cq ) does not occur in a suitable time interval. nevertheless, in the circumstances in which such a suppression strategy is not effective, for instance due to a delayed detection of the disruption and/or to a failure of the gas valves or of disruption avoidance techniques ( ecrh ), an alternative strategy consisting of the dissipation of the re beam energy and population by means of active re beam control may be pursued, as noted in . alternative mitigation techniques exploit magnetic ( resonant ) fluctuations / perturbations to displace re ; their effect on re beam dissipation has been studied in . resonant magnetic perturbation techniques can also be used at the cq to prevent large avalanche effects. however, they require specific active coils that are not available at ftu. + the method proposed in this paper achieves stabilization of a disruption - generated re beam by minimizing its interaction with the plasma facing components ( pfc ). the re energy dissipation is obtained by reducing the re beam current via the central solenoid ( inductive effects ). similar techniques have been investigated in diii - d. in particular, the focus here is on those re that survive the cq. once the re beam position is stabilized, further techniques not studied in the present paper, such as high - z gas injection to increase the re beam radiative losses, could be exploited. + in recent years, experiments on active re control have been carried out in diii - d, tore supra, ftu, jet, and initial studies have been carried out also at compass. in tore supra, attempts at re thermalization via mgi ( he ) have been investigated.
in diii - d disruptions have been induced by injecting either argon pellets or mgi while the ohmic coil current feedback has been left active to maintain constant current levels or to follow the desired current ramp - down .diii - d also studied the current beam dissipation rate by means of mgi with a final termination at approximatively 100ka .similar results on mgi mitigation of re have been obtained at jet .the present work goes along similar lines but re beam dissipation is obtained only by inductive effects , via central solenoid as in , combined with a new dedicated tool of the plasma control system ( pcs ) .this scheme yields a re beam current ramp - down and position control .effectiveness of the novel approach is measured in term of reduced interaction of highly energetic runaways with the pfc .furthermore , as in , we consider the re beam radial position obtained by the co/co scanning interferometer , showing that is also in agreement with neutron diagnostics and the standard real - time algorithm based on magnetic measurements that estimate the plasma boundary .a brief list of ftu diagnostics correlated with this work is given below .further details are given in .* fission chamber ( fc ) : * a low sensitivity fission chamber manufactured by centronic , with a coating of 30 / of operated in pulse mode at 1 ms time resolution and calibrated with a source .this diagnostic is essential in the analysis of the sequence of events occurring during the re current plateau phase since the standard hard x - ray ( hxr ) and neutron monitors are typically constantly saturated after the cq . during the re plateau phasethis detector measures photoneutrons and photofissions induced by gamma rays with energy higher than 6 mev ( produced by bremsstrahlung of the re interacting with the metallic plasma facing components ) . * soft - x ( sxr ) : * the multichannel bolometer detects rays emitted at the magnetic center of the toroidal camera ( major radius equal to 0.96 m ) in the range of 5ev to 10kev . within this rangealso re collisions with plasma impurities can be detected .* hard - x ( hxr ) : * the x - rays are monitored by two systems : * nai scintillator detector sensitive to hard - x rays with energy higher than 200kev mainly emitted by re hitting the vessel ( labeled as hxr in the figures ) . 
*the neu213 detector sensitive both to neutron and to gamma rays and cross calibrated with a bf3 neutron detector in discharges with no re .this detector is used to monitor the formation of re during the discharge , however at the disruption and during the re plateau its signal is usually saturated and therefore the gamma monitoring is replaced by the fission chamber .* interferometer : * the co/co scanning interferometer can provide the number of electrons measured on several plasma vertical chords ( lines of sight , los ) intercepting the equatorial plane at different radii ranging from 0.8965 m to 1.2297 m with a sampling time of 62.5 .detailed information related to mounting position and specific features are given in .* mhd sensors : * the amplitude of the mirnov coil signal considered is directly related to helical deformations of the plasma resulting from mhd instabilities , having in most cases n=1 ( m=2 ) toroidal ( poloidal ) periodicity .dedicated ftu plasma discharges have been performed to test two novel real - time ( rt ) re control algorithms , named pcs - ref1 and pcs - ref2 .such algorithms have been implemented within the framework of the ftu plasma control system ( pcs ) for position and ramp - down control of disruption - generated re .the active coils used to control the position and the current of the plasma are shown in figure [ fig : coils ] .the pcs of ftu , extensively described in , exploits the current flowing within the t coil , called the central solenoid , to impose the plasma current via inductive effect .the t coil current is regulated via a feedback control scheme based on a proportional - integral - derivative ( pid ) regulator , which is driven by the plasma current error plus a preprogrammed signal .the horizontal position of the plasma is controlled by means of an additional pid regulator that is fed with the horizontal position error .the latter error is obtained by on - line processing of a series of pick - up coil signals to determine the plasma boundary ( last closed magnetic surface ) which is compared along the equatorial plane to the desired plasma internal and external radii , see for further details . the current flowing in the f coil ( ) , by geometrical construction , allows us also to modify the plasma elongation .the current on the v coil ( ) , which allows us to produce a vertical field similar to f but with a slower rate of change , is modified by a specific controller named current allocator in order to change at run - time and maintain unchanged the vertical field .in such a way , the plasma radial position is left unchanged and meanwhile it is possible to steer the value away from saturation levels .the current redistribution ( reallocation ) between and is performed by the current allocator at a slower - rate than the changes imposed on by the pid regulator ( pid - f ) for plasma horizontal stabilization ( two time scale feedback system ) .+ the pcs safety rules impose that whenever the hxr signal takes value above a given safety threshold ( 0.2 ) for more than 10 ms , indication that harmful re are present , the discharge has to be shut - down . in the previous shut - down policythe reference was exponentially decreased down to zero and the desired and where left unchanged .+ the new controller _ pcs - ref1 _ has been specifically designed for re beam dissipation and comprises two different phases . 
in the first phase , specific algorithms described in are employed to detect the cq and the re beam plateau by processing the and the hxr signal . at the same time , the current allocator steers the values of away from saturation limits , to ensure that a larger excursion is available for the control of the re beam position . in the second phase ,once the re beam event has been detected ( cq or hxr level ) , the reference is ramped - down in order to dissipate the re beam energy by means of the central solenoid . in particular , a scan of the initial values and slope of the updated reference for re suppression ( current ramp - down ) , that substitutes the original reference when the re beam is detected , have been performed .at the same time the desired ( reference ) external radius is reduced linearly with different slopes down to predefined constant values . however , the updated reference is such that , below 1.1 m , it is constrained to be not smaller than m to avoid large position errors that might induce harmful oscillations of the re beam due to the action of the pid - f position controller .the has been reduced in order to compensate for a large outward shift of the re beam , hence to preserve the low field side vessel from re beam impacts .the reduction of the external plasma radius reference can be considered the way of finding the re beam radial position that provides minimal re beam interaction with pfc , similar findings have been discussed in and the re beam position with minimal pfc interaction is called the `` safe zone '' . in all the experiments ,the internal radius is not changed since we are operating in ( internal ) limiter configuration .nevertheless , the control system has the objective to maintain the plasma within the desired horizontal and vertical radii , avoiding the plasma impact with the vessel ( both side ) . + a second novel controller _ pcs - ref2_ has been designed with the same objective of re beam control and energy suppression .the main difference between this second controller and pcs - ref1 consists in an alternative profile for , when the latter is ramped - down . in this casethe updated reference of is ramped - down to a specific constant value , within the range ] m , is computed in real - time by processing the measured and fc signals according to the extremum seeking technique ( similar to a gradient algorithm discussed in ) in order to minimize the real - time fc signal .furthermore , the ramp - down slope selected for pcs - ref2 is about three times smaller than pcs - ref1 .note that due to the current amplifiers limitations the control system is not expected to be effective in position and current ramp - down control within ms of the cq detection .the new re control architecture has been applied in low - density plasma discharges . 
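the extremum - seeking redefinition of the reference used by pcs - ref2 can be pictured with the following minimal sketch ( python ) . it is a generic gradient - type update driven by the fc count rate , not the actual ftu pcs code : the gains , the saturation limits and the finite - difference gradient estimate are illustrative assumptions .

```python
import numpy as np

def extremum_seeking_step(r_ref, fc_now, fc_prev, r_prev, gain=0.02,
                          r_min=1.05, r_max=1.18, max_step=0.005):
    """One update of a gradient-like extremum-seeking law for the external
    radius reference, driven by the fission-chamber (FC) count rate.

    Gains, limits and the finite-difference gradient estimate are illustrative
    assumptions, not the FTU PCS implementation.
    """
    d_r = r_ref - r_prev
    d_fc = fc_now - fc_prev
    # finite-difference estimate of d(FC)/d(R_ext); guard against tiny steps
    grad = d_fc / d_r if abs(d_r) > 1e-4 else 0.0
    # move the reference against the estimated gradient to reduce FC counts
    step = np.clip(-gain * grad, -max_step, max_step)
    return float(np.clip(r_ref + step, r_min, r_max))
```

in a real - time loop the function would be called at the controller rate , feeding back the previous reference and fc reading , so that the reference drifts toward the position with minimal measured re losses .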
in a _ first scenario _ a significant re population is generated during the ramp - up / flat top at 360ka by selecting low gas prefill and ( low ) density reference of m , followed by an injection of ne gas to induce a disruption .the sudden variation of the resistivity and the increased loop voltage at the disruption accelerate the pre - existing re population and lead , in some cases , to the formation of a re current plateau which is the target scenario of these experiments .note that this is not a method to create runaways but to turn an existing seed population of re in a runaway plateau at the disruption .the discharge is run with an initial low gas prefill in such a way that early in the discharge ramp up a runaway population is established .the controller pcs - ref1 has been tested in the above scenario whereas the controller pcs - ref2 has been tested in a _second scenario _ that differs with respect to the previous one in terms of an extremely low gas prefill that causes spontaneous disruption during the ramp - up .this leads , in some cases , to a re plateau .note that since ne gas is not injected to induce disruptions , the is generally less than the first scenario .the characterization of the different phases of a disruption with the generation of a re plateau is given in figure [ fig : ipllunga_1 ] for the discharge , which may be considered a typical instance of the first scenario .after ne gas injection , the plasma density slightly increases during the pre - disruptive phase p1 ( grey box ) .the tq ( phase p2 , orange box ) , lasting few milliseconds ( 1 - 2ms ) , in which the plasma confinement is lost and the thermal energy is released to the vessel combined with the high electrical field , produces a large increase of the electron density .the cq phase p3 ( green box ) follows : it is characterized by a sudden drop of the plasma current and a high self - induced parallel electric field ( ) that further accelerates the preexisting re and possibly increases their number . if the re beam survives during the cq to collisionality drag , loss of position control , mhd induced expulsion ( to mention only few re loss phenomena ) , then the re plateau phase ( p4 ) is started . in this specific re scenario ,generally the latter phase p4 can be in turn divided into three sub - phases : during phase p4.1 the re beam current exponentially replaces a large fraction of the ohmic current ( see , this process starts with the onset of the cq ) ; subsequently part of such current can be lost due to instabilities ( p4.2 ) , while the rest of the beam can survive ( further plateau in phase p4.3 ) before the final loss ( phase p5 ) . 
in figure[ fig : ipllunga_1 ] , beside the time traces of the plasma / re beam current ( solid black ) and loop voltage ( dashed blue ) , the time trace of the total number of electrons estimated by fitting the scanning interferometer data with a gaussian function ( solid red ) are shown .note that the vertical lines of sight of the co interferometer are placed only in the central and lower field side of the torus ( from 0.8965 m to 1.2297 m ) .we have fitted the los electrons radial profile with gaussian functions , whose parameters have been obtained exploiting least square algorithms , in order to estimate also the electron density for major radius belonging to the range ] s in discharge shown in figure [ fig : ipllunga_1 ] , it is possible to infer that most likely the reduction is directly related to the loss of re ( that carry most of the current ) in the low field - side of the vacuum chamber .moreover , the fc detector reveals that ( a percentage of ) re have energy higher than about 6 mev . in the time interval ]e , assuming ]e , assuming ] m against the standard 1.23 m. this reduction is in agreement with the re beam outward shift given by the approximated formula where is the averaged safety factor ] mev is the re energy , is the toroidal field yielding $ ] cm .this new constant values have been used for the controller pcs - ref2 , tested in a second scenario , whose results are shown in figure [ fig : control_spontaneous ] . in the latter casealso the real - time fc signal is exploited in order to slightly modify and minimize the fc signal : see the dashed lines in figure [ fig : control_spontaneous].(f ) that suddenly ramp down to 1.11 m and 1.13 m for and , respectively , and then slightly changes in time .although the number of the available discharges in the second scenario was not sufficient to optimally tune the gains of the extremum seeking policy , the results are encouraging .discharges of the second scenario characterized by a sudden increase of the fc signal at cq have been found on the ftu database and some of them have been reported in the left column of fig .[ fig : control_spontaneous ] .the discharges with active re beam control and show a reduction of the fc signal down to zero while the is slowly ramped - down and the reference of is reduced . on the contrary the discharges , , , without the active control , show a substantial increase of the fc signal slightly before and during the final loss . in the discharges and the two hard x - rays monitors neu213 and hxr shown in the figure [ fig : control_spontaneous ] ( c , g ) are saturated from the cq throughout the ramp - down indicating that energetic re are present . in the discharge at the end of the current ramp - down , about 30 ms before the final loss , the neu213 have small drops below the saturation value . 
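the re beam radial position discussed above is estimated by least - squares fitting a gaussian to the line - of - sight interferometer data . a minimal sketch of such a fit is given below ; only the outermost chord positions ( 0.8965 m and 1.2297 m ) come from the text , while the intermediate chord radii and the measured values are placeholders .

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(r, amp, r0, sigma):
    """Gaussian model of the line-integrated electron signal vs major radius."""
    return amp * np.exp(-0.5 * ((r - r0) / sigma) ** 2)

# chord positions of the scanning interferometer: endpoints from the text,
# intermediate radii and the sampled values are hypothetical placeholders
chords_r = np.array([0.8965, 0.9430, 0.9960, 1.0520, 1.1080, 1.1650, 1.2297])
n_los    = np.array([0.8, 1.4, 2.1, 2.6, 2.2, 1.3, 0.6])   # a.u., one time sample

p0 = [n_los.max(), chords_r[np.argmax(n_los)], 0.05]        # initial guess
(amp, r0, sigma), _ = curve_fit(gaussian, chords_r, n_los, p0=p0)
print(f"estimated beam centre R0 = {r0:.3f} m, width sigma = {sigma:.3f} m")
```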
in the left column of fig . [ fig : control_spontaneous ] the shot is shown as an example of a discharge where only a current ramp - down is performed and where the current allocator is not active . it can be seen that after the cq there is the usual compression against the inner wall but then , due to the control system that tries to reestablish the desired m , the beam moves outward and the discharge terminates with the collision of the beam with the outer limiter . + it has to be noted that in the second scenario the new controller is activated not by the detection of the cq or of the current plateau onset , as in the first scenario , but because the hxr signal stays above 0.2 for more than 10 ms . this safety condition triggers the new current ramp - down and references in the shots and . the decrease of the reference before the plateau onset , although not clearly visible , might be associated with a larger initial loss of the runaway electrons on the high - field side of the vessel , and this will be taken into consideration for future controller design . furthermore , in order to further reduce the re beam interaction with the inner vessel , we could even consider the diii - d approach of activating a saturated control to decrease as fast as possible the vertical magnetic field produced by the active coils ( f and v ) whenever the cq is detected . the selection of the saturated control time duration would require a detailed analysis on ftu . the pcs - ref1/2 results seem to suggest the importance of reducing the external radius reference to minimize the re interaction with the vessel , confirming similar results discussed in . as additional evidence , observe the rise of the fc signal in correspondence with the increase of in the discharges run with the old controller of figure [ fig : control_spontaneous ] ( left column ) . a further interesting feature is that although the re beam final current loss in pcs - ref2 ( right column of figure [ fig : control_spontaneous ] ) is larger , the corresponding final fc peaks are noticeably smaller when compared with the final peaks obtained in figures [ fig : control_neon ] and [ fig : control_spontaneous ] . it is also interesting to note that in the shot ( and slightly in the ) the hxr signal drops below the saturation threshold before the final loss , unlike all the other discharges at ftu : since the current drop at the final loss is about 100 ka , this might suggest that a considerable amount of re beam energy has been dissipated during the ramp - down . these facts are signs that a certain degree of re energy dissipation has been obtained by reducing the slope of the ramp - down and the reference . + we now proceed to the analysis of the time evolution of the interferometer radial profiles . figure [ fig:35965_3d ] shows the time interval between the re plateau onset and the final loss ( ) , for discharges and obtained with the old controller in the first experimental scenario . unfortunately we do not have the scanning interferometer data for the second scenario experiments with the old controller . figure [ fig:36574_3d ] shows two experiments with the new controllers : a ) pcs - ref1 ( discharge , first plasma scenario ) , b ) pcs - ref2 ( discharge , second plasma scenario ) . fig . [ fig:35965_3d ] indicates that the re plateau termination is triggered by mhd instabilities that suddenly move the background plasma / re beam inward . further studies are necessary in order to better understand the instability type able to induce such re beam displacements , shown in fig . [ fig:35965_3d ] for discharges and
. by looking at fig . [ fig:36574_3d ] it is evident that the new control system is able to avoid the large outward oscillation shown in fig . [ fig:35965_3d ] as well as in the left columns of fig . [ fig : control_spontaneous ] and [ fig : control_neon ] . + finally , in figures [ fig : allshots_v2a ] and [ fig : allshots_v2b ] we show a comparison of 52 disruption - generated re beam plateaus , retrieved by an ad - hoc algorithm applied to the ftu database of about 35000 discharges , subdivided as follows : 2 shots with pcs - ref2 in the second scenario ( blue cross ) , 5 shots with pcs - ref1 in the first scenario ( blue circle ) , 5 shots with only a linear current ramp - down , without redefinition of and without the current allocator active , in the second scenario ( blue square ) , and with the old controller 16 shots in the first scenario ( red circle ) and 24 shots in the second scenario ( red cross ) . a black diamond is superimposed on the shots where the reached the saturation threshold . + the definition is considered , where is the onset of the current drop and it is assumed to be the time corresponding to the last fc spike before drops below 40 ka , which generally identifies the knee of the time traces at the final loss . the fc integral is evaluated as the sum of all counts from the beginning of the cq up to the end of the shot . + in the top plot of figure [ fig : allshots_v2a ] the fc final peak values are shown with respect to the re beam plateau duration . the values obtained by pcs - ref2 are the smallest ( blue crosses ) . low levels are also obtained by pcs - ref1 ( blue circles ) , whereas the shots with only a current ramp - down , without the redefinition and the current allocator ( blue squares ) , are slightly above the former . for the shots in which the current of the coil reaches saturation , leading to a radial loss of plasma confinement , a black diamond is superimposed . the saturation of the f coil is not observed in the shots where the current allocator , which modifies the current in order to keep far from the saturation thresholds , is used ( pcs - ref1 , pcs - ref2 ) . the reduction of the external radius acts in the same direction of reducing the excursions . it is interesting to note that the final fc peaks of the shots with only a current ramp - down ( blue squares ) are higher than those of pcs - ref1 and pcs - ref2 : a possible explanation of this difference , as well as of the plateau duration , is the redefinition of the reference and the use of the current allocator . + in the bottom plot of figure [ fig : allshots_v2a ] the fc integral is shown with respect to the re beam plateau duration .
in this case , the mean value of the blue circles is slightly below the mean value of the fc integral evaluated for the uncontrolled shots ( red crosses and circles ) , while the blue crosses have approximately the same value . it is worth mentioning that the integral of the fc x - ray monitor is proportional to the total energy absorbed by the vessel during the shot , whereas the fc values are proportional to the power released by the re beam onto the vessel . since we are not able to reconstruct the re beam impact surfaces , we cannot evaluate the actual power deposition . + given that the fc integral mean values of the blue circles and crosses are about the same as those of the red ones , whereas the final fc peaks are much smaller for the newly controlled re beams , we can infer that a slow current ramp - down and the redefinition of make it possible to decrease the re beam energy and possibly reduce its interaction with the pfc . + in the top plot of figure [ fig : allshots_v2b ] the fc final loss integral , evaluated by summing up all the counts of the fc camera from to the end of the shot , is shown versus the . the onset of the final loss is estimated as . this figure has been provided since the ratio between the fc tail integral and should be related to the percentage of the current ( energy and number ) still carried by the runaways before the final loss onset . + in the bottom plot of the same figure , the fc final peak value is shown with respect to the decay rate of , evaluated as the ratio between and , where is the onset of the re beam plateau . it has to be noted that from this 2d picture it is not possible to see the value of , and the shots with high fc final peaks that seem to have a small current decay rate are indeed shots with a premature loss of confinement , leading to a small and hence to a small decay rate . to better show this dependence we add in figure [ fig:3dcomare ] the dependence on .
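for reference , the figures of merit compared in these plots ( plateau duration , fc integral , fc final peak and current decay rate ) can be computed from the sampled time traces roughly as in the sketch below ; the 40 ka threshold follows the text , whereas the spike - detection rule and all other details are simplifying assumptions .

```python
import numpy as np

def re_plateau_metrics(t, ip, fc, t_plateau_onset, t_cq_onset):
    """Figures of merit used above, from sampled time traces.

    t : time axis (s); ip : plasma/RE current (A); fc : FC counts per sample.
    The 40 kA threshold follows the text; everything else is an assumption,
    and the traces are assumed to cover only the post-CQ phase.
    """
    below = np.where(ip < 40e3)[0]
    t_end = t[below[0]] if below.size else t[-1]           # current drop time
    # final-loss onset: last FC spike before Ip drops below 40 kA (here simply
    # the last sample above 5x the median FC level, a crude heuristic)
    spikes = np.where((t < t_end) & (fc > 5 * np.median(fc)))[0]
    t_fl = t[spikes[-1]] if spikes.size else t_end
    plateau_duration = t_fl - t_plateau_onset
    fc_integral = fc[t >= t_cq_onset].sum()                # counts over the shot
    fc_final_peak = fc[t >= t_fl].max() if np.any(t >= t_fl) else 0.0
    ip_onset = np.interp(t_plateau_onset, t, ip)
    ip_fl = np.interp(t_fl, t, ip)
    decay_rate = (ip_onset - ip_fl) / max(plateau_duration, 1e-6)   # A/s
    return plateau_duration, fc_integral, fc_final_peak, decay_rate
```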
from these last two pictures it seems clear that , in order to have small and small fc peaks , the current decay rate must be below 2 ma / s . two algorithms for the control of disruption - generated re have been implemented at ftu . the two algorithms redefine in real time the and ramp - down references , exploiting the magnetic and gamma - ray signals . the ramp - down is performed via the central solenoid and the current in the poloidal coils is changed to control the position of the re beam as determined by the magnetic measurements . + we have found that the external plasma radius evaluated by magnetic moments at ftu can be used to estimate the re beam radial position when the current profiles are not heavily peaked , as shown by the interferometer data . it has been shown that , by modifying the plasma current reference ( ramp - down ) and reducing the , the gamma signal provided by the fc chamber decreases , indicating re beam energy suppression and reduced interactions with the vessel ( especially on the low - field side ) . a ( slow ) current decay rate of about 0.5 ma / s has been found to provide a better re beam confinement and consequently a controlled energy dissipation . to further and quantitatively corroborate this fact , we have analyzed a considerable number of post - disruption re beam discharges at ftu , showing that the fc peaks at final loss decrease when is slowly ramped down . this is in accord with the experimental findings in , where slow ( 1 ma / s ) current ramp - downs have been seen to provide better re beam confinement . although further experiments are necessary to better refine the optimal external radius reference during the re beam current ramp - down , possibly defined as a function of and the re beam energy in future work , a constant value of approximately m seems to significantly help in reducing the re impacts with the pfc . this corresponds to an external major radius reduction of approximately of the flat - top value ( m ) and a reduction of with respect to the plasma minor radius , which is equal to 0.305 m in ftu . the interferometer signal analysis has shown that strong mhd - induced instabilities , which displace a large percentage of the re beam , arise when the ( cold ) electron profiles are highly peaked as in . now that we have found suitable plasma / re beam current and position references , the current and position controllers pid - t and pid - f will be re - designed to further improve their performance specifically in the re control phase . the novel controllers will be based on a re beam dynamical model identification that is under development . this work was supported by the eu horizon 2020 research and innovation program ( project wp14-mst2 - 9 : runaway electron studies in ftu ) - enea - eurofusion . + l. boncagni , r. vitelli , d. carnevale et al . , `` an overview of the software architecture of the plasma position , current and density real - time controller of the ftu '' , fusion engineering and design , vol . 9 , n. 3 , 2014 . a. astolfi , l. boncagni , d. carnevale et al . , `` adaptive hybrid observer of the plasma horizontal position at ftu '' , mediterranean conference on control and automation , pages 1088 - 1093 , 2014 ( doi 10.1109/med.2014.6961519 ) . l. boncagni , y. sadeghi , d. carnevale et al . , `` first steps in the ftu migration towards a modular and distributed real - time control architecture based on marte '' , ieee transactions on nuclear science , vol . 58 , n. 4 , pp .
1778 - 1783 , 2011 . h. m. smith et al . , `` runaway electron generation in tokamak disruptions '' , plasma phys . control . fusion 51 , 124008 , 2009 . b. paradkar et al . , `` runaway - loss induced negative and positive loop voltage spikes in the aditya tokamak '' , physics of plasmas , 17 , 092504 , 2010 . d. carnevale , l. zaccarian , a. astolfi and s. podda , `` extremum seeking without external dithering and its application to plasma rf heating on ftu '' , ieee conference on decision and control , pp . 3151 - 3156 , 2008 . j. r. martin - solis et al . , `` experimental observation of increased threshold electric field for runaway generation due to synchrotron radiation losses in the ftu tokamak '' , phys . rev . lett . 105 , 185002 , 2010 .
|
experimental results on the position and current control of disruption - generated runaway electrons ( re ) in ftu are presented . a scanning interferometer diagnostic has been used to analyze the time evolution of the re beam radial position and its instabilities . the correspondence between the interferometer time traces , the radial profile reconstructed via magnetic measurements and the fission chamber signals is discussed . new re control algorithms , which define in real time updated plasma current and position references , have been tested in two experimental scenarios featuring disruption - generated re plateaus . comparative studies among 52 discharges with disruption - generated re beam plateaus are presented in order to assess the effectiveness of the proposed control strategies : the re beam interaction with the plasma facing components is reduced while the current is ramped down . _ keywords _ : runaway , plasma control
|
the vehicle routing problem ( vrp ) is one of the classical optimization problems known from operations research with numerous applications in real world logistics . in brief , a given set of customers has to be served with vehicles from a depot such that a particular criterion is optimized .the most comprehensive model therefore consists of a complete graph , where denotes a set of vertices and denotes the connecting arcs .the depot is represented by , and vehicles are stationed at this location to service the customers .each customer demands a nonnegative quantity of goods and service results in a nonnegative service time . traveling on a connecting arc results in a cost or travel time .the most basic vehicle routing problem aims to identify a solutions that serves all customers , not exceeding the maximum capacity of the vehicles and their maximum travel time while minimizing the total distances / costs of the routes .various extensions have been proposed to this general problem type .most of them introduce additional constraints to the problem domain such as time windows , defining for each customer an interval $ ] of service . while arrival before results in a waiting time , arrival after is usually considered to be infeasible . in other approaches ,the times windows may be violated , leading to a tardy service at some customers .some problems introduce multiple depots as opposed to only a single depot in the classical case . along with thissometimes comes the additional decision of open routes , where vehicles do not return to the place they depart from but to some other depot .also , different types of vehicles may be considered , leading to a heterogeneous fleet in terms of the abilities of the vehicles .unfortunately , most problems of this domain are -hard . as a result, heuristics and more recently metaheuristics have been developed with increasing success . in order to improve known results , more andmore refined techniques have been proposed that are able to solve , or at least approximate very closely , a large number of established benchmark instances . withthe increasing specialization of techniques goes however a decrease in generality of the resolution approaches .while the optimality criterion of minimizing the total traveled distances is the most common , more recent approaches recognize the vehicle routing problem as a multi - objective optimization problem . here , the overall problem lies in identifying a pareto - optimal solution that is most preferred by a decision maker .as the relevant objective functions are often of conflicting nature , a whole set of potential pareto - optimal solutions exists among which this choice has to be made . in the current article , a framework for interactive multi - objective vehicle routing is presented that aims to address two critical issues : ( i ) the necessary generality of resolution approaches when trying to solve a range of problems of different characteristics , and ( ii ) the integration of multiple objectives in the resolution process .independent from the precise characteristics of the particular vrp , two types of decisions have to be made when solving the problem . 1 . assignment of customers to vehicles ( clustering ) .2 . 
construction of a route for a given set of customers ( sequencing ) . it is well - known that both types of decisions influence each other to a considerable extent . the framework presented here therefore proposes the use of a set of elements to handle this issue with utmost generality . figure [ fig : framework ] gives an overview of the elements used . sketch of the framework ] * the _ marketplace _ represents the element where orders are offered for transportation . * _ vehicle agents _ place bids for orders on the marketplace . these bids take into consideration the current routes of the vehicles and the potential change when integrating an additional order . * an _ ontology _ describes the precise properties of the vehicles such as their capacity , availability , current location , etc . this easily allows the consideration of different types of vehicles . * a _ decider _ communicates with the human decision maker via a graphical user interface ( gui ) and stores his / her individual preferences . the decider also assigns orders to vehicles , taking into consideration the bids placed for the specific orders . a solution is constructed by placing the orders on the marketplace , collecting bids from the vehicle agents , and assigning orders to vehicles while constantly updating the bids . route construction by the vehicle agents is done in parallel using local search heuristics so that a route can be identified that maximizes the preferences of the decision maker . the framework has been prototypically implemented in a computer system . in the first experiments , two objective functions are considered : the total traveled distance and the total tardiness caused by vehicles arriving after the upper bound of the time window . the preferences of the decision maker are represented by introducing a weighted sum of both objective functions . using the relative importance of the distances , the overall utility of a particular solution can be computed as given in expression [ eqn : utility ] . the vehicle agents are able to modify the sequence of their orders using four different local search neighborhoods ( a minimal sketch of these moves is given after the following list ) . * inverting the sequence of the orders between positions and . while this may be beneficial with respect to the distances , it may pose a problem for the time windows as usually orders are served in the sequence of their time windows . * exchanging the positions and of two orders . * moving an order from position and reinserting it at position , ( forward shift ) . * moving an order from position and reinserting it at position , ( backward shift ) .
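a minimal sketch of the four neighborhood moves and of the weighted - sum evaluation they are judged by is given below ; the route representation and the cost callbacks are simplifying assumptions , and the weighted sum is interpreted here as a cost to be minimized .

```python
import random

def invert(route, i, j):          # reverse the segment between positions i and j
    return route[:i] + list(reversed(route[i:j + 1])) + route[j + 1:]

def exchange(route, i, j):        # swap the orders at positions i and j
    r = route[:]; r[i], r[j] = r[j], r[i]; return r

def shift(route, i, j):           # remove the order at position i, reinsert at j
    r = route[:]; r.insert(j, r.pop(i)); return r

def utility(route, w, dist_fn, tardy_fn):
    """Weighted-sum evaluation: w * distance + (1 - w) * tardiness (cost)."""
    return w * dist_fn(route) + (1.0 - w) * tardy_fn(route)

def local_search_step(route, w, dist_fn, tardy_fn):
    """Pick a random neighborhood, accept the move only if the utility improves."""
    i, j = sorted(random.sample(range(len(route)), 2))
    move = random.choice([invert, exchange, shift,
                          lambda r, a, b: shift(r, b, a)])   # backward shift
    cand = move(route, i, j)
    better = utility(cand, w, dist_fn, tardy_fn) < utility(route, w, dist_fn, tardy_fn)
    return cand if better else route
```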
in each step of the local search procedure , a neighborhood is randomly picked from the set of neighborhoods and a move is computed and accepted given an improvement . bids for orders on the marketplace are generated by the vehicle agents , taking into consideration all possible insertion points in the current route . the sum of the weighted increases in distance ( dist ) and tardiness ( tardy ) gives the price for the order . the decider assigns orders to vehicles such that the maximum regret when _ not _ assigning the order to a particular vehicle , and therefore having to assign it to some other vehicle , is minimized . it also analyzes the progress of the improvement procedures . given no improvement for a certain number of iterations , the decider forces the vehicle agents to place orders back on the market such that they may be reallocated . the optimization framework has been tested on a benchmark instance taken from . the instance comprises 48 customers that have to be served from 4 depots , each of which possesses two vehicles . we simulated a decision maker changing the relative importance during the optimization procedure : first , a decision maker starting with a and successively decreasing it to 0 ; second , a decision maker starting with a and increasing it to 1 ; and third , a decision maker starting with a , increasing it to 1 and decreasing it again to 0 . between adjusting the values of in steps of 0.1 , enough time for computations has been given to the system to allow a convergence to ( at least ) a local optimum . figure [ fig : testrun ] plots the results obtained during the test runs . results of the test runs ] the first decision maker starts with , and moves to , while the second starts with , and moves to , . clearly , the first strategy outperforms the second . while an initial value of allows the identification of a solution with zero tardiness , it tends to construct routes that , when decreasing the relative importance of the tardiness , turn out to be hard to adapt . in comparison to the strategy starting with a , the clustering of orders turns out to be prohibitive for a later improvement . when comparing the third strategy of starting with a , it becomes obvious that this outperforms both other ways of interacting with the system . here , the solutions start with , , go to , , and finally to , . apparently , starting with a compromise solution is beneficial even for both extreme values of and . a framework for the interactive resolution of multi - objective vehicle routing problems has been presented . the concept has been prototypically implemented in a computer system . preliminary results on a benchmark instance have been reported . first investigations indicate that the concept may successfully solve vehicle routing problems under multiple objectives and complex side constraints . in this context , interaction with the system is provided by a graphical user interface . the relative importance of the objective functions can be modified by means of a slider bar , resulting in different solutions which are computed in real time by the system , therefore providing immediate feedback to the user . figure [ fig : interface ] shows two extreme solutions that have been interactively obtained by the system . two screenshots of the graphical user interface : on the left , a short solution with high tardiness ; on the right , a solution with low tardiness but long traveling distances . ]
future developments are manifold . first , it may be beneficial to investigate other ways of representing preferences than the weighted sum approach . while the comparably easy interaction with the gui by means of a slider bar enables the user to directly change the relative importance of the objective functions , it prohibits the definition of more complex preference information , e.g. involving aspiration levels . second , different and improved ways of implementing the market mechanism have to be investigated . first results indicate that the quality of the solutions is biased with respect to the initial setting of the relative importance of the optimality criteria . it appears as if more complex reallocations of orders between vehicles are needed to address this issue . finally , more investigations on benchmark instances will be carried out . apart from test cases known from the literature we aim to address in particular problems with unusual , complex side constraints and multiple objectives . an additional use of the system will be the resolution of dynamic vrps . the market mechanism provides a platform for the matching of offers to vehicles without the immediate need of accepting them , yet still obtaining feasible solutions and gathering a price for the acceptance of offers which may be reported back to the customer . nicolas jozefowiez , frédéric semet , and el - ghazali talbi . parallel and hybrid models for multi - objective optimization : application to the vehicle routing problem . in j. j. merelo guervós et al . , editor , _ parallel problem solving from nature vii _ , volume 2439 of _ lecture notes in computer science _ , pages 271 - 280 , berlin heidelberg , 2002 . springer - verlag . tadahiko murata and ryota itai . multi - objective vehicle routing problems using two - fold emo algorithms to enhance solution similarity on non - dominated solutions . in c. a. coello coello et al . , editor , _ evolutionary multi - criterion optimization , third international conference , emo 2005 _ , volume 3410 of _ lecture notes in computer science _ , pages 885 - 896 , berlin heidelberg , 2005 . springer - verlag . malek rahoual , boubekeur kitoun , mohamed - hakim mabed , vincent bachelet , and féthia benameur . multicriteria genetic algorithms for the vehicle routing problem with time windows . in de sousa , pages 527 - 532 .
|
the article presents a framework for the resolution of rich vehicle routing problems which are difficult to address with standard optimization techniques . we use local search on the basis of variable neighborhood search for the construction of the solutions , but embed the techniques in a flexible framework that allows the consideration of complex side constraints of the problem such as time windows , multiple depots , heterogeneous fleets , and , in particular , multiple optimization criteria . in order to identify a compromise alternative that meets the requirements of the decision maker , an interactive procedure is integrated into the resolution of the problem , allowing the modification of the preference information articulated by the decision maker . the framework is prototypically implemented in a computer system . first results of test runs on multiple depot vehicle routing problems with time windows are reported . user - guided search , interactive optimization , multi - objective optimization , multi depot vehicle routing problem with time windows , variable neighborhood search .
|
due to the inherent scarcity of frequency spectrum and increasing wireless traffic demands nowadays , frequency reuse has become an essential key technological issue associated with contemporary wireless communication systems .frequency reuse intrinsically causes interference between wireless links in not only homogeneous but also heterogeneous systems using the same frequency .accordingly , the state of the aggregate interference at an arbitrary position in the random node topology has become of great importance . in this paper, we are interested in the interference of carrier sense multiple access / collision avoidance ( csma / ca ) networks . in particular , we analyze the aggregate interference in randomly deployed ieee 802.11 distributed coordination function ( dcf ) networks . from an understanding of the distribution of the aggregate interference at the protocol level , we can control this interference using the relationships discovered among the protocol parameters . to the best of our knowledge , there has been no massive test at the simulator level for the aggregate interference of csma / ca networks .thus , we test as well as analyze the interference at the protocol level .consequently , our goal in this study is to obtain the statistical inference of the aggregate interference and verify the results via simulations .most of the work previously done in this area focused on aloha - like systems in which the aggregate interference can be analyzed by assuming that transmitting nodes have independent locations and behaviors , .although broadband cellular systems such as lte , lte - a , wcdma or its femto cell networks can also be modeled using this transmission - independent behavior , this is not a realistic assumption for csma / ca networks in which a certain interference level always needs to be maintained in a distributed manner . in a network of csma / ca nodes ,every communication entity first senses the ongoing transmission in the channel and then determines when to start transmitting .the inappropriateness of the independent model in such a scenario was noted in , in which the authors proposed the alternative dependent point process to mimic real csma / ca networks .however , this proposed point process still can not describe the dcf operation , in which collision and idle time , as well as successful transmission , can occur even in the exclusion area by carrier sense .compared with these previous research efforts , our work is the first to investigate the exact distribution of aggregate interference in csma / ca networks and to validate it by means of massive simulations .our paper has the following notable results : * * owing to the possibility of concurrent transmission incurred by dcf operation occurring within an exclusion area , csma / ca random networks can only be modeled by the poisson point process ( ppp ) , not the dependent point process . *section [ sec : deppp ] addresses the difference between the dependent and independent point processes , and explains how real csma / ca networks can be dealt with . 
* * we derive the effective node density reflecting csma / ca mac layer operation . * section [ sec : mod_density ] explains this . this section is the core of our work since all the cross - layer parameters are coupled to model the network behavior . further , this density is used in the ppp shot noise analysis , which is the result of the first item . * * the aggregate interference in the ppp shot noise analysis using our derived effective node density is verified using the ns-2 network simulator and matlab simulations . * unlike other related theoretical analyses in the stochastic geometry literature , we verify exactly how closely our model reflects the ns-2 simulation results . we also compare our results with the matlab simulation of dependent point processes and thereby show that the ppp model with our new effective node density performs best in modeling the aggregate interference . related issues are outlined in section [ sec : simul ] . * * the aggregate interference follows neither a normal nor a log - normal distribution * , contrary to the most common assumption that the aggregate interference follows a normal ( in dbm units ) or log - normal ( in units ) distribution . in this section , we focus on determining which type of point process is suitable for modeling csma / ca networks . a generic wireless network consisting of multiple randomly deployed nodes can be described via a point process . in the point process , a mark ( a scalar or a vector ) can be assigned to each point independently , which is useful for modeling node - oriented properties such as transmission power and medium access delay . in particular , the case where the number of nodes in a network is poisson - distributed and their positions at a given time instant are independent of each other is adequately explained by means of the ppp . the method to derive the aggregate power emitted from points at an arbitrary position under an independently marked ppp was previously studied as a _ shot noise field _ , which was originally used to model the noise in electronic circuits in the time domain . however , the ppp approach as it currently exists may be insufficient to model csma / ca . the reason is that the carrier sensing philosophy is not reflected in it . ppp is a typical _ independent _ point process in which the points are deployed independently of each other . on the other hand , in the carrier sensing operation , a sensing node always senses the shared medium and delays its transmission once it senses that the medium is busy . the result is that active nodes are affected by each other , which means that the process is not independent . let us now consider the dependent point process as a possible alternative . here , a dependent point process means that some initially deployed points are discarded or selected by a metric related to the other points ' marks or locations . there are two dependent point processes most related to the modeling of csma / ca networks : the matern hardcore ( mhc ) process and the simple sequential inhibition ( ssi ) point process . in , the authors are also motivated by the inappropriateness of ppp , i.e. , the independence of points . in the paper , they compared the aggregate power distributions of ppp , mhc , and ssi with simulations , and concluded that ssi is the most appropriate one for modeling csma networks .
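to make the notion of dependent thinning concrete , the mhc and ssi constructions can be sketched as follows ( python ) ; the exclusion distance , window size and point counts are arbitrary illustration values .

```python
import numpy as np

rng = np.random.default_rng(1)

def matern_ii(points, marks, d):
    """Matern type-II thinning: keep a point only if no other point with a
    smaller mark lies within the hard-core (exclusion) distance d."""
    keep = []
    for i, p in enumerate(points):
        dist = np.linalg.norm(points - p, axis=1)
        if not np.any((dist < d) & (dist > 0) & (marks < marks[i])):
            keep.append(i)
    return points[keep]

def ssi(points, d):
    """Simple sequential inhibition: add points one by one, rejecting any
    candidate closer than d to an already accepted point."""
    accepted = []
    for p in points:
        if all(np.linalg.norm(p - q) >= d for q in accepted):
            accepted.append(p)
    return np.array(accepted)

pts = rng.uniform(0, 500, size=(200, 2))        # candidate transmitter locations
marks = rng.random(len(pts))
print(len(matern_ii(pts, marks, 70.0)), len(ssi(pts, 70.0)))
```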
however , their result is not fully acceptable because the operation they used in the simulation was not the real one but a modified version of a dependent point process based on ieee 802.15.4 phy parameters .more specifically , they neither considered any details of practical mac layer parameters nor channel characteristics . leaving this result aside , their mhc and ssiare fundamentally based on the hard exclusion area ; nevertheless , they contain ambiguities in the determination of this area .to further illustrate the issue , let us look at our simulation results .we simulated a realistic csma / ca network using ns-2 in order to observe the concurrent transmission behavior .[ fg : num_tx_exam ] shows our simulation topology and the effect on the distribution of the number of concurrent transmitting nodes . in the grid topology, the black dot represents the transmitter and the corresponding receiver is located at 5 ( m ) right and 5 ( m ) up away from its transmitter ; the receiver is omitted from the figure . in the simulation ,a 500 b payload was given to each transmitter and the traffic was saturated .the distance between the two nearest black dots was 50 ( m ) , and the carrier sensing ( cs ) threshold was tuned so that the resulting cs range was 70 ( m ) , derived using equation ( [ eq : effect_r ] ) .all other parameters were the same as those in table [ simul_ns2 ] .the large circle including set c nodes in the center denotes a cs area of the white dot ( one of the transmitters ) , while the smaller circle including set b nodes has a radius that is one half of the cs range .we depicts the relative time durations on the number of concurrent transmissions in each set in the bar plot . in the graph ,two things are of note : first , there is a time period in which two nodes are concurrently transmitting when all transmitters are even in each other s cs area ( see the set b in the bar plot ) .second , there is a time period in which the medium is idle in the full cs area ( set c ) .the first case occurs due to cs failure or collision in real situation .mhc and ssi are fully _ dependent thinning _ of ppp with the exclusion area , and they can not model these events . the resulting effective node density of the dependent point process is likely to be lower than that of the real one .these approaches may work well in collision - less csma / ca networks where slot time is zero and backoff time is a continuous random variable ( rv ) , rather than real situations .the second case occurs due to the idle time from the binary exponential backoff ( beb ) and the dcf .this waste of time resource is the intrinsic cost of the distributed random access mac . in the dependent point process ,any point having none of the other points in its exclusion area always survives .the resulting effective node density of this process is likely to be higher than that of the real one . _ as a result , the real operation of a csma / ca network has both factors having higher and lower effective node density than that of the dependent point process ._ this difference is from the lack of mac layer operation modeling in the dependent point process .these collision events occur with a certain probability in real situations .this means that the concurrent transmission in an exclusion area occurs with some probability , not with deterministic patterns .this stochastic characteristic of real networks is appropriately modeled using the independent point process . 
therefore , we believe that a possible way to model a csma / ca network is again to use the independent ppp , but with a _new _ effective active node density reflecting mac layer operations .this is notable since recent research efforts such as and literatures therein point out that csma / ca can be modeled with mhc point process , which is different from our conclusion . at the very least, the aggregate power at an arbitrary position can be elaborated more when using ppp with a new effective node density rather than pure ppp or mhc / ssi . for our analysis, we consider the infinite planar where the transmitting nodes are deployed randomly at positions specified by a poisson distribution with its intensity .each node transmits with a constant power .the radio channel attenuates with the pass - loss exponent and rayleigh fading .then we have cdf and pdf of the aggregate interference at an arbitrary receiver as follows , : where is a complementary error function .our idea is to use the above pdf and cdf again for calculating the aggregate interference of the csma / ca network , but with a new density , called _ effective active node density _ reflecting all the csma parameters .section iii is devoted to describing how we obtain , and its verification by massive ns-2 simulations is contained in section iv . for the readers who are more interested in our results , please directly jump to section iv .let us introduce a cs range so that a sensing node can sense any on - going transmission in this range .then within the disk of radius , every node senses each other .we set this disk as _ sharing area_. cs is based on the threshold , i.e. , if the sensed power level at a sensing node is greater or lower than , a sensing node regards the channel is busy or idle , respectively .assuming there is only one interferer near the sensing node , the cs probability versus the distance to this interferer is calculated as followings : =\mathbb{p}[\frac{p_i}{r^4}+\nu \ge \gamma],\ ] ] where is a random variable ( rv ) representing the product of the fading effect and the constant transmission power from a typical node , is the distance between the sensing node and the interferer , and is the receiver noise power . considering rayleigh fading , follows with a constant transmission power .consequently , with the cs range , we convert the stochastic cs to a deterministic one .first , the average sensing area is calculated by integrating the parts of the circumference , of which the radius and the center are and the sensing node , respectively .the cs probability of a point on this circumference is from : { \rm{d}}r \nonumber\\ & = & \int_0^\infty 2\pi r e^{-\frac{1}{p}(\gamma-\nu)r^4 } { \rm{d}}r \nonumber\\ & = & \frac{\pi^{3/2}}{2\sqrt{\frac{\gamma-\nu}{p}}}\end{aligned}\ ] ] assuming that the deterministic cs region should have the same average sensing area ( sensing resolution ) as the stochastic cs , the following equation is derived : finally , we get the cs distance as follows : by this deterministic cs distance , which we will call _ effective carrier sensing range _ , the interference is regarded as boolean at a given distance rather than stochastic .let us consider an infinite plane with the nodes randomly deployed .suppose an arbitrary disk having of radius in the plane ( sharing area ) , where every nodes in this area can sense other nodes transmissions by the definition of cs range . 
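the effective carrier sensing range obtained by equating the deterministic disk area to the average stochastic sensing area derived above can be evaluated directly , as in the sketch below ; the numerical values in the example are assumptions chosen to give a range close to 70 m .

```python
import math

def effective_cs_range(p_tx, cs_threshold, noise_power):
    """Effective carrier-sensing range for path-loss exponent 4 and Rayleigh
    fading: pi*R^2 is set equal to the mean sensing area
    pi^{3/2}/2 * sqrt(P/(gamma - nu)) derived in the text."""
    mean_area = 0.5 * math.pi ** 1.5 * math.sqrt(p_tx / (cs_threshold - noise_power))
    return math.sqrt(mean_area / math.pi)   # equals (pi*P/(4*(gamma-nu)))**0.25

# example: 1 mW transmit power; CS threshold and noise floor are assumed values
print(effective_cs_range(p_tx=1e-3, cs_threshold=3.28e-11, noise_power=1e-13))
```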
by ppp ,the number of deployed nodes in the sharing area follows poisson distribution with the parameter as follows : =\frac{\{\lambda\pi ( \frac{r}{2})^2\}^n}{n!}\exp(-\lambda\pi ( \frac{r}{2})^2)\ ] ] once we know ] and the power distribution at an arbitrary time instant in the sharing area .these will be explained in the following subsections . for the purpose ,let us define the following probability : * definition 1 .* is the probability that there are on - going transmissions in a given sharing area at a certain time .consider the ieee 802.11 dcf protocol for csma / ca mac .if all the transmitting nodes can sense each other in a sharing area and the given traffic to each node is a saturated one , we know the steady - state behavior in this area . as shown in and subsequent research efforts , the backoff stage of each node in the network is random at a certain time and this can be elaborated through a two - dimensional markov chain .we have two main quantities for addressing this : is the probability that the collision happens conditioned on the transmission of each node , and is the transmission probability of a node at a randomly chosen time slot .these two quantities are derived by finding the steady - state solution of the discrete time markov chain . by following the notations of , we have the beb dynamics with maximum backoff stage , maximum retry limit and initial window size . the probability that a node transmits in a randomly chosen time slotis : l[eq : tau ] = \{+-}^-1 .again , is obtained through : where is the number of active ( contending ) nodes .we can solve the system dynamics by solving two independent equations and and the existence of this solution is guaranteed by the fixed point theorem .since we know , we can obtain the steady state power density .the probability that nodes transmit simultaneously at an arbitrary time slot , given that transmitting nodes are deployed in a sharing area is : =\binom{a}{m}\tau^m(1-\tau)^{a - m}\nonumber\\ m={0,\dots , a}.\end{aligned}\ ] ] each transmitting node s operation in a sharing area is synchronized since the medium is sensed perfectly and every node has the inner clock .thus , idle time is also segmented into multiple slot times ( ) .therefore , all events ( idle time slot , successful and collision time slot ) can be distinguished by their durations . at an arbitrary time, the sharing area medium is in one of three events and we call this random time slot as the _ virtual time slot_. the virtual time slot has the random duration .we assume that the payload size is the same as for all nodes . in basic mode ,l t_v= + , & + , + t_s^bas(=phy+t_s+sifs+ack & + + difs ) , , + t_c^bas(=phy+t_s+difs ) , & + , + where , , , are the durations for phy header , sifs time , ack packet , difs time respectively . and , and are the mac header size , symbol rate and symbol duration respectively .once the transmission starts , irrespective of success or not , the packet of size is transmitted first .and then the remaining parts ( or ) are determined according to the existence of collision . of course, in rts - cts mode , successful slot time and collision slot time will be changed into and respectively . and are the duration of rts and cts packet , respectively . 
has the pmf induced from such as , , and are for idle , successful transmission , and collision events , respectively .we derive the mean virtual time slot , ] is obtained using this distribution in the basic mode ( equation ( [ eq : pow_dist_bas ] ) ) and rts mode ( equation ( [ eq : pow_dist_rts ] ) ) .the probability of busy channel in a sharing area is .}\cdot \begin{cases } \sigma p_a(0)+(sifs+difs ) p_a(1)+difs ( 1-p_a(0)-p_a(1 ) ) , & j=0\\ ( phy+\lceil\frac{(mac+pay)}{r_s}\rceil t_s+ack)p_a(1 ) , & j=1\\ ( phy+\lceil\frac{(mac+pay)}{r_s}\rceil t_s)p_a(j ) , & 2\le j \le a. \end{cases}\\ b_a^{rts}(j ) = \frac{1}{\mathbb{e}[t_v^{rts}]}\cdot \begin{cases}\label{eq : pow_dist_rts } \sigma p_a(0)+(3sifs+difs ) p_a(1)+difs ( 1-p_a(0)-p_a(1 ) ) , & j=0\\ ( rts+cts+phy+\lceil\frac{(mac+pay)}{r_s}\rceil t_s+ack)p_a(1 ) , & j=1\\ rts\cdot p_a(j ) , & 2\le j \le a. \end{cases}\end{aligned}\ ] ] consider a given sharing area .sensing area _be defined as the sensing node s cs area excluding .( see the asymmetric donut in fig .[ fg : sharingarea ] for the relation between the sensing and sharing areas . ) the active node is defined here as the sensing node that has no on - going transmissions in its sensing area .the cs results for each sensing node in is random .therefore , is a rv that varies within ] as in , through section [ subsec : effect_node_cs ] .the transmission probability of a node is derived in section [ subsec : dcf ] . based on these derivations , the power distributions in the sharing area , andcan be calculated in section [ subsec : powerdist ] .we derived the number of active nodes in a sharing area , $ ] as in in section [ subsec : activenode ] .as seen from ( 14 ) , these results are all based on the value of , the probability that the sharing area is busy .we can get the value of , by finding the intersection of the right hand side and left hand side of , which we will call .details on derivation of are given in appendix .if we obtain , the distribution of the number of transmitting nodes in the sharing area can be derived as in , where the number of actual transmitting nodes ( active and non - frozen ) in the area is denoted by .the expected number of transmitting nodes is derived from this result : =\sum_{z=0}^\infty z\cdot \mathbb{p}[z = z]\ ] ] the effective active node density is defined as the average number of transmitting nodes per unit area .thus , we finally obtain the effective active node density as follows : }{\pi ( \frac{r}{2})^2}.\ ] ] this is used in the cumulative distribution function ( cdf ) and the probability density function ( pdf ) of the aggregate interference .we plot the resulting cdf and pdf for varying network parameters and compare these with the simulation results in section [ sec : simul ] .=\sum_{n = z}^\infty \left ( \frac{\{\lambda\pi(\frac{r}{2})^2\}^n}{n!}e^{-\lambda\pi(\frac{r}{2})^2 } \sum_{a = z}^n \mathbb{p}[n_a = a|n = n ] \right ) b_a(z),\text { for } z\in \{0,1,\dots\}\end{aligned}\ ] ]in this section , we first , plot the analyzed ( ) and , and plot cdf and pdf of the aggregate interference using this .next , we compare these derived results with those of ns-2 simulations and two other dependent point process simulations ( mhc and ssi ) . 
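before turning to the simulations , the two numerical steps of the derivation above can be summarized in the following sketch : the dcf fixed point and a crude version of the effective density . the tau ( p ) expression is the standard form without the finite retry limit , and the density estimate ignores the virtual - slot weighting and the busy - probability coupling used in the paper , so both are simplifying assumptions .

```python
import math

def bianchi_fixed_point(n, w=16, m=6, iters=500):
    """Coupled DCF equations tau(p) and p = 1 - (1 - tau)^(n - 1), solved by
    damped fixed-point iteration (standard form without a retry limit)."""
    if n <= 1:
        return 2.0 / (w + 1), 0.0
    tau = 0.1
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)
        den = (1 - 2 * p) * (w + 1) + p * w * (1 - (2 * p) ** m)
        tau_new = 2.0 * (1 - 2 * p) / den if abs(den) > 1e-12 else tau
        tau = 0.5 * tau + 0.5 * tau_new          # damping for convergence
    return tau, p

def effective_density(lam, r_cs, n_max=60):
    """Crude estimate of lambda_eff = E[Z] / (pi (R/2)^2): E[Z] is approximated
    as E_n[n * tau(n)] over the Poisson node count of a sharing area."""
    area = math.pi * (r_cs / 2.0) ** 2
    mean_n = lam * area
    ez = 0.0
    for n in range(1, n_max):
        p_n = math.exp(-mean_n) * mean_n ** n / math.factorial(n)
        tau, _ = bianchi_fixed_point(n)
        ez += p_n * n * tau
    return ez / area

print(effective_density(lam=1e-4, r_cs=70.0))
```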
for all simulation scenarios , the exclusion radius in dependent processesare given to 70 ( m ) .in the ns-2 simulation , mac / phy parameters and channel model are given so that the cs radius is determined to be between 50 and 100 ( m ) for investigation of and .we later change these parameters for to be equal to 70 ( m ) for the comparison with the dependent point process .we deployed the points using mhc and ssi processes , explained in , with matlab .the parameters used in the simulations are listed in table [ simul_monte ] . for a given number of nodes , the aggregated power was measured at the center of ball with radius or for mhc or ssi respectively .the number of deployed nodes was generated using cdf of poisson distribution with the parameter where is the initial node density and is the area of ball .we deployed these points uniformly and measured the aggregate power at .we repeated this procedure for more than 100,000 iterations .c|c + transmission power & 0.001(w ) + background plane & circle ( ) + radius of background plane ( ) & 282 ( m ) + exclusion ball radius ( ) & 70 ( m ) + number of iterations ( ) & 100000 + channel model & , + + node density & + to verify the analysis results , we conducted simulations using ns-2 .unlike the previous version , ns-2 version 2.34 ( released june 2009 ) , includes wireless phy and mac layer patches for the realistic ieee 802.11 dcf standard .this enabled us to realistically simulate the phy and mac stack of ieee 802.11 dcf .the simulation parameters , which are the default ones for ieee 802.11a phy and mac , are listed in table [ simul_ns2 ] .they are from the previous research . the overall simulation procedure consisted of genuine ns-2 simulations , pre - processing of the scenario , and post - processing of the data .generating the number of transmitting nodes using poisson distribution , deploying them uniformly , attaching designated receiving nodes to each transmitting node , and generating traffic for each transmitting node were done in the pre - processing stage .for the saturated traffic situation , we obtained the time duration from to .we call this the time window . finding the time window , measuring the received power at the measuring node , recording the lasting time of each received power value , and accumulating all the measurements were done in the post - processing stage . in attaching the receivers to transmitters, we fixed the relative location of each receiver at 5 ( m ) right and 5 ( m ) up from its transmitter . to measure the aggregate power , we put the measuring node in the center of the simulation grid .this node then reports on the received power level and we recorded the lasting time and power level of each received signal .the simulation conducted in this paper is full - scaled , which takes a long time to collect meaningful results because of two reasons : first , each simulation per geometry scenario takes a long time .this time includes the simulation time in ns-2 and the post - processing time for handling received power instances .ns-2 traces all of the packet - level transactions with the received power recorded at every receiver . 
in post - processing stage ,calculation of the received power from all of the on - going transmissions at a measuring node takes computation time .moreover , the simulation time itself ( not the computation time ) has to be long enough to reflect the steady - state behavior , which theoretically requires infinite investigation time .we consider at least 30 seconds per scenario as the simulation time ( this takes about 2 hours in real time using a quad core i7 processor computer ) .second , to get the sound statistical inference of ppp , we repeat per - geometry simulation many times .we do 50 repetitions , since all the resulting pdfs of aggregate interference obtained from the simulations show the convergence before 30 repetitions .the simulation time for all 50 scenarios takes approximately four days on average .we repeat this process for each combination of phy and mac layer parameters .c|c + background grid & regular rectangular + grid size & 500(m)*500(m ) + transmission power & 0.001(w ) + initial window size & 16 + maximum backoff stage & 6 + cw min / max & 15/1023 + slot time ( ) & 9(us ) + sifs & 16(us ) + difs & sifs+2=34 ( us ) + short retry limit & 7 + long retry limit & 4 + plcp preamble duration & 16 ( us ) + plcp header duration except service field & 20(us ) + ofdm symbol duration & 4(us ) + ifq length & 50 + rts mpdu + service + tail field & 182(bits ) + cts mpdu + service + tail field & 134(bits ) + ack mpdu + service + tail field & 134(bits ) + data rate & 6(mbps ) + control rate & 1(mbps ) + modulation & bpsk + code rate & 1/2 + carrier frequency & 5.18 ( ghz ) + preamble capture threshold & 2.5118 + data capture threshold & 100 + noise floor & ( w ) + data type & cbr + cbr rate & ( packets per sec ) + number of packets in the application queue & 3000 + channel model & , + + payload size & 500 or 1000 ( bytes ) + rts threshold & 0 or 10000 + node density ( ) & + effective cs range ( ) & 50 , 70 , 100(m ) + the saturated traffic was assigned to all transmitters so that there was no idle time by the traffic itself during the simulations .the background grid for all the simulation scenarios was always the same : a square 500 ( m ) by 500 ( m ) in size .the transmission times for rts , cts , ppdu ( phy+mac+pay ) with 500 b ( or 1000 b ) of payload and ack were 52 , 44 , 728 ( or 1396 ) , and 44 ( ) , respectively ( see table [ simul_ns2 ] and ) .we ignored the propagation delay , even though this exists in the simulator , since the value was quite small compared to the other transmission times .a generated traffic of 5,000,000 packets per second ( pps ) was given to all the transmitters .this means a value of 0.2 ( ) for inter - arrival time from the application layer to the physical layer , which is less than the whole transmission time for one successful packet , i.e. , , which is enough for the traffic to be saturated .all in this subsection refer to , the final solution of for simplicity of expression . in fig .[ fg : pon ] , are shown for various combinations of mac parameters .the factor that affects the most is the effective cs distance , followed by , the initial node density . 
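the frame airtimes quoted above ( 52 , 44 , 728 or 1396 , and 44 microseconds ) can be reproduced from the phy parameters of table [ simul_ns2 ] . the short check below assumes a 28-byte mac data header and a 20 microsecond preamble - plus - signal overhead ; neither figure is stated explicitly in the table , but both are consistent with the quoted values .

import math

SYMBOL_US = 4          # OFDM symbol duration (Table simul_ns2)
BITS_PER_SYMBOL = 24   # 6 Mbps data rate: 24 data bits per 4 us symbol
PHY_OVERHEAD_US = 20   # assumed: 16 us preamble + 4 us SIGNAL field
SERVICE_TAIL_BITS = 16 + 6

def airtime_us(mpdu_bits):
    """PPDU airtime at 6 Mbps, rounding up to whole OFDM symbols."""
    symbols = math.ceil((mpdu_bits + SERVICE_TAIL_BITS) / BITS_PER_SYMBOL)
    return PHY_OVERHEAD_US + symbols * SYMBOL_US

print("RTS :", airtime_us(20 * 8))                    # 52 us
print("CTS :", airtime_us(14 * 8))                    # 44 us
print("ACK :", airtime_us(14 * 8))                    # 44 us
print("DATA 500 B :", airtime_us((28 + 500) * 8))     # 728 us (assumed 28 B MAC header)
print("DATA 1000 B:", airtime_us((28 + 1000) * 8))    # 1396 us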
as increases , naturally increases due to the increased congestion level with semi - log style trend .if there is only the reduced idle time factor without increasing collision probability , as in s ideal csma , the rate of increase of must be a log function style .however , our model reflects the increased collision probability and reduced idle time simultaneously , as obtained in real situations .the resulting shows a mixture of linear and log functions . within the same ,the combination of mode and payload size that has the lowest is the rts mode and short payload . in other words ,the order of is for lower . in general , rts - cts mode has a lower congestion level than the basic mode since the system cost was only the rts - cts packet collision and the waiting time for retransmission compared to the basic mode .a large payload in the basic mode makes for a higher congestion level .however , in the cases of some values , rts - cts packets were small enough to compensate for the payload size .therefore basic-500b might have higher than rts-1000b . in this case ,the effect of mode was weaker than that of the payload size .this was used in the new density and we plotted this as shown in fig .[ fg : new_density ] .this figure shows that the smaller makes for a higher , which is the opposite case to .this is understandable since a smaller signifies more insensitivity to the interference around .therefore , we expected the result of , the aloha system , to approach the line in the figure .this line also represents wireless access systems that have no mac , such as macro or femto cellular systems .the curve is a version of filtered by the csma / ca and beb mechanism . by showing the line and the curves together , fig .[ fg : new_density ] also addresses how large the gap between these two node densities is and how effectively csma / ca mac operates .the bold curves are from : where is the exclusion distance with an ambiguity . for comparison, we put into in this figure .this expression is the approximated node density for modeling mhc adopted in and .as can be seen in the figure , the variation of is higher than for varying .since is used for modeling dependent point processes , it can not trace the real operation .as shown in the next section , our aggregate power distribution adopting this is the most elaborate among other point processes . 
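for reference , the bold curves mentioned above appear to use the standard closed - form intensity of a matern type - ii hard - core process ; since the expression itself is not reproduced in the text , the formula in the sketch below is an assumption based on that standard result , with delta the exclusion distance .

import math

def matern2_intensity(lam, delta):
    """Standard Matern type-II retained-point intensity for a parent PPP of
    intensity lam and hard-core (exclusion) radius delta:
        lambda_mhc = (1 - exp(-lam * pi * delta**2)) / (pi * delta**2)
    """
    a = math.pi * delta ** 2
    return (1.0 - math.exp(-lam * a)) / a

for lam in (1e-4, 5e-4, 1e-3, 5e-3):
    print(lam, "->", matern2_intensity(lam, delta=70.0))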
therefore , fig .[ fg : new_density ] shows the gap in aggregate interference between simplified mhc and the real situation .this is significant because this simplified mhc expression is widely used in academia , , , .\sum_{a=0}^n \mathbb{p}[n_a = a|n = n]\sum_{j=1}^ab_a(j)\stackrel{(a)}{=}\sum_{n=1}^\infty \mathbb{p}[n = n]\sum_{a=1}^n \mathbb{p}[n_a = a|n = n]\sum_{j=1}^ab_a(j)\nonumber\\ & \stackrel{(b)}{=}&\sum_{n=1}^\infty \frac{\{\lambda\pi(\frac{r}{2})^2\}^n}{n!}e^{-\lambda\pi(\frac{r}{2})^2}\sum_{a=1}^n\sum_{\eta}p_{n , a,\eta}p_\eta \sum_{j=1}^ab_a(j)\nonumber\\ & \stackrel{(c)}{=}&\sum_{n=1}^\infty \frac{\{\lambda\pi(\frac{r}{2})^2\}^n}{n!}e^{-\lambda\pi(\frac{r}{2})^2}\sum_{a=1}^n\sum_{\eta}p_{n , a,\eta } \{\sum_d o_\eta^d p_{on}^d(1-p_{on})^{8-d}\ } \sum_{j=1}^ab_a(j)\end{aligned}\ ] ] using this and shot noise analysis , we plotted the distribution of aggregate interference .the pdf and cdf of the analysis in each node density showed high correlations with those of the ns-2 simulations , as seen in fig .[ fg : pdf1 ] and [ fg : cdf1 ] .we depict all the pdfs in a log scale .although at first glance they resemble a log - normal distribution , they are asymmetric based on the main lobe .therefore , they are definitely neither normal nor log - normal distributions .this is notable as some research efforts in the signal processing field assume that the aggregate interference follows normal ( in dbm unit ) or log - normal ( in unit ) distributions . for the other features , the higher the mean of the aggregate power , the lower the probability of that mean value .therefore , low - mean high - probability and high - mean low - probability patterns are shown in all the results .this is because the total sum of the probability is fixed to 1 and the x - axis is log - scaled and not linear .compared with dependent point processes , at any given value , our analysis is the closest one to the simulation results , as depicted in fig .[ fg : pdf0001 ] and fig .[ fg : pdf0005 ] . the interference of mhc ( matern)is always less than that of ssi since mhc is the lower bound of ssi , which is also commented on in .however , these two are not as sensitive as our model to variations in node density .moreover , these two do not have sufficient mac and phy layer parameters to reflect the real situation , while our analysis can model any combination of the system parameters , as in fig .[ fg : pdf_mac ] . as shown in the figures so far, our model of the interference has slightly lower values than that of the simulation in each case .this is mainly because the simulator allows the _ capture _ situation . in our analysis ,the collision between transmitters is regarded as a failure of transmission and this increases each collided nodes backoff stages . in contrast, there might be a successful transmission even when multiple nodes in a cs area are transmitting at the same time .this is because if the ratio of one incoming signal to the others is guaranteed to be higher than a certain threshold , this stronger incoming signal can always be decoded . in our analysis, we ignored that situation in order to simplify the analysis .nevertheless , our result is competitive . from these results , we learned the following lessons :if the network is required to maintain a lower interference than a certain level , there are multiple combinations of parameters that need to be controlled . 
since the pdf of the aggregate interference is a function of , there are multiple combinations of parameters that can result in the same value of .those controllable parameters are , transmission mode , payload size , etc .this can be used for the interference management in uncontrolled interference limited systems such as cognitive radio networks .in this paper , we analyzed the aggregate interference from randomly deployed csma / ca nodes .csma / ca mac operations on the stochastic geometry makes the understanding of a node s behavior difficult .however , the homogeneity of ppp provides a clue to solving the problem .we derived the effective active node density by spatially quantizing the infinite space and analyzing the steady - state power distribution on this quantization unit .verification is also nontrivial since much repetition is needed in order to get sound statistical inference from the random topology .although the exact closed form expression of the interference distribution can not be obtained , the analysis framework proposed in this paper shows high applicability to many related problems in modern wireless networks where the amount of interference in a random node geometry significantly affects system performance .the power distributions of and are conditioned on , while is conditioned on .therefore , the marginal pmf of is obtained by summing all of the probabilities on conditioned variables , as in .the condition on is eliminated using . the condition on eliminated by the homogeneity of ppp , i.e. , every geometric subset satisfies poisson distribution on the number of points in that subset , i.e. , . in, is from the fact that the possibility of a sharing area being on " is zero when there is no initial deployed nodes and active nodes . is from and . is from . in this equation , includes term , which is also the marginal pmf from the unconditional event of channel busyness .therefore , equation has the unknown variable on both sides of the equation and the solution can be obtained by solving the weighted eighth order polynomial of , where the weight is the product form of poisson and binomial probabilities .more specifically , is the eighth - order polynomial of the unknown and all other terms are known , and is a constant from poisson distribution for the given .therefore , summing the product of these two terms for varying from to also results in an eighth - order polynomial .since there is no general solution for polynomials with order higher than four and goes to infinity , we can not derive the closed form expression of , the solution of .however , we can still get a solution by finding the intersection of the right hand side and left hand side of , which we will call .
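the intersection step described in the appendix can be carried out with any standard one - dimensional root finder . in the sketch below , rhs is a placeholder for the right - hand side of the fixed - point equation evaluated at a candidate p_on ( with the infinite poisson sum truncated at a large index ) ; the example function passed to it is purely illustrative .

from scipy.optimize import brentq

def solve_p_on(rhs, lo=1e-9, hi=1.0 - 1e-9):
    """Find p_on* such that rhs(p_on*) = p_on*, i.e. the intersection of the
    left- and right-hand sides of the fixed-point equation.

    rhs : callable mapping a candidate p_on in (0, 1) to the value of the
          right-hand side of the equation (a stand-in for the full expression
          built from the quantities derived in the earlier sections)."""
    return brentq(lambda p: rhs(p) - p, lo, hi)

# purely illustrative stand-in for the true right-hand side
example_rhs = lambda p: 0.3 + 0.5 * p ** 2
print(solve_p_on(example_rhs))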
|
in this paper , we investigate the cumulative distribution function ( cdf ) of the aggregate interference in carrier sense multiple access / collision avoidance ( csma / ca ) networks measured at an arbitrary time and position . we assume that nodes are deployed in an infinite two - dimensional plane by a poisson point process ( ppp ) and that the channel model follows the singular path loss function and rayleigh fading . to find the effective active node density we analyze the distributed coordination function ( dcf ) dynamics in a common sensing area and obtain the steady - state power distribution within a spatial disk of radius , where is the effective carrier sensing distance . the results of extensive simulations using network simulator-2 ( ns-2 ) show a high correlation with the derived cdf . aggregate interference , csma / ca , dcf , poisson point process , ns-2
|
a common approach to specifying computation systems is via deductive systems . those are used to specify and reason about various logics , as well as aspects of programming languages such as operational semantics , type theories , abstract machines .such specifications can be represented as logical theories in a suitably expressive formal logic where _ proof - search _ can then be used to model the computation . a logic used as a specification languageis known as a _ logical frameworks _ , which comes equipped with a representation methodology .the encoding of the syntax of deductive systems inside formal logic can benefit from the use of _ higher - order abstract syntax _ ( hoas ) a high - level and declarative treatment of object - level bound variables and substitution . at the same time , we want to use such a logic to reason over the _ meta - theoretical _ properties of object languages , for example type preservation in operational semantics , soundness and completeness of compilation or congruence of bisimulation in transition systems .typically this involves reasoning by ( structural ) induction and , when dealing with infinite behavior , co - induction . the need to support both inductive and co - inductive reasoning and some form of hoas requires some careful design decisions , since the two are prima facie notoriously incompatible .while any meta - language based on a -calculus can be used to specify and animate hoas encodings , meta - reasoning has traditionally involved ( co)inductive specifications both at the level of the syntax and of the judgements which are of course unified at the type - theoretic level .the first provides crucial freeness properties for datatypes constructors , while the second offers principles of case analysis and ( co)induction . this is well - known to be problematic , since hoas specifications may lead to non - monotone ( co)inductive operators , which by cardinality and consistency reasons are not permitted in inductive logical frameworks .moreover , even when hoas is weakened so as to be made compatible with standard proof assistants such as hol or coq , the latter suffer the fate of allowing the existence of too many functions and yielding the so called _ exotic _ terms .those are canonical terms in the signature of an hoas encoding that do not correspond to any term in the deductive system under study .this causes a loss of adequacy in hoas specifications , which is one of the pillar of formal verification , and it undermines the trust in formal derivations . on the other hand , logics such as lf that are weak by design in order to support this style of syntax are not directly endowed with ( co)induction principles .the contribution of this paper lies in the design of a new logic , called ( for a logic with -terms , induction and co - induction),-quantifier the eponymous logic in tiu s thesis . 
] which carefully adds principles of induction and co - induction to a higher - order intuitionistic logic based on a proof theoretic notion of _ definition _ , following on work ( among others ) by lars hallns , eriksson , schroeder - heister and mcdowell and miller .definitions are akin to logic programs , but allow us to view theories as `` closed '' or defining fixed points .this alone permits to perform case analysis independently from induction principles .our approach to formalizing induction and co - induction is via the least and greatest solutions of the fixed point equations specified by the definitions .the proof rules for induction and co - induction make use of the notion of _ pre - fixed points _ and _ post - fixed points _ respectively . in the inductive case, this corresponds to the induction invariant , while in the co - inductive one to the so - called simulation .judgements are encoded as definitions accordingly to their informal semantics , either inductive or co - inductive . the simply typed language and the notion of free equality underlying , enforced via ( higher - order ) unification in an inference rule ,make it possible to reason _ intensionally _ about syntax .in fact , we can support hoas encodings of constants and we can _ prove _ the freeness properties of those constants , namely injectivity , distinctness and case exhaustion , although they can not be the constructors of a ( recursive ) datatype . can be proved to be a conservative extension of and a generalization ( with a term language based on simply typed -calculus ) of martin - lf first - order theory of iterated inductive definitions . moreover , to the best of our knowledge , it is the first sequent calculus with a syntactical cut - elimination theorem for co - inductive definitions . in recent years , several logical systems have been designed that build on the core features of . in particular ,one interesting , and orthogonal , extension is the addition of the -quantifier , which allows one to reason about the intentional aspects of _ names and bindings _ in object syntax specifications ( see , e.g. , ) .the cut elimination proof presented in this paper can be used as a springboard towards cut elimination procedures for more expressive ( conservative ) extensions of .in fact , the possibility of adapting the cut elimination proof for to various extensions of with is one of the main reasons to introduce a _ direct _syntactic cut elimination proof .we note that there are at least a couple of indirect methods to prove cut elimination in a logic with inductive and/or co - inductive definitions .the first of such methods relies on encodings of inductive and co - inductive definitions as second - order ( or higher - order ) formulae .this approach is followed in a recent work by baelde and miller where a logic similar to is considered .cut elimination in their work is proved indirectly via an encoding into higher - order linear logic . 
however , in the presence of , the existence of such an encoding is presently unknown .the second approach is via semantical methods .this approach is taken in a recent work by brotherston and simpson , which provide a model for a classical first - order logic with inductive definitions , hence , cut elimination follows by the semantical completeness of the cut free fragment .it is not obvious how such semantical methods can be adapted to prove cut elimination for extensions of with .this is because the semantics of itself is not yet very well understood , although there have been some recent attempts , see .the present paper is an extended and revised version of . in the conference paper ,the co - inductive rule had a technical side condition that is restrictive and unnatural .the restriction was essentially imposed by the particular cut elimination proof technique outlined in that paper .this restriction has been removed in the present version , and the ( co-)induction rules have been generalized . for the latter ,the formulation of the rules is inspired by a second - order encoding of least and greatest fixed points .consequently , we now develop a new cut elimination proof , which is radically different from the previous proof , using a reducibility - candidate technique , which is influenced by girard s strong normalisation proof for system f .this paper is concerned only with the cut elimination proof of . for examples and applications of and its extensions with , we refer the interested reader to .the rest of the paper is organized as follows .section [ sec : linc ] introduces the sequent calculus for the logic .section [ sec : drv ] presents two transformations of derivations that are essential to the cut reduction rules and the cut elimination proof in subsequent sections .section [ sec : cut - elim ] is the heart of the paper : we first ( subsection [ sec : reduc ] ) give a ( sub)set of reduction rules that transform a derivation ending with a cut rule to another derivation .the complete set of reduction can be found in appendix [ app : reduc ] .we then introduce the crucial notions of _ normalizability _ ( subsection [ sec : norm ] ) and of _ parametric reducibility _ after girard ( subsection [ sec : red ] ) .detailed proofs of the main lemma related to reducibility candidates are in appendix [ app : red ] .the central result of this paper , i.e. 
, cut elimination , is proved in details in subsection [ sec : ceproof ] .section [ sec : lrel ] surveys the related work and concludes the paper .{{c\longrightarrow c } } { } \quad \infer[{\hbox{\sl c}{\cal l}}]{{b,\gamma\longrightarrow c } } { { b , b,\gamma\longrightarrow c } } \quad \infer[{\hbox{\sl w}{\cal l}}]{{b,\gamma\longrightarrow c}}{{\gamma\longrightarrow c } } } \\ \\\multicolumn{2}{c } { \infer[\begin{array}{l } { \hbox{\sl mc } } , \mbox{where } n > 0 \end{array } ] { { \delta_1,\dots,\delta_n , \gamma\longrightarrow c } } { { \delta_1\longrightarrow b_1 } & \cdots & { \delta_n\longrightarrow b_n } & { b_1,\dots , b_n , \gamma\longrightarrow c } } } \\\\ \infer[{\bot{\cal l}}]{{\bot,\gamma\longrightarrow b}}{\rule{0pt}{6pt } } & \infer[{\top{\cal r}}]{{\gamma\longrightarrow \top } } { } \\ \\\infer[{\land{\cal l } } , i \in \{1,2\}]{{b_1 \land b_2,\gamma\longrightarrow d } } { { b_i,\gamma\longrightarrow d } } & \infer[{\land{\cal r}}]{{\gamma\longrightarrow b \land c } } { { \gamma\longrightarrow b } & { \gamma\longrightarrow c } } \\ \\ \infer[{\lor{\cal l}}]{{b \lor c,\gamma\longrightarrow d } } { { b,\gamma\longrightarrow d } & { c,\gamma\longrightarrow d } } & \infer[{\lor{\cal r } } , i \in \{1,2\}]{{\gamma\longrightarrow b_1 \lor b_2 } } { { \gamma\longrightarrow b_i } } \\ \\ \infer[{{\supset}{\cal l}}]{{b { \supset}c,\gamma\longrightarrow d } } { { \gamma\longrightarrow b } & { c,\gamma\longrightarrow d } } & \infer[{{\supset}{\cal r}}]{{\gamma\longrightarrow b { \supset}c } } { { b,\gamma\longrightarrow c } } \\ \\ \infer[{\forall{\cal l}}]{{\forall x.b\,x,\gamma\longrightarrow c } } { { b\,t,\gamma\longrightarrow c } } & \infer[{\forall{\cal r}}]{{\gamma\longrightarrow \forall x.b\,x } } { { \gamma\longrightarrow b\,y } } \\ \\ \infer[{\exists{\cal l}}]{{\exists x.b\,x,\gamma\longrightarrow c } } { { b\,y,\gamma\longrightarrow c } } & \infer[{\exists{\cal r}}]{{\gamma\longrightarrow \exists x.b\,x } } { { \gamma\longrightarrow b\,t } } \end{array}\ ] ] + [ [ equality - rules ] ] equality rules + + + + + + + + + + + + + + { { s = t , \gamma\longrightarrow c } } { \{{\gamma\rho\longrightarrow c\rho}~\mid~s\rho = _ { \beta\eta } t\rho \ } } \qquad \infer[{{\rm eq}{\cal r } } ] { { \gamma\longrightarrow t = t } } { } \ ] ] + [ [ induction - rules ] ] induction rules + + + + + + + + + + + + + + + { { \gamma , p\,\vec{t}\longrightarrow c } } { { b\,s \ , \vec{y}\longrightarrow s\,\vec{y } } & { \gamma , s\,\vec{t}\longrightarrow c } } \ ] ] { { \gamma\longrightarrow p\,\vec{t } } } { { \gamma\longrightarrow b\,x^p\,\vec{t } } } \qquad\qquad \infer[{{\rm i}{\cal r}_p } , p\,\vec x { \stackrel{\mu}{=}}b\,p\,\vec x ] { { \gamma\longrightarrow x^p\,\vec{t } } } { { \gamma\longrightarrow b\,x^p\,\vec{t}}}\ ] ] + [ [ co - induction - rules ] ] co - induction rules + + + + + + + + + + + + + + + + + + { { p\,\vec{t } , \gamma\longrightarrow c } } { { b\,x^p\,\vec{t } , \gamma\longrightarrow c } } \qquad\qquad \infer[{{\rm ci}{\cal l}_p } , p\,\vec x { \stackrel{\nu}{=}}b\,p\,\vec x ] { { x^p\,\vec{t } , \gamma\longrightarrow c } } { { b\,x^p\,\vec{t } , \gamma\longrightarrow c } } \ ] ] { { \gamma\longrightarrow p\,\vec{t } } } { { \gamma\longrightarrow s\,\vec{t } } & { s\,\vec{y}\longrightarrow b\,s\,\vec{y } } } \ ] ] the logic shares the core fragment of , which is an intuitionistic version of church s simple theory of types .we shall assume that the reader is familiar with church s simply typed -calculus ( with both and rules ) , so we shall recall only the basic syntax 
of the calculus here .a simple type is either a _ base type _ or a compound type formed using the function - type constructor .types are ranged over by , and .we assume an infinite set of typed variables , written , , etc .the syntax of -terms is given by the following grammar : to simplify presentation , in the following we shall often omit the type index in variables and -abstraction .the notion of free and bound variables are defined as usual .following church , we distinguish a base type to denote formulae , and we shall represent formulae as simply typed -terms of type .we assume a set of typed constants that correspond to logical connectives .the constants and denote ` true ' and ` false ' , respectively .propositional binary connectives , i.e. , , , and , are assigned the type .quantifiers are represented by indexed families of constants : and , both are of type .we also assume a family of typed equality symbols .although we adopt a representation of formulae as -terms , we shall use a more traditional notation when writing down formulae .for example , instead of writing , we shall use an infix notation .similarly , we shall write instead of . again, we shall omit the type annotation when it can be inferred from the context of the discussion .the type in quantifiers and the equality predicate are restricted to those simple types that do not contain occurrences of .hence our logic is essentially first - order , since we do not allow quantification over predicates . as we shall often refer to this kind of restriction to types , we give the following definition : a simple type is _ essentially first - order _ ( efo ) if it is generated by the following grammar : where is a base type other than . for technical reasons ( for presenting ( co-)inductive proof rules ) ,we introduce a notion of _ parameter _ into the syntax of formulae .intuitively , they play the role of eigenvariables ranging over the recursive call in a fixed point expression .more precisely , to each predicate symbol , we associate a countably infinite set , called the _parameter set for . elements of are ranged over by , , , etc , and have the same type as .when we refer to formulae of , we have in mind simply - typed -terms of type _ in -long normal form_. thus formulae of the logic can be equivalently defined via the following grammar : where is an efo - type .we shall omit the type annotation in when it is not important to the discussion .a _ substitution _ is a type - preserving mapping from variables to terms .we assume the usual notion of capture - avoiding substitutions .substitutions are ranged over by lower - case greek letters , e.g. , , and .application of substitution is written in postfix notation , denotes the term resulting from an application of substitution to .composition of substitutions , denoted by , is defined as .the whole logic is presented in the sequent calculus in figure [ fig : linc ] , including rules for equality and fixed points , as we discuss in section [ ssec : eq ] and [ ssec : coind ] .a sequent is denoted by where is a formula in -long normal form and is a multiset of formulae , also in -long normal form .notice that in the presentation of the rule schemes , we make use of hoas , e.g. , in the application it is implicit that has no free occurrence of .similarly for the ( co)induction rules .we work modulo -conversion without further notice . 
in the and rules, is an eigenvariable that is not free in the lower sequent of the rule .the rule is a generalization of the cut rule that simplifies the presentation of the cut - elimination proof .whenever we write a sequent , it is assumed implicitly that the formulae are well - typed : the type context , i.e. , the types of the constants and the eigenvariables used in the sequent , is left implicit as well as they can be inferred from the type annotations of the ( eigen)variables . in some inference rules , reading them bottom up , new eigenvariables and parameters may be introduced in the premises of the rules , for instance , in and , as typical in sequent calculus .however , unusually , we shall also allow , and to possibly introduce new eigenvariables ( and new parameters , in the case of ) , again reading the rules bottom - up .thus the term in the premise of the -rule may contain a free occurrence of an eigenvariable not already occuring in the conclusion of the rule .the implication of this is that is provable for any type ; in other words , there is an implicit assumption that all types are non - empty .hence the quantifiers in our setting behave more classically than intuitionistically .the reason for this rather awkward treatment of quantifiers is merely a technical convenience .we could forgo the non - emptiness assumption on types by augmenting sequents with an explicit signature acting as a typing environment , and insisting that the term in to be well - formed under the typing environment of the conclusion of the rule .however , adding explicit typing contexts into sequents introduces another layer of bureaucracy in the proof of cut elimination , which is not especially illuminating . andsince our primary goal is to show the central arguments in cut elimination involving ( co-)induction , we opt to present a slightly simplified version of the logic so that the main technical arguments ( which are already quite complicated ) in the cut elimination proof , related to ( co-)induction rules , can be seen more clearly .the cut elimination proof presented in the paper can be adapted to a different presentation of with explicit typing contexts ; see for an idea of how such an adaptation may be done .we extend the logical fragment with a proof theoretic notion of equality and fixed points .the right introduction rule for equality is reflexivity , that is , it recognizes that two terms are syntactically equal .the left introduction rule is more interesting .the substitution in is a _ unifier _ of and .note that we specify the premise of as a set , with the intention that every sequent in the set is a premise of the rule .this set is of course infinite ; every unifier of can be extend to another one ( e.g. , by adding substitution pairs for variables not in the terms ) .however , in many cases , it is sufficient to consider a particular set of unifiers , which is often called a _complete set of unifiers ( csu ) _ , from which any unifier can be obtained by composing a member of the csu set with a substitution . in the case where the terms are first - order terms , or higher - order terms with the pattern restriction , the set csu is a singleton , i.e. , there exists a most general unifier ( mgu ) for the terms . 
our rules for equalityactually encompasses the notion of _ free equality _ as commonly found in logic programming , in the form of clark s equality theory : injectivity of function symbols , inequality between distinct function symbols , and the `` occur - check '' follow from rule -rule .for instance , given a base type ( for natural numbers ) and the constants ( zero ) and ( successor ) , we can derive as follows : { { \longrightarrow \forall x.\ z = ( s~x ) { \supset}\bot } } { \infer[{{\supset}{\cal r } } ] { { \longrightarrow z = ( s~y ) { \supset}\bot } } { \infer[{{\rm eq}{\cal l } } ] { { z = ( s~y)\longrightarrow \bot } } { } } } \ ] ] since and are not unifiable , the rule above has empty premise , thus concluding the derivation .a similar derivation establishes the occur - check property , e.g. , .we can also prove the injectivity of the successor function , .this proof theoretic notion of equality has been considered in several previous work by schroeder - heister , and mcdowell and miller .one way of adding induction and co - induction to a logic is to introduce fixed point expressions and their associated introduction and elimination rules , using the and operators of the ( first - order ) -calculus .this is essentially what we shall follow here , but with a different notation . instead of using a `` nameless '' notation with and to express fixed points, we associate a fixed point equation with an atomic formula .that is , we associate certain designated predicates with a _definition_. this notation is clearer and more convenient as far as our applications are concerned . for a proof system using nameless notation for ( co)inductive predicates ,the interested reader is referred to recent work by baelde and miller .[ def : def - clause ] an _ inductive definition clause _ is written , where is a predicate constant . the atomic formula is called the _ head _ of the clause , and the formula , where is a closed term containing no occurrences of parameters , is called the _body_. similarly , a _co - inductive definition clause _ is written .the symbols and are used simply to indicate a definition clause : they are not a logical connective .we shall write to denote a definition clause generally , i.e. , when we are not interested in the details of whether it is an inductive or a co - inductive definition . a _ definition _ is a finite set of definition clauses .a predicate may occur only at most once in the heads of the clauses of a definition .we shall restrict to _ non - mutually recursive _ definitions .that is , given two clauses and in a definition , where , if occurs in then does not occur in , and vice versa .note that the above restriction to non - mutual recursion is immaterial , since in the first - order case it is well known how one can easily encode mutually recursive predicates as a single predicate with an extra argument .the rationale behind that restriction is merely to simplify the presentation of inference rules and the cut elimination proof .were we to allow mutually recursive definitions , the introduction rules and for a predicate would have possibly more than two premises , depending on the number of predicates which are mutually dependent on ( see for a presentation of introduction rules for mutually dependent definitions ) . 
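as a concrete illustration of the two kinds of clauses in definition [ def : def - clause ] ( the examples below are in the spirit of those used in the literature on this logic and are not part of the original text ) , natural numbers can be introduced inductively and similarity of processes co - inductively , assuming some previously defined one - step transition relation step :

\begin{align*}
  \mathit{nat}\,x \;&\stackrel{\mu}{=}\; x = z \;\lor\; \exists y.\, x = s\,y \land \mathit{nat}\,y\\
  \mathit{sim}\,p\,q \;&\stackrel{\nu}{=}\; \forall a\,\forall p'.\, \mathit{step}\,p\,a\,p' \supset \exists q'.\, \mathit{step}\,q\,a\,q' \land \mathit{sim}\,p'\,q'
\end{align*}

note that each head contains only variables , nat occurs only strictly positively in its own body , and the two clauses are not mutually recursive , in line with the restrictions above .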
for technical convenience we also bundle up all the definitional clause for a given predicate in a single clause , following the same principles of the _ iff - completion _ in logic programming .further , in order to simplify the presentation of rules that involve predicate substitutions , we denote a definition using an abstraction over predicates , that is where is an abstraction with no free occurrence of predicate symbol and variables .substitution of in the body of the clause with a formula can then be written simply as .when writing definition clauses , we often omit the outermost universal quantifiers , with the assumption that free variables in a clause are universally quantified .for example even numbers are defined as follows : where in this case is of the form .the left and right rules for ( co-)inductively defined atoms are given at the bottom of figure [ fig : linc ] . in rules and , the abstraction is an invariant of the ( co-)induction rule .the variables are new eigenvariables and is a new parameter not already occuring in the lower sequent . for the induction rule , denotes a pre - fixed point of the underlying fixed point operator .similarly , for the co - induction rule , can be seen as denoting a post - fixed point of the same operator . here, we use a characterization of induction and co - induction proof rules as , respectively , the least and the greatest solutions to a fixed point equation .notice that the right - introduction rules for inductive predicates and parameters ( dually , the left - introduction rules for co - inductive predicates and parameters ) are slightly different from the corresponding rules in -like logics ( see remark [ rem : unf ] ) .these rules can be better understood by the usual interpretation of ( co-)inductive definitions in second - order logic ( to simplify presentation , we show only the propositional case here ) : then the right - introduction rule for inductively defined predicate will involve an implicit universal quantification over predicates . as standard in sequent calculus ,such a universal quantified predicate will be replaced by a new eigenvariable ( in this case , a new parameter ) , reading the rule bottom up .note that if we were to follow the above second - order interpretation literally , an alternative rule for inductive predicates could be : { { \gamma\longrightarrow p } } { { b\,x^p { \supset}x^p , \gamma\longrightarrow x^p } } \ ] ] then there would be no need to add the -rule since it would be derivable , using the clause in the left hand side of the sequent .( this , of course , is true only when such an instance appears above an instance for . )our presentation has the advantage that it simplifies the cut - elimination arguments in the subsequent sections .the left - introduction rule for co - inductively defined predicate can be explained dually .a similar encoding of ( co-)inductive definitions as second - order formulae is used in , where cut - elimination is indirectly proved by appealing to a _ focused _ proof system for higher - order linear logic .a similar approach can be followed for , but we prefer to develop a direct cut - elimination proof , since such a proof can serve as the basis of cut - elimination for extensions of , for example , with the -quantifier .[ rem : unf ] a commonly used form of introduction rules for definitions , or fixed points , uses an unfolding of the definitions .this form of rules is followed in several related logics , e.g. 
, , and -mall .the right - introduction rule for inductive definitions , for instance , takes the form : { { \gamma\longrightarrow p\,\vec t } } { { \gamma\longrightarrow b\,p\,\vec t } } \ ] ] that is , in the premise , the predicate is replaced with the body of the definition .the logic , like , imposes a stratification on definitions , which amounts to a strict positivity condition : the head of a definition can only appear in a strictly positive position in the body , i.e. , it never appears to the left of an implication . let us call such a definition a _stratified definition_. for stratified definitions , the rule can be derived as follows : { { \gamma\longrightarrow p\,\vec t } } { { \gamma\longrightarrow b\,p\,\vec t } & \infer[{{\rm i}{\cal r } } ] { { b\,p\,\vec t\longrightarrow p\,\vec t } } { \infer [ ] { { b\,p\,\vec t\longrightarrow b\,x^p\,\vec t } } { \deduce{\vdots } { \infer[{{\rm i}{\cal l } } ] { { p\,\vec u\longrightarrow x^p\,\vec u } } { \infer[{{\rm i}{\cal r}_p } ] { { b\,x^p\,\vec x\longrightarrow x^p\,\vec x } } { \infer[init ] { { b\,x^p\,\vec x\longrightarrow b\,x^p\,\vec x } } { } } & \infer[init ] { { x^p\,\vec u\longrightarrow x^p\,\vec u } } { } } } } } } \ ] ] where the ` dots ' are a derivation composed using left and right introduction rules for logical connectives in .notice that all leaves of the form can be proved by using the rule , with as the inductive invariant .conversely , given a stratified definition , any proof in using that definition can be transformed into a proof of simply by replacing with .note that once is shown admissible , one can also prove admissibility of unfolding of inductive definitions on the left of a sequent ; see for a proof . since a defined atomic formula can be unfolded via its introduction rules , the notion of size of a formula as simply the number of connectives in it would not take into account this possible unfolding .we shall define a more general notion assigning a positive integer to each predicate symbol , which we refer to as its _ level_. a similar notion of level of a predicate was introduced for .however , in , the level of a predicate is only used to guarantee monotonicity of definitions .[ def : level ] to each predicate we associate a natural number , the _ level _ of . given a formula , its _ size _ is defined as follows :1 . , for any and any .2 . .3 . .4 . .5 . . 
note that in this definition , we do not specify precisely any particular level assignment to predicates .we show next that there is a level assignment that has a property that will be useful later in proving cut elimination .[ lm : level ] given any definition , there is a level assignment to every predicate occuring in such that if is in , then for every parameter .let be a binary relation on predicate symbols defined as follows : iff occurs in the body of the definition clause for .let be the reflexive - transitive closure of .since we restrict to non - mutually recursive definitions and there are only finitely many definition clauses ( definition [ def : def - clause ] ) , it follows that is a well - founded partial order .we now compute a level assignment to predicate symbols by induction on .this is simply done by letting , if is undefined , and , for some parameter , if note that in the latter case , by induction hypothesis , every predicate symbol , other than , in has already been assigned a level , so is already defined at this stage .note also that it does not matter which we choose since all parameters have the same size .we shall assume from now on that predicates are assigned levels satisfying the condition of lemma [ lm : level ] , so whenever we have a definition clause of the form , we shall implicitly assume that in , a notion of stratification is used to rule out non - monotone ( or in halns terminology _ partial _ ) definitions , such as , , for which cut - elimination is problematic .in fact , from the above definition both and are provable , but there is no direct proof of .this can be traced back to the fact that unfolding of definitions in and is allowed on both the left and the right hand side of sequent . in ,inconsistency does not arise even allowing a non - monotone definition as above , due to the fact that arbitrary unfolding of fixed points is not permitted . instead, only a limited form of unfolding is allowed , i.e. , in the form of unfolding of inductive parameters on the right , and co - inductive parameters on the left . as a consequence of this restrictive unfolding , in can not reason about some well - founded inductive definitions which are not stratified .for example , consider the non - stratified definition : if this definition were to be interpreted as a logic program ( with negation - as - failure ) , for example , then its least fixed point is exactly the set of even natural numbers .however , the above encoding in is incomplete with respect to this interpretation , since not all even natural numbers can be derived using the above definition .for example , it is easy to see that is not derivable , since this would require a derivation of , for some inductive parameter , which is impossible because no unfolding of inductive parameter is allowed on the left of a sequent .the same idea prevents the derivability of given the definition .so while inconsistency in the presence of non - monotone definitions is avoided in , its reasoning power does not extend that of significantly .we now discuss some properties of derivations in which involve instantiations of eigenvariables and parameters .these properties will be used in the cut - elimination proof in subsequent sections . 
before we proceed, it will be useful to introduce the following derived rule in : { { \gamma\longrightarrow c } } { \ { { \gamma\theta\longrightarrow c\theta}\}_\theta } \ ] ] this rule is just a ` macro ' for the following derivation : { { \gamma\longrightarrow c } } { \infer[{{\rm eq}{\cal r } } ] { { \longrightarrow t = t } } { } & \infer[{{\rm eq}{\cal l } } ] { { t = t,\gamma\longrightarrow c } } { \{{\gamma\theta\longrightarrow c\theta}\}_\theta } } \ ] ] where is some arbitrary term .the motivation behind the rule is purely technical ; it allows us to prove that a derivation transformation ( i.e. , substitutions of eigenvariables in derivations in section [ sec : subst ] ) commutes with cut reduction ( see lemma [ lm : reduct_subst ] ) .since the rule hides a simple form of cut , to prove cut - elimination of , we have to show that , in addition to , is admissible . in the following, denotes the identity substitution , i.e. , for every variable .[ lm : subst - elimination ] for every and , if the sequent is ( cut - free ) derivable in with then it is ( cut - free ) derivable in without . given a derivation of with occurrences of ,obtain a -free derivation by simply replacing any subderivation in of the form : { { \delta\longrightarrow b } } { \left\ { \raisebox{-1.3ex } { \deduce{{\delta\theta\longrightarrow b\theta}}{\pi^\theta } } \right\}_\theta } \ ] ] with its premise .following , we define a _ measure _ which corresponds to the height of a derivation : [ def : mu ] given a derivation with premise derivations , for some index set , the measure is the least upper bound .note that given the possible infinite branching of rule , these measures can in general be ( countable ) ordinals .therefore proofs and definitions on those measures require transfinite induction and recursion . however , in most of the proofs to follow , we do case analysis on the last rule of a derivation . in such a situation, the inductive cases for both successor and limit ordinals are basically covered by the case analysis on the inference figures involved , and we shall not make explicit use of transfinite principles . with respect to the use of eigenvariables and parameters in a derivation , there may be occurrences of the formers that are not free in the end sequent .we refer to these variables and parameters as the _ internal variables and parameters _ , respectively .we view the choices of those variables and parameters as arbitrary and therefore identify derivations which differ on the choice of internal variables and parameters . 
in other terms , we quotient derivations modulo injective renaming of internal eigenvariables and parameters .the following definition extends eigenvariable substitutions to apply to derivations .since we identify derivations that differ only in the choice of internal eigenvariables , we will assume that such variables are chosen to be distinct from the variables in the domain of the substitution and from the free variables of the range of the substitution .thus applying a substitution to a derivation will only affect the variables free in the end - sequent .[ def : subst ] if is a derivation of and is a substitution , then we define the derivation of as follows : 1 .suppose ends with the rule {{s = t,\gamma'\longrightarrow c } } { \left\{\raisebox{-1.5ex } { \deduce{{\gamma'\rho\longrightarrow c\rho } } { \pi^{\rho } } } \right\}_{\rho } } \enspace\ ] ] where each satisfies .observe that any unifier for the pair can be transformed to another unifier for , by composing the unifier with .thus is {{s\theta = t\theta,\gamma'\theta\longrightarrow c\theta } } { \left\{\raisebox{-1.5ex } { \deduce{{\gamma'\theta\rho'\longrightarrow c\theta\rho ' } } { \pi^{\theta\circ\rho ' } } } \right\}_{\rho ' } } \enspace , \ ] ] where .if ends with with premise derivations then also ends with the same rule and has premise derivations .if ends with any other rule and has premise derivations , then also ends with the same rule and has premise derivations . among the premises of the inference rules of ( with the exception of ) , certain premises share the same right - hand side formula with the sequent in the conclusion .we refer to such premises as major premises .this notion of major premise will be useful in proving cut - elimination , as certain proof transformations involve only major premises .[ def : major - premise ] given an inference rule with one or more premise sequents , we define its major premise sequents as follows . 1 .if is either or , then its rightmost premise is the major premise 2 .if is then its left premise is the major premise .3 . otherwise , all the premises of are major premises .a _ minor premise _ of a rule is a premise of which is not a major premise .the definition extends to derivations by replacing premise sequents with premise derivations .the proofs of the following two lemma are straightforward from definition [ def : subst ] and induction on the height of derivations .[ lm : subst ] for any substitution and derivation of , is a derivation of .[ lm : subst - height ] for any derivation and substitution , .[ lm : subst - drv - comp ] for any derivation and substitutions and , the derivations and are the same derivation .[ def : param subst ] a _ parameter substitution _ is a partial map from parameters to pairs of proofs and _ closed _ terms such that whenever then has the same type as and either one of the following holds : * , for some and , and is a derivation of , or * , for some and , and is a derivation of .the _ support _ of is the set we consider only parameter substitutions with finite support .we say that is _ fresh for _ , written , if for each , and does not occur in whenever .we shall often enumerate a parameter substitution using a similar notation to ( eigenvariables ) substitution , e.g. , \ ] ] denotes a parameter substitution with support and .given a formula and a parameter substitution as above , we write to denote the formula .\ ] ] let be a derivation of and let be a parameter substitution . 
define the derivation of by induction on the height of as follows : * suppose for some such that and ends with , as shown below left .then is as shown below right .{ { \gamma\longrightarrow x^p \vec t } } { \deduce{{\gamma\longrightarrow b\,x^p \vec t}}{\pi ' } } \qquad \infer[mc ] { { \gamma\theta\longrightarrow s\,\vec t } } { \deduce{{\gamma\theta\longrightarrow b\,s\,\vec t}}{\pi'\theta } & \deduce{{b\,s\,\vec t\longrightarrow s\,\vec t}}{\pi_s[\vec t/\vec x ] } } \ ] ] * similarly , suppose ends with on and : { { x^p \vec t , \gamma'\longrightarrow c } } { \deduce{{b\,x^p\vec t , \gamma'\longrightarrow c}}{\pi ' } } \ ] ] where and .then is { { s\,\vec t , \gamma'\theta\longrightarrow c\theta } } { \infer[mc ] { { s\,\vec t\longrightarrow b\,s\,\vec t } } { \infer[init ] { { s\,\vec t\longrightarrow s\,\vec t } } { } & \deduce{{s\,\vec t\longrightarrow b\,s\,\vec t}}{\pi_s[\vec t/\vec x ] } } & \deduce{{b\,s\,\vec t , \gamma'\theta\longrightarrow c\theta}}{\pi'\theta } } \ ] ] * in all other cases , suppose ends with a rule with premise derivations for some index set . since we identify derivations up to renaming of internal parameters , we assume without loss of generality that the internal eigenvariables in the premises of ( if any ) do not appear in .then ends with the same rule , with premise derivations . notice that the definition of application of parameter substitution in derivations in definition [ def : param subst ] is asymmetric in the treatment of inductive and co - inductive parameters , i.e. , in the cases where ends with and . in the latter case , the substituted derivation uses a seemingly unnecessary cut { { s\,\vec t\longrightarrow b\,s\,\vec t } } { \infer[init ] { { s\,\vec t\longrightarrow s\,\vec t } } { } & \deduce{{s\,\vec t\longrightarrow b\,s\,\vec t}}{\pi_s[\vec t/\vec x ] } } \ ] ] the reason behind this is rather technical ; in our main cut elimination proof , we need to establish that ] , hence reducibility of would follow from reducibility of the above cut .however , according to our cut reduction rules ( see section [ sec : reduc ] ) , the above cut does not necessarily reduce to ] , so it is not necessary to introduce explicitly this cut instance in the case involving inductive parameters .it is possible to define a symmetric notion of parameter substitution , but that would require different cut reduction rules than the ones we proposed in this paper . another possibility would be to push the asymmetry to the definition of _ reducibility _ ( see section [ sec : cut - elim ] ) .we have explored these alternative options , but for the purpose of proving cut elimination , we found that the current definition yields a simpler proof . the following lemma states that the derivation is well - formed .[ lm : param subst ] let be a parameter substitution and a derivation of . then is a derivation of . note that since parameter substitutions replace parameters with closed terms , they commute with ( eigenvariable ) substitutions .[ lm : param subst commutes ] for every derivation , substitution , parameter substitution , the derivation is the same as the derivation . 
in the following ,we denote with ] for every and .the central result of our work is cut - elimination , from which consistency of the logic follows .gentzen s classic proof of cut - elimination for first - order logic uses an induction on the size of the cut formula .the cut - elimination procedure consists of a set of reduction rules that reduces a cut of a compound formula to cuts on its sub - formulae of smaller size . in the case of ,the use of induction / co - induction complicates the reduction of cuts .consider for example a cut involving the induction rules : { { \delta , \gamma\longrightarrow c } } { \infer[{{\rm i}{\cal r } } ] { { \delta\longrightarrow p\,\vec t } } { \deduce{{\delta\longrightarrow b\,x^p\,\vec t}}{\pi_1 } } & \infer[{{\rm i}{\cal l } } ] { { p\,\vec t , \gamma\longrightarrow c } } { \deduce{{b\,s\,\vec y\longrightarrow s\,\vec y}}{\pi_b } & \deduce{{s\,\vec t , \gamma\longrightarrow c}}{\pi } } } \ ] ] there are at least two problems in reducing this cut .first , any permutation upwards of the cut will necessarily involve a cut with that can be of larger size than , and hence a simple induction on the size of the cut formula will not work .second , the invariant does not appear in the conclusion of the left premise of the cut .the latter means that we need to transform the left premise so that its end sequent will agree with the right premise .any such transformation will most likely be _ global _ , and hence simple induction on the height of derivations will not work either .we shall use the _ reducibility _ technique to prove cut elimination . more specifically, we shall build on the notion of reducibility introduced by martin - lf to prove normalization of an intuitionistic logic with iterative inductive definition .martin - lf s proof has been adapted to sequent calculus by mcdowell and miller , but in a restricted setting where only natural number induction is allowed .since our logic involves arbitrary stratified inductive definitions , which also includes iterative inductive definitions , we shall need different , and more general , cut reductions .but the real difficulty in our case is in establishing cut elimination in the presence of co - inductive definitions , for which there is no known direct cut elimination proof ( prior to our work on which this article is based on ) , at the best of our knowledge , as far as the sequent calculus is concerned .the main part of the reducibility technique is a definition of the family of reducible sets of derivations . in martin - lf s theory of iterative inductive definition, this family of sets is defined inductively by the `` type '' of the derivations they contain , i.e. , the formula in the right - hand side of the end sequent in a derivation .extending this definition of reducibility to is not obvious . in particular , in establishing the reducibility of a derivation of type ending with a ruleone must first establish the reducibility of its premise derivations , which may have larger types , since could be any formula. therefore a simple inductive definition based on types of derivations would not be well - founded .the key to properly `` stratify '' the definition of reducibility is to consider reducibility under parameter substitutions .this notion of reducibility , called _ parametric reducibility _ , was originally developed by girard to prove strong normalisation of system f , i.e. , in the interpretation of universal types . 
as with strong normalisation of systemf , ( co-)inductive parameters are substituted with some `` reducibility candidates '' , which in our case are certain sets of derivations satisfying closure conditions similar to those for system f , but which additionally satisfy certain closure conditions related to ( co-)inductive definitions .the remainder of this section is structured as follows . in section [ sec : reduc ] we define a set of cut reduction rules that are used to elimination the applications of the cut rule . for the cases involving logical operators , the cut - reduction rules used to prove the cut - elimination for are the same as those of .the crucial differences are , of course , in the reduction rules involving induction and co - induction rules , where we use the transformation described in definition [ def : param subst ] .we then proceed to define two notions essential to our cut elimination proof : _ normalizability _ ( section [ sec : norm ] ) and _ parametric reducibility _( section [ sec : red ] ) .these can be seen as counterparts for martin - lf s notions of normalizability and _ computability _ , respectively .normalizability of a derivation implies that all the cuts in it can be eventually eliminated ( via the cut reduction rules defined earlier ) .reducibility is a stronger notion , in that it implies normalizability .the main part of the cut elimination proof is presented in section [ sec : ceproof ] , where we show that every derivation is reducible , hence it can be turned into a cut - free derivation .we now define a reduction relation on derivations ending with .this reduction relation is an extension of the similar cut reduction relation used in mcdowell and miller s cut elimination proof . in particular , the reduction rules involving introduction rules for logical connectives are the same .the main differences are , of course , in the reduction rules involving induction and co - induction rules .there is also slight difference in one reduction rule involving equality , which in our case utilises the derived rule .therefore in the following definition , we shall highlight only those reductions that involve ( co-)induction and equality rules .the complete list of reduction rules can be found in appendix [ app : reduc ] . to ease presentation, we shall use the following notations to denote certain forms of derivations .the derivation { { \delta_1 , \ldots , \delta_n , \gamma \longrightarrow c } } { \deduce{{\delta_1\longrightarrow b_1}}{\pi_1 } & \cdots & \deduce{{\delta_n\longrightarrow b_n}}{\pi_n } & \deduce{{\gamma\longrightarrow c}}{\pi } } \enspace\ ] ] is abbreviated as .whenever we write we assume implicitly that the derivation is well - formed , i.e. , is a derivation ending with some sequent and the right - hand side of the end sequent of each is a formula .similarly , we abbreviated as the derivation { { b\longrightarrow b}}{}\ ] ] and denotes a derivation ending with the rule with premise derivations .[ def : reduct ] we define a _ reduction _ relation between derivations .the redex is always a derivation ending with the multicut rule {{\delta_1,\ldots,\delta_n,\gamma\longrightarrow c } } { \deduce{{\delta_1\longrightarrow b_1 } } { \pi_1 } & \cdots & \deduce{{\delta_n\longrightarrow b_n } } { \pi_n } & \deduce{{b_1,\ldots , b_n,\gamma\longrightarrow c } } { \pi } } \enspace\ ] ] we refer to the formulas produced by the as _ cut formulas_. 
if , reduces to the premise derivation .for we specify the reduction relation based on the last rule of the premise derivations .if the rightmost premise derivation ends with a left rule acting on a cut formula , then the last rule of and the last rule of together determine the reduction rules that apply . following mcdowell and miller , we classify these rules according to the following criteria : we call the rule an _ essential _ case when ends with a right rule ; if it ends with a left rule or , it is a _ left - commutative _ case ; if ends with the rule , then we have an _ axiom _ case ; a _ multicut _ case arises when it ends with the rule . when does not end with a left rule acting on a cut formula , then its last rule is alone sufficient to determine the reduction rules that apply .if ends with or a rule acting on a formula other than a cut formula , then we call this a _ right - commutative _ case .structural _ case results when ends with a contraction or weakening on a cut formula .if ends with the rule , this is also an axiom case ; similarly a multicut case arises if ends in the rule .for simplicity of presentation , we always show .we show here the cases involving ( co-)induction rules .[ [ essential - cases ] ] essential cases : + + + + + + + + + + + + + + + + suppose and are {{\delta_1\longrightarrow s = t } } { } \qquad\qquad\qquad \infer[{{\rm eq}{\cal l}}]{{s = t , b_2,\ldots , b_n,\gamma\longrightarrow c } } { \left\{\raisebox{-1.5ex } { \deduce{{b_2\rho,\ldots , b_n\rho,\gamma\rho\longrightarrow c\rho } } { \pi^\rho } } \right\}_\rho } \enspace\ ] ] note that in this case , in ranges over all substitution , as any substitution is a unifier of and .let be the derivation . in this case, reduces to { { \delta_1,\delta_2,\ldots,\delta_n,\gamma\longrightarrow c } } { \deduce{{\delta_2,\ldots,\delta_n,\gamma\longrightarrow c}}{\xi_1}}\ ] ] we use the double horizontal lines to indicate that the relevant inference rule ( in this case , ) may need to be applied zero or more times .suppose and are , respectively , { { \delta_1\longrightarrow p\,\vec t } } { \deduce{{\delta_1\longrightarrow d\,x^p\,\vec t}}{\pi_1 ' } } \qquad \infer[{{\rm i}{\cal l } } ] { { p\,\vec{t } , b_2,\dots , b_n,\gamma\longrightarrow c } } { \deduce{{d\,s\,\vec{y}\longrightarrow s\,\vec{y}}}{\pi_s } & \deduce{{s\,\vec{t } , b_2,\dots , b_n , \gamma\longrightarrow c}}{\pi ' } } \ ] ] where and is a new parameter .then reduces to , \pi_s[\vec t/\vec y ] ) , \pi_2,\ldots,\pi_n,\pi').\ ] ] suppose and are { { \delta_1\longrightarrow p\,\vec{t } } } { \deduce{{\delta_1\longrightarrow s\,\vec{t}}}{\pi_1 ' } & \deduce{{s\,\vec{y}\longrightarrow d\,s\,\vec{y}}}{\pi_s } } \qquad \qquad \infer[{{\rmci}{\cal l } } ] { { p\,\vec{t } , \dots , \gamma\longrightarrow c } } { \deduce{{d\,x^p\,\vec{t},\dots , \gamma\longrightarrow c}}{\pi ' } } \ ] ] where and is a new parameter .then reduces to ) , \pi_2,\ldots,\pi_n,\pi'[(\pi_s , s)/x^p]).\ ] ] [ [ left - commutative - cases ] ] left - commutative cases : + + + + + + + + + + + + + + + + + + + + + + + in the following , we suppose that ends with a left rule , other than , acting on . : : suppose is { { p\,\vec{t } , \delta_1'\longrightarrow b_1 } } { \deduce{{d\,s\,\vec{y}\longrightarrow s\,\vec{y}}}{\pi_s } & \deduce{{s\,\vec{t } , \delta_1'\longrightarrow b_1}}{\pi_1 ' } } \ ] ] where . let . 
then reduces to { { p\,\vec{t } , \delta_1',\dots,\delta_n\longrightarrow c } } { \deduce{{d\,s\,\vec{y}\longrightarrow s\,\vec{y}}}{\pi_s } & \deduce{{s\,\vec{t } , \delta_1',\dots,\delta_n,\gamma\longrightarrow c}}{\xi_1 } } \ ] ] [ [ right - commutative - cases ] ] right - commutative cases : + + + + + + + + + + + + + + + + + + + + + + + + : : suppose is { { b_1,\dots , b_n , p\,\vec{t},\gamma'\longrightarrow c } } { \deduce{{d\,s\,\vec{y}\longrightarrow s\,\vec{y}}}{\pi_s } & \deduce{{b_1,\dots , b_n , s\,\vec{t } , \gamma'\longrightarrow c}}{\pi ' } } \enspace , \ ] ] where .let . then reduces to { { \delta_1,\dots,\delta_n , p\,\vec{t},\gamma'\longrightarrow c } } { \deduce{{d\,s\,\vec{y}\longrightarrow s\,\vec{y}}}{\pi_s } & \deduce{{\delta_1,\dots,\delta_n , s\,\vec{t } , \gamma'\longrightarrow c}}{\xi_1 } } \enspace\ ] ] : : suppose is { { b_1,\dots , b_n,\gamma\longrightarrow p\,\vec{t } } } { \deduce{{b_1,\dots , b_n,\gamma\longrightarrow s\,\vec{t}}}{\pi ' } & \deduce{{s\,\vec{y}\longrightarrow d\,s\,\vec{y}}}{\pi_s } } \enspace , \ ] ] where .let . then reduces to { { \delta_1,\dots,\delta_n,\gamma\longrightarrow p\,\vec{t } } } { \deduce{{\delta_1,\dots,\delta_n,\gamma\longrightarrow s\,\vec{t}}}{\xi_1 } & \deduce{{s\,\vec{y}\longrightarrow d\,s\,\vec{y}}}{\pi_s } } \enspace\ ] ] it is clear from an inspection of the inference rules in figure [ fig : linc ] and the definition of cut reduction ( see appendix [ app : reduc ] ) that every derivation ending with a multicut has a reduct . notethat since the left - hand side of a sequent is a multiset , the same formula may occur more than once in the multiset . in the cut reduction rules, we should view these occurrences as distinct so that no ambiguity arises as to which occurrence of a formula is subject to the rule .the following lemma shows that the reduction relation is preserved by eigenvariable substitution . the proof is given in appendix [ app : red ] .[ lm : reduct_subst ] let be a derivation ending with a and let be a substitution . if reduces to then there exists a derivation such that and reduces to .[ def : norm ] we define the set of _ normalizable _ derivations to be the smallest set that satisfies the following conditions : 1 .if a derivation ends with a multicut , then it is normalizable if every reduct of is normalizable .if a derivation ends with any rule other than a multicut , then it is normalizable if the premise derivations are normalizable .the set of all normalizable derivations is denoted by .each clause in the definition of normalizability asserts that a derivation is normalizable if certain ( possibly infinitely many ) other derivations are normalizable .we call the latter the _ predecessors _ of the former. thus a derivation is normalizable if the tree of its successive predecessors is well - founded .we refer to this well - founded tree as its _normalization_. 
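to fix intuitions about this well-founded tree of predecessors, the following haskell fragment is a minimal, purely illustrative sketch ( the names PredTree and height are ours ) ; it only models the finite, finitely-branching case, whereas with the infinitary equality rule the branching may be infinite and the corresponding measure is an ordinal.

```haskell
-- a derivation is abstracted as the tree of its predecessors:
-- its reducts if it ends with a multicut, its premise derivations otherwise.
data PredTree = Node [PredTree]

-- a finite, finitely-branching predecessor tree is trivially well-founded;
-- its height is the natural measure attached to such a tree
-- (compare the normalization degree defined next in the text).
height :: PredTree -> Integer
height (Node []) = 0
height (Node ps) = 1 + maximum (map height ps)
```

nothing below depends on this sketch ; it merely restates, in concrete terms, the reading of normalizability as well-foundedness of the predecessor tree.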
since a normalization is well-founded, it has an associated induction principle : for any property of derivations, if for every derivation in the normalization, holds for every predecessor of implies that holds for , then holds for every derivation in the normalization. we shall define explicitly a measure on a normalizable derivation based on its normalization tree. [ def : deg - norm ] let be a normalizable derivation. the _ normalization degree of _ , denoted by , is defined by induction on the normalization of as follows : the normalization degree of is basically the height of its normalization tree. note that can be an ordinal in general, due to the possibly infinite-branching rule. [ lm : norm - cut - free ] if there is a normalizable derivation of a sequent, then there is a cut-free derivation of the sequent. similarly to . in the proof of the main lemma for cut elimination ( lemma [ lm : comp ] ) we shall use induction on the normalization degree, instead of using the normalization ordering directly. the reason is that in some inductive cases in the proof, we need to compare a ( normalizable ) derivation with its instances, but the normalization ordering does not necessarily relate the two, e.g., and may not be related by the normalization ordering, although their normalization degrees are ( see lemma [ lm : norm - degree ] ) . later, we shall define a stronger ordering called _ reducibility _ , which implies normalizability. in the cut elimination proof for , in one of the inductive cases, an implicit reducibility ordering is assumed to hold between a derivation and its instance. as the reducibility ordering in their setting is a subset of the normalizability ordering, this assumption may not hold in all cases, and as a consequence there is a gap in the proof in . the next lemma states that normalizability is closed under substitutions. [ lm : subst - norm ] if is a normalizable derivation, then for any substitution , is normalizable. by induction on . 1. if ends with a multicut, then also ends with a multicut. by lemma [ lm : reduct_subst ] every reduct of corresponds to a reduct of , therefore by the induction hypothesis every reduct of is normalizable, and hence is normalizable. 2. suppose ends with a rule other than multicut and has premise derivations . by definition [ def : subst ] each premise derivation in is either or . since is normalizable, is normalizable, and so by the induction hypothesis is also normalizable. thus is normalizable. the normalization degree is non-increasing under eigenvariable substitution. [ lm : norm - degree ] let be a normalizable derivation. then for every substitution . by induction on using definition [ def : subst ] and lemma [ lm : reduct_subst ] . note that can be smaller than because substitution may reduce the number of premises in : if ends with an acting on , say ( which are unifiable ) , and is a substitution that maps and to distinct constants, then ends with with an empty set of premises. in the following, we shall use the term `` type '' in two different settings : in categorizing terms and in categorizing derivations.
to avoid confusion, we shall refer to the types of terms as _ syntactic types _ , and the term `` type '' is reserved for types of derivations. our notion of the type of a set of derivations may abstract from particular first-order terms in a formula. this is because our definition of reducibility ( candidates ) will have to be closed under eigenvariable substitutions, which is in turn imposed by the fact that our proof rules allow instantiation of eigenvariables in the derivations ( i.e., the and the rules ) . [ def : type - of - drv ] we say that _ a derivation has type _ if the end sequent of is of the form for some . let be a term with syntactic type , where each is a syntactic efo-type ( are always efo-types ) . a set of derivations is said to be _ of type _ if every derivation in has type for some terms . given a list of terms and a set of derivations of type , we denote with the set [ def : candidates ] let be a _ closed term _ having the syntactic type . a set of derivations of type is said to be a _ reducibility candidate of type _ if the following hold : cr0 : : if then , for every . cr1 : : if then is normalizable. cr2 : : if and reduces to then . cr3 : : if ends with and all its reducts are in , then . cr4 : : if ends with , then . cr5 : : if ends with a left rule or , all its minor premise derivations are normalizable, and all its major premise derivations are in , then . we shall write to denote a reducibility candidate of type . the conditions * cr1 * and * cr2 * are similar to the eponymous conditions in girard's definition of reducibility candidates in his strong normalisation proof for system f ( see , chapter 14 ) . girard's * cr3 * is expanded in our definition into * cr3 * , * cr4 * and * cr5 * . these conditions deal with what girard refers to as `` neutral '' proof terms ( or, in our setting, derivations ) . neutrality corresponds to derivations ending in , , , or a left rule. the condition * cr0 * is needed because our cut reduction rules involve substitution of eigenvariables in some cases ( i.e., those that involve permutation of and in the left / right commutative cases ) , and consequently the notion of reducibility ( candidate ) needs to be preserved under eigenvariable substitution. let be a set of derivations of type and let be a set of derivations of type . then denotes the set of derivations such that if and only if ends with a sequent such that and for every , we have . let be a closed term. define to be the set it can be shown that is a reducibility candidate of type . [ lm : norm red ] let be a term of syntactic type . then the set is a reducibility candidate of type . * cr0 * follows from lemma [ lm : subst - norm ] , * cr1 * follows from the definition of , and the rest follow from definition [ def : norm ] . [ def : candidate - subst ] a _ candidate substitution _ is a partial map from parameters to triples of reducibility candidates, derivations and closed terms such that whenever , we have * has the same syntactic type as , * is a reducibility candidate of type , and * either one of the following holds : * * and is a normalizable derivation of , or * * and is a normalizable derivation of . we denote with the _ support _ of , i.e.
, the set of parameters on which is defined .each candidate substitution determines a unique parameter substitution , given by : we denote with the parameter substitution obtained this way .we say that a parameter is _fresh for _ , written if .[ [ notation ] ] notation + + + + + + + + since every candidate substitution has a corresponding parameter substitution , we shall often treat a candidate substitution as a parameter substitution .in particular , we shall write to denote and to denote .we are now ready to define the notion of parametric reducibility .we follow a similar approach for , where families of reducibility sets are defined by the _ level _ of derivations , the size of the types of derivations . in defining a family ( or families ) of sets of derivations at level , we assume that reducibility sets at level are already defined .the main difference with the notion of reducibility for , aside from the use of parameters in the clause for ( co)induction rules ( which do not exist in ) , is in the treatment of the induction rules .[ def : param red ] let be the set of all formula of size , , ] where . otherwise , = \nm_{x^p}\,\vec u ] if it is normalizable and one of the following holds : p2 : : ends with , and all its reducts are in ] .p4 : : ends with , i.e. , { { \gamma\longrightarrow p\,\vec t } } { \deduce{{\gamma\longrightarrow b\,x^p\,\vec t}}{\pi'}}\ ] ] without loss of generality , assume that : for every reducibility candidate , where is a closed term of the same syntactic type as , for every normalizable derivation of , if for every the following holds : \in ( \red_{(b\,x^p\,\vec u)}[\omega , ( \sscr , \pi_i , i)/x^p ] \rightarrow \sscr~\vec u)\ ] ] then , \pi_i[\vec t/\vec y ] ) \in \sscr\,\vec t\ ] ] p5 : : ends with , i.e. , { { \gamma\longrightarrow p\,\vec t } } { \deduce{{\gamma\longrightarrow i\,\vec t}}{\pi ' } & \deduce{{i\,\vec y\longrightarrow b\,i\,\vec y}}{\pi_i } } \ ] ] and there exist a parameter such that and a reducibility candidate such that and \in ( \sscr\,\vec u \rightarrow \red_{b\,x^p\,\vec u}[\omega , ( \sscr , \pi_i , i)/x^p ] ) \ \hbox{for every . } \ ] ] p6 : : ends with any other rule and its major premise derivations are in the parametric reducibility sets of the appropriate types .we shall write , instead of ] , but since has smaller size than , this quantification is legitimate and the definition is well - founded .note also the similar quantification in * p4 * and * p5 * , where the parametric reducibility set ] . by lemma [ lm: level ] , so in both cases the set ] , then , \pi_i ) \in \sscr ] i.e. , a set of reducible derivations of type .so , intuitively , can be seen as a higher - order function that takes any function of type ( i.e. , the derivation ) , and turns it into a derivation of type ( i.e. 
, the derivation , \pi_i) ] then is normalizable .since every ] .[ lm : red - subst ] if ] .[ lm : red vacuous ] let ] if and only if \vec u \vec u ] is a candidate substitution , for some .then }[\omega ] = \red_c[\omega , ( \rscr , \psi , s\omega)/x^p].\ ] ] we shall now show that every derivation is reducible , hence every derivation can be normalized to a cut - free derivation .but in order to prove this , we need a slightly more general lemma , which states that every derivation is in ] be a candidate substitution such that , is definitionally closed , and for every of the same types as , \in \red_{b\,x^p\,\vec u}\ , [ \omega ] \rightarrow \rscr\,\vec u\ ] ] then is definitionally closed .let .suppose .we need to show that \in \red_{b\,y^q\,\vec t}\ , [ \omega ] \rightarrow \sscr\,\vec t\ ] ] for every of the same types as . if then this follows from the assumption of the lemma .otherwise , , and by the definitional closure assumption on , we have \in \red_{b\,y^q\,\vec t}\ , [ \omega ' ] \rightarrow \sscr\,\vec t\ ] ] for every . since ( recall that definition clauses can not contain occurrences of parameters ) , by lemma [ lm : red vacuous ] we have = \red_{b\,y^q\,\vec t}\ , [ \omega] ] be a candidate substitution such that , is definitionally closed , and for every of the same types as , \in \rscr \,\vec u \rightarrow \red_{b\,x^p\,\vec u}\ , [ \omega]\ ] ] then is definitionally closed .analogous to the proof of lemma [ lm : clo - ext - ind ] .we are now ready to state the main lemma for cut elimination .[ lm : comp ] let be a definitionally closed candidate substitution .let be a derivation of , and let where , be derivations in , respectively , , \ldots , \red_{b_n}\ , [ \omega] ] .the proof is by induction on where is the multiset of normalization degrees of to .note that the measure can be well - ordered using the lexicographical ordering .we shall refer to this ordering as simply .note also that is insensitive to the order in which is given , thus when we need to distinguish one of the , we shall refer to it as without loss of generality .the derivation is in ] .[ [ case - i - n-0 ] ] * case i : n = 0 * + + + + + + + + + + + + + + + in this case , reduces to , thus it is enough to show that that ] . by * cr4* we have that , so by the definitional closure of and * cr3 * , we have ) \in \red_{d\,s\,\vec u}\ , [ \omega] ] by the induction hypothesis , we have ] for every , and by lemma [ lm : red - norm ] , each is also normalizable .the latter implies that is normalizable .note that if is a major premise derivation , then for some , and we have .therefore , by * cr5 * , we have that .* suppose ends with : { { \gamma\longrightarrow x^p\vec t } } { \deduce{{\gamma\longrightarrow d\,x^p\,\vec t}}{\pi ' } } \ ] ] where .then ] .this , together with the definitional closure of , implies that is indeed in .[ [ i.2 ] ] * i.2 :* + + + + + + suppose for any parameter and any terms .most subcases follow easily from the induction hypothesis , lemma [ lm : red - norm ] and definition [ def : param red ] .the subcases where ends with a left rule follow the same lines of arguments as in case i.1 above .we show here the non - trivial subcases involving right - introduction rules : [ [ i.2.a ] ] * i.2.a* + + + + + + + suppose ends with , as shown below left .then is as shown below right . 
{ { \gamma\longrightarrow c_1 { \supset}c_2 } } { \deduce{{\gamma , c_1\longrightarrow c_2}}{\pi ' } } \qquad \infer[{{\supset}{\cal r } } ] { { \gamma\omega\longrightarrow c_1\omega { \supset}c_2\omega } } { \deduce{{\gamma\omega , c_1\omega\longrightarrow c_2\omega}}{\pi'\omega } } \ ] ] to show ] .normalizability of then follows immediately from this and lemma [ lm : red - norm ] .it remains to show that statement [ eq : ce1 ] holds : let be a derivation in ] . in other words ,statement [ eq : ce1 ] holds for arbitrary , and therefore by definition [ def : param red ] , ] , we need to show that is normalizable ( as before this easily follows from the induction hypothesis and lemma [ lm : red - norm ] ) and that , \pi_s[\vec t/\vec x ] ) \in \rscr\,\vec t\ ] ] for every candidate and every that satisfies : \in\red_{d\,x^p\,\vec u}\ , [ \omega , ( \rscr,\pi_s , s)/x^p ] \rightarrow \rscr\,\vec u \hbox { for every .}\ ] ] let ] so statement [ eq : casei3-a ] above can be rewritten to ) \in \rscr\,\vec t.\ ] ] by lemma [ lm : clo - ext - ind ] , we have that is definitionally closed .therefore we can apply the induction hypothesis to and , obtaining ] .[ [ i.2.c ] ] * i.2.c* + + + + + + + suppose ends with , as shown below left , where .let .then is as shown below right . { { \gamma\longrightarrow p\,\vec t } } { \deduce{{\gamma\longrightarrow s\,\vec t}}{\pi ' } & \deduce{{s\,\vec x\longrightarrow d\,s\,\vec x}}{\pi_s } } \qquad \infer[{{\rm ci}{\cal r } } ] { { \gamma\omega\longrightarrow p\,\vec t } } { \deduce{{\gamma\omega\longrightarrow s'\,\vec t}}{\pi'\omega } & \deduce{{s'\vec x\longrightarrow d\,s'\,\vec x}}{\pi_s\omega } } \ ] ] note that is normalizable , by the induction hypothesis and lemma [ lm : red - norm ] . to show that ] for a new .let \}. ] , and therefore , by applying the induction hypothesis to ] for every ] is exactly .so the above statement can be rewritten to \in \rscr\,\vec u \rightarrow \red_{d\,s\,\vec u}\ , [ \omega].\ ] ] by lemma [ lm : red param subst ] , = \red_{d\,x^p\,\vec u}\ , [ \omega , ( \rscr , \pi_s\omega , s')/x^p] ] .[ [ case - ii - n-0 ] ] * case ii : * + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + to show that ] and that is normalizable .the latter follows from the former by lemma [ lm : red - norm ] and definition [ def : norm ] , so in the following we need only to show the former .note that in this case , we do not need to distinguish cases based on whether is headed by a parameter or not . to see why ,suppose for some parameter . if then to show ] .if , then to show ] . since the applicable reduction rules to are driven by the shape of , and since is determined by , we shall perform case analysis on in order to determine the possible reduction rules that apply to , and show in each case that the reduct of is in the same parametric reducibility set .there are several main cases depending on whether ends with a rule acting on a cut formula or not .again , when we refer to , without loss of generality , we assume . 
in the following , we say that an instance of is _ trivial _ if it applies to a formula for some , but .otherwise , we say that it is non - trivial .[ [ ii.1 ] ] * ii.1* + + + + + + suppose ends with a left rule , other than , and a non - trivial , on and ends with a right - introduction rule .there are several subcases depending on the logical rules that are applied to .we show here the non - trivial cases : suppose and are { { \delta_1\longrightarrow b_1'\omega { \supset}b_1''\omega } } { \deduce{{\delta_1,b_1'\omega\longrightarrow b_1''\omega}}{\pi_1 ' } } \qquad \infer[{{\supset}{\cal l}}. ] { { b_1'{\supset}b_1'',b_2,\dots , b_n,\gamma\longrightarrow c } } { \deduce{{b_2,\dots,\gamma\longrightarrow b_1'}}{\pi ' } & \deduce{{b_1'',b_2,\dots,\gamma\longrightarrow c}}{\pi '' } } \ ] ] let . then ] , by definition [ def : param red ] , we have \rightarrow \red_{b_1''}[\omega]\ ] ] and therefore the derivation with end sequent is in ] , and therefore , by lemma [ lm : red - norm ] , it is normalizable . by definition [ def : norm ] , this means that is normalizable and by definition [ def : param red ] , ] , by lemma [ lm : red - subst ] we have \in \red_{b_1'[t / x]}[\omega]\ ] ] note that , so we can apply the induction hypothesis to obtain ]. then the reduct of in this case is the derivation : { { \delta_1,\delta_2,\ldots,\delta_n,\gamma\longrightarrow c } } { \deduce{{\delta_2,\ldots,\delta_n,\gamma\longrightarrow c}}{\xi_1 } } \ ] ] which is also in ]. then the reduct of in this case is the derivation since ) } \leq { { \rm ht}(\pi_s ) } < { { \rm ht}(\pi)} \vec u ] .then by lemma [ lm : red candidate ] , is a reducibility candidate of type .moreover , by lemma [ lm : red param subst ] , we have = \red_{d\,x^p\,\vec u}\ , [ \omega , ( \rscr , \pi_s\omega , s')/x^p].\ ] ] this , together with statement [ eq : caseii.1-a ] above , implies that \in \red_{d\,x^p\,\vec u}\ , [ \omega , ( \rscr , \pi_s\omega , s')/x^p ] \rightarrow \rscr \,\vec u\ ] ] for every .since \vec u ] .suppose and are { { \delta_1\longrightarrow p\,\vec{t } } } { \deduce{{\delta_1\longrightarrow s\,\vec{t}}}{\pi_1 ' } & \deduce{{s\,\vec{x}\longrightarrow d\,s\,\vec{x}}}{\pi_s } } \qquad \qquad \infer[{{\rm ci}{\cal l } } ] { { p\,\vec{t } , b_2 , \dots , \gamma\longrightarrow c } } { \deduce{{d\,x^p\,\vec{t},b_2,\dots , \gamma\longrightarrow c}}{\pi'}}\ ] ] where and is a parameter not already occuring in the end sequent of ( and w.l.o.g .assume also and not occuring in or ) . then is { { p\,\vec{t } , b_2\omega , \dots , \gamma\omega\longrightarrow c\omega } } { \deduce{{d\,x^p\,\vec{t},b_2\omega,\dots , \gamma\omega\longrightarrow c\omega}}{\pi'\omega}}\ ] ] since ] . 
then by lemma [ lm : clo - ext - coind ] , is definitionally closed .let ) ] .the reduct of in this case is the derivation note that since does not occur in or , by lemma [ lm : red vacuous ] , we have that = \red_{b_i}[\omega']\ ] ] for every .therefore , by induction hypothesis , we have that .\ ] ] but since is also new for , we have = \red_c[\omega] ] , it follows from definition [ def : param red ] that is normalizable and ] .the reduct of in this case is the derivation : { { d_1 { \supset}d_2,\delta_1',\delta_2,\ldots,\gamma\omega\longrightarrow c\omega } } { \infer=[{\hbox{\sl w}{\cal l } } ] { { \delta_1',\ldots,\gamma\omega\longrightarrow d_1 } } { \deduce{{\delta_1'\longrightarrow d_1}}{\pi_1 ' } } & \deduce{{d_2,\delta_1',\delta_2,\ldots,\gamma\omega\longrightarrow c\omega } } { \xi_1 } } \ ] ] since is normalizable , by definition [ def : norm ] the left premise derivation of is normalizable , and since reducibility implies normalizability ( lemma [ lm : red - norm ] ) , the right premise is also normalizable , hence is normalizable . now to show ] .* suppose but .then we need to show that is normalizable . butthis follows immediately from the normalizability of both of its premise derivations .* suppose for any parameter and any terms .since ] .suppose is as shown below left .then the reduct of in this case is shown below right , where .{{s = t,\delta_1'\longrightarrow b_1\omega } } { \left\{\raisebox{-1.5ex } { \deduce{{\delta_1'\rho\longrightarrow b_1\omega\rho } } { \pi^{\rho } } } \right\}_{\rho } } \qquad \infer[{{\rm eq}{\cal l } } ] { { s = t,\delta_1',\delta_2,\dots,\gamma\omega\longrightarrow c\omega } } { \left\{\raisebox{-1.5ex } { \deduce{{\delta_1'\rho,\dots,\gamma\omega\rho\longrightarrow c\omega\rho } } { \xi^{\rho } } } \right\}_{\rho}}\ ] ] ] by the definition of parametric reducibility .suppose is { { p\,\vec{t } , \delta_1'\longrightarrow b_1\omega } } { \deduce{{d\,s\,\vec{x}\longrightarrow s\,\vec{x}}}{\pi_s } & \deduce{{s\,\vec{t } , \delta_1'\longrightarrow b_1\omega}}{\pi_1 ' } } \ ] ] since ] .let be the derivation then ] .[ [ ii.3 ] ] * ii.3 * + + + + + + ends with a left rule , other than , and a non - trivial instance of , acting on , and ends with or : these cases follow straightforwardly from the induction hypothesis .[ [ ii.4 ] ] * ii.4* + + + + + + suppose ends with a non - trivial application of on .that is , , for some and some , and is { { x^p\,\vec t , b_2 , \ldots , b_n , \gamma\longrightarrow c } } { \deduce{{d\,x^p\,\vec t , b_2,\ldots , b_n , \gamma\longrightarrow c}}{\pi ' } } \ ] ] where .suppose .then is ) , \pi'\omega ] .note that has exactly one reduct , that is , ).\ ] ] note that also has exactly one reduct , namely , .since = \rscr\,\vec t ] . andsince is the only reduct of , this also means that , by definition [ def : param red ] , ] by the induction hypothesis .[ [ ii.5 ] ] * ii.5* + + + + + + suppose ends with or acting on , or .then also ends with the same rule .the cut reduction rule that applies in this case is either , or . 
in these cases ,parametric reducibility of the reducts follow straightforwardly from the assumption ( in case of ) , the induction hypothesis and definition [ def : param red ] .[ [ ii.6 ] ] * ii.6* + + + + + + suppose ends with .then also ends with .the reduction rule that applies in this case is the reduction .parametric reducibility of the reduct in this case follows straightforwardly from the induction hypothesis and definition [ def : param red ] .[ [ ii.7 ] ] * ii.7* + + + + + + suppose ends with or a rule acting on a formula other than a cut formula .most cases follow straightforwardly from the induction hypothesis , lemma [ lm : red - norm ] and lemma [ lm : red - subst ] ( which is needed in the reduction case and ) .we show the interesting subcases here : suppose ends with a non - trivial , i.e. , is { { b_1,\ldots , b_n,\gamma\longrightarrow x^p\,\vec t } } { \deduce{{b_1,\ldots , b_n,\gamma\longrightarrow d\,x^p\,\vec t}}{\pi ' } } \ ] ] where and .suppose .then is the derivation ] this , together with the definitional closure of , implies that ] , we first need to show that it is normalizable .this follows straightforwardly from the induction hypothesis ( which shows that \vec u ] .by lemma [ lm : clo - ext - ind ] , is definitionally closed . notethat since we assume that is a fresh parameter not occuring in , we have = \red_{b_i}[\omega'] ] by lemma [ lm : param subst vacuous ] , for every .therefore , by the induction hypothesis we have : = mc(\pi_1,\ldots,\pi_n,\pi'\omega ' ) \in\red_{d\,x^p\,\vec t}\ , [ \omega'].\ ] ] this , together with the definitional closure of , implies that suppose ends with a non - trivial , i.e. , is { { b_1,\ldots , b_n , x^p\,\vec t , \gamma'\longrightarrow c } } { \deduce{{b_1,\ldots , b_n , d\,x^p\,\vec t , \gamma'\longrightarrow c}}{\pi ' } } \ ] ] where and .suppose .then is ),\pi'\omega).\ ] ] let ) ] .the reduct of in this case is which is in ] and ] we must first show that it is normalizablethis follows from immediately from normalizability of and .then we need to find a reducibility candidate such that ( a ) : : , and ( b ) : : \in \rscr\,\vec u \rightarrow \red_{d\,x^p\,\vec u}\ , [ \omega,(\rscr,\pi_s , s)/x^p] ] as in case * i.2.c * , we show , using lemma [ lm : red candidate ] , that is a reducibility candidate of type . by the induction hypothesis , we have , so satisfies * ( a)*. using the same argument as in case * i.2.c * we can show that also satisfies * ( b ) * , by appealing to the induction hypothesis , applied to .every derivation is reducible .the proof follows from lemma [ lm : comp ] , by setting and to the empty candidate substitution .since reducibility implies cut - elimination and since every cut - free derivation can be turned into a -free derivation ( lemma [ lm : subst - elimination ] ) , it follows that every proof can be transformed into a cut - free and -free derivation .[ cor : cut - elimination ] given a fixed definition , a sequent has a derivation in if and only if it has a cut - free and -free derivation .the consistency of is an immediate consequence of cut - elimination . 
by consistency we mean the following : given a fixed definition and an arbitrary formula , it is not the case that both and are provable. [ cor : consistency ] the logic is consistent. of course, there is a long association between mathematical logic and inductive definitions, and in particular with proof theory, starting with takeuti's conjecture, the earliest relevant entry for our purposes being martin-löf's original formulation of the theory of _ iterated inductive definitions _ . from the representation of algebraic types and the introduction of (co)inductive types in system f, (co)induction / recursion became mainstream and made it into type-theoretic proof assistants such as coq, first via a primitive recursion operator, but eventually in the let-rec style of functional programming languages, as in giménez's _ calculus of infinite constructions _ . unlike works in these type-theoretic settings, we put less emphasis on proof terms and strong normalization ; in fact, our cut elimination procedure is actually a form of weak normalization, in the sense that our procedure only guarantees termination with respect to a particular strategy, i.e., by reducing the lowest cuts in a derivation tree. our notion of equality, which internalizes unification in its left introduction rule, departs from the more traditional notion of equality. as a consequence of these differences, it is not at all obvious that strong normalization proofs for term calculi with (co-)inductive types can be adapted straightforwardly to our setting. baelde and miller have recently introduced an extension of multiplicative-additive linear logic with least and greatest fixed points, called . in that work, cut elimination is proved indirectly via a second-order encoding of the least and the greatest fixed point operators into higher-order linear logic and via an appeal to completeness of focused proofs for higher-order linear logic. such an encoding can also be used for proving cut elimination for , but as we noted earlier, our main concern here is to provide a basis for cut elimination for ( orthogonal ) extensions of with the -quantifier, for which there are currently no known encodings into higher-order ( linear ) logic. baelde has also given a direct cut-elimination proof for . the proof uses a notion of orthogonality in the definition of reducibility, defined via classical negation, so it is not clear if it can be adapted straightforwardly to an intuitionistic setting like ours. circular proofs are also connected with the proof theory of fixed point logics and process calculi, as well as with traditional sequent calculi, such as in . the issue is the equivalence between systems with local vs. global induction, that is, between fixed point rules vs. well-founded and guarded induction ( circular proofs ) . in the traditional sequent calculus, it is unknown whether every global inductive proof can be translated into a local one. in higher-order logic, (co)inductive definitions are usually obtained via the tarski set-theoretic fixed point construction, as realized for example in isabelle / hol.
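to make the flavour of such second-order encodings of the least and greatest fixed point operators concrete, the following haskell fragment gives a minimal sketch of the böhm–berarducci-style reading of inductive and co-inductive definitions ; it is only an illustration of the duality between iteration and co-iteration, not a rendering of our proof rules, and all names in it ( Mu , Nu , NatF , StreamF ) are ours.

```haskell
{-# LANGUAGE RankNTypes, ExistentialQuantification #-}

-- least fixed point, boehm-berarducci / system f style:
-- an inhabitant of Mu f is its own iterator (fold).
newtype Mu f = Mu { fold :: forall x. (f x -> x) -> x }

-- greatest fixed point, dually: a hidden state plus a one-step unfolding.
data Nu f = forall x. Nu (x -> f x) x

-- signature functor of the natural numbers: N X = 1 + X
data NatF x = ZeroF | SuccF x

zero :: Mu NatF
zero = Mu (\alg -> alg ZeroF)

suc :: Mu NatF -> Mu NatF
suc n = Mu (\alg -> alg (SuccF (fold n alg)))

-- the iteration (induction) principle is just 'fold': supply an algebra.
toInt :: Mu NatF -> Int
toInt n = fold n alg
  where
    alg ZeroF     = 0
    alg (SuccF k) = k + 1

-- signature functor of streams over a: S X = a * X
data StreamF a x = ConsF a x

-- the co-iteration (co-induction) principle: any state with a step map is a stream.
nats :: Nu (StreamF Integer)
nats = Nu (\n -> ConsF n (n + 1)) 0

headS :: Nu (StreamF a) -> a
headS (Nu step s) = case step s of ConsF a _ -> a

tailS :: Nu (StreamF a) -> Nu (StreamF a)
tailS (Nu step s) = case step s of ConsF _ s' -> Nu step s'
```

nothing in the cut-elimination argument relies on this sketch ; it is only meant to suggest why reducibility candidates, i.e. sets of derivations satisfying the conditions cr0 to cr5, play in our proof a role analogous to the sets interpreting types in girard's strong normalisation argument for system f.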
as we mentioned before, such approaches to (co)inductive definitions are at odds with hoas even at the level of the syntax. this issue has originated a research field in its own right and we only mention the main contenders : in the twelf system the lf type theory is used to encode deductive systems as judgments and to specify meta-theorems as relations ( type families ) among them ; a logic programming-like interpretation provides an operational semantics to those relations, so that an external check for totality ( incorporating termination, well-modedness and coverage ) verifies that the given relation is indeed a realizer for that theorem. coinduction is still unaccounted for and may require a switch to a different operational semantics for lf. there exists a second approach to reasoning in lf that is built on the idea of devising an explicit ( meta-)meta-logic ( ) for reasoning ( inductively ) about the framework. it can be seen as a constructive first-order inductive type theory, whose quantifiers range over possibly open lf objects. in this calculus it is possible to express and inductively prove meta-logical properties of an object-level system. can also be seen as a dependently-typed functional programming language, and as such it has been refined into the _ delphin _ programming language. in a similar vein _ beluga _ is based on contextual modal type theory, which provides a different foundation for programming with hoas and dependent types. because all of these systems are programming languages, we refrain from a deeper discussion. we only note that systems like delphin or beluga separate data from computations. this means they are always based on eager evaluation, whereas co-recursive functions should be interpreted lazily. using standard techniques such as _ thunks _ to simulate lazy evaluation in such a context seems problematic ( pientka, personal communication ) . _ weak higher-order abstract syntax _ is an approach that strives to co-exist with an inductive setting. the problem of negative occurrences in datatypes is handled by replacing them with a new type. similarly for hypothetical judgments, although _ axioms _ are needed to reason about them, to mimic what is inferred by the cut rule in our architecture. framework embraces this _ axiomatic _ approach, extending coq with the `` theory of contexts '' ( toc ) . the theory includes axioms for the reification of key properties of names akin to _ freshness _ . furthermore, higher-order induction and recursion schemata on expressions are also assumed. _ hybrid _ is a -calculus on top of isabelle / hol which provides the user with a _ full _ hoas syntax, compatible with a classical (co-)inductive setting. improves on the latter on several counts. first, it disposes of hybrid's notion of _ abstraction _ , which is used to carve out the `` parametric '' function space from the full hol function space. moreover, it is not restricted to second-order abstract syntax, as the current hybrid version is ( and as toc cannot escape from being ) . finally, at higher types, reasoning via and fixed points is more powerful than inversion, which does not exploit higher-order unification. _ nominal logic _ gives a different foundation to programming and reasoning with _ names _ .
it can be presented as a first-order theory, which includes primitives for variable renaming and freshness, and a ( derived ) `` new '' freshness quantifier. it is endowed with natural principles of structural induction and recursion over syntax. urban and collaborators have engineered a _ nominal datatype package _ inside isabelle / hol, analogous to the standard datatype package but defining equivalence classes of term constructors. co-induction / recursion on nominal datatypes is not available, but to be fair it is also currently absent from isabelle / hol. we have presented a proof-theoretical treatment of both induction and co-induction in a sequent calculus compatible with hoas encodings. the proof principle underlying the explicit proof rules is basically fixed point (co)induction. however, the formulation of the rules is inspired by a second-order encoding of least and greatest fixed points. we have developed a new cut elimination proof, radically different from previous proofs ( ) , using a reducibility-candidate technique à la girard. consistency of the logic is an easy consequence of cut-elimination. our proof system is, as far as we know, the first which incorporates a co-induction proof rule with a direct cut elimination proof. this schema can be used as a springboard towards cut elimination procedures for more expressive ( conservative ) extensions of , for example in the direction of , or more recently , the logic by tiu and the logic by gacek. an interesting problem is the connection with circular proofs, which is particularly attractive from the viewpoint of proof search, both inductively and co-inductively. this could be realized by directly proving a cut-elimination result for a logic where circular proofs, under termination and guardedness conditions, completely replace (co)inductive rules. indeed, the question whether `` global '' proofs are equivalent to `` local '' proofs is still unsettled. * acknowledgements * the logic was developed in collaboration with dale miller. we thank david baelde for his comments on a draft of this paper. peter aczel . an introduction to inductive definitions . in j. barwise , editor , _ handbook of mathematical logic _ , volume 90 of _ studies in logic and the foundations of mathematics _ , chapter c.7 , pages 739–782 . north-holland , amsterdam , 1977 . simon ambler , roy crole , and alberto momigliano . combining higher order abstract syntax with tactical theorem proving and ( co)induction . in v.a. carreño , editor , _ proceedings of the 15th international conference on theorem proving in higher order logics , hampton , va , 1 - 3 august 2002 _ , volume 2342 of _ lncs _ . springer verlag , 2002 . franz baader and wayne snyder . unification theory . in john alan robinson and andrei voronkov , editors , _ handbook of automated reasoning _ , pages 445–532 . elsevier and mit press , 2001 . david baelde . least and greatest fixed points in linear logic . , abs/0910.3383 , 2009 . david baelde , andrew gacek , dale miller , gopalan nadathur , and alwen tiu . the bedwyr system for model checking over syntactic expressions . in frank pfenning , editor , _ cade _ , volume 4603 of _ lecture notes in computer science _ , pages 391–397 . springer , 2007 . david baelde and dale miller . least and greatest fixed points in linear logic . in _ lpar _ , lecture notes in computer science , pages 92–106 . springer , 2007 . c. böhm and a. berarducci .
automatic synthesis of typed -programs on term algebras ., 39(2 - 3):135153 , august 1985 .james brotherston and alex simpson .complete sequent calculi for induction and infinite descent . in _ lics _ , pages 5162 .ieee computer society , 2007 .k. l. clark .negation as failure . in j.gallaire and j. minker , editors , _ logic and data bases _ , pages 293322 . plenum press , new york , 1978 .joelle despeyroux and andre hirschowitz .higher - order abstract syntax with induction in coq . in _fifth international conference on logic programming and automated reasoning _ , pages 159173 , june 1994 .lars - henrik eriksson . a finitary version of the calculus of partial inductive definitions . in l .- h .eriksson , l. hallns , and p. schroeder - heister , editors , _ proceedings of the second international workshop on extensions to logic programming _ , volume 596 of _ lecture notes in artificial intelligence _ , pages 89134 .springer - verlag , 1991 .murdoch j. gabbay .fresh logic : proof - theory and semantics for fm and nominal techniques ., 5(2):356387 , 2007 .andrew gacek .the abella interactive theorem prover ( system description ) . in alessandro armando ,peter baumgartner , and gilles dowek , editors , _ ijcar _ , volume 5195 of _ lecture notes in computer science _ , pages 154161 .springer , 2008 .andrew gacek , dale miller , and gopalan nadathur .combining generic judgments with recursive definitions . in _ lics _ , pages 3344 .ieee computer society , 2008 .andrew gacek , dale miller , and gopalan nadathur .reasoning in abella about structural operational semantics specifications ., 228:85100 , 2009 .herman geuvers .inductive and coinductive types with iteration and recursion . in b.nordstrm , k. pettersson , and g. plotkin , editors , _ informal proceedings workshop on types for proofs and programs , bstad , sweden , 812 june 1992 _ , pages 193217 .dept . of computing science ,chalmers univ . of technology and gteborg univ . , 1992eduardo gimnez . .thesis phd 96 - 11 , laboratoire de linformatique du paralllisme , ecole normale suprieure de lyon , december 1996 .jean - yves girard , paul taylor , and yves lafont . .cambridge university press , 1989 .lars hallns .partial inductive definitions . , 87(1):115142 , 1991 .robert harper , furio honsell , and gordon plotkin .a framework for defining logics ., 40(1):143184 , 1993 .furio honsell , marino miculan , and ivan scagnetto .an axiomatic approach to metareasoning on nominal algebras in hoas . in fernandoorejas , paul g. spirakis , and jan van leeuwen , editors , _ icalp _ , volume 2076 of _ lecture notes in computer science _ , pages 963978 .springer , 2001 . .: a package for higher - order syntax in isabelle / hol and coq .hybrid.dsi.unimi.it , 2008 , accessd we d feb 24 2010 .bart jacobs and jan rutten . a tutorial on ( co)algebras and ( co)induction . , 62:222259 , june 1997 .surveys and tutorials . per martin - lf .hauptsatz for the intuitionistic theory of iterated inductive definitions . in j.e. fenstad , editor , _ proceedings of the second scandinavian logic symposium _ , volume 63 of _ studies in logic and the foundations of mathematics _ , pages 179216 .north - holland , 1971 .raymond mcdowell and dale miller .cut - elimination for a logic with definitions and induction ., 232:91119 , 2000 .raymond mcdowell and dale miller .reasoning with higher - order abstract syntax in a logical framework ., 3(1):80136 , january 2002 .raymond mcdowell , dale miller , and catuscia palamidessi . 
encoding transition systems in sequent calculus ., 294(3):411437 , 2003 . n. p. mendler .recursive types and type constraints in second - order lambda calculus . in _ lics _ , pages 3036 .ieee computer society , 1987 .marino miculan and kidane yemane . a unifying model of variables and names .in vladimiro sassone , editor , _ fossacs _ , volume 3441 of _ lecture notes in computer science _ , pages 170186 .springer , 2005 .dale miller . a logic programming language with lambda - abstraction , function variables , and simple unification . in peter schroeder - heister ,editor , _ extensions of logic programming : international workshop , tbingen _ , volume 475 of _ lnai _ , pages 253281 .springer - verlag , 1991 .dale miller and alwen tiu . a proof theory for generic judgments ., 6(4):749783 , 2005 .alberto momigliano and simon ambler .multi - level meta - reasoning with higher order abstract syntax . in a.gordon , editor , _ fossacs03 _ ,volume 2620 of _ lncs _ , pages 375392 .springer verlag , 2003 .alberto momigliano and alwen tiu .induction and co - induction in sequent calculus . in stefano berardi ,mario coppo , and ferruccio damiani , editors , _ types _ , volume 3085 of _ lecture notes in computer science _ , pages 293308 .springer , 2003 .aleksandar nanevski , frank pfenning , and brigitte pientka .contextual modal type theory . , 9(3 ) , 2008 .nominal isabelle .isabelle.in.tum.de/nominal , 2008 , accessed sun feb 14 2010 .christine paulin - mohring .inductive definitions in the system coq : rules and properties . in m.bezem and j. f. groote , editors , _ proceedings of the international conference on typed lambda calculi and applications _ , pages 328345 ,utrecht , the netherlands , march 1993 .springer - verlag lncs 664 .lawrence c. paulson .mechanizing coinduction and corecursion in higher - order logic ., 7(2):175204 , march 1997 .frank pfenning .logical frameworks . in alan robinson and andrei voronkov , editors , _ handbook of automated reasoning _ , chapter 17 , pages 10631147 .elsevier science publisher and mit press , 2001 .frank pfenning and christine paulin - mohring .inductively defined types in the calculus of constructions . in m.main , a. melton , m. mislove , and d. schmidt , editors , _ proceedings of the fifth conference on the mathematical foundations of programming semantics , tulane university , new orleans , louisiana _ , pages 209228 .springer - verlag lncs 442 , march 1989 .brigitte pientka .verifying termination and reduction properties about higher - order logic programs .34(2):179207 , 2005 . brigitte pientka . a type - theoretic foundation for programming with higher - order abstract syntax and first - class substitutions . in georgec. necula and philip wadler , editors , _ popl _ , pages 371382 .acm , 2008 . andrew m. pitts .nominal logic , a first order theory of names and binding ., 186(2):165193 , 2003 .andrew m. pitts .alpha - structural recursion and induction . , 53(3):459506 , 2006 .adam poswolsky and carsten schrmann . practical programming with higher - order encodings and dependent types . in sophiadrossopoulou , editor , _ esop _ , volume 4960 of _ lecture notes in computer science _ , pages 93107 .springer , 2008 .luigi santocanale . a calculus of circular proofs and its categorical semantics . in mogensnielsen and uffe engberg , editors , _ fossacs _ , volume 2303 of _ lecture notes in computer science _ , pages 357371 .springer , 2002 .ulrich schpp .modelling generic judgements . , 174(5):1935 , 2007 .peter schroeder - heister. 
rules of definitional reflection . in m.vardi , editor , _ eighth annual symposium on logic in computer science _ , pages 222232 .ieee computer society press , ieee , june 1993 .carsten schrmann . .phd thesis , carnegie - mellon university , 2000 .cmu - cs-00 - 146 .carsten schrmann .the twelf proof assistant . in stefan berghofer ,tobias nipkow , christian urban , and makarius wenzel , editors , _ tphols _ , volume 5674 of _ lecture notes in computer science _ , pages 7983 .springer , 2009 .carsten schrmann and frank pfenning .a coverage checking algorithm for lf . in david a. basin and burkhart wolff , editors , _ tphols_ , volume 2758 of _ lecture notes in computer science _ , pages 120135 .springer , 2003 . c. spenger and m. dams . on the structure of inductive reasoning : circular and tree - shaped proofs in the -calculus . in a.gordon , editor , _ fossacs03 _ ,volume 2620 of _ lncs _ , pages 425440 .springer verlag , 2003 .alwen tiu . .phd thesis , pennsylvania state university , may 2004 .alwen tiu .a logic for reasoning about generic judgments ., 174(5):318 , 2007 .alwen tiu and dale miller .proof search specifications of bisimulation and modal logics for the -calculus ., 11(2):135 , 2010 .[ [ essential - cases-1 ] ] essential cases : + + + + + + + + + + + + + + + + if and are {{\delta_1\longrightarrow b_1 ' \land b_1 '' } } { \deduce{{\delta_1\longrightarrow b_1 ' } } { \pi_1 ' } & \deduce{{\delta_1\longrightarrow b_1 '' } } { \pi_1 '' } } \qquad\qquad \infer[{\land{\cal l}}]{{b_1 ' \land b_1'',b_2,\ldots , b_n,\gamma\longrightarrow c } } { \deduce{{b_1',b_2,\ldots , b_n,\gamma\longrightarrow c } } { \pi ' } } \enspace , \ ] ] then reduces to .the case for the other rule is symmetric .suppose and are {{\delta_1\longrightarrow b_1 ' \lor b_1 '' } } { \deduce{{\delta_1\longrightarrow b_1 ' } } { \pi_1 ' } } \quad \infer[{\lor{\cal l}}]{{b_1 ' \lor b_1'',b_2,\ldots , b_n,\gamma\longrightarrow c } } { \deduce{{b_1',b_2,\ldots , b_n,\gamma\longrightarrow c } } { \pi ' } & \deduce{{b_1'',b_2,\ldots , b_n,\gamma\longrightarrow c } } { \pi''}}\ ] ] then reduces to .the case for the other rule is symmetric .suppose and are {{\delta_1\longrightarrow b_1 ' { \supset}b_1 '' } } { \deduce{{b_1',\delta_1\longrightarrow b_1 '' } } { \pi_1 ' } } \qquad \infer[{{\supset}{\cal l}}]{{b_1 ' { \supset}b_1'',b_2,\ldots , b_n,\gamma\longrightarrow c } } { \deduce{{b_2,\ldots , b_n,\gamma\longrightarrow b_1 ' } } { \pi ' } & \deduce{{b_1'',b_2,\ldots , b_n,\gamma\longrightarrow c } } { \pi '' } } \enspace\ ] ] let .then reduces to { { \delta_1,\ldots,\delta_n,\gamma\longrightarrow c } } { \infer[{\hbox{\sl mc } } ] { { \delta_1,\ldots,\delta_n,\gamma , \delta_2,\ldots,\delta_n,\gamma\longrightarrow c } } { \raisebox{-2.5ex}{\deduce{{\ldots\longrightarrow b_1''}}{\xi_1 } } & \left\{\raisebox{-1.5ex}{\deduce{{\delta_i\longrightarrow b_i}}{\pi_i}}\right\}_{i \in \{2 .. n\ } } & \raisebox{-2.5ex}{\deduce{{b_1'',\{b_i\}_{i \in \{2 .. 
n\}},\gamma\longrightarrow c}}{\pi '' } } } } \enspace\ ] ] if and are {{\delta_1\longrightarrow \forall x.b_1 ' } } { \deduce{{\delta_1\longrightarrow b_1'[y / x ] } } { \pi_1 ' } } \qquad\qquad\qquad \infer[{\forall{\cal l}}]{{\forall x.b_1',b_2,\ldots , b_n,\gamma\longrightarrow c } } { \deduce{{b_1'[t / x],b_2,\ldots , b_n,\gamma\longrightarrow c } } { \pi ' } } \enspace , \ ] ] then reduces to ,\pi_2,\ldots,\pi_n,\pi' ] .suppose and are , respectively , { { \delta_1\longrightarrow p\,\vec t } } { \deduce{{\delta_1\longrightarrow d\,x^p\,\vec t}}{\pi_1 ' } } \qquad \infer[{{\rm i}{\cal l } } ] { { p\,\vec{t } , b_2,\dots , b_n,\gamma\longrightarrow c } } { \deduce{{d\,s\,\vec{y}\longrightarrow s\,\vec{y}}}{\pi_s } & \deduce{{s\,\vec{t } , b_2,\dots , b_n , \gamma\longrightarrow c}}{\pi ' } } \ ] ] where and is a new parameter . then reduces to , \pi_s[\vec t/\vec y ] ) , \pi_2,\ldots,\pi_n,\pi').\ ] ] suppose and are { { \delta_1\longrightarrow p\,\vec{t } } } { \deduce{{\delta_1\longrightarrow s\,\vec{t}}}{\pi_1 ' } & \deduce{{s\,\vec{y}\longrightarrow d\,s\,\vec{y}}}{\pi_s } } \qquad \qquad \infer[{{\rmci}{\cal l } } ] { { p\,\vec{t } , \dots , \gamma\longrightarrow c } } { \deduce{{d\,x^p\,\vec{t},\dots , \gamma\longrightarrow c}}{\pi ' } } \ ] ] where and is a new parameter .then reduces to ) , \pi_2,\ldots,\pi_n,\pi'[(\pi_s , s)/x^p]).\ ] ] suppose and are {{\delta_1\longrightarrow s = t } } { } \qquad\qquad\qquad \infer[{{\rm eq}{\cal l}}]{{s = t , b_2,\ldots , b_n,\gamma\longrightarrow c } } { \left\{\raisebox{-1.5ex } { \deduce{{b_2\rho,\ldots , b_n\rho,\gamma\rho\longrightarrow c\rho } } { \pi^\rho } } \right\}_\rho } \enspace\ ] ] note that in this case , in ranges over all substitution , as any substitution is a unifier of and .let be the derivation .then reduces to { { \delta_1,\delta_2,\ldots,\delta_n,\gamma\longrightarrow c } } { \deduce{{\delta_2,\ldots,\delta_n,\gamma\longrightarrow c}}{\xi_1}}\ ] ] [ [ left - commutative - cases-1 ] ] left - commutative cases : + + + + + + + + + + + + + + + + + + + + + + + in the following cases , we suppose that ends with a left rule , other than , acting on .suppose is as below left , where is any left rule except , , or .let .then reduces to the derivation given below right . 
{ { \delta_1\longrightarrow b_1 } } { \left\{\raisebox{-1.5ex}{\deduce{{\delta_1^i\longrightarrow b_1 } } { \pi_1^i}}\right\}_i } \qquad \infer[{\bullet{\cal l } } ] { { \delta_1,\delta_2,\ldots,\delta_n,\gamma\longrightarrow c } } { \left\ { \raisebox{-1.3ex } { \deduce{{\delta_1^i,\delta_2,\ldots,\delta_n,\gamma\longrightarrow c}}{\xi^i } } \right\}_i } \ ] ]suppose is {{d_1 ' { \supset}d_1'',\delta_1'\longrightarrow b_1 } } { \deduce{{\delta_1'\longrightarrow d_1 ' } } { \pi_1 ' } & \deduce{{d_1'',\delta_1'\longrightarrow b_1 } } { \pi_1 '' } } \enspace\ ] ] let .then reduces to { { d_1 ' { \supset}d_1'',\delta_1',\delta_2,\ldots,\delta_n,\gamma\longrightarrow c } } { \infer=[{\hbox{\sl w}{\cal l } } ] { { \delta_1',\delta_2,\ldots,\delta_n,\gamma\longrightarrow d_1 ' } } { \deduce{{\delta_1'\longrightarrow d_1'}}{\pi_1 ' } } & \deduce{{d_1'',\delta_1',\delta_2,\ldots,\delta_n,\gamma\longrightarrow c}}{\xi_1 } } \enspace\ ] ] suppose is { { p\,\vec{t } , \delta_1'\longrightarrow b_1 } } { \deduce{{d\,s\,\vec{y}\longrightarrow s\,\vec{y}}}{\pi_s } & \deduce{{s\,\vec{t } , \delta_1'\longrightarrow b_1}}{\pi_1 ' } } \ ] ] where .let .then reduces to { { p\,\vec{t } , \delta_1',\dots,\delta_n\longrightarrow c } } { \deduce{{d\,s\,\vec{y}\longrightarrow s\,\vec{y}}}{\pi_s } & \deduce{{s\,\vec{t } , \delta_1',\dots,\delta_n,\gamma\longrightarrow c}}{\xi_1 } } \ ] ] suppose is as below left .let .then reduces to the derivation given below right .{{s = t,\delta_1'\longrightarrow b_1 } } { \left\{\raisebox{-1.5ex } { \deduce{{\delta_1'\rho\longrightarrow b_1\rho } } { \pi_1^{\rho } } } \right\ } } \qquad \infer[{{\rm eq}{\cal l}}. ] { { s = t,\delta_1',\delta_2,\ldots,\delta_n,\gamma\longrightarrow c } } { \left\ { \raisebox{-1.3ex } { \deduce{{\delta_1'\rho,\delta_2\rho,\ldots,\delta_n\rho,\gamma\rho\longrightarrow c\rho}}{\xi^\rho } } \right\ } } \ ] ] suppose is . then reduces to [ [ right - commutative - cases-1 ] ] right - commutative cases : + + + + + + + + + + + + + + + + + + + + + + + + suppose is as given below left , where where is any left rule other than , , or acting on a formula other than .let .then reduces to the derivation given below right .{{b_1,\ldots , b_n,\gamma\longrightarrow c } } { \left\{\raisebox{-1.5ex}{\deduce{{b_1,\ldots , b_n,\gamma^i\longrightarrow c } } { \pi^i}}\right\}_i } \qquad \infer[{\circ{\cal l } } ] { { \delta_1,\ldots,\delta_n,\gamma\longrightarrow c } } { \left\ { \raisebox{-1.3ex } { \deduce{{\delta_1,\ldots,\delta_n,\gamma^i\longrightarrow c}}{\xi^i } } \right\}_i } \ ] ] suppose is {{b_1,\ldots , b_n , d ' { \supset}d'',\gamma'\longrightarrow c } } { \deduce{{b_1,\ldots , b_n,\gamma'\longrightarrow d ' } } { \pi ' } & \deduce{{b_1,\ldots , b_n , d'',\gamma'\longrightarrow c } } { \pi '' } } \enspace\ ] ] let and let . 
then reduces to {{\delta_1,\ldots,\delta_n , d ' { \supset}d'',\gamma'\longrightarrow c } } { \deduce{{\delta_1,\ldots,\delta_n,\gamma'\longrightarrow d ' } } { \xi_1 } & \deduce{{\delta_1,\ldots,\delta_n , d'',\gamma'\longrightarrow c } } { \xi_2 } } \enspace\ ] ] suppose is { { b_1,\dots , b_n , p\,\vec{t},\gamma'\longrightarrow c } } { \deduce{{d\,s\,\vec{y}\longrightarrow s\,\vec{y}}}{\pi_s } & \deduce{{b_1,\dots , b_n , s\,\vec{t } , \gamma'\longrightarrow c}}{\pi ' } } \enspace , \ ] ] where .let .then reduces to { { \delta_1,\dots,\delta_n , p\,\vec{t},\gamma'\longrightarrow c } } { \deduce{{d\,s\,\vec{y}\longrightarrow s\,\vec{y}}}{\pi_s } & \deduce{{\delta_1,\dots,\delta_n , s\,\vec{t } , \gamma'\longrightarrow c}}{\xi_1 } } \enspace\ ] ] suppose is as shown below left .let .then reduces to the derivation below right .{{b_1,\ldots , b_n , s = t,\gamma'\longrightarrow c } } { \left\{\raisebox{-1.5ex } { \deduce{{b_1\rho,\ldots , b_n\rho,\gamma'\rho\longrightarrow c\rho } } { \pi^{\rho}}}\right\ } } \qquad \infer[{{\rm eq}{\cal l } } ] { { \delta_1,\ldots,\delta_n , s = t,\gamma'\longrightarrow c } } { \left\ { \raisebox{-1.3ex } { \deduce{{\delta_1\rho,\ldots,\delta_n\rho,\gamma'\rho\longrightarrow c\rho}}{\xi^\rho } } \right\ } } \ ] ] if then reduces to . if is as below left , where where is any right rule except , then reduces to the derivation below right , where .{{b_1,\ldots , b_n,\gamma\longrightarrow c } } { \left\{\raisebox{-1.5ex } { \deduce{{b_1,\ldots , b_n,\gamma^i\longrightarrow c^i } } { \pi^i}}\right\}_i } \qquad \infer[{\circ{\cal r } } ] { { \delta_1,\ldots,\delta_n,\gamma\longrightarrow c } } { \left\ { \raisebox{-1.3ex } { \deduce{{\delta_1,\ldots,\delta_n,\gamma^i\longrightarrow c^i}}{\xi^i } } \right\}_i } \ ] ] suppose is { { b_1,\dots , b_n,\gamma\longrightarrow p\,\vec{t } } } { \deduce{{b_1,\dots , b_n,\gamma\longrightarrow s\,\vec{t}}}{\pi ' } & \deduce{{s\,\vec{y}\longrightarrow d\,s\,\vec{y}}}{\pi_s } } \enspace , \ ] ] where .let .then reduces to { { \delta_1,\dots,\delta_n,\gamma\longrightarrow p\,\vec{t } } } { \deduce{{\delta_1,\dots,\delta_n,\gamma\longrightarrow s\,\vec{t}}}{\xi_1 } & \deduce{{s\,\vec{y}\longrightarrow d\,s\,\vec{y}}}{\pi_s } } \enspace\ ] ] [ [ multicut - cases ] ] multicut cases : + + + + + + + + + + + + + + + if ends with a left rule , other than and , acting on and ends with a multicut and reduces to , then reduces to .suppose is {{b_1,\ldots , b_n,\gamma^1,\ldots,\gamma^m,\gamma'\longrightarrow c } } { \left\{\raisebox{-1.5ex}{\deduce{{\{b_i\}_{i \in i^j},\gamma^j\longrightarrow d^j } } { \pi^j}}\right\}_{j \in \{1 .. m\ } } & \raisebox{-2.5ex}{\deduce{{\{d^j\}_{j \in \{1 .. m\}},\{b_i\}_{i \in i'},\gamma'\longrightarrow c } } { \pi ' } } } \enspace , \ ] ] where partition the formulas among the premise derivations , , , .for let be {{\{\delta_i\}_{i \in i^j},\gamma^j\longrightarrow d^j } } { \left\{\raisebox{-1.5ex}{\deduce{{\delta_i\longrightarrow b_i } } { \pi_i}}\right\}_{i \in i^j } & \raisebox{-2.5ex}{\deduce{{\{b_i\}_{i \in i^j},\gamma^j\longrightarrow d^j } } { \pi^j } } } \enspace\ ] ] then reduces to {{\delta_1,\ldots,\delta_n,\gamma^1,\ldots\gamma^m,\gamma'\longrightarrow c } } { \left\{\raisebox{-1.5ex}{\deduce{{\ldots\longrightarrow d^j } } { \xi^j}}\right\}_{j \in \{1 .. 
m\ } } & \left\{\raisebox{-1.5ex}{\deduce{{\delta_i\longrightarrow b_i } } { \pi_i}}\right\}_{i \in i ' } & \raisebox{-2.5ex}{\deduce{{\ldots\longrightarrow c } } { \pi ' } } } \enspace\ ] ] [ [ structural - cases ] ] structural cases : + + + + + + + + + + + + + + + + + if is as shown below left , then reduces to the derivation shown below right , where .{{b_1,b_2,\ldots , b_n,\gamma\longrightarrow c } } { \deduce{{b_1,b_1,b_2,\ldots , b_n,\gamma\longrightarrow c } } { \pi ' } } \qquad \infer=[{\hbox{\slc}{\cal l } } ] { { \delta_1,\delta_2,\ldots,\delta_n,\gamma\longrightarrow c } } { \deduce{{\delta_1,\delta_1,\delta_2,\ldots,\delta_n,\delta_n,\gamma\longrightarrow c}}{\xi_1 } } \ ] ] if is as shown below left , then reduces to the derivation shown below right , where .{{b_1,b_2,\ldots , b_n,\gamma\longrightarrow c } } { \deduce{{b_2,\ldots , b_n,\gamma\longrightarrow c } } { \pi ' } } \qquad \infer[{\hbox{\slw}{\cal l } } ] { { \delta_1,\delta_2,\ldots,\delta_n,\gamma\longrightarrow c } } { \deduce{{\delta_2,\ldots,\delta_n,\gamma\longrightarrow c}}{\xi_1 } } \ ] ] [ [ axiom - cases ] ] axiom cases : + + + + + + + + + + + + suppose ends with a left - rule acting on and ends with the rule. then it must be the case that and reduces to .if ends with the rule , then , is the empty multiset , and must be a cut formula , i.e. , .therefore reduces to .lm : reduct_subst let be a derivation ending with a and let be a substitution . if reduces to then there exists a derivation such that and reduces to .observe that the redexes of a derivation are not affected by eigenvariable substitution , since the cut reduction rules are determined by the last rules of the premise derivations , which are not changed by substitution .therefore , any cut reduction rule that is applied to to get can also be applied to .suppose that is the reduct of obtained this way .in all cases , except for the cases where the reduction rule applied is either , , or those involving , it is a matter of routine to check that . for the reduction rules and , we need lemma [ lm : param subst ] which shows that eigenvariable substitution commutes with parameter substitution . we show here the case involving .the only interesting case is the reduction . 
for simplicity, we show the case where ends with with three premises ; it is straightforward to adapt the following analysis to the more general case .so suppose is the derivation : { { \delta_1 , \delta_2 , \gamma\longrightarrow c } } { \infer[{{\rm eq}{\cal r } } ] { { \delta_1\longrightarrow t = t } } { } & \deduce{{\delta_2\longrightarrow b}}{\pi_2 } & \infer[{{\rm eq}{\cal l } } ] { { t = t , b , \gamma\longrightarrow c } } { \left\ { \raisebox{-1.5ex } { \deduce{{b\rho , \gamma\rho\longrightarrow c\rho}}{\pi^\rho } } \right\}_\rho } } \ ] ] according to definition [ def : subst ] , the derivation is { { \delta_1\theta,\delta_2\theta , \gamma\theta\longrightarrow c\theta } } { \infer[{{\rm eq}{\cal r } } ] { { \delta_1\theta\longrightarrow t\theta = t\theta } } { } & \deduce{{\delta_2\theta\longrightarrow b\theta}}{\pi_2\theta } & \infer[{{\rm eq}{\cal l } } ] { { t\theta = t\theta , b\theta , \gamma\theta\longrightarrow c\theta } } { \left\ { \raisebox{-1.5ex } { \deduce{{b\theta\rho ' , \gamma\theta\rho'\longrightarrow c\theta\rho'}}{\pi^{(\theta\circ \rho ' ) } } } \right\}_{\rho ' } } } \ ] ] let .the reduct of in this case ( modulo the different order in which the weakening steps are applied ) is : { { \delta_1\theta , \delta_2\theta , \gamma\theta\longrightarrow c\theta } } { \deduce{{\delta_2\theta,\gamma\theta\longrightarrow c\theta}}{\psi } } \ ] ] let us call this derivation .let .the above reduct can be matched by the following reduct of ( using the same order of applications of the weakening steps ) : { { \delta_1 , \delta_2 , \gamma\longrightarrow c } } { \deduce{{\delta_2,\gamma\longrightarrow c}}{\psi ' } } \ ] ] let us call this derivation . by definition [ def : subst ] , we have , and obviously , also . by case analysis on . if for some and then , where , hence it is normalizable by definition [ def : candidates ] ( specifically , condition * cr1 * ) .otherwise , is normalizable by definition [ def : param red ] .suppose , for some and some , and suppose . then by definition [ def : param red ] . by definition [ def : candidates ] ( * cr0 * ) we also have .otherwise , suppose . then by definition [ def : param red ] . by lemma [ lm : subst - norm ], we have , therefore ] , and therefore is also in ] , we have that ( * p3 * ) \rightarrow \red_{d\theta}[\omega])\ ] ] for every .we need to show that \rightarrow \red_{d\rho\delta}[\omega]) ], we can use the same reducibility candidate which is used to establish ] iff ] .otherwise , suppose , and ] . in most cases , this follows straightforwardly from the induction hypothesis .we show the interesting cases here : * suppose ends with , i.e. , for some and and is of the form : { { \gamma\longrightarrow b\omega { \supset}d\omega } } { \deduce{{\gamma , b\omega\longrightarrow d\omega}}{\pi ' } } \ ] ] note that since , we have that and . 
since ] and = \red_{d\rho}[\omega'] ] .* suppose ends with : { { \gamma\longrightarrow q\,\vec t } } { \deduce{{\gamma\longrightarrow d\,y^q\,\vec t}}{\pi'}}\ ] ] where and is a new parameter .since we identify derivations which differ only in the choice of internal variables and parameters , we can assume without loss of generality that .note that since the body of a definition can not contain occurrences of parameters , we also have .suppose is a reducibility candidate of type , for some closed term of the same syntactic type as , and suppose is a normalizable derivation of such that \in ( \red_{(d\,y^q\,\vec u)}[\omega ' , ( \sscr,\pi_i , i)/y^q ] \rightarrow \sscr \,\vec u)\ ] ] for every of the appropriate types . to show that ] ( from the assumption ) , this means that , \pi_i[\vec t/ \vec y ] ) \in \sscr\,\vec t\ ] ] and therefore is indeed in ] , by definition [ def : param red ] ( * p4 * ) , there exist a parameter such that and a reducibility candidate such that and \in ( \sscr\,\vec u \rightarrow \red_{b\,y^q\,\vec u}[\omega , ( \sscr,\pi_i , i)/y^q])\ ] ] for every . to show ] implies \vec u ] is a reducibility candidate of type .suppose for some and suppose .then in this case , we have , so is a reducibility candidate of type by assumption . if but then in this case , and by lemma [ lm : norm red ] , is also a reducibility candidate .otherwise , for any parameter .we need to show that satisfies * cr0 * - * cr5*. * cr0 * follows from lemma [ lm : red - subst ] .* cr1 * follows from lemma [ lm : red - norm ] , and the rest follow from definition [ def : param red ] .lm : red param subst let be a candidate substitution and let be a parameter such that .let be a closed term of the same type as and let \ \hbox{for some } \}.\ ] ] suppose ] iff ] , and by lemma [ lm : red vacuous ] we have }[\omega ] = \red_c[\omega]=\red_c[\omega , ( \rscr , \psi , s\omega)/x^p].\ ] ] so assume that is not vacuous in .let ] .then is normalizable .we show , by induction on , that ] , we have that \rho}[\omega ] \rightarrow \red_{d[s / x^p]\rho}[\omega])\ ] ] for every . by the outer induction hypothesis ( on the size of ), we have \rightarrow \red_{d\rho}[\omega'])\ ] ] hence ] implies }[\omega]$ ] , can be proved analogously .
|
proof search has been used to specify a wide range of computation systems . in order to build a framework for reasoning about such specifications , we make use of a sequent calculus involving induction and co - induction . these proof principles are based on a proof theoretic ( rather than set - theoretic ) notion of _ definition _ . definitions are akin to logic programs , where the left and right rules for defined atoms allow one to view theories as `` closed '' or defining fixed points . the use of definitions and free equality makes it possible to reason intensionally about syntax . we add in a consistent way rules for pre and post fixed points , thus allowing the user to reason inductively and co - inductively about properties of computational systems making full use of higher - order abstract syntax . consistency is guaranteed via cut - elimination , where we give the first , to our knowledge , cut - elimination procedure in the presence of general inductive and co - inductive definitions . keywords : logical frameworks , (co)-induction , higher - order abstract syntax , cut - elimination , parametric reducibility .
|
the quantum adiabatic algorithm was introduced as a quantum algorithm for finding the minimum of a classical cost function , where .this cost function is used to define a quantum hamiltonian diagonal in the basis : the goal is now to find the ground state of . to thisend a `` beginning '' hamiltonian is introduced with a known and easy to construct ground state .the quantum computer is a system governed by the time dependent hamiltonian where controls the rate of change of .note that and .the state of the system obeys the schrdinger equation , where we choose and run the algorithm for time . by the adiabatic theorem , if is large enough then will have a large component in the ground state subspace of .( note we are not bothering to state the necessary condition on the lack of degeneracy of the spectrum of for , since it will not play a role in the results we establish in this paper . ) a measurement of can then be used to find the minimum of .the algorithm is useful if the required run time is not too large as a function of .there is hope that there may be combinatorial search problems , defined on bits so that , where for certain `` interesting '' subsets of the instances the run time grows subexponentially in . a positive result of this kindwould greatly expand the known power of quantum computers . at the same time it is worthwhile to understand the circumstances under which the algorithm is doomed to fail .in this paper we prove some general results which show that with certain choices of or the algorithm will not succeed if is , that is as , so that improvement beyond grover speedup is impossible .we view these failures as due to poor choices for and , which teach us what not to do when looking for good algorithms .we guarantee failure by removing any structure which might exist in from either or . by structurewe mean that is written as a bit string and both and are sums of terms involving only a few of the corresponding qubits . in sectionii we show that regardless of the form of if is a one dimensional projector onto the uniform superposition of all the basis states , then the quantum adiabatic algorithm fails . hereall the states are treated identically by so any structure contained in is lost in . in section iiiwe consider a scrambled that we get by replacing the cost function by where is a permutation of to . herethe values of and are the same but the relationship between input and output is scrambled by the permutation .this effectively destroys any structure in and typically results in algorithmic failure .the quantum adiabatic algorithm is a special case of hamiltonian based continuous time quantum algorithms , where the quantum state obeys ( [ schrodinger ] ) and the algorithm consists of specifying , the initial state , a run time and the operators to be measured at the end of the run . in the hamiltonian language , the grover problem can be recast as the problem of finding the ground state of where lies between and .the algorithm designer can apply , but in this oracular setting , is not known . 
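as a toy illustration of the interpolation h(t) = (1 - t/T) h_b + (t/T) h_p described above, the following sketch (our own code, not from the original paper; the random cost function, the energy scale and the step counts are arbitrary choices) evolves the schrodinger equation for a few bits, using the projector-onto-the-uniform-superposition beginning hamiltonian considered later in the paper, and reports the probability of ending in the ground state of h_p :

....
# illustrative sketch only: adiabatic evolution for a random cost function on n_bits.
import numpy as np
from scipy.linalg import expm

def adiabatic_success(n_bits, T, n_steps=2000, seed=0):
    rng = np.random.default_rng(seed)
    N = 2 ** n_bits
    h = rng.integers(1, 6, size=N).astype(float)   # random cost h(z) >= 1 ...
    h[rng.integers(N)] = 0.0                       # ... with one marked minimum
    H_P = np.diag(h)
    s = np.full(N, 1.0 / np.sqrt(N))               # uniform superposition |s>
    H_B = 1.0 * (np.eye(N) - np.outer(s, s))       # projector beginning hamiltonian, E = 1
    psi = s.astype(complex)                        # start in the ground state of H_B
    dt = T / n_steps
    for k in range(n_steps):
        t = (k + 0.5) * dt
        Ht = (1 - t / T) * H_B + (t / T) * H_P
        psi = expm(-1j * Ht * dt) @ psi            # crude time-ordered product of short steps
    return float(np.sum(np.abs(psi[h == h.min()]) ** 2))

if __name__ == "__main__":
    for T in (1.0, 10.0, 100.0):
        print(T, adiabatic_success(4, T))
....

increasing the run time T drives the success probability toward one for this tiny instance; the interesting question addressed in the paper is how fast T must grow with the problem size.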
in reference the following result was proved .let where is any time dependent `` driver '' hamiltonian independent of .assume also that the initial state is independent of .for each we want the algorithm to be successful , that is .it then follows that the proof of this result is a continuous - time version of the bbbv oracular proof .our proof techniques in this paper are similar to the methods used to prove the result just stated .in this section we consider a completely general cost function with . the goal is to use the quantum adiabatic algorithm to find the ground state of given by ( [ problemham ] ) with given by ( [ adiabaticham ] ) .let be the uniform superposition over all possible values .if we pick and , then the adiabatic algorithm fails in the following sense : let be diagonal in the basis with a ground state subspace of dimension .let let be the projector onto the ground state subspace of and let be the success probability , that is , . then keeping fixed , we introduce additional beginning hamiltonians as follows . for let be a unitary operator diagonal in the basis with andlet so that the form an orthonormal basis .note also that we now define with corresponding evolution operator . note that above is with the corresponding evolution operator . for each evolve with from the ground state of , which is .note that and .let .for each the success probability is , which is equal to since commutes with .the key point is that if we run the hamiltonian evolution with backwards in time , we would then be finding , that is , solving the grover problem .however , this should not be possible unless the run time is of order .let be the evolution operator corresponding to an -independent reference hamiltonian let be the normalized component of in the ground state subspace of .we consider the difference in backward evolution from with hamiltonians and , and sum on , clearly , and now where is orthogonal to .since we have where for each , and are normalized states with orthogonal to . since commutes with , is an element of the -dimensional ground state subspace of .we have \\ & \geq & 2n - 2\sqrt{b } \sum_x { \big|\!{\left\langle x | i_x \right\rangle}\!\big| } - 2n\sqrt{1-b}.\end{aligned}\ ] ] choosing a basis for the dimensional ground state subspace of and writing gives thus we will use the schrdinger equation to find the time derivative of : \\ & = & -i \sum_x { \langle g_x |}u_x(t , t)[h_x(t)-h_r(t)]u_r^{\dagger}(t , t ) { | g_x \rangle } + c.c .\\ & = & -2\,\textrm{im } \sum_x ( 1-t / t)e { \langle g_x |}u_x(t , t ) { | x \rangle}{\langle x | } u_r^{\dagger}(t , t ) { | g_x \rangle}.\end{aligned}\ ] ] now using the same technique as in ( [ boundix ] ) , we obtain therefore now and so combining this with ( [ s0eqn ] ) gives which implies what we wanted to prove : how do we interpret theorem 1 ? the goal is to find the minimum of the cost function using the quantum adiabatic algorithm .it is natural to pick for a hamiltonian whose ground state is , the uniform superposition of all states .however if we pick to be the one dimensional projector the algorithm will not find the ground state if goes to as goes to infinity .the problem is that has no structure and makes no reference to .our hope is that the algorithm might be useful for interesting computational problems if has structure that reflects the form of .note that theorem 1 explains the algorithmic failure discovered by nidari and horvat for a particular set of . 
fora simple but convincing example of the importance of the choice of , suppose we take a decoupled bit problem which consists of clauses each acting on one bit , say for each bit let us pick a beginning hamiltonian reflecting the bit structure of the problem , the ground state of is , the quantum adiabatic algorithm acts on each bit independently , producing a success probability of where as is the transition probability between the ground state and the excited state of a single qubit .as long as we have a constant probability of success .this can be achieved for of order , because for a two level system with a nonzero gap , the probability of a transition is .( for details , see appendix a. ) however , from theorem 1 we see that a poor choice of would make the quantum adiabatic algorithm fail on this simple decoupled bit problem by destroying the bit structure .next , suppose the satisfiability problem we are trying to solve has clauses involving say 3 bits .if clause involves bits , and we may define the clause cost function the total cost function is then to get to reflect the bit and clause structure we may pick \end{aligned}\ ] ] with in this case the ground state of is again . with this setup , theorem 1does not apply .versus bit number . at each bit numberthere are 50 random instances of exact cover with a single satisfying assignment .we choose the required run time to be the value of for which quantum adiabatic algorithm has success probability between 0.2 and 0.21 . for the projector beginning hamiltonian we use with .the plot is log - linear .the error bars show the 95% confidence interval for the true medians.,width=432 ] we did a numerical study of a particular satisfiability problem , exact cover . for this problem if clause involves bits , and , the cost function is some data is presented in fig . 1 . herewe see that with a structured beginning hamiltonian the required run times are substantially lower than with the projector .in the previous section we showed that removing all structure from dooms the quantum adiabatic algorthm to failure . in this sectionwe remove structure from the problem to be solved ( ) and show that this leads to algorithmic failure .let be a cost function whose minumum we seek .let be a permutation of and let }(z)=h\left(\pi^{-1}(z)\right).\ ] ] we will show that no continuous time quantum algorithm ( of a very general form ) can find the minimum of } ] with a permutation of to ,will make it impossible for the algorithm to find the minimum of }$ ] in time less than order for a typical permutation . for example suppose we have a cost function and have chosen so that the quantum algorithm finds the minimum in time of order .still scrambling the cost function results in algorithmic failure .these results do not imply anything about the more interesting case where and are structured , i.e. , sums of terms each operating only on several qubits .the authors gratefully acknowledge support from the national security agency ( nsa ) and advanced research and development activity ( arda ) under army research office ( aro ) contract w911nf-04 - 1 - 0216 .let us consider a two level system with hamiltonian which varies smoothly with . here and are orthonormal for all .the schrdinger equation reads the two energy levels in the system are separated by a gap which we assume is always larger than .let us introduce ( with the dimension of energy ) as and let we pick the phases of and such that . 
plugging ( a1 ) into the schrdinger equation gives or equivalently , where now let .we have and we want the transition amplitude at which is ^{\bar{\theta}}_0 + \frac{1}{it } \int_0^{\bar{\theta } } e^{it\theta } \left ( { \frac{\textrm{d}c_0}{\textrm{d}\theta}}f+ c_0 { \frac{\textrm{d}f}{\textrm{d}\theta}}\right ) \textrm{d}\theta \\ & = & \frac{1}{t}\left ( \left[i c_0 f e^{it\theta }\right]^{\bar{\theta}}_0 -i \int_0^{\bar{\theta } } \left ( c_1 f f^*+ e^{it\theta } c_0 { \frac{\textrm{d}f}{\textrm{d}\theta } } \right ) \textrm{d}\theta \right ) .\end{aligned}\ ] ] now and . as long as the gap does not vanish and bounded so we have that .the probability of transition to the excited state for a two - level system with a nonzero gap is thus 99 e. farhi , j. goldstone , s. gutmann , m. sipser , _ quantum computation by adiabatic evolution _ , http://arxiv.org/abs/quant-ph/0001106[quant-ph/0001106 ] ( 2000 ) e. farhi , s. gutmann , _ analog analogue of a digital quantum computation _ ,a * 57 * , 2403 ( 1998 ) , http://arxiv.org/abs/quant-ph/9612026[quant-ph/9612026 ] c. h. bennett , e. bernstein , g. brassard , and u. v. vazirani , _ strengths and weaknesses of quantum computing _ , siam journal on computing 26:1510 - 1523 ( 1997 ) , http://arxiv.org/abs/quant-ph/9701001[quant-ph/9701001 ] m. nidari , m. horvat , _ exponential complexity of an adiabatic algorithm for an np - complete problem _ , http://arxiv.org/abs/quant-ph/0509162[quant-ph/0509162 ] ( 2005 )
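appendix a argues that , for a smoothly varying two - level hamiltonian with a nonzero gap , the probability of ending in the excited state falls off like 1/T^2 . a small numerical check along these lines (our own sketch; the one-qubit interpolation, the step density and the run times are arbitrary choices, not taken from the paper) could look like:

....
# illustrative check of the 1/T^2 transition probability for a gapped two-level system.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0., 1.], [1., 0.]], dtype=complex)
sz = np.array([[1., 0.], [0., -1.]], dtype=complex)

def H(s):
    # assumed smooth interpolation with minimum gap 1/sqrt(2) > 0
    return 0.5 * (1 - s) * (np.eye(2) - sx) + 0.5 * s * (np.eye(2) - sz)

def excited_probability(T, steps_per_unit_time=200):
    n_steps = int(steps_per_unit_time * T)
    psi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)   # ground state of H(0)
    dt = T / n_steps
    for k in range(n_steps):
        s = (k + 0.5) * dt / T
        psi = expm(-1j * H(s) * dt) @ psi
    w, v = np.linalg.eigh(H(1.0))
    excited = v[:, np.argmax(w)]                              # excited state of H(1)
    return abs(np.vdot(excited, psi)) ** 2

if __name__ == "__main__":
    for T in (10, 20, 40, 80):
        p = excited_probability(T)
        print(f"T={T:3d}  P_excited={p:.3e}  T^2 * P={T * T * p:.3f}")
....

the product T^2 * P stays bounded (up to oscillations) as T grows, consistent with the appendix a estimate.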
|
the quantum adiabatic algorithm is a hamiltonian based quantum algorithm designed to find the minimum of a classical cost function whose domain has size $N$ . we show that poor choices for the hamiltonian can guarantee that the algorithm will not find the minimum if the run time grows more slowly than $\sqrt{N}$ . these poor choices are nonlocal and wash out any structure in the cost function to be minimized and the best that can be hoped for is grover speedup . these failures tell us what not to do when designing quantum adiabatic algorithms .
|
the function of many physiological systems depends on branched structures that exist both at the tissue ( e.g. nervous plexi , lungs , and the vascular and lymphatic systems ) and the cellular level ( e.g. neurons ) .of particular interest , local and global propagation of electrical signals within the nervous system depends on the integration , processing , and further generation of electrical pulses that travel through neurons . in turn , the tree - like morphology of neurons facilitates simultaneous signaling to cells located in different places and over long distances .neuronal morphology is typically modeled by assuming that the shape of small neuronal segments , or _neurites _ , is approximated by cylinders of different diameters . as a consequence ,cable theory , rall60 , , , , plays a central role in the theoretical and experimental study of electrical conduction in neurons ; see , for example , bluman : tuckwell87 , , , cox : raol04 , , , , , , , and references therein . notably , one of the most interesting results from recent theoretical work is that geometrical properties of neuronal membranes may exert powerful effects on signal propagation even in the presence of voltage - dependent channels vett : roth : haus01 .theoretical research involving realistic neuronal morphologies is typically done by numerically solving systems of cable equations defined on cylinders with different radii , and assuming that voltage and current are continuous functions of space and time . to the best of our knowledge ,graphical methods seem have not been widely applied yet in the mathematical modeling of neurons .graphical methods are very useful and popular in different branches of modern physics .it is worth noting , for example , feynman diagrams in quantum mechanical or statistical field theory , akh : ber , , , , fynmanqed , , , , vilenkin - kuznetsov - smorodinskii approach to solutions of -dimensional laplace equation , , , smor76 , applications in solid - state theory , etc .a goal of this paper is to make a modest step in this direction ( see also , coombesetal07 and references therein ) .we use explicit solutions from recent papers on variable quadratic hamiltonians in nonrelativistic quantum mechanics , , cor - sot : sua : sus , , , lan : sus , , , to describe steady state and transient solutions to linear cable equations modeling neurites with non - necessarily constant radius .at a closer view , neurites can be regarded as volumes of revolution , defined by rotating a smooth function representing the local radius of the neurite where represents distance along the neurite . as a result ,the cable theory implies the following set of equations , : , represents the voltage difference across the membrane ( interior minus exterior ) as a deviation from its resting value , is the membrane current density , is the total axial current , is the membrane resistance , is the intercellular resistivity and is membrane capacitance ( more details can be found in , and ) . differentiating equation ( [ cable3 ] ) with respect to and substituting the result into ( [ cable1 ] ) with the help of ( [ cable2 ] ) one gets is the cable equation with tapering for a single branch of dendritic tree ( see , , and surkisetal96 for more details ). we shall be particularly interested in solutions of the cable equation ( cableequation ) corresponding to termination with a sealed end , namely , when at the end point the membrane cylinder is sealed with a disk composed of the same membrane . 
in this case , the corresponding boundary condition can be derived by setting then , in view of ( [ cable2])([cable3 ] ) , one gets rall59: in a similar fashion , at the somatic end one gets is the somatic resistance and is the somatic capacitance .we shall use these conditions for the steady - state and transient solutions of the cable equation .( later we may impose similar boundary conditions at the points of branching . ) in this letter , we shall first concentrate on steady - state solutions of the cable equation , when then boundary value problem can be conveniently solved ( by a direct substitution for each branch of the dendritic tree ) in terms of standard solutions of this second order ordinary differential equation as follows and are two linearly independent solutions of the stationary cable equation ( [ sturm ] ) that satisfy special boundary conditions and then the corresponding current density / voltage ratio function given by in term of the standard solutions and throughout this letter , we shall refer to a case , when as the case of weak tapering .an opposite situation , when at certain point and an inverse of the current may occur , shall be called a case of the strong tapering .( a case of strong tapering has been numerically discovered in . )in this letter , we consider a general model of a dendrite as a ( binary ) directed tree ( from the soma to its terminal ends ) consisting of axially symmetric branches with the following types of tapering . here , , and the cable equation takes the simplest form a familiar solution , , rall62: to boundary conditions(see , , , , and references therein for more details . ) here , with the steady - state solution of the corresponding cable equation to boundary conditions ( [ cableconeboundary ] ) are given by baer : herr : sus : vega , the standard solutions that satisfy and can be constructed as follows \label{sbessel}\]]and \sqrt{\frac{r_{0}}{r_{1}}}i_{1}\left ( \mu \sqrt{r}\right ) \label{cbessel } \\ & & + \left [ \mu \sqrt{r_{0}}i_{0}\left ( \mu \sqrt{r_{0}}\right ) -2i_{1}\left ( \mu \sqrt{r_{0}}\right ) \right ] \sqrt{\frac{r_{0}}{r_{1}}}k_{1}\left ( \mu \sqrt{r}\right ) \notag\end{aligned}\]]in terms of modified bessel functions and of orders ( different aspects of the advanced theory of bessel functions can be found in , , , , , , , and ) .if on an interval the cable equation ( [ cableequation ] ) takes the form .\label{hypercable}\]]this special case of tapering is integrable in terms of elementary functions ( see also and for a similar problem related to a model of the dumped quantum oscillator ) . for the steady - state solutions one obtains the following equation new parameters corresponding two linearly independent solutions , namely, be verified by a direct substitution for an arbitrary parameter the required steady - state solution of the boundary value problem given by(see for more details . )one can use numerical methods and/or wkb - type approximation in order to obtain standard solutions .for example , , \label{wkb}\]]where(see and for further details . )graphical rules for steady - state voltages and currents in a model of dendritic tree with tapering are as follows . 
for a single branch with tapering voltage and current density / voltage ratioare given by ( see figure 1 ) .the internal potential and current are assumed to be continuous at all dendritic branch points and at the soma - dendritic junction .we consider a general case when each branch has its own tapering , say and ( see figure 2 ) .then the total ratio constant at the branching point is given by the following expression the ratio constant is the coefficient found by the previous formula ( [ branchingb ] ) . in a similar fashion , at the branching point one gets(see figure 3 . ) combination of the above graphical rules results in a simple algorithm of evaluation of voltages and currents in the model of dendritic tree under consideration as follows .evaluate constants for all branching points of the tree : ( a ) first apply formula ( [ branchingb ] ) for all open notes ; ( b ) remove the above nodes from the tree and keep repeating the previous step until you reach the root of tree ( soma ) . in order to find voltage at a point of the dendritic tree ,follow the path and multiply the initial voltage by consecutive corresponding factors from formula ( [ rulev ] ) changing at each intersection of the tree. the ratio of voltages and at two terminal points , can be determine in a graphic form by the previous rule applied to the shortest path formulas ( [ branchingb])([branchingb0 ] ) define ratio coefficients for all vertexes for the standard node on figure 2 . for the corresponding voltages, one can write v\left ( x_{12}\right ) \label{ex1 } \\ & = & \left [ c\left ( x_{12}-x_{0}\right ) + b\left ( x_{12}\right ) s\left ( x_{12}-x_{0}\right ) \right ] v\left ( x_{12}\right ) \notag \\ & & \times \left [ c\left ( x_{1}-x_{12}\right ) + b\left ( x_{1}\right ) s\left ( x_{1}-x_{12}\right ) \right ] v\left ( x_{1}\right ) \notag \\ & = & \left [ c\left ( x_{12}-x_{0}\right ) + b\left ( x_{12}\right ) s\left ( x_{12}-x_{0}\right ) \right ] v\left ( x_{12}\right ) \notag \\ & & \times \left [ c\left ( x_{2}-x_{12}\right ) + b\left ( x_{2}\right ) s\left ( x_{2}-x_{12}\right ) \right ] v\left ( x_{2}\right ) \notag\end{aligned}\]]and examples are left to the reader .let us consider the cable equation ( [ cableequation ] ) for a single branch with an arbitrary smooth tapering on the interval the separation of variables in results in is a separation constant .the boundary condition at the sealed end ( [ sealedendboundary ] ) takes the form general solution of this problem can be conveniently written ( for each branch of the dendritic tree ) as follows , \label{sepgensol}\]]where is a constant and and are two linearly independent standard solutions of equation ( [ sepcableeq ] ) that satisfy special boundary conditions and then the boundary condition ( [ somaendboundary ] ) at the somatic end , u\right ) \right\vert _ { x = x_{0}}=0,\qquad \tau _ { s}=c_{s}r_{s } , \label{sepsomaboundary}\]]results in a transcendental equation \frac{r_{i}}{r_{s}}=\frac{c^{\prime } \left ( x_{0},\omega \right ) + \frac{1}{2}\omega ^{2}s^{\prime } \left ( x_{0},\omega \right ) } { c\left ( x_{0},\omega \right ) + \frac{1}{2}\omega ^{2}s\left ( x_{0},\omega \right ) } \label{eigenvaluesomega}\]]for the eigenvalues ( there are infinitely many discrete eigenvalues and , ince ? . 
) the corresponding eigenfunctions are orthogonal respect to an inner product that is given in terms of the lebesgue stieltjes integral ( see also appendix and kellogg21 , and ): a formal solution of the corresponding initial value problem takes the form u_{n}\left ( x\right ) , \notag\end{aligned}\]]where is the steady - state solution , are roots of the transcendental equation ( eigenvaluesomega ) and the corresponding eigenfunctions are given by can be obtained by methods of refs . and with the help of the modified orthogonality relation ( [ orthogonal ] ) as follows of ( [ an ] ) into ( [ gensolivp ] ) and changing the order of summation and integration result in \frac{u_{n}\left ( x\right ) u_{n}\left ( y\right ) } { \left\vert u_{n}\right\vert ^{2 } } \label{greanfunc}\]]is an analog of the heat kernel . infinite speed of propagation .method of images . in the case of a piecewise tapering in a similar fashion, one can write , & x_{0}\leq x\leq x_{01 } \\a_{1}\left [ c_{1}\left ( x,\omega \right ) + \frac{1}{2}\omega ^{2}s_{1}\left ( x,\omega \right ) \right ] , & x_{01}\leq x\leq x_{1}\end{array}\right . \label{u2peace}\]]provided and and and continuity and smoothness of the solution at the point namely, in the following equation for the eigenvalues can obtain a formal solution in the form ( [ integralsol])(greanfunc ) once again .further details are left to the reader .in this letter , we propose a simple graphical approach to steady state solutions of the cable equation for a general model of dendritic tree with tapering . a simple case of transient solutions is also briefly discussed . * acknowledgments .* we thank carlos castillo - chvez , steve baer , hank kuiper and hal smith for support , valuable discussions and encouragement .this paper is written as a part of the summer 2010 program on analysis of mathematical and theoretical biology institute ( mtbi ) and mathematical , computational and modeling sciences center ( mcmsc ) at arizona state university .the mtbi / sums summer research program is supported by the national science foundation ( dms-0502349 ) , the national security agency ( dod - h982300710096 ) , the sloan foundation and arizona state university .we consider the sturm liouville type problem, the second order differential operator -q\left ( x\right ) u , \label{a2}\]]where and are continuous real - valued functions on an interval , ] exists and is continuous in , ] the junction of three branches ( see figure 2 ) can be considered in a similar fashion .suppose that -q_{i}\left ( x\right ) u\label{a7}\]]with for three corresponding branches , respectively , and boundary conditions are given by the terminal ends . introducing integration over the whole tree by additivity, applying the green formula ( [ a3 ] ) for each branch , one gets shall assume that the following continuity conditions: at the branching point in view of of the boundary conditions ( [ a8 ] ) , the modified orthogonality relation takes the form case of junction of -branches ( see figure 3)is similar . in general , for an arbitrary tree , one may conclude that only the terminal ends shall add additional mass points to the measure , if the corresponding boundary and continuity conditions hold .further details are left to the reader .r. cordero - soto , r. m. lopez , e. suazo and s. k. suslov , _ propagator of a charged particle with a spin in uniform magnetic and perpendicular electric fields _ , lett .* 84 * ( 2008 ) # 23 , 159178 .j. d. evans , g. major and g. c. 
kember , _techniques for the application of the analytical solution to the multicylinder somatic shunt cable model for passive neurones _ , math .* 125 * ( 1995 ) # 1 , 150 .a. foster , e. hendryx , a. murillo , m. salas , e. j. morales - butler , s. k. suslov and m. herrera - valdz , _ extensions of the cable equation incorporating spatial dependent variations in nerve cell diameter _ , mtbi-07 - 01 m technical report , 2010 ; see also http://mtbi.asu.edu/research/archive .m. meiler , r. cordero - soto , and s. k. suslov , _ solution of the cauchy problem for a time - dependent schrdinger equation _ , j. math* 49 * ( 2008 ) # 7 , 072102 : 127 ; see also arxiv : 0711.0559v4 [ math - ph ] 5 dec 2007 .w. rall , _ core conductor theory and cable properties of neurons _ , in : _ handbook of physiology .the nervous system .cellular biology of neurons _ , ( e. r. kandel et .al , editors ) , am .bethesda , md , 1977 , volume 1 , pp .w. rall , _ cable theory for dendritic neurons _ , in : _ methods in neuronal modeling _ , ( c. koch and i. segev , editors ) , a bradford book , the mit press , cambridge , massachusets and london , england , 1989 , pp . 962 .a. schierwagen and m. ohme , _ a model for the propagation of action potentials in nonuniform axions _ , in : _collective dynamics : topics on competition and cooperation in the biosciences : a selection of papers in the proceedings of the biocomp2007 international conference _ , ( l. m. riccardi , a. buonocore and e. pirozzi , editors ) , aip conf . proc .* 1028 * ( 2008 ) , 98112 .a. smorodinskii , _ trees and many - body problem _ , radiophysics and quantum electronics , * 19 * ( 1976 ) # 6 , 664672 ; translated from izvestiya vysshikh uchebnykh zavedenii , radiofizika , * 19 * ( 1976 ) # 6 , 932941 .a. surkis , b. taylor , c. s. peskin and c. s. leonard , _ quantitative morphology of physiologically identified and intracellularly labeled neurons from the guinea - pig laterodorsal tegmental nucleus in vitro _ , neuroscience * 74 * ( 1996 ) # 2 , 375392 .
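as a numerical complement to the single - branch steady - state boundary value problems treated above , the following sketch (our own code, not from the paper; the membrane parameters, the exponential radius profile and the branch length are purely illustrative assumptions) integrates a standard form of the tapered cable equation , d/dx[(pi a^2 / R_i) dV/dx] = (2 pi a sqrt(1 + a'^2) / R_m) V , with a sealed end (zero axial current) at x = L and a prescribed voltage at x = 0 :

....
# illustrative sketch: steady-state voltage along one tapered branch with a sealed end.
import numpy as np
from scipy.integrate import solve_bvp

R_m = 2000.0      # ohm cm^2, membrane resistance (assumed)
R_i = 100.0       # ohm cm,   axial resistivity   (assumed)
L   = 0.10        # cm, branch length              (assumed)
V0  = 1.0         # normalized voltage at x = 0

def radius(x):                      # exponentially tapering radius, in cm (assumed profile)
    return 2e-4 * np.exp(-5.0 * x / L)

def d_radius(x):                    # its derivative (analytic here)
    return -5.0 / L * radius(x)

def rhs(x, y):
    # y[0] = V, y[1] = axial current factor I = (pi a^2 / R_i) dV/dx
    a, da = radius(x), d_radius(x)
    dV = y[1] * R_i / (np.pi * a ** 2)
    dI = (2.0 * np.pi * a * np.sqrt(1.0 + da ** 2) / R_m) * y[0]
    return np.vstack((dV, dI))

def bc(ya, yb):
    return np.array([ya[0] - V0,    # V(0) = V0
                     yb[1]])        # sealed end: no axial current at x = L

x = np.linspace(0.0, L, 200)
y_guess = np.vstack((V0 * np.ones_like(x), np.zeros_like(x)))
sol = solve_bvp(rhs, bc, x, y_guess)            # sol.status == 0 means convergence
print("attenuation V(L)/V(0) ~", sol.sol(L)[0] / V0)
....

applied branch by branch, together with continuity of voltage and axial current at branch points, a routine of this kind could be used to cross - check the ratio constants produced by the graphical rules.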
|
we propose a simple graphical approach to steady state solutions of the cable equation for a general model of dendritic tree with tapering . a simple case of transient solutions is also briefly discussed .
|
many widely used applications read and write documents in a domain specific language based on xml or json . this paper and accompanying source code ( github.com/breck7/tree ) present a new whitespace - based notation that can serve a similar purpose but with a grammar roughly one - tenth the size of xml or json . this paper describes the notation and three of its advantages when used as a document encoding . tree notation encodes two data structures . the first is a * tree * , which is an array of nodes . the second is a * node * , which may contain a line of content and also may contain a tree ( enabling recursion ) . tree notation defines three tokens : a node separator ( `` \n '' ) , a node edge ( `` ' ' ) , and a node pair separator ( `` ' ' ) . a comparison quickly illustrates nearly the entirety of the notation . json :

....
{ "title" : "about ada" ,
  "stats" : { "pageviews" : 42 } }
....

tree notation :

....
title about ada
stats
 pageviews 42
....

tree notation 's grammar , while minimal , provides a natural way to represent complex data structures like maps , sets , vectors , n - dimensional matrixes , tuples , structs , and arrays , with or without recursion . tree provides the two base data structures but a dsl building upon those can represent any of these complex structures as well as primitives . any structure currently encoded in a json or xml document can be represented in tree , even preserving data types with an appropriate dsl . in addition , tree notation has a few advantages when compared to other document formats . when a document is composed of blocks written in multiple languages , those blocks may require verbose encoding to accommodate the underlying base notation . in the example snippet below , a json - backed ipython notebook encodes python to json . the resulting document is more complex :

....
{ "source" : [ "import matplotlib.pyplot as plt\n" ,
               "import numpy as np\n" ,
               "print(\"ok\")" ] }
....

with tree notation , the python block is indented and requires no additional transformation :

....
source
 import matplotlib.pyplot as plt
 import numpy as np
 print("ok")
....

json and xml serializers can encode the same object to different documents by varying whitespace . although ignoring whitespace can be a useful feature in a language , it can also lead to large diffs and sometimes merge conflicts for small or non - existent semantic changes , because of different serialization implementations . in tree notation , there is one and only one way to serialize an object . diffs contain only semantic meaning . tree notation does not have parse errors . every document is a valid tree notation document . errors only occur at the higher dsl level ( i.e. a mistyped property name ) . typos made at the spot of a tree notation token affect only the local nodes . with other base encodings , to get from a blank document to a certain valid document in keystroke increments requires stops at invalid documents . with tree notation all intermediate steps are valid .
segments of a document may be edited at runtime with no risk of breaking the parsing of the entire document . a developer working on an editor that allows a user to edit the document source does not have to worry about handling both errors at the dsl level and errors at the base notation level . the latter class of errors is eliminated with tree notation . while this paper 's primary purpose is to introduce tree notation and highlight some benefits , it is relevant to also address a few obvious drawbacks of the language . xml and json are now ubiquitous , with json alone having over 250 widely used and well tested implementations in over 50 programming languages . tree notation is new , and library and application support , compared to other popular base notations , rounds to zero . despite the lack of widespread support at present , because of the ease of implementation and intrinsic benefits mentioned above , tree notation may still be worthwhile in certain applications . some popular formats , including json , specify encodings for common primitive types like booleans and numbers . encoded documents can then be parsed directly to the matching data structures in memory . the base tree notation is relatively bare and delegates the encoding and decoding of additional types to the implementation or dsl . thus parsing a tree document into the desired data structures requires an additional specification and/or parse step . this may not be a significant disadvantage , however . as noted by others , rarely are the primitive data structures in a base level encoding like json enough to fully describe a structure , and in practice a higher level specification and additional parse step is used . tree 's permissive , antifragile grammar enables experimentation and may lead to the development of beneficial higher level notations over time that build on tree without introducing backwards incompatibilities . some developers dislike indentation - based encodings . in addition , a tree structure in tree notation extends over multiple lines , with one node per line , whereas other notations may benefit from a denser display of information , with multiple nodes per line . tree notation can serve as an alternative to json or xml for a base level encoding for a dsl . tree supports clean multi - lingual composition , aligns well with version control paradigms , and has a permissive base grammar that allows for robust runtime editing . with more tooling support and further experimentation , tree notation may improve developer productivity and enable new beneficial design patterns in document editors .

....
tree
 nodes : node [ ] ;
 new(str , nodesep="\n" , edgechar= " " , pairchar= " " ): tree ;
node
 name : string ;
 value ? : string ;
 tree ? : tree ;
....

note : a bare implementation can drop the pairchar and replace the name and value members with a single `` line : string '' member .

....
grammar tree ;
tree : node+ ;
node : edge name ? newline ? | name separator ? newline ? | newline ;
edge : space+ ;
name : stringchar+ ;
value : ( stringchar | space )+ ;
separator : space ;
space : ' ' ;
newline : ' \n ' ;
stringchar : ~ [ \n ] ;
....

....
o :
 s : title about ada
 o : stats
  n : pageviews 42
....
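as a companion to the interface and grammar sketched above, one possible bare - bones implementation in python (our own illustrative code, not the reference implementation; it drops the pairchar and treats everything after the first separator as the node's value, as the note above permits) is:

....
# minimal, illustrative tree notation parser and serializer (not the reference code).
def parse_tree(text, edge=" ", sep=" "):
    root = {"name": None, "value": None, "children": []}
    stack = [(-1, root)]                          # (depth, node) pairs
    for line in text.split("\n"):
        depth = len(line) - len(line.lstrip(edge))
        name, _, rest = line[depth:].partition(sep)
        node = {"name": name, "value": rest if rest else None, "children": []}
        while stack[-1][0] >= depth:              # climb back to the parent level
            stack.pop()
        stack[-1][1]["children"].append(node)
        stack.append((depth, node))
    return root["children"]

def serialize(nodes, depth=0, edge=" ", sep=" "):
    lines = []
    for n in nodes:
        line = edge * depth + n["name"]
        if n["value"] is not None:
            line += sep + n["value"]
        lines.append(line)
        if n["children"]:
            lines.append(serialize(n["children"], depth + 1, edge, sep))
    return "\n".join(lines)

doc = "title about ada\nstats\n pageviews 42"
tree = parse_tree(doc)
assert serialize(tree) == doc     # one and only one serialization of a given tree
print(tree[1]["children"][0])     # {'name': 'pageviews', 'value': '42', 'children': []}
....

note that parse_tree never raises: any input string yields some tree, which is the "no parse errors" property discussed above.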
|
a new minimal notation is presented for encoding tree data structures . tree notation may be useful as a base document format for domain specific languages .
|
suppose is a stochastic process such that as typically is a deterministic seed or arbitrary value initiating the iteration and we are interested in the limiting value the sequence may be deterministic , obtained by using a numerical integration scheme to approximate an integral , or a newton - raphson scheme to approximate the root of an equation .it may be a ratio estimator estimating a population ratio or the result of a stochastic or deterministic approximation to a root or maximum .in general we will only assume that it is possible to sample from the distribution of the stochastic process for a finite period , i.e. sample for fixed .a common argument advanced in favour of the use monte carlo ( mc ) methods as an alternative to numerical ones is that the mc estimator is usually unbiased with estimable variance . by increasing the sample size we are assured by unbiasedness that the estimator is consistent and we can produce , for any sample size, a standard error of the estimator . the statistical argument is advanced against the use of numerical methods that they do not offer easily obtained estimates of error .the purpose of this brief note is to show that this argument is flawed ; generally any consistent sequence of estimators can be easily rendered unbiased and an error estimate is easily achieved .we do not attempt to merely reduce the bias , but by introducing randomization into the sequence , to completely eliminate it .the price we pay is an additional randomization inserted into the sequence and a possible increase in the mean squared error ( mse ) .suppose is a random variable , independent of the sequence taking finite _ non - negative _ integer values .suppose all for a given sequence we define the first backward difference as define the random variable this can be written in the more general form where random variables with =1 ] assume for the calculation of the variance that as then +var(e(y|\mathcal{f}% ) ) \nonumber\\ & = e[var(y|\mathcal{f})]+var(x_{\infty})\nonumber\\ & = e\left ( \sum_{n=1}^{\infty}\frac{(\nabla x_{n})^{2}}{q_{n}^{2}}% q_{n}(1-q_{n})+2\sum_{n=2}^{\infty}\sum_{j=1}^{n-1}\frac{\nabla x_{n}\nabla x_{j}}{q_{n}q_{j}}q_{n}(1-q_{j})\right ) \nonumber\\ & = e\left ( \sum_{n=1}^{\infty}\frac{(\nabla x_{n})^{2}}{q_{n}}% ( 1-q_{n})+2\sum_{j=1}^{\infty}\sum_{n = j+1}^{\infty}\frac{\nabla x_{n}\nabla x_{j}}{q_{j}}(1-q_{j})\right ) \label{vary}\\ & = e\left ( \sum_{n=1}^{\infty}\frac{(\nabla x_{n})^{2}}{q_{n}}% ( 1-q_{n})+2\sum_{j=1}^{\infty}\frac{(x_{\infty}-x_{j})\nabla x_{j}}{q_{j}% } ( 1-q_{j})\right ) \nonumber\\ & = e\left ( \sum_{n=1}^{\infty}\left [ ( \nabla x_{n})^{2}+2(x_{\infty}% -x_{n})\nabla x_{n}\right ] ( \frac{1-q_{n}}{q_{n}})\right ) \nonumber\\ & = \sum_{n=1}^{\infty}e\left ( \frac{\nabla x_{n}\left [ 2x_{\infty}% -x_{n}-x_{n-1}\right ] } { q_{n}}-\sum_{n=1}^{\infty}\nabla x_{n}\left [ 2x_{\infty}-x_{n}-x_{n-1}\right ] \right ) \nonumber\\ & = \sum_{n=1}^{\infty}e\left ( \frac{2x_{\infty}\nabla x_{n}-\nabla x_{n}% ^{2}}{q_{n}}-\sum_{n=1}^{\infty}\left ( 2x_{\infty}\nabla x_{n}-\nabla x_{n}^{2}\right ) \right ) \nonumber\\ & = \sum_{n=1}^{\infty}e\left ( \frac{2x_{\infty}\nabla x_{n}-\nabla x_{n}% ^{2}}{q_{n}}\right ) -\left ( 2x_{\infty}(x_{\infty}-x_{0})-\left ( x_{\infty } ^{2}-x_{0}^{2}\right ) \right ) \nonumber\\ & = \sum_{n=1}^{\infty}e\left ( \frac{2x_{\infty}\nabla x_{n}-\nabla x_{n}% ^{2}}{q_{n}}\right ) -(x_{\infty}-x_{0})^{2}\nonumber\\ & = \sum_{n=1}^{\infty}\frac{2x_{\infty}\nabla\mu_{n}-\nabla(\sigma_{n}% 
^{2}+\mu_{n}^{2})}{q_{n}}-(x_{\infty}-x_{0})^{2}\nonumber\\ & = \sum_{n=1}^{\infty}\frac{2\left ( x_{\infty}-\xi_{j}\right ) \nabla\mu _ { j}-\nabla\sigma_{j}^{2}}{q_{n}}-(x_{\infty}-x_{0})^{2}%\end{aligned}\ ] ] where =ex_{n}^{2}-ex_{n-1}^{2}=\sigma_{n}^{2}+\mu_{n}% ^{2}-\sigma_{n-1}^{2}-\mu_{n-1}^{2}=\nabla(\sigma_{n}^{2}+\mu_{n}^{2}) ] whatever the distribution of suppose we use a shifted geometric distribution for so that for . evidently to minimize the variance we should choose so that the variance for general is \text { \ where } 1>q > r^{2}.\end{aligned}\ ] ] suppose we wish to minimize this over the values of and subject to the constraint that is constant ( ignoring the integer constraint on ) .then with \text { subject to } % \frac{1}{z-1}+s=\mu_{n}\text { or } % \]] , \text { for } \frac { 1}{r^{2}}>z>1\ ] ] which minimum occurs when or and and then the minimum variance is notice that the mean squared error , if we were to stop after iterations , is so we have purchased unbiasedness of the estimator at a cost of increasing the mse by a factor of approximately this factor is plotted in figure [ plotr ] .it can be interpreted as follows : in the worst case scenario when is around .4 , we will need about 3 times the sample size for the debiased estimator to achieve the same mean squared error as a conventional iteration using determinisitic however when is close to indicating a slower rate of convergence , there is very little relative increase in the mse .[ ptb ] plotr.eps note : the optimisation problem above tacitly assumed that the computation time required to generate the sequence is this is not the case with some applications ; for example in the numerical integral below the computation time is since there are intervals and function evaluations and in this case a more appropriate minimization problem , having budget constraint is , with \text { \ subject to } % \frac{1}{2}>q > r^{2}\text { and \ } 2^{s}\frac{1-q}{1 - 2q}=c>2^{s},s=0,1,2, ... \ ] ] or , putting \ ] ] which minimum appears to occur for $ ] and when and otherwise may be somewhat smaller .intuitively , when the rate of convergence is reasonably fast ( so is small ) then the minimum variance is achieved by a large guarantee on the value of ( large ) and then the residual budget used to produce unbiasedness by appropriate choice of estimation of a root suppose we wish to find the root of a nonlinear function . for a toy example , suppose we wish to solve for the equation we might wish to use ( modified ) newton s method with a random starting value to solve this problem , requiring randomly generating the initial value and then iterating but of course after a finite number of steps , the current estimate is likely a biased estimator of the true value of the root .we implemented the debiasing procedure above with and .we generated from a distribution , chose for simplicity used and repeated for simulations .the sample mean of the estimates was and the sample variance .although the procedure works well in this case when we start sufficiently close to the root , it should be noted that this example argues for an adaptive choice of one which permits a larger number of iterations ( larger values of when the sequence seems to indicate that we have not yet converged .this is discussed below . *stopping times for * in view of the last example , particularly if is far from it would appear desirable to allow to be a stopping time . 
in order to retain unbiasedness ,it is sufficient that = e\left [ x_{0}+\sum_{n=1}^{\infty}\nabla x_{n}\right ] \nonumber\ ] ] or = 1.\ ] ] therefore it is sufficient that and one simple rule for an adaptive construction of is : {ccc}% p(n\geq n-1|x_{1},x_{2}, ... x_{n-1 } ) & \text{if } & \nabla x_{n}>\varepsilon\\ pp(n\geq n-1|x_{1},x_{2}, ... x_{n-1 } ) & \text{if } & \nabla x_{n}\leq\varepsilon \end{array } \right.\ ] ] there are , of course , many potential more powerful rules for determining the shift in the distribution of but we we concentrate here on establishing the properties of the simplest version of this procedure . simpson s rule . consider using a trapezoidal rule for estimating the integral using function evaluations which evaluate the function on the grid .denote the estimate of the integral here and the error in simpson s rule assuming that the function has bounded fourth derivative is .this suggests a random such that or a ( possibly shifted ) geometric distribution with suppose this means which is quite small . in general ,the estimator has finite variance since more generally , if has a shifted geometric distribution with probability function parameter the expected number of function evaluations in the quadrature rule is and this is for example , when how well does this perform ?this provides an unbiased estimator of the integral with variance \ ] ] which can be evaluated or estimated in particular examples and compared with the variance of the corresponding crude monte carlo estimator . for a reasonable comparison, the latter should have the same ( expected ) number of function evaluations , i.e. 7 and therefore has variance take , for example , the function so that and in this case the variance of the mc estimator with seven function evaluations is we compare this with the estimator obtained by randomizing the number of points in a simpson s rule .here it is easy to check that [ c]||l|l|l|l|l|l|l|| & 1 & 2 & 3 & 4 & 5 & 6 + & 0.3047 & 0.2149 & 0.2124 & 0.2122 & 0.2122 & 0.2122 + & -0.089821 & -0.002562 & -0.000139 & -0.000008 & -0.000001 & -0.00000003 + table 1 : values of the numerical integral and with intervals and in this case the variance of the debiased simpson s rule estimate is indicating more than a two thousand - fold gain in efficiency over crude monte carlo .* note : * we have chosen the grid size in view of the fact that when we need the integrals for all in this case , we can simply augment the function evaluations we used for in order to obtain the major advantage of this debiasing procedure however is not as a replacement for crude monte carlo in cases where unbiased estimators exist , but as a device for creating unbiased estimators when their construction is not at all obvious .this is the case whenever the function of interest is a nonlinear function of variables that can be easily and unbiasedly estimated as in the following example .heston stochastic volatility model in the heston stochastic volatility model , under the risk neutral measure the price of an asset and the volatility process are governed by the pair of stochastic differential equations where and are independent brownian motion processes , is the interest rate , is the correlation between the brownian motions driving the asset price and the volatility process , is the long - run level of volatility and is a parameter governing the denote by the black - scholes price of a call option having initial stock value volatility , interest rate expiration time option stike price and dividend 
yield .the price of a call option in the heston model can be written as an expected value under the risk - neutral measure of a function of two variables ( see for example willard ( 1997 ) and broadie and kaya ( 2006))\text { \ where } \\ \xi & = \xi(v_{t},i(t))=\exp(-\frac{\rho^{2}}{2}i(t)+\rho\int_{0}^{t}% \sqrt{v_{s}}dw_{1}(s))\\ & = \exp(-\frac{\rho^{2}}{2}i(t)+\frac{\rho}{\sigma}(v_{t}-v_{0}+\kappa i(t)-\kappa\theta t))\text { and}\\ \widetilde{\sigma } & = \widetilde{\sigma}(i(t))=\sqrt{\frac{i(t)}{t}}\text { \ where } i(t)=\int_{0}^{t}v_{s}ds.\end{aligned}\ ] ] this can be valued conditionally on with the usual black - scholes formula . in particular with the option price is note that is clearly a highly nonlinear function of and and so , even if exact simulations of the latter were available , it is not clear how to obtain an unbiased simulation of in the heston model , and indeed various other stochastic volatility models , it is possible to obtain an exact simulation of the value of the process at finitely many values of so it is possible to approximate the integral using obtained from a trapezoidal rule with points .this raises the question of what we should choose as a distribution for under conditions on the continuity of the functional of the process whose expected value is sought , kloeden and platen ( 1995 , theorem 14.1.5 , page 460 ) show that the euler approximation to the process with interval size results in an error in the expected value of order where for sufficiently smooth ( four times continuously differentiable ) drift and diffusion coefficientsso for simplicity consider this case .this implies that which suggests we choose . as before we randomly generate from a ( possibly shifted ) geometric distribution with .the function to be integrated is not twice differentiable so we need to determine empirically the amount of the shift ( and we experimented with reasonable values of ) .we chose parameters and shifted the geometric random variable by so that for the parameters used in our simulation were taken from broadie and kaya(2004 ) : for which , according to broadie and kaya , the true option price is around 34.9998 .1,000,000 simulations with and provided an estimate of this option price of 34.9846 with a standard error of 0.0107 so there is no evidence of bias in these simulations .with parameter values and and we conducted simulations leading to an estimate of 6.8115 with a standard error of 0.0048998 .this is in agreement with the broadie and kaya `` true option price '' of 6.801 .note that the feller condition for positivity requires which fails in the above cases .this means that the volatility process hits zero with probability one , and for some parameter values , it does so frequently which may call into question the value of this model with these parameter values .100,000 simulations from these models used about 10 - 13 minutes running matlab 5.0 on an intel core 2 quad cpu .5 ghz .when numerical methods such as quadrature or numerical solutions to equations may result in a biased estimator , a procedure is suggested which eliminates this bias and provides statistical estimates of error .this procedure is successfully implemented both in simple root finding problems and in more complicated problems in finance and has enormous potential for providing monte carlo extensions of numerical procedures which allow unbiased estimates and error estimates .
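to make the construction concrete, here is a small sketch (our own code, not the authors'; the choices s = 3, p = 1/4 and the test integrand are arbitrary) of the randomized debiasing estimator with a shifted geometric truncation level, applied to a sequence of composite simpson approximations of the kind discussed in the numerical - integration example:

....
# illustrative sketch of the randomized debiasing estimator:
#     Y = x_s + sum_{n=s+1}^{N} (x_n - x_{n-1}) / P(N >= n),   N = s + G,  P(G >= g) = p**g.
import numpy as np

def debias(x_of_n, s=3, p=0.25, rng=None):
    """x_of_n(n) returns the n-th element of the approximating sequence."""
    rng = np.random.default_rng() if rng is None else rng
    g = rng.geometric(1.0 - p) - 1          # g = 0, 1, 2, ... with P(G >= g) = p**g
    N = s + g
    y = x_of_n(s)                           # the deterministic "guaranteed" part
    prev = y
    for n in range(s + 1, N + 1):
        cur = x_of_n(n)
        y += (cur - prev) / p ** (n - s)    # divide each increment by P(N >= n)
        prev = cur
    return y

def simpson(f, a, b, m):                    # composite simpson rule with m (even) panels
    xs = np.linspace(a, b, m + 1)
    w = np.ones(m + 1); w[1:-1:2] = 4; w[2:-1:2] = 2
    return (b - a) / (3 * m) * np.dot(w, f(xs))

f = lambda x: np.exp(-x ** 2)
x_of_n = lambda n: simpson(f, 0.0, 1.0, 2 ** n)   # 2**n subintervals at level n
rng = np.random.default_rng(1)
draws = np.array([debias(x_of_n, s=3, p=0.25, rng=rng) for _ in range(2000)])
print(draws.mean(), draws.std() / np.sqrt(len(draws)))
....

the sample mean is close to the true value 0.746824 of the integral of exp(-x^2) over [0,1], with a directly estimable standard error; the same wrapper could be applied unchanged to a sequence of discretized heston prices in which x_of_n refines the time grid.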
|
consider a stochastic process $x_{n}$ such that $x_{n}\rightarrow x_{\infty}$ as $n\rightarrow\infty$ . the sequence may be a deterministic one , obtained by using a numerical integration scheme , or obtained from monte - carlo methods involving an approximation to an integral , or a newton - raphson iteration to approximate the root of an equation , but we will assume that we can sample from the distribution of $x_{n}$ for finite $n$ . we propose a scheme for unbiased estimation of the limiting value $x_{\infty}$ , together with estimates of standard error , and apply this to examples including numerical integrals , root - finding and option pricing in a heston stochastic volatility model . *keywords and phrases : * monte carlo simulation , unbiased estimates , numerical integration , finance , stochastic volatility model
|
an _ iot _ is a short - range wireless network of interconnected devices , e.g.,__wban__s , _ wi - fi _ , _ ieee 802.15.4 _ ( _ zigbee _ ) , _ rfids _ , _ tags _ , _ sensors _ , _ pdas _ , _ smartphones _ , etc ., that could sense , process and communicate information .example applications of _ iot _ are smart homes , health monitoring , wearables , environment monitoring , transportation and industrial automation . within an _ iot _, various types of wireless networks are required to facilitate the exchange of application - dependant data among their heterogeneous wireless devices .however , such diversity could give rise to coexistence issues among these networks , a challenge that limits the large - scale deployment of the _ iot_. therefore , new protocols are required for communication compatibility among its heterogeneous devices .basically , the _ ieee 802.15.6 _ standard , e.g. , _ wbans _ , utilizes a narrower bandwidth than other wireless networks , e.g. , _ieee 802.11_. however , the _ ieee 802.11 _ based wireless devices may use multiple channels that cover the whole international license - free _ 2.4 ghz _ndustrial , cientific and edical radio , denoted by _ ism _ ,band , so there could be overlapping channel covering an _ ieee 802.15.6 _ based network and thus create collisions between _ ieee 802.15.6 _ and these devices .in addition , _ ieee 802.11 _ based wireless devices may transmit at a high power level and thus relatively distant coexisting _ ieee 802.15.6 _ devices may still suffer interference .thus , the pervasive growth in wireless devices and the push for interconnecting them can be challenging for __ wban__s due to their simple and energy - constrained nature . basically , a _wban _ may suffer interference not only because of the presence of other __ wban__s but also from wireless devices within the general _ iot _ simultaneously operating on the same channel .thus , co - channel interference may arise due to the collisions amongst the concurrent transmissions made by sensors in different _ _ wban__s collocated in an _ iot _ and hence such potential interference can be detrimental to the operation of _ _ wban__s .therefore , robust communication is necessary among the individual devices of the collocated networks in an _iot_. in this paper , we propose a protocol to enable _ wban _ operation within an _ iot _ and leverage the emerging _ble _ technology to facilitate interference detection and mitigation .motivated by the reduced power consumption and low cost of _ ble _ devices , we integrate a _ble _ transceiver and a _ cr _ module within each _ wban _ s coordinator node , denoted by _ crd _ , where the role of _ ble _ is to inform the _crd _ about the frequency channels that are being used in its vicinity .in addition , the superframe s active period is further extended to involve not only a _ tdma _frame , but also a _fcs _ and _ fbtdma _ frames , for interference mitigation .when experiencing high interference , the _ wban s crd _ will be notified by the _ble _ device to use the _ cr _ module for selecting a different channel .when engaged , the _ cr _ assigns a stable channel for interfering sensors that will be used later within the _ fbtdma _ frame for data transmission .the simulation results show that our proposed approach can efficiently improve the spectrum utilization and significantly lower the medium access collisions among the collocated wireless devices in the general _iot_. 
the rest of the paper is organized as follows .section slowromancap2@ sets our work apart from other approaches in the literature .section slowromancap3@ summarizes the system model and provides a brief overview of the _ ble _ and the _ cr_. section slowromancap4@ describes _ csim _ in detail .section slowromancap5@ presents the simulation results .finally , the paper is concluded in section [email protected] and mitigation of channel interference have been extensively researched in the wireless communication literature . to the best of our knowledge , the published techniques in the realm of _ iot _ are very few and can be categorized as resource sharing and allocation , power control , scheduling techniques and medium access schemes .example schemes that pursued the resource sharing and allocation include , , , .bakshi et al . , proposed a completely asynchronous and distributed solution for data communication across _ iot _ , called _emit_. _ emit _ avoids the high overhead and coordination costs of existing solutions through employing an interference - averaging strategy that allows users to share their resources simultaneously .furthermore , _ emit_ develops power - rate allocation strategies to guarantee low - delay high - reliability performance .torabi et al ., proposed a rapid - response and robust scheme to mitigate the effect of interfering systems , e.g. , _ ieee 802.11 _ , on _ wban _ performance .they proposed dynamic frequency allocation method to mitigate bi - link interferences that affect either the _ wban s crd _ or _ wban _ sensors and hence impose them to switch to the same frequency .shigueta et al ., presented a strategy for channel assignment in an _iot_. the proposed strategy uses opportunistic spectrum access via cognitive radio .the originality of this work resides in the use of traffic history to guide the channel allocation in a distributed manner .ali et al . , proposed a distributed scheme that avoids interference amongst coexisting _ _wban__s through predictable channel hopping . based on the latin rectangle of the individual _ wban _ , each sensoris allocated a backup time - slot and a channel to use if it experiences interference such that collisions among different transmissions of coexisting _ _wban__s are minimized .xiao et al ., adopted the approach of power control and considered machine - to - machine , denoted by _ m2 m _, communication for an _ iot _ network .the authors proposed a framework of full - duplex _m2 m _ communication in which the energy transfer , i.e. , surplus energy , from the receiver to the transmitter and the data transmission from the transmitter to the receiver take place at the same time over the same frequency .furthermore , the authors established a stochastic game - based model to characterize the interaction between autonomous _transmitter and receiver .meanwhile , chen et al ., introduced a new area packet scheduling technique involving _ ieee 802.15.6 _ and _ ieee 802.11 _ devices .the developed packet scheduler is based on transmitting a common control signal known as the blank burst from _ mac _ layer .the control signal prevents the _ ieee 802.15.6 _ devices to transmit for a certain period of time during which the _ ieee 802.11 _ devices could transmit data packets .a number of approaches pursued the medium access scheduling methodology include ,, to mitigate interference among the _ ieee 802.11 _ and _ ieee 802.15.4 _ , i.e. , _ zigbee _ , based devices .wang et al . 
, proposed a new technique , namely , the acknowledgement , denoted by _ ack _ , with interference detection (_ ack - id _ ) , that reduces the _ ack _ losses and consequently reduces _ zigbee _ packet retransmissions due to the presence of collocated _ ieee 802.11 _ wireless networks .basically , in _ ack - id _ , a novel interference detection process is performed before the transmission of each _ zigbee _ _ ack _ packet in order to decide whether the channel is experiencing interference or not .inoue et.al ., proposed a novel distributed active channel reservation scheme for coexistence , called _ dacros _ , to solve the problem of _ wban _ and _ ieee 802.11 _ wireless networks coexistence ._ dacros _ uses the request - to - send and clear - to - send frames to reserve the channel for a superframe time of _ wban_. along the whole beacon time , i.e. , the whole superframe of the _ wban _ , all _ ieee 802.11 _ wireless devices remain silent and do not transmit to avoid collisions .zhang et al . , proposed cooperative carrier signaling , namely , _ccs _ , to harmonize the coexistence of _ zigbee _ _ _ wban__s with _ ieee 802.11 _ wireless networks ._ ccs _ allows _ zigbee _ _ _ wban__s to avoid _ ieee 802.11 _ wireless network - caused collisions and employs a separate _ zigbee _ device to emit a busy tone signal concurrently with the _ zigbee _ data transmission .as pointed out , none of the predominant approaches can be directly applied to _ iot _ because they do not consider the heterogeneity of the individual networks forming an _ iot _ in their design .motivated by the emergence of _ ble _ technology and compared to the previous predominant approaches for interference mitigation , our approach lowers the power and communication overheads introduced on the coordinator- and sensor - levels within each _wban_. unlike prior work ,in this paper , we propose a distributed protocol to enable _ wban _ operation and interaction within an existing _ iot_. we integrate a _ ble_ transceiver to inform the _ wban _ about the frequency channels that are being used in its vicinity and a _ cr _ module within the _wban s _ _crd_. our approach relies on both _ble _ transceiver and the _ cr _ module for stable channel selection and allocation for interference mitigation .module , when engaged determines a set of usable channels for the _ crd _ to pick from .each interfering sensor will then switch to the new channel to retransmit data to the _ crd _ in its allocated backup time - slot .bluetooth low energy ( _ ble _ ) is one of the promising technologies for _ iot _ services because of its low energy consumption and cost . _ble _ is a wireless technology used for transmitting data over short distances and broadcasting advertisements at a regular interval via radio waves .ble _ advertisement is a one - way communication method ._ ble _ devices , e.g. 
, ibeacons , that want to be discovered can periodically broadcast self - contained packets of data .these packets are collected by devices like smartphones , where they can be used for a variety of applications to trigger prompt actions .we envision that each collocated set ( cluster ) of wireless devices of such _ iot _ will have to include a _transceiver that periodically broadcasts the channel that is being used by the _devices in the vicinity .in fact , with the increased popularity of _ ble _ , it is conceivable that every _ iot _ device will be equipped with a _ ble _transceiver to announce its services and frequency channel .ble _ has a broadcast range of up to 100 meters , which makes _ble _ broadcasts an effective means for mitigating interference between __ wban__s and other _ iot _ devices .the _ iot _ environment consists of different wireless networks , each uses some set of common channels in the international license - free _ 2.4 ghz _ _ ism _ band .in addition , we assume that each network transmits using different levels of transmission power , bandwidth , data rates and modulation schemes . meanwhile , _ _ wban__s are getting pervasive and thus form a building block for the ever - evolving future _iot_. we consider _n _ _ tdma_-based __ wban__s that coexist within the general _iot_. each _ wban _ consists of a single _ crd _ and up to _ k _ sensors , each transmits its data on a channel within the international license - free _ 2.4 ghz __ ism _ band .basically , we assume all _ crds _ are equipped with richer energy supply than sensors and all sensors have access to all _ zigbee _ channels at any time .in addition , each _ crd _ is integrated with _ble _ to enable effective coordination in channel assignment and to allow the interaction with the existing _ iot _ devices .furthermore , each _ crd _ has a _ cr _ module to decide the usability and the stability of a channel .a co - channel interference takes place if the simultaneous transmissions of sensors and the _ crd _ in a _ wban _collide with those of other _ iot _ coexisting devices .the potential for such a collision problem grows with the increase in the communication range and the density of sensors in the individual _ _ wban__s as well as the number of collocated _ iot _ devices . to address this problem , our approach assigns each _ wban _ a _ default channel _ and in case of interference it allows the individual sensors to switch to a different channel to be picked by the _ crd _ in consultation with the _ cr _ module to mitigate the interference .the use of _ ble _ enables the _ crd _ to be aware of interference conditions faster and more efficiently . to achieve that, our approach extends the size of the superframe through the addition of flexible number of backup time - slots to lower the collision probability of transmissions . at the network setup time, each _ crd _ randomly picks a _default channel _ from the set of _ zigbee _ channels and informs all sensors within its _ wban _ through a beacon to use that channel along the _ tdma _ frame of the superframe , as will be explained below ._ csim _ depends on acknowledgements ( _ acks _ ) and time - outs to detect the collision at _sensor- _ and _ coordinator- _ levels . in the _ tdma _ frame shown in * fig. [ superframeicc ] * , each sensor transmits its packet in its assigned time - slot to the _ crd _ using the _ default channel _ and then sets a time - out timer . 
if it successfully receives an _ ack _ from its corresponding _ crd _ , it considers the transmission successful , and hence it sleeps until the _ tdma _ frame of the next superframe .however , if that sensor does not receive an _ ack _ during the time - out period , it assumes failed transmission due to interference . basically , all sensors experienced interference within the _ tdma _ frame wait until the _ fcs _ frame completes , and then each switches to the common interference mitigation channel .afterwards , each sensor retransmits its packet in its allocated time - slot within the _ fbtdma _ frame to the _ crd_. * algorithm [ csim ] * provides high level summary of _ csim_. * table [ symbol ] * shows notations and their corresponding meanings .lll * notation*&*meaning * + _ & _ wban _ + & sensor of _ wban _ + &default channel of _ wban _ + &stable channel of _ wban _ + &coordinator of _ wban _ + &bluetooth low power device of coordinator + &cognitive radio module of coordinator + & packet of sensor + & acknowledgement transmitted to sensor + & time - slot of _ tdma _ frame + & time - slot of _ fbtdma _ frame + & set of channels used by nearby _ iot_ devices + & list of interfering sensors in + _ fcs _ & _ flexible channel selection _+ _ fbtdma _ & _ flexible backup tdma _ + along the _ tdma _ frame , each _ crd _ s _ ble _ collects information based on broadcast announcements made by other nearby _ ble _ transceivers about the set of channels being used by wireless devices in the vicinity of a designated _ wban _( \{_lch _ } ) , and then reports this information to its associated _ cr_. the _ cr _ uses the following sets of channels which are defined as follows : * \{_*g * _ } : is a set of _ 16 _ channels available in the international license - free _ 2.4 ghz _ _ ism _ band of _ zigbee _ standard . * \{_*lch * _ } : is a set of channels that are being used in the vicinity of a designated _ wban_. * \{_*defaultchannel * _ } : is a singleton set that involves the _ default channel _ that is being used by a designated _ wban_. * \{_*us * _ } : is a set that consists of the remaining _ zigbee _ channels that are not being used in the vicinity of a designated _ wban _ , where }. in low or moderate conditions of interference , where there are some available channels , i.e. , \{_us _ } is not empty , or the size of the set \{_lch _ } is smaller than the size of the set \{_g _ } , the _ crd _ will not exploit the service of the _ cr _ when notified by the _ble _ about a channel conflict ; instead , the _ crd _ selects one available channel from \{_us _ } for efficient data transmission .however , in high interference conditions , the set \{_us _ } will be empty .therefore , once notified by the _ble _ , the _ crd _ can not select one available channel from \{_us _ } , and hence the _ cr _ should scan the set \{_lch _ } to eventually select the most stable channel to be used within the _ fbtdma _ frame for interference mitigation . basically , the designated _ cr _ looks for a usable channel from the set \{_lch _ } , if the first channel is not , then it starts sequentially sensing channels until a usable channel will be found . if it finds a usable channel and satisfies the stability condition , then it reports its index to the associated _ crd _ to be eventually used for interference mitigation .our approach relies on _ cr _ to decide the usability and stability of a channel using the received noise power as an indicator ( _ _ ) ._ _ during time - slot _ i _ is given by * eq .[ eq1]*. 
where , _ u _ is the time - bandwidth product and _ _ is a gaussian noise signal with zero mean and unit variance .the probability density function , denoted by _ _ is given by * eq .[ eq2]*. where , _ _ is the gamma function , _ _ and _ . based on _ _ , the _ cr _ decision criterion can be expressed as follows : 1 .a channel _ _ is usable , if _ _ 2 . _ _ requires power boost ( _ usable _ ) , if _ . in this case , we can use the theorem of shannon ( 1948 ) of the maximum transmission capacity ( _ p _ ) given in _bit / s _ in * eq .[ eq4 ] * 3 . _ _ can not be used in time - slot _ i _ ( _ unusable _ ) , if _ _ , where _ _ and _ _ are thresholds depend on the receiver sensitivity and the channel model in use . thus , the range of _ _ is divided into three regions , and is given by * eq .[ eq5]*. where _ _ is equal to _ 0 _ and _ _ is equal to _ . we mean by , a stable channel , if the probability of channel quality can not be decreased before the end of the transmission on that channel .the probability to being in a stable state _j _ is given by * eq .[ eq6]*. the integration is done between _ _ and _ . when the _ cr _ is engaged , it looks for a usable and stable channel which is done in the steps below . *_ step 1 : _ * _ crd _ looks for _ n _ usable channels .if the first channel is not , then the _ cr _ starts sequentially sensing channels until a usable channel is found . if the _ cr _module finds a usable channel , then * _ step 2 _ * is executed to test the stability of the selected channel . otherwise , the _ cr _ module informs _ crd _ that no usable channel is available , _ crd _ stays silent during a predetermined time - slot . * _ step 2 : _ * if the selected usable channel satisfies the stability condition , then _ cr _ reports the index of this stable channel back to _crd_. in _wban__s , sensors sleep and wake up dynamically and hence , the number of sensors being active during a period of time is unexpected .therefore , a flexible way of scheduling different transmissions is required to avoid interference . we consider each _s superframe delimited by two beacons and composed of two successive frames : ( i ) active , that is dedicated for sensors , and ( ii ) inactive , that is designated for _ _ crd__s .the superframe structure is shown in * fig .[ superframeicc]*. during the inactive frame , _ _crd__s transmit collected data to a command center . in addition , the inactive frame directly follows the active frame and whose length depends on the underlying duty cycle being used .however , the active frame is further divided into three successive frames .the traditional _ tdma _ frame consists of up to k time - slots that are allocated to sensors . each _ wban _s sensor transmits its packet to its associated _ crd _ in its allocated time - slot using the _ default channel_.during the _ fcs _ which is of a fixed size , each _ wban _ s _ crd _ selects a stable interference mitigation channel and instructs all interfering sensors within its _ wban _ to use that channel during the _ fbtdma _ frame . based on the number of interfering sensors ,each _ crd _ determines the size of the _ fbtdma _ frame and reports this information through a short beacon broadcast using the _ default channel _ to the designated sensors within its _ wban_. 
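before turning to the time-slot allocation inside the _ fbtdma _ frame, the coordinator-side selection logic described above (first the \{_us_} shortcut, then the two _ cr _ steps) can be summarised in a short sketch; the function names and threshold values below are hypothetical placeholders, and the usability and stability tests merely stand in for eqs. [eq1]-[eq6].

```python
# hypothetical sketch of the CRD-side channel selection in CSIM;
# names, thresholds and the sensing callables are illustrative placeholders.
ZIGBEE_CHANNELS = set(range(11, 27))   # {G}: the 16 IEEE 802.15.4 channels in the 2.4 GHz ISM band

def select_mitigation_channel(default_ch, lch, noise_power, stability_prob,
                              gamma_low=-25.0, p_stable=0.8):
    """Return the channel the CRD announces for the FBTDMA frame, or None.

    lch            : set of channels reported busy by nearby BLE advertisements ({LCH})
    noise_power    : callable(ch) -> received noise power in dBm (stand-in for eq. (1))
    stability_prob : callable(ch) -> probability the channel stays usable (stand-in for eq. (6))
    gamma_low, p_stable : illustrative thresholds, not the values fixed by the paper
    """
    us = ZIGBEE_CHANNELS - lch - {default_ch}     # {US}: channels unused in the vicinity
    if us:
        # low/moderate interference: no need to engage the CR module
        return min(us)                            # any member of {US} would do
    # high interference: engage the CR and sense the busy channels sequentially
    for ch in sorted(lch):
        if noise_power(ch) <= gamma_low and stability_prob(ch) >= p_stable:
            return ch                             # first channel that is both usable and stable
    return None                                   # no usable channel: the CRD stays silent this slot
```

each interfering sensor then retransmits on the returned channel in its allocated backup time-slot, which is the bookkeeping described next.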
in addition , the _ crd _ allocates a time - slot within the _ fbtdma _ frame for each interfering sensor to eventually retransmit its packet .although , the beacon could be lost due to the interference , our approach enables early mitigation .basically , the _ ble _ alert limits the probability of collision on the _ default channel _ since the _ crd _ will get a hint earlier than typical .the _ fbtdma _ frame consists of a flexible number of backup time - slots that depends on the number of sensors experiencing interference in the _ tdma _ frame .basically , each _ crd _ knows about these sensors through using the expected number of acknowledgement and data packets received in an allocated time - slot for each sensor . in _fbtdma _ frame , each interfering sensor retransmits in its allocated backup time - slot to the _ crd _ using the selected stable channel ._ * stage 1 : network setup tdma data collection * _ * sensor - level collision : * _ _ picks one _ _ from _ \{g } _ ; _ _ transmits in _ _ to _ _ on _ _ ; _ _ sleeps until next superframe ; _ _ waits its _ _ within _ _ frame ; * coordinator - level collision : * _ _ transmits in to on ; _ will tune to within __ frame ; * channel selection setup : * forms the set _ \{ } _ ; forms the set _ _ ; _ * stage 2 : channel selection * _ _ _ forms _ _ frame from _ _ ; _ _ selects from _ \{ } _ ; _ _ informs _ _ sensors by _ _ & _ _ frame ; _ * stage 3 : interference mitigation * _ retransmits in on ; sleeps until next superframe ; receives an earlier alert of interference ; [ csim ]in this section , we have conducted simulation experiments to evaluate the performance of the proposed _ csim _ scheme .we compare the performance of _ csim _ with smart spectrum allocation scheme , denoted by _ssa _ , which assigns orthogonal channels to sensors belonging to the interference set , denoted by _ is _ , formed between each pair of the interfering _ _ wban__s .furthermore , we compare the energy consumption of the _ wban _ s coordinator with and without switching the _ ble _ transceiver on . we define the probability of channel s availability , denoted by _ _ , at each _ crd _ as the frequency that a channel is not being used by any of the nearby _ iot _ devices .cluster is defined as a collection of _ _ wban__s , _ wi - fi _ and other wireless devices collocated in the same space .the simulation network is deployed in three dimensional space ( ) and the locations of the individual _ _ wban__s change to mimic uniform random mobility and consequently , the interference pattern varies .the channel interference between any two wireless devices is evaluated on probabilistic interference thresholds .the simulation parameters are provided in * table [ csimsp]*. lllll & * exp .1*&*exp . 2*&*exp .3 * + # sensors/_wban_&10&10&var + # _wban_/network&var&10&10 + sensor txpower ( dbm)&-10&-10&-10 + snr threshold ( dbm)&-25&var&-25 + # time - slots/_tdma _ frame & k&k&k + in _ experiment 1 _ , the probability of channel s availability , denoted by _ _ , versus the cluster size , denoted by , for _ csim _ and _ ssa _ are compared , and results are shown in * fig .[ plt1]*. as seen in the figure , _ csim _ always provides a higher _ _ than _ ssa _ because of the channel selection is done at the _ wban_- rather than _ sensor_-level . for _ csim_ , the _ _ significantly decreases from 0.79 to 0.27 , when because of the larger number of _ zigbee _ channels that are being used by _ iot _ devices than the number of channels available at each _crd_. 
when , _ _ decreases very slightly and eventually stabilizes at 0.215 because all _ zigbee _ channels are used by the _ iot _ devices which makes it very hard for _ _ crd__s to select stable channels .however , for _ ssa _ , it is also observed from this figure that _ _ decreases significantly from 0.51 to 0.08 when because of the larger number of _ zigbee _ channels that are being assigned to the sensors in the interfering set ( _ is _ ) for any pair of _ _ wban__s .when , _ _ decreases very slightly and eventually stabilizes at 0.07 because of the maximal number of _ zigbee _ channels being assigned to sensors coexisting within the interference range of a designated _ wban _ , i.e. , the number of these sensors exceeds the _ 16 _ channels of _zigbee_. _ experiment 2 _ studies the effect of signal - to - noise ratio threshold denoted by _ _ on _ . the results in * fig .[ plt2 ] * shows that _ csim _ always achieves higher _ _ than _ ssa _ for all _ _ values . in _ csim _ , the _ _ significantly increases as _ _ increases from to ; similarly increasing _ _ in _ csim _ diminishes the interference range of each _ wban _ ,i.e. , lowers the number of interfering _ iot _ devices .therefore , limiting the frequency of channel assignments prevents distinct _ _ wban__s to pick the same channel , which decreases the probability of collisions among them _ , the _ _ increases very slightly and eventually stabilizes at 0.92 because of the minimal number of interfering _ iot _ devices and hence , a high _ _ is expected due to the larger number of _ zigbee _ channels than the number of those interfering devices . however , _ssa _ always achieves lower _ _ than _ csim _ for all _ _ values .the _ _ significantly decreases from 0.6 to 0.2 as __ increases from to . basically , increasing _ _ in _ ssa _ is similar to increasing the interference range of each _ wban _ , and hence putting more sensors in the _ wban _ interference set .therefore , more channels are needed to be assigned to those sensors and that _ _ is reduced . _ , the _ _ eventually stabilizes at 0.21 because of the maximal number of sensors in the interference set is attained by each _wban_. _ experiment 3 _ studies the effect of the number ( ) of sensors per a _ wban _ , denoted by _ _ , on _ . as can be seen in * fig .[ plt3 ] * , _ csim _ always achieves higher _ _ than _ ssa _ for all values of .it is also observed from this figure that _ _ decreases very slightly and from 0.905 to 0.8 when and eventually stabilizes at 0.8 when . in both cases , _ is high due to two reasons , 1 ) the number of _ _ wban__s is fixed to _ 10 _ which is smaller than the number of _ zigbee _ channels , which makes it possible for two or more distinct _ _ wban__s to not pick simultaneously the same channel and , 2 ) _ csim _ selects a stable channel based on the number of interfering _ _wban__s rather than the number of interfering sensors .however , the _ _ decreases significantly from 0.9 to 0.1 when because adding more sensors into _ _wban__s increases the probability of interference and consequently requires more channels to be assigned to those sensors ; consequently _ _ is reduced .furthermore , _ssa _ assigns channels to interfering sensors rather than to interfering _ _wban__s , which justifies the decrease of _ _ when grows . _ , the _ _ eventually stabilizes at 0.1 because of the maximal number of sensors in the interference set is attained by each _ wban_. ) , scaledwidth=30.0% ] ) , scaledwidth=30.0% ] * fig . 
[ plt4 ] * shows the average reuse factor , denoted by _ avgrf _ , versus the interference threshold , denoted by , for all _ _ wban__s . as seen in this figure , _ csim _ achieves a higher _ avgrf _ for all values .however , increasing the interference threshold puts more interfering sensors in the interference range of any specific _ wban _ than the corresponding __ wban__s of these sensors , i.e. , _ ssa _ requires more channels to be assigned to sensors than to _ _ wban__s in _the average energy consumption of the _ wban _ coordinator , denoted by _ avgec _, versus the interference threshold ( ) for _ csim _ with ( _ csim - w _ ) and without switching the _ ble _ transceiver _ on _ ( _ csim - wo _ ) are compared , and results are shown in * fig . [ plt5]*. as seen in the figure , _ csim - w _ always provides a lower _ avgec _ than _ csim - wo _ because of the earlier _ ble _ alerts of interference to the coordinator , i.e. , the coordinator scans the channels only upon receiving of these alerts . for _ csim - w _ , the _ avgec _ increases slightly as the interference threshold grows , which increases the number of interfering sensors , hence the frequency of _ ble _ alerts of interference increases , and consequently , the energy consumption increases due to the additional scanning . when exceeds -20 , the _ avgec _ increases very slightly and eventually stabilizes at ; this reflects the case where all channels are used by nearby _ iot _devices forcing the _ crd _ to engage the _ cr _ for finding a stable channel . for _csim - wo _ , the _ avgec _ increases significantly with all values of because of the continuous scanning of all channels all the time , i.e. , the coordinator periodically scans all the channels to find out which channels are not noisy .it is worth saying that the _ ble _ alerts reduces the frequency of channel scanning and hence saves the coordinator s energy .in this paper , we have presented _ csim _ , a distributed protocol to enable _ wban _ operation and interaction within an existing _ iot_. _ csim _ leverages the emerging _ ble _ technology to enable channel selection and allocation for interference mitigation . in addition , the superframe s active period is further extended to involve not only a _ tdma _ frame , but also a _fcs _ and _ fbtdma _ frames , for interference mitigation .we integrate a _ble _ transceiver and a _ cr _ within the _ wban _ s coordinator , where the role of the _ ble _ transceiver is to inform the _wban _ about the frequency channels that are being used in its vicinity .when experiencing high interference , the _ ble _ device notifies the _wban s crd _ to call the _ cr _ which determines a different channel for interfering sensors that will be used later within the _ fbtdma _ frame for interference mitigation .the simulation results show that _ csim _ outperforms sample competing schemes .16 ieee standard for local and metropolitan area networks - part 15.6 : wireless body area networks : ieee std 802.15.6 - 2012 arjun bakshi , lu chen , kannan srinivasan , can emre koksal , atilla eryilmaz : emit an efficient mac paradigm for the internet of things .infocom 2016 : 1 - 9 n. torabi , w. k. wong and v. c. m. leung : a robust coexistence scheme for ieee 802.15.4 wireless personal area networks .2011 ieee consumer communications and networking conference ( ccnc ) , las vegas , nv , 2011 , pp .1031 - 1035 .roni f. 
shigueta , mauro fonseca , aline carneiro viana , artur ziviani , anelise munaretto : a strategy for opportunistic cognitive channel allocation in wireless internet of things .wireless days 2014 : 1 - 3 ali , m.j . and moungla , h. and younis , m. and mehaoua , a. : distributed scheme for interference mitigation of wbans using predictable channel hopping .18th int . conf .on e - health networking , application & services ( healthcom ) : munich , germany .2016 yong xiao , zixiang xiong , dusit niyato , zhu han , luiz a. dasilva : full - duplex machine - to - machine communication for wireless - powered internet - of - things .icc 2016 : 1 - 6 dong chen , jamil y. khan , jason brown : an area packet scheduler to mitigate coexistence issues in a wpan / wlan based heterogeneous network .ict 2015 : 319 - 325 zhipeng wang , tianyu du , yong tang , dimitrios makrakis , hussein t. mouftah : ack with interference detection technique for zigbee network under wi - fi interference .bwcca 2013 : 128 - 135 fumihiro inoue , masahiro morikura , takayuki nishio , koji yamamoto , fusao nuno , takatoshi sugiyama : novel coexistence scheme between wireless sensor network and wireless lan for hems .smartgridcomm 2013 : 271 - 276 xinyu zhang , kang g. shin : cooperative carrier signaling : harmonizing coexisting wpan and wlan devices .ieee / acm trans .21(2 ) : 426 - 439 ( 2013 ) ieee standard for local and metropolitan area networks part 15.4 : low - rate wireless personal area networks ( lr - wpans ) , " in ieee std 802.15.4 - 2011 ( revision of ieee std 802.15.4 - 2006 ) , vol ., no . , pp.1 - 314 , sept .5 2011 ala i. al - fuqaha , mohsen guizani , mehdi mohammadi , mohammed aledhari , moussa ayyash : internet of things : a survey on enabling technologies , protocols , and applications .ieee communications surveys and tutorials 17(4 ) : 2347 - 2376 ( 2015 ) hassine moungla , kahina haddadi , saadi boudjit : distributed interference management in medical wireless sensor networks .ccnc 2016 : 151 - 155 , martial coulon enseeih : systemes de telecommunications .2007 - 2008 samaneh movassaghi , mehran abolhasan , david b. smith : smart spectrum allocation for interference mitigation in wireless body area networks .icc 2014 : 5688 - 5693 j. lindhh , s. kamath : measuring bluetooth low energy power consumption .application note an092 ; texas instruments : dallas , tx , usa , 2010
|
recent advances in microelectronics have enabled the realization of wireless body area networks (_wban_s). however, the massive growth in wireless devices and the push for interconnecting these devices to form an internet of things (_iot_) can be challenging for _wban_s; hence robust communication is necessary through careful medium access arbitration. in this paper, we propose a new protocol to enable _wban_ operation within an _iot_. basically, we leverage the emerging bluetooth low energy technology (_ble_) and promote the integration of a _ble_ transceiver and a cognitive radio module (_cr_) within the _wban_ coordinator. accordingly, the _ble_ informs _wban_s through announcements about the frequency channels that are being used in their vicinity. to mitigate interference, the superframe's active period is extended to involve not only a time division multiple access (_tdma_) frame, but also flexible channel selection (_fcs_) and flexible backup _tdma_ (_fbtdma_) frames. the _wban_ sensors that experience interference on the _default channel_ within the _tdma_ frame will eventually switch to another interference mitigation channel (_imc_). with the help of the _cr_, an _imc_ is selected for a _wban_ and each interfering sensor is allocated a time-slot within the _fbtdma_ frame to retransmit using such an _imc_.
|
from a series of seminal papers ( watts & strogatz , barabasi & albert , dorogovtsev & mendes , newman , see also for an overview ) since 1999 , small - world and scale - free networks have been a hot topic of investigation in a broad range of systems and disciplines .+ metabolic and other biological networks , collaboration networks , www , internet , etc . , have in common that the distribution of link degrees follows a power law , and thus has no inherent scale .such networks are termed ` scale - free networks ' .compared to random graphs , which have a poisson link distribution and thus a characteristic scale , they share a lot of different properties , especially a high clustering coefficient , and a short average path length .however , the question of _ complexity _ of a graph still is in its infancies . a ` blind ' application of other complexity measures ( as for binary sequences or computer programs ) does not account for the special properties shared by graphs and especially scale - free graphs as they appear in biological and social networks .mathematically , a graph ( or synonymously in this context , a network ) is defined by a ( nonempty ) set of nodes , a set of edges ( or links ) , and a map that assigns two nodes ( the `` end nodes '' of a link ) to each link . in a computer , a graph may be represented either by a list of links , represented by the pairs of nodes , or equivalently , by its adjacency matrix whose entries are 1 ( 0 ) if nodes are connected ( disconnected ) .useful generalizations are weighted graphs , where the restriction of is relaxed from binary values to ( unsually nonnegative ) integer or real values ( e.g. resistor values , travel distances , interaction coupling ) , and directed graphs , where no longer needs to be symmetric , and the link from to and the link from to can exist independently ( e.g. links between webpages , or scientific citations ) . in this chapterthe discussion will be kept limited to binary undirected graphs .in biological sciences , the evolution of life is studied in detail and at large ; and it is observed qualitatively that evolution creates , on average , organisms of increasing complexity . if one wants to quantify an increase of complexity , one has to define siutable complexity measures . in some sense , the number of cells may be an indicator , but quantifies rather body size than complexity . instead one may observe the number of organelles , the size of the metabolic network , the behavioural complexity of social organisms , or similar properties . to have a time series of the complexity distribution of all organisms during evolution on earth , would be highly interesting for the test of models of evolution , speciation and extinctions . but apart from such academic questions , there are many areas of practical use of complexity measures in biology and medicine , as the complexity of morphological structures , cell aggregates , metabolic or genetic networks , or neural connectivities .for text strings ( as computer programs , or dna ) there are common complexity measures in theoretical computer science , such as _ kolmogorov complexity _( and the related _ lempel - ziv complexity _ and _ algorithmic information content _aic ) .for example , aic is defined by the length of the shortest program generating the string . 
for random structures , thus also for random graphs , these measures indicate high complexity .a distinction of complex structured ( but still partly random ) structures from completely random ones usually is prohibitive for this class of measures .for this reason , measures of _ effective complexity _ have been discussed ; usually these are defined as an entropy ( or description length ) of `` a concise description of a set of the entity s regularities '' .here we are mainly interested in this second class , and straightforwardly one would try to apply existing measures , e.g. , to the link list or to the adjacency matrix . however , mathematically it is not straightforward to apply these text string based measures to graphs , as there is no unique way to map a graph onto a text string .thus one desires to use complexity measures that are defined directly for graphs .two classical measures are known from graph theory ; _ graph thickness _ and _ coloring number _ have a low `` resolution '' and their relevance for real networks is not clear .two new complexity measures recently have been proposed for graphs , _ medium articulation _ for weighted graphs ( as they appear in foodwebs ) and a measure for directed graphs by meyer - ortmanns based on the _ network motif _concept ) .unfortunately , the latter two complexity measures are computationally quite costly .a computational complexity approach has been defined by machta and machta as _ computational depth _ of an _ ensemble of graphs _ ( e.g. small - world , scalefree , lattice ) .it is defined as the number of processing time steps a large parallel computer ( with an unlimited number of processors ) would need to generate a _ representative _ member of that graph ensemble .unlike other approaches , it does not assign single complexity values to each graph , and again is nontrivial to compute .table [ tablecomplex ] gives a qualitative assessment of the behaviour of some of the mentioned complexity measures for lattices in 2d and 3d , complex and random structures .note that especially the ability to distinguish nonrandom complex structures from pure randomness differs between the approaches .hence , a _ simpler estimator _ of graph complexity is desired , and one possible approach , the offdiagonal complexity , is proposed here . a striking observation of the node - node link correlation matrices of complex networks is , that entries are more evenly spread among the offdiagonals , compared to both regular lattices and random graphs .this can now be used to a complexity measure , for undirected graphs .this chapter is organized as follows . in sec .[ sec_odcdef ] odc is defined and illustrated with an example .sections [ sec_helico ] and [ sec_celegans ] investigate the application of odc to two quite different biological problems : a protein interaction network , compared with randomized surrogates , and a temporal sequence of spatial cell adjacency during early _ caenorhabditis elegans _ development , quantifying the temporal increase of complexity ..qualitative assessment of various complexity measures .[ tablecomplex ] [ cols="<,^,^,^",options="header " , ] + the vector of diagonal sums is + ( 5,15,16,2,2,1,0,0 ) .+ resulting entropy : + the random reshuffling lowers the odc entropy away from + .to demonstrate that odc can distinguish between random graphs and complex networks , the helicobacter pylori protein interaction graph has been chosen . 
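before turning to the rewiring surrogates, a minimal sketch of how odc can be computed from an adjacency matrix may help fix the definition; the degree binning and the normalisation used below are assumptions where the text leaves them implicit, so absolute values need not coincide with the figures quoted later.

```python
import numpy as np
import networkx as nx

def offdiagonal_complexity(adj):
    """Offdiagonal complexity (ODC), sketched as the entropy of the offdiagonal
    sums of the node-node degree correlation matrix of an undirected binary graph."""
    adj = np.asarray(adj)
    deg = adj.sum(axis=0).astype(int)
    kmax = int(deg.max())
    c = np.zeros((kmax + 1, kmax + 1))      # c[k, l]: links whose end nodes have degrees k <= l
    rows, cols = np.triu_indices_from(adj, k=1)
    for i, j in zip(rows, cols):
        if adj[i, j]:
            k, l = sorted((deg[i], deg[j]))
            c[k, l] += 1.0
    a = np.array([np.trace(c, offset=m) for m in range(kmax + 1)])   # offdiagonal sums
    p = a / a.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# toy comparison: a random (erdos-renyi) graph versus a scale-free (barabasi-albert) graph
g_rand = nx.gnm_random_graph(200, 400, seed=1)
g_sf = nx.barabasi_albert_graph(200, 2, seed=1)
for name, g in (("random", g_rand), ("scale-free", g_sf)):
    print(name, offdiagonal_complexity(nx.to_numpy_array(g)))
```

the rewiring experiment described next simply recomputes this quantity after a fraction of the links has been randomly reshuffled.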
for different rewiring probabilities and realizations each , the links have been reshuffled , ending up with a random graph for . as can be seen in fig .[ helico ] , rewiring in any case lowers the offdiagonal complexity .the tiny ( 1 mm ) nematode worm _ caenorhabditis elegans _ looks like a quite primitive organism , but nevertheless has a nervous system , muscles , thus shares functional organs with higher - developed animals .more important , it shows a morphogenetic process from a single - cell egg thorugh morphogenesis to an adult worm . towards an understanding of the genetic mechanisms of the cell division cycle in general , _c.elegans _ has become one of the genetically best studied animals . despite that , little is known ( in the sense of a dynamical model ) how the cell divison and spatial reorganization takes place .not even the spatial organization of cells during morphogenesis is well described .the earliest embryo development states of caenorhabditis elegans have been recorded experimentally and described quantitatively recently .the cell division development have been described in simplicial spaces , and the cell division operations are described by operators in finite linear spaces . the premorphogenetic phase of development runs until the embryo reaches a state of about 385 cells .the detailed division times and spatial cell movement trajectories follow with high precision a mechanism prescribed in the genetic program .while many of the genetic mechanisms are known especially for _ c.elegans_ , we are a long way towards a mathematical modelling of the cell divison and spatial organization directly from the genome . thus it is still desired to develop mathematical models for this spatiotemporal process , and to compare it with quantitative experimental data .+ with good reliability the cell adjacency is known experimentally in a number of intermediate steps , which in the remainder we called cell states . herewe focus on the adjacency matrices of the cells describing each intermediate state between cell divisions and cell migrations , and investigate the complexity of neighborhood relations .+ the result for 28 state matrices are shown in fig .the dashed line shows the supremum value a graph of the same size could reach , despite the fact that due to combinatorical reasons this supremum is not necessarily always reached .+ the moderate decay in the last two states may be due to the fact that ( at least for poisson - like link distributions ) the summation implies some self - averaging if one wants to compare networks of different size .one way to avoid this problem is to define the complexity measure from all entries , this can be called full offdiagonal complexity , as the full set of matrix entries is taken into account .the result for fodc is shown in fig .[ fig1 ] ., title="fig : " ] , title="fig : " ] as expected , the complexity of the spatial cell structure increases along the first premorphogenetic phase . 
compared to the maximal possible complexity that could be reached by a graph of same number of node degrees ( but not for a three - dimensional cell complex ) the complexity , as measured by odc , saturates .this has a straightforward explanation : the limiting case of a large homogeneous cell agglomerate would end up with roughly two classes of cells ( at surface and within bulk ) and thus three classes of neighborhood pairs : bulk - bulk , bulk - surface and surface - surface ( see fig .[ fig_spatial2d ] ) .as the coordination numbers within bulk and surface fluctuate , this effectively delimits the growth of possible different neighborhood geometries . after initial growth , fodc resolves fluctuations corresponding to the effect of alternating cell division and spatial reorganization ., title="fig : " ] , title="fig : " ] , title="fig : " ]a new complexity measure for graphs and networks has been proposed .contrary to other approaches , it can be applied to undirected binary graphs .the motivation of its definition is twofold : one observation is that the binning of link distributions is problematic for small networks .herefrom the second observation is that if one uses instead of the ( plain ) entropy of link distribution , which is unsignificant for scale - free networks , a `` biased link entropy '' , it has an extremum where the exponent of the power law is met .+ the central idea of odc is to apply an entropy measure to the link correlation matrix , after summation over the offdiagonals .this allows for a quantitative , yet still approximative , measure of complexity .odc roughly is ` hierarchy sensitive ' and has the main advantage of being computationally not costly .j.c.c . thanks christian starzynski for the simulation code for fig .[ helico ] , and a. krmer for kindly providing the experimental data of the cell adjacency matrices .
|
many complex biological, social, and economic networks show topologies drastically differing from random graphs. but what is a complex network, i.e., how can one quantify the complexity of a graph? here the offdiagonal complexity (odc), a new and computationally cheap measure of complexity, is defined, based on the node-node link cross-distribution, whose nondiagonal elements characterize the graph structure beyond link distribution, cluster coefficient and average path length. the odc approach is applied to the _helicobacter pylori_ protein interaction network and randomly rewired surrogates thereof. in addition, odc is used to characterize the spatial complexity of cell aggregates. we investigate the earliest embryo development states of _caenorhabditis elegans_. the development states of the premorphogenetic phase are represented by symmetric binary-valued cell connection matrices with dimension growing from 4 to 385. these matrices can be interpreted as the adjacency matrix of an undirected graph, or network. the odc approach allows a quantitative description of the complexity of the cell aggregate geometry.
|
one of the big problems facing the search for a unification of quantum theory and gravitation is the almost complete absence of experimental data to be used as guidance by theorists .in fact there is up to this date no single man made experiment in which the realms of quantum mechanics and gravitation intersect , i.e. are simultaneously needed to account for the results .the famous cow experiments and the recent cold neutron " experiments that are some times exhibited as examples of tests of the interface of gravity and quantum mechanics , can not be taken as such when viewed from within the relativistic paradigm , as they can be fully accounted for in terms of physics within a single inertial reference frame , and thus einsteinian gravity can not be said to be playing any role ( for a more detailed discussion of this point see for instance ) .there exists however * one single situation * offered to us by nature , which satisfies the two criteria of , being observationally accessible and requiring , for a complete understanding , both general relativity and quantum physics .that is : the origin of the seeds of cosmic structure . among the most important achievements in observational cosmology we have the precision measurements of the anisotropies in the cmb .these together with an extensive set of observational studies of large scale matter distribution , led to a very satisfactory picture of the evolution of structure our universe based on a detailed understanding of the physics behind it .in fact it is nowadays widely accepted that the origin of structure in of our universe has its natural explanation within the context of the inflationary scenarios : inflation takes relatively arbitrary initial conditions presumably emerging from a planck era and leads to an _ almost _ de - sitter phase of accelerated expansion that essentially erases all memories of the initial conditions . at this point inflationhas lead to a featureless universe , which seems to lack even a small degree of inhomogeneity and anisotropy that is necessary to lead to the subsequent structure formation .at this point quantum mechanics is thought to provide this essential ingredient : the quantum fluctuations of the inflaton field , which has been put by inflation in its ground state " . 
these are thought to provide the seeds of the anisotropies and inhomogeneities that eventually evolve into the structure we first see in the cmb and eventually into the obvious features of our universe such as galaxy clusters , galaxies , stars , etc .the remarkable fact is that the calculations based on the above scheme seem to lead naturally to the correct spectrum of these primordial fluctuations .there is however a serious hole in this seemingly blemish - less picture : the description of our universe or the relevant part thereof- starts with an initial condition which is homogeneous and isotropic both in the background space - time and in the quantum state that is supposed to describe the fluctuations " , and it is quite easy to see that the subsequent evolution through dynamics that do not break these symmetries can only lead to an equally homogeneous and anisotropic universe .in fact many arguments have been put forward in order to deal with this issue , that is often phrased in terms of the quantum to classical transition without focusing on the required concomitant breakdown of homogeneity and isotropy in the state the most popular ones associated with the notion of decoherence .these the alternatives have been critically discussed in .one of the main obstacles is that in order to justify any explanation based on decoherence , one has to argue that certain degrees of freedom must be traced over because they are unobservable , and this in turn can only be justified by relying on the limitations we humans currently have , in making certain measurements .the problem is that in so doing we would be using our existence as input , but what cosmology is all about is understanding the evolution of the universe and its structure including the emergence of the conditions that make humans possible .in other words , in order to understand the emergence of an inhabitable universe , we would be relying on the existence and limitations of such inhabitants .therefore any explanation based purely on standard decoherence becomes circular by definition .there are further problems in each of the specific proposals , and we direct the reader to the above references for extended discussion of this issues .there are other cosmologists that have acknowledged that there is problem here , and that quantum mechanics as we know it needs modifications to be applicable to the cosmology , with one of them explicitly stating that decoherence does not offer a complete and satisfactory resolution to this problem .moreover , if we were to think in terms of first principles , we would start by acknowledging that the correct description of the problem at hand would involve a full theory of quantum gravity coupled to a theory of all the matter quantum fields , and that there , the issue would be whether we start with a quantum state that is homogeneous and isotropic or not ? .even if these notions do not make sense within that level of description , a fair question is whether or not , the inhomogeneities and anisotropies we are interested on , can be traced to aspects of the description that have no contra - part in the approximation we are using .recall that such description involves the separation of background _ vs. 
_ fluctuations and thus must be viewed only as an approximation , that allows us to separate the nonlinearities in the system as well as those aspects that are inherent to quantum gravity from the linear part of problem represented by the fluctuations , which are treated in terms of linear quantum field theory . in this sense, we might be tempted to ignore the problem and view it as something inherent to such approximation .this would be fine , but we should recognize then that we could not argue that we understand the origin of the cmb spectrum , if we view the asymmetries it embodies as arising from some aspect of the theory we do not know , rely on , or touch upon .in fact , in the treatment which we describe next , the proposal is to bring up one particular element or aspect , that we view as part of the quantum gravity realm , to the forefront of the treatment , in order to modify in a minimalistic way the semiclassical treatment , that , as we said , we find lacking , and provide a setting in which the obscure issues can at least be focus on .it is of course not at all clear that the problem we are discussing should be related to quantum gravity , but the later is the only sphere which is now believed capable of leading to a radical change in the paradigm of fundamental physics which is of course what we are considering here .the approach taken in is influenced by penrose s suggestion that quantum gravity might play a role in triggering a real dynamical collapse of the wave function of systems .his proposals would have a system collapsing whenever the gravitational interaction energy between two alternative realizations that appear as superposed in a wave function of a system reaches a threshold naturally identified with .we will show that these ideas can , in principle , be investigated in the present context , and that they could lead to observable effects .in fact the very early universe can be seen as a case for which there exists already a wealth of empirical information and one which , as we have argued can not be fully understood without involving some new physics , with features that would seem to be quite close to those of penrose s proposals .there are of course alternatives settings in which modifications of quantum theory , in principle unrelated to quantum gravity , could play the role of the new physics that we have argued is needed in order to account for the seeds of cosmic structure , but we will limit ourselves here to a rather generic setting motivated by the first set of ideas .before we present the treatment we are proposing , which should be consider as being of phenomenological nature it is worthwhile to see how would it fit within the context of a fundamental theory such as a quantum theory of gravity .the first thing one should note is that the notions of space - time are likely to change dramatically when considered in a fully quantum theory of gravitation . 
a fundamental theory of quantum gravity ( with or without matter )is naturally expected to be a timeless theory , but general relativity is certainly not .in fact we have a good example of this arising in the past and current attempts to apply the canonical quantization procedure to the theory of general relativity : in all such schemes one ends with a timeless theory in which the wave functionals ] .the idea is then that upon the identification of as a physical clock variable , one would be able to talk about the probability that the space - time metric and its conjugate variable take such and such value when the clock takes a given value , and from such information one would presumably be able to estimate the most likely space - time , correlations and so forth .examples of application of such approach can be seen in .these effective descriptions can be expected quite generally to incorporate some degree of breakdown in unitary evolution .our point here is that the recovering of the usual space - time picture is expected , to be a complex procedure even if we have a complete theory of quantum gravity in interaction with all matter fields .it should be said that in the lqg program , even the simpler spatial notions as distance volumes and to a lesser degree areas , turn out to be also a rather involved procedure .the recovery of the standard evolution of physical degrees of freedom in space - time can be expected to involve an even more cumbersome procedure including suitable approximations and averaging .it is thus not unnatural to consider that those might include what we would call jumps " , and in general the sort of general behavior that would look as an effective collapse of the wave - function " as seen from the stand point of the effective description .needless is to say that we have at this time no hope of being able to describe the above procedure in any detail , among other reasons because we do not have at the moment a fully satisfactory and workable theory of quantum gravity . 
on the other hand the effective description is expected to lead , in the appropriate limit , to general relativity as the description of space - time , and , in the corresponding appropriate limit , to quantum field theory for the description of matter fields . we assume that the situation of interest ( the inflationary regime of the early universe ) lies in a region where both these descriptions are approximately valid , but where some modifications , tied to the fact that this picture is only an effective one , need to be incorporated in a seemingly _ ad hoc _ manner . of course we would be at a complete loss if we did not have any other guidance as to the nature of these modifications , but here is where we look in the opposite direction and guide ourselves by some empirical facts : the inflationary account of the origin of the seeds of cosmic structure works `` almost well '' and its defects might be dealt with by the introduction of one such additional feature to the picture . what is required is the assumption of the existence of a process that can take a symmetric ( i.e. , homogeneous and isotropic ) state of a closed system ( the universe ) into a state with small departures from a symmetric state , while the standard evolution would have preserved those symmetries . as we will see , a quantum mechanical collapse of a wave function seems to have the required characteristics , except that it is usually assumed to be associated with the interaction of the quantum mechanical system with an external classical `` apparatus '' or `` observer '' . it is clear that in the situation at hand we can not call upon any such feature , so we will assume that the feature in question appears as a self induced collapse , along the lines that have been suggested , based on quite different arguments , as a likely feature of quantum gravity by r. penrose . these observations and ideas lead us to consider situations where a quantum treatment of other fields would be appropriate but an effective classical treatment of gravitation would be justified . that is the realm of semi - classical gravity , which we will assume to be valid in our context except at those instants where it would break down in association with the jumps or collapses of the state of the quantum field , which we consider to be part of the effective description of the underlying fundamental quantum theory containing gravitation . in accordance with the ideas above we will use a semi - classical description of gravitation in interaction with quantum fields , as reflected in the semi - classical einstein equation , whereas the other fields are treated in the standard quantum field theory ( in curved space - time ) fashion . as indicated , this could not hold when a quantum gravity induced collapse of the wave function occurs , at which time the excitation of the fundamental quantum gravitational degrees of freedom must be taken into account , with the corresponding breakdown of the semi - classical approximation . the possible breakdown of the semi - classical approximation is formally represented by the presence of a term in the semi - classical einstein equation which is supposed to become nonzero * only * during the collapse of the quantum mechanical wave function of the matter fields . thus we write [ semiceq ] $ r_{ab } - \frac{1}{2 } g_{ab } r + q_{ab } = 8 \pi g \langle \hat t_{ab } \rangle $ . we thus consider the development of the state of the universe , during the time at which the seeds of structure emerge , to be initially described by a homogeneous and isotropic ( h. & i. ) state for the gravitational and matter d.o.f .
at some time the quantum state of the matter fields reaches a stage whereby the corresponding state for the gravitational d.o.f . is forbidden , and a quantum collapse of the matter field wave function is triggered . this new state of the matter fields no longer needs to share the symmetries of the initial state , and through its connection to the gravitational d.o.f . , now accurately described by einstein s semi - classical equation , it leads to a geometry that is no longer homogeneous and isotropic . that is the approach taken in the work presented here , where the intent will be to emphasize the phenomenological potential and the lessons that can be extracted from it , in order to make it clear that the proposal can be viewed as a viable path to investigate aspects of the physical world that have long seemed absolutely beyond reach . next we give a short description of the analysis of the origin of the primordial cosmological inhomogeneities and anisotropies based on the ideas outlined above . the starting point is , as usual , the action of a scalar field coupled to gravity , [ eq_action ] $ s = \int d^4x \sqrt{-g } \left [ \frac{1}{16 \pi g } r[g ] - \frac{1}{2 } \nabla_a \phi \nabla_b \phi \ , g^{ab } - v ( \phi ) \right ] $ , where $ \phi $ stands for the inflaton and $ v $ for its potential . one then splits both metric and scalar field into a spatially homogeneous `` background '' part and an inhomogeneous `` fluctuation '' part , i.e. the scalar field is written $ \phi = \phi_0 ( \eta ) + \delta\phi ( x , \eta ) $ , while the perturbed metric can ( after appropriate gauge fixing and by focusing on the scalar perturbation ) be written $ ds^2 = a ( \eta )^2 \left [ - ( 1 + 2 \psi ) d\eta^2 + ( 1 - 2 \psi ) \delta_{ij } dx^i dx^j \right ] $ , where $ \psi $ is the relevant perturbation , called the `` newtonian potential '' . the background solution corresponds to the standard inflationary cosmology : during the inflationary era the scale factor is that of a quasi - de sitter expansion , with the scalar field in the slow roll regime . the perturbation of the scalar field leads to a perturbation of the energy momentum tensor , and thus einstein s equations at lowest order lead to a constraint relating the newtonian potential to the perturbation of the scalar field . we must now write down the quantum theory of the ( rescaled ) field . for definiteness we consider the field in a box of side $ l $ , and write the field and momentum operators as sums over modes , where the sum is over the wave vectors satisfying $ k_i l = 2 \pi n_i $ with $ n_i $ integers ; then we write the fields in terms of the annihilation and creation operators of each mode . given that we are interested in considering a kind of self induced collapse which operates in close analogy with a `` measurement '' , which normally involves self adjoint operators , we find it convenient to work with the real and imaginary components of the field and momentum modes , and thus we write them in terms of operators that are hermitian . let us consider an arbitrary state in the fock space of the field and characterize it by the expectation values of the real and imaginary parts of the field and momentum modes ; for the vacuum state these expectation values of course vanish , while the corresponding uncertainties are nonzero . * the collapse : * next we provide a simple specification of what we mean by `` the collapse of the wave function '' , by stating the form of the collapsed state in terms of its collapse time . we assume the collapse to be analogous to some sort of imprecise measurement of the field and momentum operators .
in order to describe the state after the collapse we must specify it explicitly . this is done by making the following assumption about the state after collapse : [ schemme1 ] $ \langle \hat y^{r , i}_{k } ( \eta^c_k ) \rangle = x^{r , i}_{k,1 } \ , | y_k ( \eta^c_k ) | $ , $ \langle \hat \pi^{( y ) r , i}_{k } ( \eta^c_k ) \rangle = x^{r , i}_{k,2 } \ , | g_k ( \eta^c_k ) | $ , where the $ x^{r , i}_{k,1,2 } $ are selected randomly from within a gaussian distribution centered at zero with spread one , and the magnitudes $ | y_k | $ and $ | g_k | $ are fixed by the field and momentum uncertainties of the pre - collapse state . we note that our universe corresponds to a single realization of these random variables , and thus each of these quantities has a single specific value . later , we will see how to make relatively specific predictions despite these features . the connection to the gravitational sector is made at the semi - classical level , so eq . ( [ main3 ] ) now involves the expectation value of the matter field perturbation in the post - collapse state ; we note that before the collapse the expectation value on the right hand side is zero . next we determine what happens after the collapse : to this end , we need to solve the equations for the field modes , substitute the result in the fourier transform of eq . ( [ main4 ] ) , and obtain the newtonian potential in terms of the random variables and of a function that encodes the dependence on the time of collapse of each mode . turning to the observational quantities , we recall that the quantity that is measured is the temperature variation as a function of the coordinates on the celestial two - sphere , which is expressed in terms of its spherical harmonic coefficients . the angular variations of the temperature are then identified with the corresponding variations in the `` newtonian potential '' , by the understanding that they are the result of gravitational red - shift in the cmb photon frequency , so that the temperature anisotropy is proportional to the newtonian potential on the last scattering surface ( we are ignoring , for simplicity , the complications of the late time physics such as reheating or acoustic oscillations ) . thus , the quantity of interest is the `` newtonian potential '' on the surface of last scattering , from which one extracts the harmonic coefficients . to evaluate the expected value for the quantity of interest we use ( [ psi ] ) and ( [ f ] ) , and after some algebra we obtain an expression for these coefficients as a sum over modes , each term depending on the direction of the corresponding wave vector . it is in this expression that the justification for the use of statistics becomes clear . the quantity we want to evaluate is the result of the combined contributions of an ensemble of collapsing harmonic oscillators , each one contributing a complex number to the sum , leading to what is in effect a bi - dimensional random walk whose total displacement corresponds to the observational quantity . we can not of course evaluate such total displacement , but only its most likely value ; we do so , and then take the continuum limit which , after an appropriate rescaling of the variable of integration , becomes an integral over modes weighted by the function encoding the times of collapse . in the exponential expansion regime , and in the limit in which that function becomes constant , we recover the standard scale invariant functional form . however , we must consider the effect of the finite value of the times of collapse , codified in the function $ c ( z_k ) $ . we note that in order to get a reasonable spectrum there is a single simple option : that the combination $ z_k \equiv \eta^c_k k $ be essentially independent of $ k $ , that is , the time of collapse of the different modes should depend on the mode s frequency according to $ \eta^c_k \propto 1 / k $ . there are of course other possible schemes of collapse , and we have investigated the most natural ones and their corresponding effects on the primordial fluctuation spectrum , with results that to a large extent confirm that the above conclusion is rather robust ( see figure [ fig : c1_log ] , section [ sec : furth - phen - analys ] and for a deeper discussion ) . thus we can conclude that the above pattern of times of collapse seems to be implied by the data ( as far as our preliminary analysis has shown so far ) . in our view such a conclusion represents one important and relevant piece of information about whatever the mechanism of collapse is .
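the random - walk argument above is easy to check numerically . the following sketch ( python ) draws the gaussian random variables for a set of modes , builds the corresponding bi - dimensional random - walk sum , and compares the magnitude of the total displacement with its most likely ( ensemble averaged ) value , which is the quantity used in the estimate of the spectrum ; the per - mode weights used here are a purely illustrative stand - in , not the actual function derived in the text .

import numpy as np

rng = np.random.default_rng(0)

n_modes = 1000          # number of collapsing modes contributing to one multipole (illustrative)
n_universes = 2000      # number of independent realizations ("universes") for the ensemble

# illustrative weights c_k for the contribution of each mode; the true weights
# involve the collapse function C and the transfer to the newtonian potential (see text)
k = np.arange(1, n_modes + 1)
c_k = 1.0 / k           # stand-in weighting, NOT the expression derived in the paper

# each mode contributes c_k * (x_1 + i x_2) with x_1, x_2 ~ N(0,1): one 2d random-walk step
x1 = rng.normal(size=(n_universes, n_modes))
x2 = rng.normal(size=(n_universes, n_modes))
walk = np.sum(c_k * (x1 + 1j * x2), axis=1)   # total displacement for each realization

# the observable in a single universe is |walk|^2; its "most likely" (ensemble) value
# is what the analytic treatment computes
mean_sq = np.mean(np.abs(walk) ** 2)
expected = 2.0 * np.sum(c_k ** 2)             # analytic ensemble average of |walk|^2

print(f"ensemble average of |displacement|^2 : {mean_sq:.3f}")
print(f"analytic value 2*sum(c_k^2)          : {expected:.3f}")
print(f"relative spread across realizations  : {np.std(np.abs(walk)**2)/mean_sq:.3f}")

as expected , individual realizations fluctuate around the ensemble value with a relative spread of order one , which is precisely why only the most likely value , and not the displacement corresponding to our particular realization , can be computed from the theory .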
based on the analysis of the inadequacies of quantum mechanics as a complete theory of nature , and of the places from where solutions can arise , r. penrose has argued that the collapse of quantum mechanical wave functions is an actual dynamical process , independent of observation , and that the fundamental physics involved is related to quantum gravity . more precisely , according to this suggestion , the collapse into one of several coexisting quantum mechanical alternatives would take place when the gravitational interaction energy between the alternatives exceeds a certain threshold . we have considered a naive realization of penrose s ideas appropriate for the present setting to be as follows : each mode would collapse by the action of the gravitational interaction between its own possible realizations . in our case , one could estimate the interaction energy by considering two representatives of the possible collapsed states on opposite sides of the gaussian associated with the vacuum . we interpret the perturbation literally as the newtonian potential and , consequently , its source should be identified with the matter density . then the gravitational interaction energy between alternatives should be [ ge1 ] $ e_i ( \eta ) = \int \psi^{(1 ) } \rho^{(2 ) } \ , dv $ , which , expanded in the modes of the box , becomes a sum over $ k $ of products $ \psi^{(1)}_k ( \eta ) \ , \rho^{(2)}_k ( \eta ) $ , where the labels ( 1 ) and ( 2 ) refer to the two different realizations chosen . recalling the relation between the newtonian potential and the field modes , and using equation ( [ momentito ] ) to estimate the momentum uncertainty , one finds that each mode contributes a term of order $ ( g / a k ) $ times the square of the corresponding momentum uncertainty ; the collapse of the mode would occur when this energy reaches the value of the planck mass . thus the condition determining the time of collapse of the mode becomes $ z_k = \eta^c_k k = z^c $ , with $ z^c $ built only out of background quantities ( the slow roll velocity of the inflaton , $ h_i $ and $ m_p $ ) and therefore independent of $ k $ ; as we saw in the previous section , this leads to a roughly scale invariant spectrum of fluctuations , in accordance with observations . the scheme of collapse we have considered in section [ sec_main ] , which will be referred to as the `` symmetric scheme '' or scheme no . 1 , is evidently far from unique , and other similarly natural schemes can be considered ; here we will briefly discuss two alternatives : the `` momentum preferred scheme '' or scheme no . 2 , and the `` wigner functional scheme '' or scheme no . 3 . the first corresponds to the assumption that it is only the mean value of the conjugate momentum that changes during the collapse , according to equation ( [ schemme1 ] ) , while the field s expectation value maintains its initial value , namely zero . scheme no . 3 corresponds to the assumption that after the collapse the expectation values of the field and momentum modes follow the correlations in the corresponding uncertainties that existed in the pre - collapse state , namely along the major semi - axis of the ellipse characterizing the bi - dimensional gaussian wigner function ( the ellipse corresponds to the boundary of the region in `` phase space '' where the wigner function has a magnitude larger than a given fraction of its maximum value ) , the relevant parameter being the angle between that axis and the field axis . the subsequent analysis proceeds in the same fashion as that presented in section [ sec_main ] , and the result has the same mathematical expression as in equation ( [ alm5 ] ) , with the sole exception being the exact expression of the function $ c ( k ) $ , which takes a different , and more involved , explicit form for scheme no . 2 and for scheme no . 3 .
it was shown in previous work that , despite the fact that the expression for $ c ( k ) $ in the wigner scheme looks by far more complicated than that of the other schemes , their dependence on $ k $ is very similar , except for the amplitude of the oscillations . these in turn lead to particular forms of the primordial spectrum ( i.e. , the spectrum which emerges from inflation and has not yet been modified to include the late time physics , such as the acoustic oscillations responsible for the famous peaks ) . at the approximation level we are working with here , the spectra would all be identical to the standard scale invariant harrison - zeldovich ( hz ) spectrum , corresponding to a flat line in the corresponding graph , if we assume that $ z_k $ is independent of $ k $ ( corresponding to a time of collapse of mode $ k $ given by $ \eta^c_k \propto 1 / k $ ) . therefore , at this level the different collapse schemes are not distinguishable . we were thus led to consider the sensitivity of the resulting spectrum to small deviations from the `` $ k $ - independent '' pattern , by studying a linear departure $ z_k = a + b k $ ( with the scale of $ b $ naturally expressed in terms of the radius of the surface of last scattering ) , in order to examine the robustness of the various collapse schemes as far as predicting the observational spectrum is concerned . some of the results of these analyses can be seen in the graphs [ fig : c2_log ] and [ fig : c_wigner_log ] , in which we see that each collapse scheme leads to a particular pattern of modifications of the spectrum , clearly showing the potential of the present approach to teach us something about the effective collapse , or alternatively to account for possible deviations if anything of this sort were to be detected in future observations . one of the most important predictions of the scheme is the absence of tensor modes , or at least their very strong suppression . this can be understood by considering the semi - classical version of einstein s equation and its role in describing the manner in which the inhomogeneities and anisotropies in the metric arise in our scheme . as indicated in the introduction , the metric is taken to be an effective description of the gravitational d.o.f . in the classical regime , and not as fundamental d.o.f . susceptible of being described at the quantum level . it is thus the matter degrees of freedom ( which in the present context are represented by the inflaton field ) that are described quantum mechanically and which , as a result of a hypothetical fundamental aspect of gravitation at the quantum level , would be subject to an effective quantum collapse ( the reader should recall that our point of view is that gravitation at the quantum level will be drastically different from standard quantum theories , and that , in particular , it will not involve universal unitary evolution ) . this leads to a nontrivial value for the expectation value of the perturbed energy momentum tensor , which in turn leads to the appearance of the metric fluctuations . the point is that the energy momentum tensor contains linear and quadratic terms in the expectation values of the quantum matter field fluctuations , which are the source terms determining the geometric perturbations . in the case of the scalar perturbations , there are first order contributions to the perturbed energy momentum tensor , which are linear in the expectation values of the field fluctuations , while there are no similar first order terms that would appear as a source of the tensor perturbations ( i.e. , of the gravitational waves ) . in the usual treatment , and besides its conceptual shortcomings , no such natural suppression of the tensor modes can be envisaged .
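the kind of sensitivity analysis described above ( scanning a linear departure of the collapse times from the $ k $ - independent pattern ) is easy to reproduce schematically . in the sketch below ( python ) the function c is a generic oscillatory function of $ z_k $ of the type appearing in the collapse schemes — a purely illustrative stand - in , not one of the exact expressions — and the collapse times deviate linearly from the $ k $ - independent pattern , $ z_k = a + b k $ ; the output shows how any nonzero $ b $ turns the otherwise flat ( harrison - zeldovich like ) combination into a $ k $ - dependent one .

import numpy as np

def c_toy(z):
    # illustrative stand-in for a collapse-scheme function C(z_k);
    # the exact expressions (schemes 1-3) differ, but share this oscillatory structure
    return 1.0 + np.sin(z) / z + (1.0 - np.cos(z)) / z**2

k = np.logspace(-3, 0, 7)      # comoving wavenumbers, arbitrary units
a = 1.0                        # k-independent part of z_k (illustrative)

for b in (0.0, 100.0, 1000.0): # slope of the linear departure z_k = a + b*k (illustrative)
    z_k = a + b * k
    spectrum_shape = c_toy(z_k)    # the primordial spectrum is modulated by C(z_k)
    print(f"b = {b:7.1f} :", np.array2string(spectrum_shape, precision=3))

for $ b = 0 $ the printed shape is the same for every $ k $ ( a flat spectrum ) , while increasing $ b $ imprints a $ k $ - dependent modulation ; with the actual scheme - dependent functions the same scan is what produces the distinguishable patterns mentioned above .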
at the time of the writing of this article , the tensor modes had not been detected , in contrast with the scalar modes . we have presented the first steps in the proposal involving the introduction of a novel aspect of physics ( a self induced collapse of the wave function ) in the description of the emergence of the seeds of structure in our universe from the quantum uncertainties in the state of the inflaton field . we have argued that such a novel aspect is likely to be associated with the connection between a fundamental theory of quantum gravity and the effective description in terms of the equations of semi - classical general relativity . we find it quite remarkable that in doing so we are able to obtain a relatively satisfactory picture . we do not know what exactly the physics of collapse is , but we were nevertheless able to obtain some constraints on it ( about the time of collapse of the different modes ) , and have shown that a simplistic extrapolation of penrose s ideas satisfies this constraint . we have not investigated the possible connection of our proposal with other more developed schemes involving similar non unitary modifications of quantum theory , such as the various schemes considered by colleagues participating in this meeting and others . the reason for that is that we found it better to try to extract some information about what would be needed for the scheme to work in the cosmological case on which we have centered our interest , and we hope to be able to explore the connections of our proposal with such schemes and the compatibility of their implications with the conclusions extracted in this initial analysis . we have reviewed the serious shortcomings of the inflationary account of the origin of cosmic structure , and have given a brief account of the proposals to deal with them , which were first reported in . these lines of inquiry have led to the recognition that something else seems to be needed for the whole picture to work , and that it could be pointing towards an actual manifestation of quantum gravity . we have shown not only that the issues are susceptible of scientific investigation based on observations , but also that a simple account of what is needed seems to be provided by the extrapolation of penrose s ideas to the cosmological setting . interestingly , the scheme does in fact lead to some deviations from the standard picture where the metric and scalar field perturbations are quantized . for instance , as discussed in the last section , one is led to expect no excitation of the tensor modes , something that we can expect to be able to confront with relatively precise data in the near future . we also find new avenues to address the fine tuning problem that affects most inflationary models , because one can follow in more detail the objects that give rise to the anisotropies and inhomogeneities , and because one has the possibility to consider independently the issues relative to the formation of the perturbations and their evolution through the reheating era ( for a more extended discussion of this point see ) . other aspects that can , in principle , be tested were discussed in the last section . the noteworthy fact is that what initially could have been thought to be an essentially philosophical problem leads instead to truly physical issues . our main point is , however , that in our search for physical manifestations of new physics tied to quantum aspects of gravitation , we might have been ignoring what could be the most dramatic such occurrence : the cosmic structure of the
universe itself .it is a pleasure to acknowledge very helpful conversations with j. garriga , e. verdaguer and a. perez .this work was supported in part by dgapa - unam in108103 grant .xx lange a e _ et ._ 2001 _ phys . rev . _ * d63 * , 042001 ; hinshaw g _ et . al ._ 2003 _ astrophys .j. supp . _* 148 * , 135 ; gorski k m _ et ._ 1996 _ astrophys .j. _ * 464 * , l11 ; bennett c l _ et ._ 2003 _ astrophys .j. suppl . _ * 148 * , 1 .rovelli c 1998 _ living rev .rel . _ * 1 * , 1 ( _ preprint _ gr - qc/9710008 ) ; ashtekar a 2001 quantum geometry and gravity : recent advance _ preprint _ gr - qc/0112038 ] ; thiemann t 2001 introduction to modern canonical quantum general relativity , _ preprint _ gr - qc/0110034 .penrose r 1989 _ the emperor s new mind _ oxford university press ; penrose r 1996 on gravity s role in quantum state reduction _ gen ._ * 28 * pp 581 - 600 ( reprinted in _physics meets philosophy at the planck scale _ ed .callender c. pp 290304 ) .gambini r and .pullin j 2004 _ phys.rev.lett . _ * 93 * , 240401 ; gambini r , porto r a and pullin j 2004 _ phys.rev . _ * d70*:124001 ( _ preprint _ gr - qc/0408050 ) .halliwell j j 1989 _ phys ._ d * 39 * , 2912 ; kiefer c 2000 _ nucl ._ * 88 * , 255 ; polarski d and starobinsky a a 1996 semiclasicallity and decoherence of cosmological perturbations _gr - qc/9504030 ; zurek w h 1990 , environment induced superselection in cosmology in _moscow 1990 , proceedings , quantum gravity _( qc178:s4:1990 ) , 456 - 72 ( see high energy physics index 30 ( 1992 ) no . 624 ) ; laflamme r and matacz a 1993 _ int ._ d * 2 * , 171 ; castagnino m and lombardi o 2003 _ int .phys . _ * 42 * , 1281 ; lombardo f c and .lopez nacir d 2005 _ phys ._ d * 72 * , 063506 ; martin j 2005 _ lect .notes phys . _* 669 * , 199 .keifer c , lohmar i , polarski d and starobinsky a a 2006 _ preprint _ astro - ph/0610700 .ghirardi g c , rimini a and weber t 1986 _ phys ._ * d 34 * , 470 bassi a 2007 dynamical reduction models : present status and future developments _ preprint _ quant - ph/0701014v2 .bassi a and ghirardi g c 2003 dynamical reduction models _ preprint _ quant - ph/0302164v2
|
inflationary cosmology has , in the last few years , received a strong dose of support from observations . the fact that the fluctuation spectrum can be extracted from the inflationary scenario through an analysis that involves quantum field theory in curved space - time , and that it coincides with the observational data , has led to a certain complacency in the community , which prevents the critical analysis of the obscure spots in the derivation . we argue here briefly , as we have discussed in more detail elsewhere , that there is something important missing in our understanding of the origin of the seeds of cosmic structure , as is evidenced by the fact that in the standard accounts the inhomogeneity and anisotropy of our universe seem to emerge from an exactly homogeneous and isotropic initial state through processes that do not break those symmetries . this article gives a very brief account of the problems faced by the arguments based on established physics . the conclusion is that we need some new physics to be able to fully address the problem . the article then presents one avenue that has been used to address the central issue and elaborates on the degree to which the new approach makes different predictions from the standard analyses . the approach is inspired by penrose s proposals that quantum gravity might lead to a real , dynamical collapse of the wave function , a process that we argue has the properties needed to extract us from the theoretical impasse described above .
|
global and local symmetries of a physical system play an essential role in modern theoretical physics and its physical applications . apart from a well - known relation between symmetries and conserved charges via the noether theorem, the knowledge of a symmetry group of a certain theory is deeply built - in in its theoretical description and may lead to restrictive ` no - go ' theorems .introduction of new algebras is , in that sense , required by possible solutions to these theoretical problems in the hope that they could circumvent the ` no - go ' theorems .a beautiful example is the introduction of lie superalgebras that unify in a nontrivial way spacetime and internal symmetries of the microscopic world , not allowed in a purely bosonic context by the coleman - mandula theorem . in this way , the study of the relation between lie algebras and groups , and especially the derivation of new algebras from them , is a problem of great interest in mathematics and physics , because finding a new lie group from an already known one also means that a new physical theory can be obtained from a known one .this is particularly useful , for example , in gauge theories ( like yang - mills and chern - simons theories ) which have the symmetry group as a fundamental ingredient .thus , setting aside the trivial problem of finding whether a lie algebra is a subalgebra of another one , there are , essentially , three different ways of relating and/or obtaining new algebras from given ones .in fact , it was during the second half of the xx century that certain mechanisms were developed to obtain _ _ non - trivial _ _ relations between different lie groups and algebras .these mechanisms are known as _ contractions _ , _ deformations _ and _ extensions _ , which all share the property of maintaining the dimension of the original group or algebra .as we are going to see now , this work is focused on a generalization of the contraction procedure called _ expansion _ that , starting from a given algebra , permits us to generate algebras of a higher dimension than the original one .expansions of lie algebras are generalizations of the weimar - woods ( ww ) contraction method and were introduced some years ago in refs . . 
while in a contraction a suitable rescaling of some generators of the lie algebra is done , in the expansion method the starting point is to consider an algebra $ \mathcal{g } $ , with a basis of generators $ \{ x_i \} $ , as described by the maurer - cartan ( mc ) forms on the manifold of its associated group . as is known , the local structure of the lie group is encoded in the so called maurer - cartan equations , $ d\omega^{k } = - \frac{1}{2 } c_{ij}^{k } \ , \omega^{i } \wedge \omega^{j } $ , which are just an equivalent description to the one given in terms of lie brackets among the lie algebra generators , $ [ x_{i } , x_{j } ] = c_{ij}^{k } \ , x_{k } $ . [ tables and displayed relations : the multiplication laws of the abelian semigroups $ s_{(2 ) } $ , $ s_{(3 ) } $ and $ s_{(4 ) } $ used in the examples , the resonant decompositions $ s = s_{0 } \cup s_{1 } $ of those semigroups , the identification of the expanded generators $ y_{a } = x_{( i , \alpha ) } $ together with their commutation relations for the different choices of semigroup and resonant decomposition , and the summary table indicating how the splitting of an algebra into a solvable part and a semisimple part ( $ \mathcal{g } = n \uplus s $ ) behaves under the expansion $ \mathcal{g}_{s } $ , the extraction of the resonant subalgebra $ \mathcal{g}_{s , r } $ and the reduction $ \mathcal{g}_{s , r}^{\text{red } } $ . ]
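a minimal sketch of the s - expansion prescription itself can be written in a few lines . the sketch below ( python ) assumes the standard rule in which the expanded brackets are $ [ x_{( i , \alpha ) } , x_{( j , \beta ) } ] = c_{ij}^{k } \ , x_{( k , \alpha \cdot \beta ) } $ ; the seed algebra ( su(2 ) , with $ c_{ij}^{k } = \epsilon_{ijk } $ ) and the three - element abelian semigroup used here are illustrative choices , not the ones appearing in the examples of this paper . the code builds the expanded structure constants and verifies antisymmetry and the jacobi identity numerically .

import numpy as np
from itertools import product

# seed algebra: su(2), with [X_i, X_j] = C_ij^k X_k and C_ij^k = eps_ijk (illustrative choice)
dim = 3
C = np.zeros((dim, dim, dim))
for i, j, k in product(range(dim), repeat=3):
    if len({i, j, k}) == 3:
        C[i, j, k] = np.sign((j - i) * (k - j) * (k - i))

# abelian semigroup S = {l_0, l_1, l_2} with product l_a l_b = l_min(a+b,2)
# (an illustrative table; the examples of the paper use other semigroups)
n_s = 3
def s_prod(a, b):
    return min(a + b, n_s - 1)

# expanded algebra: generators X_(i,a), brackets [X_(i,a), X_(j,b)] = C_ij^k X_(k, a*b)
DIM = dim * n_s
def idx(i, a):
    return i + dim * a

Cexp = np.zeros((DIM, DIM, DIM))
for i in range(dim):
    for j in range(dim):
        for a in range(n_s):
            for b in range(n_s):
                c = s_prod(a, b)
                for k in range(dim):
                    Cexp[idx(i, a), idx(j, b), idx(k, c)] += C[i, j, k]

# numerical checks: antisymmetry and the Jacobi identity of the expanded structure constants
antisym = np.max(np.abs(Cexp + np.transpose(Cexp, (1, 0, 2))))
jacobi = (np.einsum('ijm,mkl->ijkl', Cexp, Cexp)
          + np.einsum('jkm,mil->ijkl', Cexp, Cexp)
          + np.einsum('kim,mjl->ijkl', Cexp, Cexp))
print("dimension of the expanded algebra :", DIM)
print("antisymmetry violation            :", antisym)
print("jacobi identity violation         :", np.max(np.abs(jacobi)))

extracting a resonant subalgebra then amounts to keeping only the generators $ x_{( i , \alpha ) } $ whose semigroup label lies in the subset associated , by the resonant decomposition , with the subspace containing $ x_i $ ; in code this is just a restriction of the index set used above .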
* proof details : * as can be seen in equation ( [ aa2 ] ) , for the relevant brackets we have $ \left [ e_{n}^{(1 ) } + n^{\prime ( 1 ) } + e_{n}^{(0 ) } \ , , \ ; e_{n}^{(1 ) } + n^{\prime ( 1 ) } + e_{n}^{(0 ) } \right ] \subset \left\ { e_{n}^{(2 ) } , n^{\prime ( 2 ) } , e_{n}^{(1 ) } , e_{n}^{(0,1 ) } \right\ } $ . restricting both sides to the appropriate subspace , we then have an inclusion into $ n'' $ , that is , $ n'' $ is a solvable ideal , which can not be , since the solvable ideal entering the decomposition is maximal . * the expanded semisimple algebra : * here we give the proof of the theorem of section [ cartandec ] , i.e. , that ( [ cc_1]-[scd_02 ] ) is the cartan decomposition of the expanded algebra when the original algebra is compact . this is done by providing a conjugation $ \sigma_{s } $ of the expanded algebra with respect to the relevant compact real form , and then by showing the required relations . let us show how the explicit form of $ \sigma_{s } $ is found . consider elements of the two subspaces , with their respective bases ; then any element can be written in terms of real constants . let us also define a mapping that acts through the conjugation of the original algebra with respect to its compact real form together with complex conjugation of the coefficients . then the mapping of an arbitrary element can be expressed accordingly , and it is straightforward to show that $ \sigma_{s } $ is a conjugation of the expanded algebra , i.e. , that it also satisfies $ \sigma_{s } \left ( \left [ b_{1 } , b_{2 } \right ] \right ) = \left [ \sigma_{s } ( b_{1 } ) , \sigma_{s } ( b_{2 } ) \right ] \ ; \forall \ , b_{1 } , b_{2 } \in \mathcal{g}_{s } $ and $ ( \sigma_{s } ) ^{2 } = i $ . now let us prove ( [ cd1 ] ) , i.e. , that the relevant subspace is invariant under the conjugation ( [ cd4 ] ) . in fact , considering the action of $ \sigma_{s } $ on an arbitrary element and using ( [ cd4_1 ] ) , one finds that the subspace is preserved , and in this way ( [ cd1 ] ) is satisfied . here we give the proof of the second theorem of section [ cartandec ] . we have to find a compact real form of the complexification of the expanded algebra satisfying the conditions ( [ bcd3 ] ) , which in this case read as ( [ cdres4]-[cdres6 ] ) , where the conjugation is taken in the complexified algebra with respect to that compact real form . as we saw before , the expansion of a compact algebra is compact under the stated condition on the semigroup . besides , it satisfies the resonant condition , as can be seen in ( [ mario_res0]-[mario_res ] ) , so the resonant subalgebra is compact , because it is a subalgebra of a compact lie algebra . let us now prove that ( [ cdres4]-[cdres6 ] ) are satisfied . considering bases of the respective subspaces , an arbitrary element can be written as a sum over those bases ( a sum over repeated indices is assumed ) , and the conjugation defined before acts on this element componentwise . considering next an arbitrary element with indices living on the two factors , one finds that ( [ cdres4 ] ) is proved to be true ; in the same way it is possible to show ( [ cdres5 ] ) and ( [ cdres6 ] ) . e. inonu and e.p . wigner , _ on the contraction of groups and their representations _ , proc . nat . acad . sci . usa * 39 * , 510 - 524 ( 1953 ) ; e. inonu , _ contractions of lie groups and their representations _ . in : gursey , f. ( ed . ) _ group theoretical concepts in elementary particle physics _ pp 391 - 402 .
gordon and breach , new york ( 1964 ) e. weimar - woods ,_ contractions of lie algebras : generalized inonu - wigner contractions versus graded contractions _ , j. math .phys . * 36 * , 4519 - 4548 ( 1995 ) ; e. weimar - woods , _ the three - dimensional real lie algebras and their contractions _ , jour .* 32 * ( 1991 ) 2028 ; e. weimar - woods , _ contractions , generalized inn and wigner contractions and deformations of finite - dimensional lie algebras _ , rev .* 12 * 1505 - 1529 ( 2000 ) a. nijenhuis and r.w .richardson jr . , _ cohomology and deformations in graded lie algebras _ , bull .a , math . soc . * 72 * , 1 - 29 ( 1966 ) ; a. nijenhuis and r.w .richardson jr . , _ deformations of lie algebra structures _ , j. math .mech . * 171 * , 89 - 105 ( 1967 ) j. a. de azcarraga , j. m. izquierdo , m. picon , and o. varela , _ generating lie and gauge free differential ( super ) lgebras by expanding maurer - cartan forms and chern - simons supergravity _ nucl .b * 662 * ( 2003 ) , 185 .arxiv : hep - th/0212347 p. mora , r. olea , r. troncoso , j. zanelli , _ finite action principle for chern simons ads gravity_. j. high energy phys .jhep06(2004)036 arxiv : hep - th/0405267 ; p. mora , r. olea , r. troncoso , j. zanelli , _transgression forms and extensions of chern simons gauge theories_. j. high energy phys .jhep02(2006)067 arxiv : hep - th/0601081 .p. mora , _ transgression forms as unifying principle in field theory_. ph.d .thesis , universidad de la republica , uruguay ( 2003 ) .arxiv : hep - th/0512255 ; p. mora , _ unified approach to the regularization of odd dimensional ads gravity_. arxiv : hep - th/0603095 .j. daz , o. fierro , f. izaurieta , n. merino , e. rodriguez , p. salgado and o. valdivia , _ a generalized action for ( 2 + 1)-dimensional chern simons gravity _ ,a : math . theor . * 45 * ( 2012 ) 255207 ( 14pp ) r. caroca , i. kondrashuk , n. merino and f. nadal , _bianchi spaces and its _ -dimensional isometries as _ _ -expansion_s of _ -dimensional isometries _ _ , j. phys .* 46 * ( 2013 ) 225201 ( 24pp ) , arxiv : math - ph/1104.3541 .distler a and kelsey t 2008 _ the monoids of order eight and nine _ , intelligent computer mathematics : 9th int . conf . on artificial intelligence and symbolic computation ( birmingham , july 2008 ) ed s autexier , j campbell , j rubio , v sorge , m suzuki and f wiedijk ( lecture notes in computer science vol 5144 ) 2008 ( berlin : springer ) pp 61 - 76 hildebrant j _ handbook of finite semigroup programs _ , lsu mathematics electronic preprint series , preprint 2001 - 24; plemmons r 1969 _ a survey of computer applications to semigroups and related structures _acm sigsam bulletin 12 pp 28 - 39
|
the study of the relation between lie algebras and groups , and especially the derivation of new algebras from them , is a problem of great interest in mathematics and physics , because finding a new lie group from an already known one also means that a new physical theory can be obtained from a known one . one of the procedures that allow to do so is called expansion of lie algebras , and has been recently used in different physical applications - particularly in gauge theories of gravity . here we report on further developments of this method , required to understand in a deeper way their consequences in physical theories . we have found theorems related to the preservation of some properties of the algebras under expansions that can be used as criteria and , more specifically , as necessary conditions to know if two arbitrary lie algebras can be related by the some expansion mechanism . formal aspects , such as the cartan decomposition of the expanded algebras , are also discussed . finally , an instructive example that allows to check explicitly all our theoretical results is also provided .
|
many human activities , such as mail and e - mail exchanges , library loans , stock market transactions , or even motor activities , display heavy tailed inter - event and waiting time distributions . to account for these heavy tails , a priority queuing model has been proposed by barabasi , which has since stimulated an active field of research with potential practical applications ( e.g. , see refs . ) . within the barabasi priority queuing model ( bqm ) , each item in a list of fixed length has a priority value . at each time - step , the maximal priority task is executed with probability $ p $ ; otherwise , a randomly selected one is accomplished . once a task is executed , it is substituted by a new one ( or the same ) that adopts a new randomly selected priority value drawn from a probability density function ( pdf ) . this simple model yields power - law tailed distributions of inter - event times , mimicking the empirical histograms of many human activities . besides the value of queuing models for diverse practical questions , another issue that makes the bqm attractive is its connection with diverse other physical problems , such as invasion percolation or self - organized evolutionary models , once the roles of priorities and fitness are identified . however , exact results for the bqm , both for steady and transient regimes , have been obtained only for the simplest instance . although lists of two items already display the power - law decay of the distribution of waiting times when $ p $ approaches unity , naturally , other features are missed in the simplest case . moreover , special attention has been given to the particular , and more tractable , case of extremal dynamics ( $ p = 1 $ ) , while a non - null degree of randomness may also display interesting features . then , in the present work , we tackle the bqm with arbitrary values of $ p $ and of the list length . the manuscript is organized as follows . in the next section we show exact results for the pdfs of priorities in lists of arbitrary length , by recourse to a master equation . in sec . iii we obtain an approximate expression for the waiting time distribution . sec . iv deals with exact results for `` avalanches '' , which provide the time that higher priority tasks ( above a threshold ) remain in the list , and are also related to waiting time durations . the last section contains final remarks . a fundamental quantity is the probability that there are $ n $ tasks with priority higher than a given value at time $ t $ . its time evolution is ruled by a master equation ( me ) whose non - null transition elements form a tridiagonal matrix . here we have taken a uniform priority pdf ; however , generality can be recovered simply by redefining the threshold through the corresponding cumulative distribution . notice that the me ( [ master])-([mnn ] ) signals a biased random walk with reflecting boundaries at the ends of the allowed range ( $ n = 0 $ and the list length ) , setting the basis to write a continuum limit approximation . however , for arbitrary $ p $ , drift and diffusion coefficients are state dependent , and the approach of biased diffusion , successfully applied to determine the scaling of the waiting time distribution in other queuing systems with constant coefficients , becomes more tricky in the non - deterministic case . then , let us find the exact steady solution of the me ( [ master])-([mnn ] ) for arbitrary length . by recursion one gets the stationary probabilities for arbitrary $ n $ , with the remaining constant fixed by normalization ; the distribution , given by eqs . ( [ pfinite])-([p0 ] ) , can now be used to evaluate diverse meaningful quantities .
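the steady state just described is also straightforward to probe by direct simulation of the model rules . the sketch below ( python ) implements the bqm with a uniform priority pdf and estimates , for an illustrative list length , threshold and value of $ p $ , the stationary probabilities of having a given number of tasks above the threshold .

import numpy as np

rng = np.random.default_rng(1)

L, p, threshold = 5, 0.9, 0.8     # list length, degree of determinism, priority threshold (illustrative)
steps, burn_in = 200_000, 10_000

prio = rng.random(L)              # initial priorities, uniform in [0,1]
counts = np.zeros(L + 1)          # histogram of the number of tasks above the threshold

for t in range(steps):
    # BQM rule: execute the maximal-priority task with probability p, otherwise a random one
    idx = np.argmax(prio) if rng.random() < p else rng.integers(L)
    prio[idx] = rng.random()      # the executed task is replaced by one with a fresh uniform priority
    if t >= burn_in:
        counts[np.count_nonzero(prio > threshold)] += 1

P_n = counts / counts.sum()
for n, prob in enumerate(P_n):
    print(f"P({n} tasks above threshold) = {prob:.4f}")

the resulting histogram can be compared directly with the exact stationary probabilities obtained by recursion from the me .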
in particular , the pdf of the largest priority value can be extracted from the condition that there are no tasks with priority above a given value . fig . [ fig : first5 ] shows the exact pdfs of the two largest priorities in the list for different values of $ p $ , compared to the results of numerical simulations of the bqm . [ figure [ fig : first5 ] caption : different values of $ p $ indicated on the figure ; solid lines correspond to exact results and symbols to numerical simulations of the bqm performed as in previous figures ; in the insets , the average values are displayed as a function of $ p $ . ] in the fully random case ( $ p = 0 $ ) , eqs . ( [ pfinite])-([p0 ] ) yield the distribution expected from straightforward combinatorial analysis . in the opposite limit , the pdf of the largest priority gets closer to a unit step function , while that of the second largest approaches the dirac delta function . this is expected , since those tasks that have entered the list more recently , and adopted priority values uniformly distributed in [ 0,1 ] , have more chances to be chosen again , while the older tasks are more and more likely to remain in the list forever as $ p $ tends to unity ; then the second priority value ( and together with it the remaining ones ) collapses to zero . for large enough list size , eqs . ( [ pfinite])-([p0 ] ) lead to a simplified expression involving the heaviside unit step function . in fact , finding directly the steady state solution of the me ( [ master])-([mnn ] ) in the limit of large list size at fixed $ n $ ( hence neglecting subleading terms ) , or also in a suitable limit of $ p $ , one obtains a geometric progression that can be summed up to obtain a simple expression ; the probabilities of finding many tasks above the threshold all tend to vanish in the large - size limit . fig . [ fig : p0 ] illustrates the performance of this approximation in comparison with exact results . [ figure [ fig : p0 ] caption : upper and lower panels for different parameter values ; solid lines correspond to exact results , dashed lines to the large - size approximation . ] the pdf of all priorities in the list obeys an evolution equation that in the long - time limit leads to the relation ( [ pgen ] ) . let us call old tasks those items whose priority has not been assigned at a given step . the cumulative pdf of old task priorities can be obtained from this relation and , by means of eq . ( [ pgen ] ) , can be expressed in closed form . in the simplest particular case , eqs . ( [ pfinite])-([p0 ] ) give an explicit expression ; recalling that the derivation was carried out for a uniform priority pdf , but that the general case is recovered simply through the corresponding mapping of the threshold , eq . ( [ allold ] ) then allows one to re - obtain the result of vazquez . at further time - steps this is a hard trail to follow , and the results may not be expressible in a readily manageable form . however , notice that while at early times the relevant integral is dominated by large priority values , due to the propensity of such values to be re - chosen early , on the contrary , at large enough times the main contribution comes from the purely random ( unconditioned ) selection from the bulk of relatively small values ( as can be seen in fig . [ fig : f_rrr ] ) . for such cases , one can write the waiting time distribution in terms of the effective probability that the task is selected at a given step , which can be estimated from the probability that there are no tasks with priorities higher than the task s own priority . fig . [ fig : f_rrr ] also exhibits the comparison between exact and approximated functions .
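the quality of approximations of this kind can also be probed against a direct measurement of waiting times in the simulated model . the sketch below ( python , illustrative parameters ) records , for each executed task , the number of steps it spent in the list , and builds a logarithmically binned histogram of the resulting waiting times .

import numpy as np

rng = np.random.default_rng(2)

L, p = 5, 0.99                    # illustrative list length and degree of determinism
steps = 300_000

prio = rng.random(L)
entered = np.zeros(L, dtype=int)  # time step at which each current task entered the list
waits = []

for t in range(steps):
    idx = np.argmax(prio) if rng.random() < p else rng.integers(L)
    waits.append(t - entered[idx] + 1)   # waiting time of the executed task
    prio[idx] = rng.random()
    entered[idx] = t + 1

waits = np.array(waits)
bins = np.unique(np.logspace(0, np.log10(waits.max() + 1), 25).astype(int))
hist, edges = np.histogram(waits, bins=bins, density=True)
for lo, hi, h in zip(edges[:-1], edges[1:], hist):
    if h > 0:
        print(f"tau in [{lo:6d},{hi:6d})  P ~ {h:.3e}")

for $ p $ close to one the binned histogram develops the heavy tail discussed in the text , followed by a cutoff , whereas setting $ p = 0 $ reproduces a purely exponential ( geometric ) decay .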
in particular , for $ p = 0 $ , eq . ( [ ptau ] ) is independent of the choice of priority and it correctly yields the pure exponential decay at all times . in the opposite case , and using the approximation given by eq . ( [ p0infinite ] ) , eq . ( [ ptau ] ) leads to an asymptotic exponential behavior with a characteristic time whose expression applies quite generally . thus , the characteristic exponential decay time is shifted to larger values as the relevant parameters grow . analytical predictions are compared to numerical simulations in fig . [ fig : f_tau ] . one observes that the approximate expression derived from eq . ( [ ptau ] ) manages to describe the exponential cutoff in all cases , and the scaling regime in the appropriate limit , although it fails to predict the -3/2 power - law neatly observed in numerical simulations in the corresponding regime ( notice in the lower panel of fig . [ fig : f_tau ] the deviation , leading to a spurious power - law exponent -2 ) . this is due to the fact that the aging regime is overlooked by this approximation . let us remark that a -3/2 exponent is also found in classical queuing models with fluctuating length , where the return time distribution of a random walk is at its origin . in view of the difficulties in finding the exact expression for the waiting time distribution , in order to explain this scaling regime we will next solve a closely related problem . let us also consider now the events between two successive times at which the number of priorities above a given threshold vanishes ( avalanche ) . avalanche duration is relevant in the present context since it provides the duration of the intervals in which there are queued tasks , with priorities above a threshold , waiting to be executed . from the viewpoint of random walks , this is a first passage problem . following the lines in ref . , let us define the probability of having $ n $ values with priorities higher than the threshold , given that an avalanche started $ t $ time units ago . this probability follows the same me ( [ master])-([mnn ] ) as before , except at the boundary $ n = 0 $ , and the initial condition corresponds to a single task above the threshold . thus , the probability that an avalanche , relative to a given threshold , has duration $ t $ is obtained from the first arrival to the empty state . fig . [ fig : avalanche ] illustrates the scaling that comes up , for any $ p $ , at the critical threshold . exact results were obtained by numerical integration of the me and compared to the results of numerical simulations of the bqm . notice that the scaling region increases and shifts towards larger times as the relevant parameters grow . [ figure [ fig : avalanche ] caption : different parameter values indicated on the figure ; solid lines join the exact values and symbols correspond to numerical simulations of the bqm ; in the inset , exact results for different values are displayed ; dotted straight lines with slope -3/2 are drawn for comparison . ] the me can be solved analytically through diverse standard methods . yet , in the limit of large list size at fixed threshold , the me describes a simple biased random walk , with an absorbing boundary at the origin and fixed probabilities to step either to the right , to the left , or to remain still . from this viewpoint , the avalanche duration distribution is the probability that the first return to the origin occurs at time $ t $ for a walk started one step away , while the related unconstrained quantity is the probability of reaching a given position at time $ t $ without having visited the origin ; the former just differs from the latter in appending the last step from 1 to 0 . for any time , it can be found by solving first the unbounded problem and then resorting to the reflection principle . moreover , if we are concerned with the asymptotic behavior , we can directly take advantage of the gaussian approximation from the central limit theorem .
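before completing the gaussian estimate below , the first - return picture can be checked with a few lines of code . the sketch ( python ) simulates the lazy random walk described above , with illustrative step probabilities , and measures the distribution of first - return times to the origin starting from one step away ; for an unbiased choice the familiar $ t^{-3/2 } $ behavior emerges in the intermediate - time window , while any bias produces an exponential cutoff .

import numpy as np

rng = np.random.default_rng(3)

def first_return_times(p_right, p_left, n_walks=50_000, t_max=5_000):
    # durations of "avalanches": first-passage times to 0 of a lazy walk started at 1
    times = []
    for _ in range(n_walks):
        x, t = 1, 0
        while x > 0 and t < t_max:
            u = rng.random()
            x += 1 if u < p_right else (-1 if u < p_right + p_left else 0)
            t += 1
        if x == 0:
            times.append(t)
    return np.array(times)

for p_right, p_left in [(0.3, 0.3), (0.25, 0.35)]:   # unbiased vs. biased steps (illustrative)
    times = first_return_times(p_right, p_left)
    t_vals, counts = np.unique(times, return_counts=True)
    sel = (t_vals > 10) & (t_vals < 1000) & (counts > 5)
    slope = (np.polyfit(np.log(t_vals[sel]), np.log(counts[sel]), 1)[0]
             if sel.sum() > 2 else float('nan'))
    print(f"p_right={p_right}, p_left={p_left}: "
          f"returned fraction={len(times)/50_000:.3f}, apparent log-log slope ~ {slope:.2f}")

the apparent slope in the unbiased case is close to -3/2 , while the biased case returns almost surely but with a much faster ( exponentially cut off ) decay , in line with the analytic estimate that follows .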
therefore , one has where , and are the mean and variance of each single step .this readily leads to the asymptotic behaviors that is , an exponential decay dominates the long - time decay in the biased cases , meanwhile , if ( hence ) , a power - law arises in the large limit , in agreement with the results displayed in fig .[ fig : avalanche ] and with the well known results for a driftless random walk . in particular, there is a correspondence with the random annealed bak - sneppen model , where the same scaling is observed for any at the critical threshold .let us remark that in the bak - sneppen model the transition matrix for the associated me has non - null diagonals , and a generic univoque relation between and does not emerge .however , concerning avalanches , the equivalence between both models arises for . due to the threshold being an upper or lower bound in each case , that relation is complementary to which arises by identifying ratios of deterministic / random sites .summarizing , we obtained analytical results for the bqm with queues of arbitrary length .exact expressions were shown to be in agreement with the outcomes of numerical simulations of the dynamics .progress has still to be made to obtain the exact waiting time distribution that displays different regimes between the purely exponential one ( at ) and the power - law decay with unit exponent ( at ) , when .however , an approximate expression has been found that accounts for most of the distribution traits .moreover , we have shown that avalanches , at the critical threshold , constitute another scale - free feature of the bqm for . besides the main applications here illustrated, the present results may allow to estimate many other relevant statistical quantities of the bqm and can be extended to other queuing systems .furthermore , our exact results set the basis to further explore the correspondence between bqm and other related models .
|
previous works on the queuing model introduced by barabsi to account for the heavy tailed distributions of the temporal patterns found in many human activities mainly concentrate on the extremal dynamics case and on lists of only two items . here we obtain exact results for the general case with arbitrary values of the list length and of the degree of randomness that interpolates between the deterministic and purely random limits . the statistically fundamental quantities are extracted from the solution of master equations . from this analysis , new scaling features of the model are uncovered .
|
understanding the complexity of genomes and the drives that shape them is a fundamental problem of contemporary biology , which poses a number of challenges to contemporary statistical mechanics . considering this problem from a large - scale viewpoint , the basic observables to account for are the distributions of the different `` functional components '' ( such as genes , introns , non - coding rna , etc . ) encoded by sequenced genomes of varying size .when these genome - wide data are parametrized by measures of `` genome size '' ( such as the number of bases or the number of genes in a genome ) , there are important emerging `` scaling laws '' both for the classes of evolutionary related genes , the functional categories of genes and some non - coding parts of genomes .these scaling laws are the signs of universal invariants in the processes and constraints that gave rise to the genomes as they can be observed today .a current challenge is the understanding of these laws using physical modeling concepts and the comparison of the models to the available whole - genome data .this effort can help disentangle neutral from selective effects .here we consider the statistical features of the set of proteins expressed by a genome , or proteome . a convenient level of analysis is a description of the proteome in terms of structural protein domains .domains are modular `` topologies '' , or sub - shapes , forming proteins .a domain determines a set of potential biochemical or biophysical functions and interactions for a protein , such as binding to other proteins or dna and participation in well - defined classes of biochemical reactions . despite the practically unlimited number of possible protein sequences , the repertoire of basic topologies for domains seems to be relatively small . with a looseparallel , domains could be seen as an `` alphabet '' of basic elements of the protein universe .understanding the usage of domains across organisms is as important and challenging as decoding an unknown language .the content of a genome is determined primarily by its evolutionary history , in which neutral processes and natural selection play interdependent roles .in particular , the coding parts of genomes evolve by some well - defined basic `` moves '' : gene loss , gene duplication , horizontal gene transfer ( the transfer of genetic material between unrelated species ) , and gene genesis ( the _ de novo _ origin of genes ) .since domains are modular evolutionary building blocks for proteins , they are coupled to the dynamics followed by genes . in particular ,a new domain topology can emerge by genesis or horizontal transfer , and new domains of existing domain topologies can emerge by duplication or be lost .finally , topologies can be completely lost by a genome if the last domain that carries them is lost ( see fig .[ fig : scheme ] ) .large - scale data concerning structural domains are available from bioinformatic databases , and can be analyzed at the genome level .these coarse - grained data structures can be represented as sets of `` domain classes '' ( the sets of all realizations of the same domain topology in proteins ) , populated by domain realizations . 
in particular ,much attention has been drawn by the intriguing discovery that the population of domain classes have power - law distributions : the number of domain classes having members follows the power - law , where the exponent typically lies between 1 and 2 .an interesting thread of modeling work ascribes the emergence of power - laws to a generic preferential - attachment principle due to gene duplication .growth models are formulated as nonstationary , duplication - innovation models and as stationary birth - death - innovation models .intriguingly , as we have recently shown , the domain content of genomes also exhibits scaling laws as a function of the total number of domains , indicating that even evolutionarily distant genomes show common trends where the relevant parameter is their size . *the number of domain classes ( or distinct hits of the same domain ) concentrates around a master curve that appears to be markedly sublinear with size , perhaps saturating .* the fitted exponent of the power - law - like distribution of domain classes having members , in a proteome of size decreases with genome size .in other words , there is evidence for a cutoff that increases linearly with . *the occurrence of fold topologies across genomes is highly inhomogeneous - some domain superfamilies are found in all genomes , some rare , with a sigmoid - like drop between these two categories , [ fig : alphafit ] , [ fig : cutoff ] , and [ fig : occurrence ] of this work . ] .we recently reported the above collective trends , and showed how the scaling laws in the data could be reproduced using universal parameters with non - stationary duplication - innovation models .our results indicate that the basic evolutionary moves themselves can determine the observed scaling behavior of domain content , _ a priori _ of more specific biological trends .this modeling approach , while similar in formulation to that of previous investigators who did not consider these scaling laws , has important modifications , mostly related to the scaling with of the relative probability of adding a domain belonging to a new class and duplicating an existing one . to reproduce the observed trends ,newly added domain classes can not be treated as _independent _ random variables , but are conditioned by the preexisting proteome structure . in this paper , we give a detailed account of this modeling approach , considering different variants of duplication - innovation - loss models for protein domains , and relate them to available results in the mathematical and physical literature . in particular , we will focus on mean - field approaches for the models and comparison with direct simulation , and we will show how they can be generally used to obtain the main qualitative and quantitative trends . 
the first part of the paper is devoted to the minimal model formulation , which only includes duplication and innovation moves for domains , and relates to the so - called _ chinese restaurant process ( crp ) _ of the mathematical literature .we will review the main known results for this model , derive analytically solvable mean - field equations , and show how they compare to the available rigorous results and the finite size behavior .the rest of the paper is devoted to biologically motivated variants of the main model related to two main features : including the role of loss of domains , which is a frequently reported event , and breaking the exchange symmetry of domain classes , which is unrealistic , as specific protein domains perform different biological functions . for these variants ,we will present mean - field and simulations results , and characterize their phenomenology in relation with empirical data .in particular , while in general for these variants the rigorous mathematical results existing for the crp break down , we will show how the use of simple mean - field methods proves to be a robust tool for accessing the qualitative phenomenology .the model represents a proteome through its repertoire of domains .domains having the same topology are collected in domain classes ( fig .[ fig : scheme ] ) .thus the relevant data structures are partitions of elements ( domains ) into classes .the basic observables considered are the following : , the total number of domains , , a random variable indicating the number of classes ( distinct domain topologies ) at size , a random variable , the population of class , , the size at birth of class , and : the number of domain classes having members .we will generally indicate mean values by capitalized letters ( e.g. is the mean value of , the mean value of , etc . ) .the model is conceived as a stochastic process based on the elementary moves available to a genome ( fig .[ fig : scheme ] ) of adding and losing domains , associated to relative probabilities : , the probability to duplicate an old domain ( modeling gene duplication ) , , the probability to add a new domain class with one member ( which describes domain innovation , for example by horizontal transfer ) , and , a loss probability ( which we will initially disregard , and consider in a second step ) .iteratively , either a domain is added or it is lost with the prescribed probabilities . an important feature of the duplication move is the ( null ) hypothesis that duplication of a domain has uniform probability along the genome , and thus it is more probable to pick a domain of a larger class .this is a common feature with previous models .this hypothesis creates a `` preferential attachment '' principle , stating the fact that duplication is more likely in a larger domain class , which , in this model as in previous ones , is responsible for the emergence of power - law distributions . in mathematical terms , if the duplication probability is split as the sum of per - class probabilities , this hypothesis requires that , where is the population of class , i.e. the probability of finding a domain of a particular class and duplicating it is proportional to the number of members of that class .it is important to notice that in this model , while can be used as an arbitrary measure of time , the ratio of the time - scales of duplication and innovation is not arbitrary , and is set by the ratio . 
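for concreteness , a single realization of the proteome can be stored simply as the list of class populations , from which the observables listed above and the preferential - attachment duplication weights follow directly ; a minimal sketch ( names are illustrative ) :

```python
from collections import Counter

def observables(populations):
    """Given the class populations n_i of one realization, return the basic
    observables used throughout: the total number of domains n, the number of
    classes (distinct topologies), and the histogram counting how many classes
    have exactly j members."""
    n = sum(populations)
    num_classes = len(populations)
    hist = Counter(populations)          # hist[j] = number of classes of size j
    return n, num_classes, hist

def duplication_weights(populations):
    """Per-class duplication probabilities under the uniform-over-domains
    hypothesis (preferential attachment): proportional to the class size n_i."""
    n = sum(populations)
    return [ni / n for ni in populations]

# e.g. a toy partition of 10 domains into 4 classes
print(observables([5, 2, 2, 1]))         # -> (10, 4, Counter({2: 2, 5: 1, 1: 1}))
print(duplication_weights([5, 2, 2, 1]))
```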
in the model of gerstein and coworkers , this is taken as a constant , as the innovation move considered to be statistically independent from the genome content . in particular , both probabilities are considered to be constant .this choice has two problems .first , it can not give the observed sublinear scaling of .indeed , if the probability of adding a new domain is constant with , so will be the rate of addition , implying that this quantity will increase on average linearly with genome size .moreover , the same model gives power - law distributions for the classes with exponent larger than two , in contrast with most of the available data .previous investigators did not consider the fact that genomes cluster around a common curve , and thought of each of them as coming from an independent stochastic process with different parameters .furthermore , choosing constant implies that for larger genomes the influx of new domain families is heavily dominant on the flux of duplicated domains .as noted by durrett and schweinsberg , constant makes sense only if one thinks that new fold topologies emerge from an internal `` nucleation - like '' process with constant rate , rather than from an external flux .this process could be pictured as the genesis of new topologies from sequence mutation .empirically , while genesis events are reported and must occur , it is clear that domain topologies are very stable , and the exploration of sequence space is not free , but conditioned by a number of additional important factors , including chromosomal position and expression patterns of genes , and their role in biological networks .moreover , in prokaryotes , it is known that a large contribution to the innovation of coding genomes is provided by horizontal gene transfer , the exchange of genetic material between species , which can be reasonably represented in a model as an external flux , as opposed to the internal nucleation process representing genesis . for eukaryotes ,horizontal gene transfer is less important , and there can be multiple relevant innovation processes including exonization , loss of exons , alternative start sites changing the protein .we have not attempted to model the detailed processes leading to innovation , and because of their higher complexity in eukaryotes , we prefer to compare the model to the prokaryote data set alone . however , we can point out that in principle the same model has good agreement with the set of prokaryotes and eukaryotes together . in eukaryotes ,the change in number of classes with respect to size change is generally small , but seems to have a trend that `` glues '' quite well with prokaryotes ( and in particular , innovation decreases with size ) . motivated by the sublinear scaling of the number of domain classes , and taking into account in an effective way the role of processes that condition the addition of new domain topologies, we consider statistically _ dependent _ moves . 
on general grounds , if a genome is a complex system where sub - components interact in clusters and non - locally , domain topologies as well have to be coordinated with other parts of the system , so that it is reasonable that evolutionary moves are conditioned by what is already present , and that the actual number of domain topologies need not to be trivially an extensive quantity .the simplest way to implement this choice is to concentrate on the innovation process .let us consider the indicator , taking value if a new domain class is born at size , and value otherwise. the number of classes at size will be . if the random variables are independent and identically distributed , i.e. , follows a bernoulli distribution whose mean value is linear in .moreover , will be increasingly concentrated with increasing on the deterministic value .if vice versa the random variables are statistically dependent or also simply not identically distributed , the mean value may not be linear in , and the concentration phenomenon may not occur .both features , dependence and lack of concentration , are important . the former is necessary to obtain the observed sublinear behavior , the latter might create an intrinsic `` diversity '' in the genome ensemble , independently on the finite size of observed genomes ( however , the currently available data are insufficient to establish this empirically ) .we investigate this process using analytical asymptotic equations and simulations .we start by considering only growth moves , by duplication and innovation , postponing the inclusion of domain loss in the model .we will see that the resulting model contains the basic qualitative phenomenology of the scaling laws and can thus be regarded as the paradigmatic case .one can arrive at the defining equations with different arguments .a simple way is to assume that domain duplication is a rare event , described by a poisson distribution with characteristic time , during which there is a flux of external or new domain topologies . then . in this case the variables are independent but not identically distributed .it is immediate to verify that has mean value given by and thus grows as .the same result can be obtained by thinking of domain addition as a dependent move , conditioned on , or both .it is possible to consider different intermediate scenarios where the pool of old domain classes is in competition with the universe explorable by the new classes .the simplest scheme , which turns out to be quite general , can be obtained by choosing the conditional probability that a new class is born given the fact that at size hence where and $ ] .considering per - class duplication probability , one can choose the following expression , that asymptotically establishes the preferential attachment principle : here , represents a characteristic number of domain classes needed for the preferential attachment principle to set in , and defines the behavior of for small ( ) . is the most important parameter , which sets the scaling of the duplication / innovation ratio .intuitively , the smaller , the more the growth of is depressed with growing , and since is asymptotically proportional to the class density it is harder to add a new domain class in a larger , or more heavily populated genome . 
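these rates coincide asymptotically with those of the pitman - yor ( chinese restaurant ) construction identified in the next paragraph ; a sketch of that standard sampler , where `alpha` and `theta` stand for the exponent and the characteristic class number of the text ( the finite - size form of the rates used in the paper may differ in detail ) :

```python
import random

def crp(n_final, alpha=0.6, theta=50.0, seed=2):
    """Pitman-Yor chinese restaurant process: with n domains in k classes, a new
    class is opened with probability (theta + alpha*k)/(theta + n); otherwise
    class i is duplicated with probability (n_i - alpha)/(theta + n).
    Returns the class populations once the total size reaches n_final."""
    rng = random.Random(seed)
    classes, n = [1], 1
    while n < n_final:
        k = len(classes)
        if rng.random() < (theta + alpha * k) / (theta + n):
            classes.append(1)                      # innovation: a new topology of size one
        else:
            i = rng.choices(range(k), weights=[c - alpha for c in classes])[0]
            classes[i] += 1                        # duplication with preferential attachment
        n += 1
    return classes
```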
as we will see ,this implies as , corresponding to an increasingly subdominant influx of new fold classes at larger sizes .this choice reproduces the sublinear behavior for the number of classes and the other scaling laws described in properties ( i - iii ) .this kind of model has previously been explored in statistics under the name of pitman - yor , or chinese restaurant process ( crp ) , where it is known as one of the paradigmatic processes that generate partitions of elements into classes that are symmetric by swapping , or `` exchangeable '' .this process is used in bayesian inference and clustering problems . in the chinese restaurant ( with table sharing ) parallel , individual domains correspond to customers and tables are domain classes .a domain belonging to a given class is a customer sitting at the corresponding table . in a duplication event ,a new customer is seated at a table with a preferential attachment principle , and in an innovation event , a new table is added .a simple mean - field treatment of the crp allows to access its scaling behavior .rigorous results for the probability distribution of the fold usage vector , for , are in good agreement with mean - field predictions .it is important to note that for this stochastic process , the usual large - deviation theorems do not hold , so that large- limit values of quantities such as do not converge to numbers , but rather to random variables . despite of this non - self - average property , it is possible to understand the scaling of the averages and ( of and respectively ) at large , writing simple `` mean - field '' equations , for continuous .note that rigorously the mean value is still a random variable , function of the ( stochastic ) birth time of class . from the definition of the model, we obtain these equations have to be solved with initial conditions , and .hence , for , one has and \sim n^{\alpha } \ \ , \ ] ] while , for , these results imply that the expected asymptotic scaling of is sublinear , in agreement with observation ( i ) . the mean - field solution can be used to compute the asymptotic of , following the same line of reasoning used by barabasi and albert for the preferential attachment model .this works as follows . from the solution, implies , with , so that the cumulative distribution can be estimated by the ratio of the ( average ) number of domain classes born before size and the number of classes born before size , . can be obtained by derivation of this function . 
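the sublinear growth of the number of classes can also be checked by integrating the mean - field rate of class addition directly ; a sketch assuming the standard crp rate ( theta + alpha * K ) / ( theta + n ) for dK / dn ( the paper's finite - size equations may carry additional corrections ) :

```python
def meanfield_classes(n_max, alpha=0.6, theta=50.0):
    """Euler integration of the mean-field equation
        dK/dn = (theta + alpha*K) / (theta + n),
    i.e. the expected number of new classes added per new domain."""
    n, k, dn, out = 1.0, 1.0, 1.0, []
    while n < n_max:
        k += dn * (theta + alpha * k) / (theta + n)
        n += dn
        out.append((n, k))
    return out

# K(n) / n**alpha should level off at a constant for large n
for n, k in meanfield_classes(1_000_000)[99_999::200_000]:
    print(f"n = {n:9.0f}   K = {k:9.1f}   K / n^0.6 = {k / n ** 0.6:.3f}")
```

the same integration with alpha set to zero and a constant innovation rate reproduces the linear growth attributed above to the model with constant innovation probability .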
for , and , we find for , and for .the above formulas indicate that the average asymptotic behavior of the distribution of domain class populations is a power law with exponent between and , in agreement with observation ( ii ) .in contrast , the behavior of the model of gerstein and coworkers can be found in this framework by taking improperly , that is for constant .it gives a linearly increasing and a power - law distribution with asymptotic exponent for the domain classes .note that the phenomenology of the barabasi - albert preferential attachment scheme is reproduced by a crp - like model where at each step a new domain class ( corresponding to the new network node ) with on average members ( the edges of the node ) is introduced , and at the same time domains are duplicated ( the edges connecting old nodes to the newly introduced one ) .it is possible to obtain the same results through a different route compared to the above reasoning , by writing the hierarchy of mean - field equations for , using a master equation - like approach .similarly as what happens for the zero - range process , these equations contain source and sink terms governing the population dynamics of classes .duplications create a flux from classes with to classes with members , while only has a source term coming from the innovation move : we consider the limit of large , and use the ansatz .this ansatz can be justified empirically and by simulations , as shown in fig .[ fig : f1n ] , which compares this feature in empirical data and in simulations of different variants of the model .we have giving the solution of these equations is which can be estimated as ^{1+\a } \ \ , \ ] ] giving the result for the scaling of and its prefactor .going beyond scaling , the probability distributions generated by a crp contain large finite - size effects that are relevant for the experimental genome sizes . in this sectionwe analyze the finite size effect affecting the distribution over the domain classes , obtained performing direct numerical simulations of different crp realizations .the simulations allow to measure , and for finite sizes , and in particular for values of that are comparable to those of observed genomes shown in fig .[ fig : alphafit ] .the normalized distribution of the number of classes with domains over a genome of length reaches the theoretical distribution suggested by our model only in the asymptotic limit it is possible to obtain more information by studying the ratio of the asymptotic distribution and the distribution obtained from the crp simulation as shown in fig .[ sub : c1 ] .the plot shows that there is a value , depending on the size of the genome , beyond which the distribution is not anymore consistent with but shows exponential decay and large fluctuations . in order to obtain a quantitative estimate of the deviation from scaling generating the cutoff, we define an order parameter as follows . using data obtained from the simulation as in fig .[ sub : c1 ] , we find the mean of the first 30 points of the plotted function and then compute the standard deviation by analyzing windows of points together .the cutoff is defined when , where is a parameter .the result of this procedure can be seen in fig .[ sub : c2 ] .the cutoff shows a linear dependence from the genome length . 
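the cutoff - detection procedure just described can be sketched as follows ; the 30 - point reference , the window length and the threshold factor play the role of the tunable quantities mentioned in the text , and the values used here are illustrative :

```python
import math

def detect_cutoff(ratio, n_ref=30, window=10, eps=3.0):
    """`ratio[j]` is the simulated class-size histogram divided by the asymptotic
    power law.  The reference scatter is measured on the first n_ref points; the
    cutoff is declared at the first window whose standard deviation exceeds eps
    times that reference scatter."""
    ref = ratio[:n_ref]
    mu = sum(ref) / len(ref)
    sigma_ref = math.sqrt(sum((x - mu) ** 2 for x in ref) / len(ref))
    for start in range(n_ref, len(ratio) - window, window):
        chunk = ratio[start:start + window]
        m = sum(chunk) / window
        s = math.sqrt(sum((x - m) ** 2 for x in chunk) / window)
        if s > eps * sigma_ref:
            return start          # class size at which the power law breaks down
    return None                   # no departure detected up to the largest class
```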
to make sure the procedure does not depend too much on the number of iterations used to obtain the mean value of , we performed it for different values of .as can be expected ( fig.[sub : cut_iter ] ) , more statistics is needed for probing the cutoff trend in those regions where the probability density function is very small .were it necessary to obtain from the distribution over domains an estimate of parameters for the underlying crp , one could decide to consider only data with . at the scales that are relevant for empirical data , finite - size corrections are substantial .indeed , the asymptotic behavior is typically reached for sizes of the order of , where the predictions of the mean - field theory are confirmed . comparing the histogram of domain occurrence for the mean - field solution of the model , simulations and data ,it becomes evident that the intrinsic cutoff set by causes the observed drift in the fitted exponent of the empirical distribution visible in fig .[ fig : alphafit ] .this means that the common behavior of the slopes followed by the population of domain classes for genomes of similar sizes can be ascribed to finite - size effects of a common underlying stochastic process . beyond the linear cutoff ,the behavior of the distribution becomes realization - dependent due to the breaking of self - average .the relevant parameter to disentangle the realization - dependence is .high- realizations have different tails of the distribution from low- ones , giving rise to the large fluctuations observed in fig .[ sub : c1 ] .thus , while the mean - field approach is successful in predicting the asymptotic scaling of the distribution , it does not capture the finite - size effects which can be observed in single realization of the crp process with finite and . beyond mean - fieldis possible to obtain more information by considering the sum of all crp trajectories conditioned to reaching configurations with given and .this enables a statistical - mechanical derivation of the normalized distribution of the number of domains with classes over a genome of length ( number of domains ) .since the focus here is on the mean - field approach , the calculation is described in a parallel work .the above model does not describe evolutionary time in generations .conversely , it reproduces random ensembles of different genomes generated one from the other with the basic moves of duplication , innovation ( and loss , see below ) .it considers only events that are observed at a given , independently on when or why they happened in physical or biological time .genomes from the same realization can be thought of as a trivial tree of life , where each value of gives a new specie . in the case including domain deletions , more genomes of the same history can have the same size .in contrast , independent realizations are completely unrelated .the scaling laws in and hold for the typical realization , indicating that the scaling laws originate from the basic evolutionary moves and not from the fact that the species stem from a common tree with intertwined paths due to common evolutionary history .for example , two completely unrelated realizations will reach similar values of at the same value of .the data confirm this fact : phylogenetically distant bacteria with similar sizes have very similar number and population distribution of domain classes ( see fig . [fig : crpvspwl ] ) . 
while the scaling laws are found independently on the realization of the chinese restaurant model , the uneven occurrence of domain classes can be seen as strongly dependent on common evolutionary history .averaging over independent realizations , the prediction of the crp is that the frequency of occurrence of any domain class would be equal , as no class is assigned a specific label . in the chinese restaurant metaphor, the customers only choose the tables on the basis of their population , and all the tables are equal for any other feature . in order to capture this behavior with the model, one can consider the statistics of domain topology occurrence of a single realization , which is an extremely crude , but comparatively more realistic description of common ancestry .in other words , in this case , the classes that appear first are obviously more common among the genomes , and the qualitative phenomenology is restored , without the need of any adjustment in the model definition ( fig .[ fig : occurrence ] ) . to ( `` uniform '' ) ,i.e. the range of sizes observed in the data , or directly for the set of empirical sizes of the genomes ( `` sampled '' ) ., scaledwidth=80.0% ]loss of genes , and thus of domains , is reported to occur frequently in genomes .we will discuss now variants of the model considering the introduction of a domain deletion , or loss rate .the question we ask is whether the introduction of domain loss , which we consider mainly as a perturbation , affects the qualitative behavior of the model , for example by generating different scaling behavior or phase transitions .we will see that the answer to all these questions is mostly negative even for non - infinitesimal perturbations , provided the loss rate is constant and does not scale itself with and .the main exception to this behavior is found when the loss probability of a domain depends on its own class size .we introduce domain loss through a new parameter , which defines a loss probability in two _ a priori _ different ways .( 1 ) we can distribute this probability equally among domains , so that the per - class loss probability is .consequently , the duplication and innovation probabilities and are rescaled by a factor . ( 2 ) we can also weigh the loss probability of a domain on its own class size , in the same way as domains are duplicated in the standard crp , so we obtain a per - class loss move with probability , giving a total and the rescaling of and .we will see that model ( 1 ) and ( 2 ) are not equivalent . on technical grounds, the introduction of domain loss makes the stochastic process entirely different : is now a random variable , and all the observables that depend on it ( e.g. ) are stochastic functions of this variable .another parameter , , describes the iterations of the model .operatively , we tackle the two models with the usual mean - field approach , writing equations for and of the kind , , and hence obtain the behavior as a function of by considering .the exact meaning of these equations is not straightforward .for example should represent the average on all histories passing by , but the differential equation strictly describes only the dependence of the observable from the actual value of the random variable .nevertheless , the predictions of this mean - field approach agree well with the results of simulations , indicating that these complications typically do not affect the behavior of the means .we will consider situations where , on average , genomes are not shrinking . 
considering model ( 1 ) ,we can write the mean - field equations as where the sink term for derives from domain loss in classes with a single element , quantified by . since time does not count genome size , one has to consider the evolution of with time , given in this case by . in order to solve these expressions, we use the ansatz , and considering the limit in large .the ansatz is verified by simulations and holds also for empirical data , as previously shown ( fig .[ fig : f1n ] ) .the first equation reads : \ \ .\ ] ] the above equation gives the conventional scaling for and with replaced by , the correction resulting from the measured value of . by the use of computer simulations we notice that the coefficient tends to for infinite - size genomes ( fig .[ sub : c1a ] , [ sub : ara ] , and [ sub : an ] ) , so that the asymptotic trend of the equally distributed domain loss is identical to that of the standard crp .this behavior is independent from the chosen , as the asymptotic regime depends only on the growth of , governed by .when we analyze the domain distribution of finite - size genomes , we obtain the conventional results : the power - law depends on the genome size , but not on the value of ( fig . [sub : ara ] and [ sub : qrd ] ) .this is explained considering the fact we are comparing runs with fixed genome size , thus with different number of moves , and we do not consider genomes that lose all their own domains .in fact biologically one can not trace the number of moves needed to reach a specific genome , but essentially we can observe only genomes in their actual state .more precise results can be obtained by the use of the mean field `` master equation '' approach sketched above . using the same ansatz , we obtain the following hierarchy of equations for \chi_j=(1-\d)(j-1-\a)\chi_{j-1}+\d ( j+1)\chi_{j+1 } \ \ , \ ] ] with .it is possible to estimate the solution of this system by taking a continuum limit as - \d\partial_x \chi(x)=0 \ \ , \ ] ] which can be solved giving ^{1 + z } \ \ , \ ] ] with .we also find , which is consistent with the constraint since .it is then clear that the introduction of domain loss is equivalent to a rescaling of the parameter to , but in our case , asymptotically .the case has to be treated separately , but the behavior is similar ( fig .[ sub : qrd ] ) .a similar procedure is applicable to model ( 2 ) . in this case , however , the dependence of the effective death rate from can bring to an interesting change in the phenomenology , where can select the observed exponent and also determine a regime of linear growth for . to understand this pointwe can consider the mean - field evolution of .this is determined asymptotically by the balance of a growth term and a loss term .with the usual ansatz this gives an asymptotic evolution equation of the kind with the usual definitions .simulations confirm the ansatz for this second model , which we will use in the mean - field reasoning . if is intensive , i.e. asymptotically of order inferior to , the term can be neglected and this equation gives the scaling law , with .however , this equation has solution also if is order , i.e. , and the term in eq .( [ eq : mortepes ] ) can not be neglected . in this case and determine the prefactor of the scaling law . thus , there are two self - consistent mean - field asymptotic solutions , and we expect a transition between the two distinct behaviors. 
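both loss variants can be explored with a single simulation harness by swapping the weight used to pick the class that loses a domain ; a sketch ( the weight chosen below for variant ( 2 ) mirrors the duplication weight , which is one plausible reading of the definition above rather than a detail taken from the equations ) :

```python
import math
import random

def growth_exponent(loss_weight, alpha=0.6, theta=50.0, delta_loss=0.3,
                    n_small=5_000, n_large=20_000, seed=11):
    """One growth history with CRP duplication/innovation plus a loss move of
    total probability delta_loss; the class hit by a loss is drawn with weight
    loss_weight(n_i).  The exponent of K(n) ~ n^x is estimated from K at two
    sizes along the same history."""
    rng = random.Random(seed)
    classes, n, k_small = [1] * 10, 10, None
    while n < n_large:
        if rng.random() < delta_loss and n > 1:
            w = [loss_weight(c) for c in classes]
            i = rng.choices(range(len(classes)), weights=w)[0]
            classes[i] -= 1
            n -= 1
            if classes[i] == 0:
                classes.pop(i)            # topology lost with its last member
        else:
            if rng.random() < (theta + alpha * len(classes)) / (theta + n):
                classes.append(1)
            else:
                i = rng.choices(range(len(classes)),
                                weights=[c - alpha for c in classes])[0]
                classes[i] += 1
            n += 1
        if n == n_small and k_small is None:
            k_small = len(classes)
    return math.log(len(classes) / k_small) / math.log(n_large / n_small)

print("uniform loss, variant (1):", growth_exponent(lambda ni: ni))
print("weighted loss, variant (2):", growth_exponent(lambda ni: max(ni - 0.6, 1e-9)))
```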
the existence of this transition is confirmed by simulations ( fig .[ sub : aramfa2 ] ) : at fixed , saturates to for larger values of this parameter . the transition point can be understood in mean - field as the intersection of the two solutions and at varying , and gives rise to a two - parameter `` phase - diagram '' separating the linear from the sublinear scaling of with , as shown in fig . [ sub : fase ] . in conclusion ,the mean - field approach is effective in exploring the effects of domain loss , which , under some general hypotheses , does not disrupt the basic phenomenology of the duplication - innovation model .specifically , there appear to be no qualitative changes introduced by a finite uniform loss rate , as long as this rate is constant with .a loss rate that is weighted as the innovation rate , instead , can induce an interesting transition from sublinear to linear scaling .thinking about the empirical system , no direct quantitative estimates are currently available regarding the domain loss rate as a function of genome size or number of classes .for this reason , it currently appears difficult to make a definite choice for this ingredient in the model .in the previous sections we analyzed models that make no distinction between domain topologies , but the latter are selected for duplication moves only on the basis of their population .it is then clear that they can reproduce the observed qualitative trends for the domain classes and their distributions with one common set of parameters for all genomes .one further question is to estimate the quantitative values of these parameters for the data .while the empirical slope of could be seen as more compatible with a model having , as its slope decays faster than a power law for large values of , the slopes of the power - law distribution of domain classes and their cutoff as a function of is in closer agreement to a crp with between 0.5 and 0.7 .those table colors can be set by any observable of interest . in our analysis, we considered the empirical occurrence of a domain topology as a label .indeed , the occurrence of a given domain class is determined by its biological function .for example , as expected , all the `` core '' biological functions such as translation of proteins and dna replication are performed by highly occurring domain topologies , since this machinery must be present in each genome .accordingly , these universal classes performing core functions have to appear preferentially earlier on in a model realization .this variant of the model is important for producing informed null models for the analysis of the empirical data , and , as we will see , shows the best agreement with respect to the scaling laws . from the superfamily database with mean - field predictions and simulations of the model with class specificity .comparison between of powerlaw fit ( lines in blue ) and universality of the two parameters from our model .two letters identify each genome , whose full name can be found in appendix ( table [ genname ] ) .simulations ( lines in gray ) from our model use the same values and . 
]we will introduce the variant with class specificity by coupling the crp model to a simple genetic algorithm able to select between innovation moves that choose different classes .let us first introduce some notation to parametrize domain class occurrence .we define the matrices where : it is possible to consider the mean taken along the matrix columns , where the label `` emp '' means that this value is obtained from empirical data ( fig .[ fig : occurrence ] ) .generically , a genetic algorithm requires a representation of the space of solution and a function that tests the quality of the solution computed . in our case ,the former is simply the genome obtained from the a crp step , parametrized by .the latter is defined as the value of the above scoring function taken over simulated genomes measures how much the set of domain classes they possess agrees with experimental data ( in this case on occurrence ) and enables to compare different `` virtual '' crp moves .note that in this variant , since the empirical domain topologies are a finite set , domain classes are also finite . as a consequence ,tables with a given tablecloth are extracted without replacement , affecting the pool of available colors .as we will see , this is an important requirement to obtain agreement with the data , as it determines the saturation of the function also for large values of .we will discuss the role of an infinite pool in the following subsection .also , as anticipated , domain classes have different `` color '' , or in mathematical terms the exchangeability of the process is lost .classes are drawn from the set of the residual ones with uniform probability .genomes and are compared through the function and the highest score one will be the genome so that . in these conditions ,the rigorous results present in the literature for the crp cease to be valid .it is still possible , however , to analyze the behavior of this variant by the mean - field approach adopted here , and to compare with simulations .since the selection rule chooses strictly the maximum , it is essentially able to distinguish the sign of only .for this reason , it is sufficient to account for the positivity ( which we label by `` + '' ) and negativity ( `` - '' ) of this function for a given domain index .this means that , with the simplification of two virtual moves only , the model introduces only one extra effective parameter , i.e. the ratio of the `` universal '' ( positive ) to the `` contextual '' ( negative ) domain classes . in order to write the mean - field equations for this model variant, we first have to classify all the possible outcomes of the virtual crp moves . the genomes and proposed by the crp proliferation step can have the same ( labeled by `` '' ) , higher ( `` '' ) or lower ( `` '' ) score than their parent , depending on , and by the probabilities to draw a universal or contextual domain family , and respectively . using these labels ,the scheme of the possible states and their outcome in the selection step is given in table [ tab : moves ] .
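a loose sketch of one innovation step with selection , reduced ( as described above ) to the sign of the drawn class ; the pool of labels , the fraction of ` universal ' classes and all names are hypothetical :

```python
import random

def innovation_with_selection(residual_pool, rng):
    """Two virtual moves each draw a distinct candidate class uniformly from the
    finite residual pool of class labels; labels carry a sign, '+' for universal
    (high-occurrence) and '-' for contextual classes.  Selection keeps the
    higher-scoring candidate, which is removed from the pool (drawing without
    replacement).  In the paper whole simulated genomes are scored against
    occurrence data; reducing the score to the sign of the drawn class is the
    simplification described in the text."""
    if not residual_pool:
        return None
    if len(residual_pool) == 1:
        return residual_pool.pop()
    a, b = rng.sample(range(len(residual_pool)), 2)
    keep = a if (residual_pool[a][1] == "+") >= (residual_pool[b][1] == "+") else b
    return residual_pool.pop(keep)

# hypothetical pool: a minority of 'universal' labels among contextual ones
rng = random.Random(0)
pool = [(f"class_{i:04d}", "+" if i < 300 else "-") for i in range(2000)]
drawn = [innovation_with_selection(pool, rng) for _ in range(100)]
print(sum(1 for _, sign in drawn if sign == "+"),
      "universal classes among the first 100 innovations")
```

as expected from the discussion above , the universal ( positively scored ) classes are drawn preferentially early in the realization .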
|
we present a combined mean - field and simulation approach to different models describing the dynamics of classes formed by elements that can appear , disappear or copy themselves . these models , related to the paradigmatic duplication - innovation model known as the chinese restaurant process , are devised to reproduce the scaling behavior observed in the genome - wide repertoire of protein domains of all known species . in view of these data , we discuss the qualitative and quantitative differences of the alternative model formulations , focusing in particular on the roles of element loss and of the specificity of empirical domain classes .
|
weak convergence of scaled input processes has been studied extensively over the last decade . the limit is a fractional brownian motion ( fbm ) or a lévy process depending on the particular scaling . while the motivation of such analysis originates from data traffic in telecommunications , both fbm and lévy processes have recently become prevalent in finance . motivated by this , we construct a general stochastic process based on a poisson random measure , interpret it as a stock price process and prove weak convergence results . we consider a real valued process of the form n_n(ds , du , dr)\ ] where is a deterministic function satisfying a lipschitz condition , is a scaling sequence , and and are marks of the resulting poisson point process with denoting the time . the mean measure on is either a probability measure or an infinite measure on . the process depends on the scaling parameter not only through , but also through the mean measure of , which is taken to have a regularly varying form in to comply with the long - range dependence property of teletraffic or financial data . after centering the process , we can obtain either an fbm or a stable lévy motion depending on the particular scaling of the mean measure of and the factor as . while fbm is a self - similar and long - range dependent model , a lévy process has independent increments and self - similarity exists without long - range dependence . our main contribution is the generalization of the previous results , obtained for a specific linear form of and for an increasing satisfying some other technical conditions , to lipschitz functions . in this case , not only do the proofs require more work , but we also need lipschitz assumptions on the derivative of . we also show that the time scaling used in previous work can be replaced by parameter scaling of the distributions of the relevant random variables . the time scaling has been interpreted as a ` birds - eye ' description of a process , which is not necessary when the scaling is interpreted in terms of its parameters . inspired by , we unify the results for general forms of with less stringent conditions in some cases . in , it is noted that a fractional brownian motion with can be approximated if the pulse is continuous on and has a compact support . as an alternative extension , we consider a continuous with no compact support while constructing ( [ 1 ] ) with as a centered process . a secondary generalization of previous work is the consideration of as a more general process than workload , which would be positive by definition . we allow a signed process through the choice of a real valued rate . in related work , the poisson random measure is replaced by a general arrival process and a cluster poisson process , respectively . ergodicity is required for the limit theorems in the general arrival case as well . on the other hand , most of the previous studies on scaled input processes are named infinite source poisson models due to the assumption of poisson arrivals . as for the application in finance , the process can be interpreted as the price of a stock . the interpretation of ( [ 1 ] ) as a stock price process was first presented in .
our aim is to construct a model involving the behavior of agents that can be parameterized and estimated from data , yet having well - known stochastic processes as its limits .while the limiting models fit well to financial data , they do not involve the physical parameters of the trading agents .agent based modeling is widely used to find a model that best fits stock price processes . in some studies, agents are divided into two groups , mostly named as chartists and fundamentalists . in these studies ,the two agent groups have different demand functions for the stock and the price is generally determined via the total excess demand . in ([ 1 ] ) , the arrival time , the rate and the duration of the effect of an order are all governed by the poisson random measure . under the assumption of positive correlation between the total net demand and the price change, we expect that a buy order of an agent increases the price whereas a sell order decreases it .each order has an effect proportional to its volume and duration .the duration of the effect is assumed to follow a heavy tailed distribution .this effect starts when the order is given , increases to a maximum which is proportional to the total order amount , and then starts decreasing until it vanishes after a finite time .alternatively , its effect may last for some time and leave the price at a changed level on and after the time of transaction .the logarithm of the stock price is found by aggregating the incremental effects of orders placed by all active agents in ] and remains constant thereafter . in , the pulse considered for the aim of approximating an fbm .it has a compact support representing a limited effect that vanishes after the duration of the pulse .these special pulses , in other words effect functions , are sketched in fig.1 .general lipschitz functions are also considered in .the effect function is left unspecified in but with conditions on the tail properties of its distribution for large times .the duration is not parameterized in contrast to the present work . in ,the effect may last indefinitely although it decreases in time with a regularly varying tail . in , each effect is assumed to converge for large times to a finite random variable which has a distribution with a regularly varying tail .the stable limits are outlined in with several interesting special cases . in ,the effect function is deterministic which is randomized through a random variable for duration as in our case , but with an additional assumption that the effect function itself also has a regularly varying tail . on the other hand ,random effect functions have been also considered also in where central limit theorems are proved under general conditions . as a special case, the effect function could be a compound poisson process as in .this could be used to model the buy or sell transactions in smaller quantities for a given order in the present work .a semimartingale is assumed for the rate of the effect in . 
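two concrete effect functions consistent with the shapes sketched in fig.1 and with the lipschitz conditions stated below are given here ; the precise functional forms are illustrative choices , not the ones fitted to data :

```python
def f_ramp(x):
    """Effect that builds up linearly over the duration and then leaves the price
    at a changed level: f(x) = 0 for x <= 0, x on (0, 1), and 1 for x >= 1."""
    return max(0.0, min(float(x), 1.0))

def f_pulse(x):
    """Compact-support effect: rises to a maximum and vanishes at the end of the
    duration, so no permanent price change remains (f = 0 outside [0, 1])."""
    x = float(x)
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return 1.0 - abs(2.0 * x - 1.0)
```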
in applications, can be estimated to match the local dynamics of the price change or the workload .let the price process be given by where is the log - price process to be constructed in this section .we aim to introduce a stochastic process which is sufficiently general to approximate an fbm or a lvy motion , and has an adequate number of physical parameters that can be estimated from data .the effect function and the poisson random measure described in section 2 will be the main ingredients .previous models that involve heterogeneous agents usually classify them into two separate groups as chartists and fundamentalists according to their trading behavior . this can clearly be generalized to several types of agents .the total effect from agents of type , , can be further aggregated to form as then , the price at time is given by as before .we form the log - price process by aggregating the randomized effects .more precisely , the difference in the effect amplitudes at times and are integrated with respect to the poisson random measure to yield since the underlying poisson process has been going on long before time 0 , has stationary increments and by construction .we think as the sum of all effects due to all active agents between times 0 and .we assume that has the form ( [ k ] ) as before , and write \ , n(ds , du , dr).\ ] ] the propositions below give the sufficient conditions for to be well - defined for a finite measure and a specific -finite measure in equation ( [ mean ] ) , respectively .let denote the characteristic function of , that is , for .[ prop1 ] suppose that is a probability measure satisfying , and is a lipschitz continuous function on with for all and for all .then , , , is a finite random variable a.s . with characteristic function -1 \right\ } \lambda \ , ds\ , \nu(du)\ , \gamma(dr)\ ] ] * proof : * the integral of a deterministic function with respect to a poisson random measure defines a finite random variable if where is the mean measure .this is clearly satisfied if which also implies that the random variable has a finite expectation .therefore , it is sufficient to show that the expression is finite for to be well defined as . note that considering the two different regions and for the first integral , and the regions and for the second integral in , we get ds \\ & & + \int_{0}^t \left [ \int_0^{t - s}u\,\left|f(1)\right| \ , \nu(du ) \,ds + \int_{t - s}^{\infty}u\,\left|f\left(\frac{t - s}{u}\right ) \right| \,\nu(du ) \right ] \,ds\end{aligned}\ ] ] due to the lipschitz hypothesis on and since , we have ds \label{above } \\ & & + \int_{0}^t \left [ \int_0^{t - s}u\,m \ , \nu(du ) + \int_{t - s}^{\infty}u\,m \frac{t - s}{u } \,\nu(du ) \right ] \,ds \nonumber\end{aligned}\ ] ] where .we apply integration by parts for the inner integrals above . for ,integration by parts yields and we have where denotes the cumulative distribution function ( cdf ) of and . for ,we get by integration by parts and we have . putting all expressions together and using , we simplify ( [ above ] ) as changing the order of integration , we get which is finite by hypothesis as .the characteristic function of can be found immediately from formulae for integrals with respect to a poisson random measure . 
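the construction of the log - price process lends itself to direct monte carlo exploration ; a sketch assuming a pareto duration law with index delta in ( 1,2 ) and symmetric unit rates ( both are illustrative stand - ins for the unspecified duration and rate measures ) , and truncating arrivals at a finite warm - up time instead of letting them extend to minus infinity :

```python
import numpy as np

def f_ramp(x):
    """Linear build-up followed by a permanent level change (vectorized)."""
    return np.clip(x, 0.0, 1.0)

def simulate_log_price(T=10.0, lam=50.0, delta=1.5, warmup=500.0,
                       n_grid=201, seed=0):
    """Monte carlo sketch of
        Z(t) = sum_i r_i u_i [ f((t - s_i)/u_i) - f((-s_i)/u_i) ],
    with Poisson(lam) arrival times on (-warmup, T), Pareto(delta) durations and
    independent +/-1 rates.  Dropping arrivals before -warmup only approximates
    the stationary construction of the text."""
    rng = np.random.default_rng(seed)
    n = rng.poisson(lam * (warmup + T))
    s = rng.uniform(-warmup, T, size=n)                  # arrival times
    u = (1.0 - rng.uniform(size=n)) ** (-1.0 / delta)    # Pareto durations, tail u^-delta
    r = rng.choice([-1.0, 1.0], size=n)                  # buy / sell order rates
    t = np.linspace(0.0, T, n_grid)
    accrued = f_ramp(-s / u)                             # effect already built up by time 0
    z = np.array([np.sum(r * u * (f_ramp((tk - s) / u) - accrued)) for tk in t])
    return t, z

t, z = simulate_log_price()
print(z[:5])
```

over horizons much longer than the typical duration , the variance of the simulated increments should grow roughly like t^(3 - delta) , in line with the hurst exponent ( 3 - delta ) / 2 of the fbm limit derived later .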
the function characterized in proposition [ prop1 ] represents the local dynamics due to the effect of an individual buy or sell order .the change in price , which can be non - monotonic , occurs over a finite time and remains at the same level thereafter .the specific shape of is left unspecified , and so is its sign . in general , we expect a buy order to increase the price and a sell order to decrease it .therefore , if is chosen to be an increasing function , then we could have for a buy order and for a sell order .however , a general form is assumed to leave room for modeling purposes in view of real data and to provide mathematical generality .the special case ( [ increasingf ] ) used in is a linearly increasing pulse as stated in the following corollary .suppose that is a probability measure satisfying , and , .then , , , is finite a.s . and its characteristic function is given by \lambda \ , ds\ , \nu(du)\ , \gamma(dr)\ ] ] for .the log - price process which is a semimartingale in general , becomes a martingale if its mean is zero. this would be satisfied if , which corresponds to symmetric effects from buy and sell orders , for example .the following proposition is based on the results of .the support of is chosen as [ 0,1 ] for simplicity without loss of generalization .[ prop2 ] suppose that , and is lipschitz continuous with compact support ] imply and ^ 2u^{1-\delta}\ , du \ , ds < \infty\ ] ] for each , and in particular , by ( * ? ? ?* prop.3.1 ) .we sketch the usual proof for defining as an almost sure limit of zero mean random variables , as in .let ,\ , k=1,2,\ldots,\,a_0=(1,\infty) ] for , and .it is given by }-1\bigg .\nonumber\\ & & \qquad \left .- i\sum_{k=1}^m\xi_k\ , r \,u\left[f\left(\frac{t_k - s}{u}\right)- f\left(\frac{-s}{u}\right)\right]\right\ } \frac{n^{\delta}}{h(n ) } \ : \lambda\ , ds\ , \nu_n ( du)\,\gamma(dr ) \label{chr1}.\end{aligned}\ ] ] we first show that the exponent in ( [ chr1 ] ) is bounded and then use bounded convergence theorem to take the limit .this theorem is a generalization of ( * ? ? ? *thm.1 ) with the general effect function .although we follow the same approach as in ( * ? ?* thm.1 ) , there are more terms to bound in our case .let }-1 -i\sum_{k=1}^m\xi_k \ , r \,u\left[f\left(\frac{t_k - s}{u}\right)- f\left(\frac{-s}{u}\right)\right]\ : .\ ] ] using the random variable , we denote the left hand side of ( [ nu ] ) as below . by integration by parts ,the exponent in ( [ chr1 ] ) is equal to where is and the hypothesis that is used . *a ) * bound for the integrand of ( [ byparts ] ) for large values of : in view of potter bounds , for there exists such that for all and , that is , . since for some , we have for all for some .note that by ( [ nu ] ) .assume for simplicity of notation .therefore , we get for all and . 
in ( [ byparts ] ) , we have }-1 \right]\partial_u s(s , u , r)\ ] ] where \\ & = & \sum_k \xi_k r \ , \left[f\left(\frac{t_k - s}{u}\right)- f\left(\frac{-s}{u}\right)\right ] \\ & & \qquad + \sum_k \xi_k r \ , \left[-f'\left(\frac{t_k - s}{u}\right)\frac{t_k - s}{u}+ f'\left(\frac{-s}{u}\right)\frac{-s}{u}\right]\end{aligned}\ ] ] now , we can bound using the lipschitz property of and on different regions for and .let and stand for the lipschitz constants of and , respectively , or their upper bound , whichever is larger .let us assume for simplicity of notation .\i ) and since and , we have and = \left|0-f'\left(\frac{-s}{u}\right)\frac{-s}{u}\right| \leq m ' \left| \frac{s}{u}\right|\leq m \left| \frac{s}{u}\right|\end{aligned}\ ] ] due to the form of and lipschitz assumptions .therefore , we get in this region .\ii ) and in this region , vanishes and .therefore , we have \iii ) and in this region , and we get \iv ) and we have and the corresponding bound on follows .now , we can bound the remaining terms in ( [ gu ] ) by using the inequalities and , , and the fact that . the index is replaced by in order to distinguish the cross products of sums below .we further note that since is bounded and , assuming for simplicity of notation .putting all terms together by ( [ 14 ] ) , ( [ and ] ) , ( [ and2 ] ) and i)-iv ) , we find that ( [ byparts ] ) is bounded as where } \label{b}\end{aligned}\ ] ] and denote the regions in i)-iv ) . since ] for , and is given by }-1\bigg .\nonumber\\ & & \qquad \left .- i\sum_{k=1}^m\xi_k\,\frac{r}{n}\,u\left[f\left(\frac{t_k - s}{u}\right)- f\left(\frac{-s}{u}\right)\right]\right\ } \frac{n^{2+\delta}}{h(n)}\ : \lambda\ , ds\ , \nu_n ( du)\,\gamma(dr)\label{chr}.\end{aligned}\ ] ] the same approach will be followed as in the proof of theorem [ icr ] . by integration by parts, we find that the exponent of ( [ chr ] ) is given by using potter bounds and lipschitz conditions on and , we get an inequality similar to ( [ bound ] ) for given by where is similar to ( [ b ] ) but with by hypothesis , and and .precisely , \end{aligned}\ ] ] if we choose such that then the right hand side of ( [ boundfcra ] ) is finite along the same lines of the proof of theorem [ icr ] with .on the other hand , we can bound ( [ byparts2 ] ) for similarly .therefore , we can use dominated convergence theorem .we have as in ( [ kucuklimit ] ) , and as is bounded , hence , uniformly continuous .then , we get \nonumber \left[f\left(\frac{t_k - s}{u}\right)-f\left(\frac{-s}{u}\right)\right]u^2}\end{aligned}\ ] ] by lemma [ lemma 2 ] .we now revert ( [ byparts2 ] ) after the limits above , by another integration by parts , and get the limit of ( [ chr ] ) as \left[f\left(\frac{t_k - s}{u}\right)-f\left(\frac{-s}{u}\right)\right]u^{2 } \lambda \ , ds \ , u^{-\delta-1}\ , du \ , \gamma(dr)\right\}\end{aligned}\ ] ] which is the characteristic function of , where is a gaussian vector with zero mean and covariance \left[f\left(\frac{t_k - s}{u}\right)-f\left(\frac{-s}{u}\right)\right ] u^{2}ds \,u^{-\delta-1}\ , du .\label{covariance_matrix}\ ] ] when ( [ covariance_matrix ] ) is evaluated at , the variance coefficient is found to be ^ 2 \,ds\,u^{1-\delta}\ , du\ ] ] which is finite by lemma [ lemma1 ] with . using the identity for and making several change of variables , we find that the covariance of in ( [ covariance_matrix ] ) is given by for with . 
by definition, has the characteristic function of an fbm .convergence in the skorohod topology on follows along the same lines of proof of theorem [ icr ] . in this case, we have ^ 2 \!\ ! \lambda \ ,ds\ , \nu_n(du)\ ] ] and ( [ tight ] ) holds with . as an example , the continuous flow rate model studied in is given by \,r \, n(ds , du , dr ) \label{taqqu5}\ ] ] with replaced by the special form ( [ increasingf ] ) in ( [ zf ] ) . in ( * ? ? ?* thm.2 ) , the limit is studied when the speed of time increases in proportion to the intensity of poisson arrivals . to balance the increasing trading intensity , timeis speeded up by a factor and the size is normalized by a factor provided that .we can let with .taking , we show the equivalence of the scaling of ( * ? ? ?* thm.2 ) to the scaling in theorem [ fcr ] .note that .the scaled and centered process has the form \,\tilde{n}_n(ds , du , dr ) \label{taqqu22}\end{aligned}\ ] ] where we have written an effect function in general .then , we can make change of variables and to get \,\tilde{n}_n(d(ns),d(nu),dr ) \nonumber\\ & & = \int_{-\infty}^\infty\int_0^\infty\int_{-\infty}^\infty \frac{r}{n } \ , u \left[f\left(\frac{t - s}{u}\right)- f\left(\frac{-s}{u}\right)\right ] \,\tilde{n}_n(d(ns),d(nu),dr ) \label{equivform}\end{aligned}\ ] ] where the mean measure is in theorem [ fcr ] , we start with the scaled process ( [ equivform ] ) essentially . this can be observed by the fact that for a poisson random measure with mean measure by definition of a poisson random measure , ( * ? ? ?* def.v.2.2 ) .equivalence of the scalings in theorem [ fcr ] and ( * ? ? ?* thm.2 ) is in distributional sense .however , this is sufficient for equivalence as the convergence results are in distribution rather than almost sure sense .therefore , we can apply theorem [ fcr ] to obtain the limit as a fbm with variance parameter ^ 2 \ , ds\ , u^{-\delta-1}du= \frac{\mathbb{e}r^2}{(2-\delta)(3-\delta)}\ : .\ ] ] it is shown in that the asymptotic behavior of the ratio determines the type of the limit process when time is speeded up by a factor . for a choice of sequences and , the random variable denotes the number of effects still active at time n. it measures the amount of very long pulses that are alive and how much they contribute to the total price .the expected value of the random variable is for large .the limit is considered in the cases where this value tends to a finite positive constant , to infinity , or to zero as and go to infinity .we have already studied the case of finite constant in theorem [ icr ] and infinity in theorem [ fcr ] , the so - called intermediate and fast connection rates , respectively , in view of telecommunication applications . as shown above, our scalings do not involve time scaling .they can be physically understood as scalings of the parameters of the log - price process .the slow connection rate will be investigated similarly in terms of the model parameters in theorem [ scr ] in the next section .the next theorem is a simpler version of theorem [ fcr ] due to the form of the measure .note that ( [ scalmean ] ) can be approximated as for large .this scaling is used below with the simpler form of .it can be interpreted as half way in taking the more involved limit of theorem [ fcr ] .[ manthm ] let \tilde{n}_n(ds , du , dr)\ ] ] where and suppose that and is a lipschitz continuous function satisfying either of the following conditions 1 . for all and for all , or 2 . 
has a compact support .then , the process , for , converge in law to an fbm with variance parameter ^ 2 u^{1-\delta } du \ , ds\ ] ] as .* proof : * although it can be found from the characteristic function of in proposition [ prop2 ] that for all under assumption ii , we form as above since may not exist with assumption i. for the convergence of finite dimensional distributions of , consider the characteristic function for , and .it is given by where is given in ( [ g ] ) .note that the characteristic function exists since are well defined in view of ( [ secmoment ] ) which follows from lemma [ lemma1 ] with under assumption i , and by proposition [ prop2 ] under assumption ii . as , we will show that the above characteristic function converges to \left[f\left(\frac{t_k - s}{u}\right)-f\left(\frac{-s}{u}\right)\right]u^{2 } \lambda \,u^{-\delta-1 } \ , ds \,du \, \gamma(dr)\right\ } \label{mandelbrot_bound}\end{aligned}\ ] ] due to the inequality for , the integrand in ( [ mandelbrot_bound ] ) is an upper bound to therefore , dominated convergence theorem allows us to take the limit inside the integral in ( [ mandelbrot2.5 ] ) .that is , we must find }-1 -i\sum_{k=1}^m\xi_k\frac{r}{n}u\left[f \left(\!\frac{t_k - s}{u } \ !\right ) - f \left ( \ ! \frac{-s}{u}\ !\right ) \right]\right)n^2\ ] ] which is now equal to \left[f\left(\frac{t_k - s}{u}\right)-f\left(\frac{-s}{u}\right)\right]u^2 \label{limit}\ ] ] by lemma [ lemma 2 ] .this shows that ( [ mandelbrot2.5 ] ) converges to ( [ mandelbrot_bound ] ) as by the continuity of the exponential function .the variance is evaluated in the proof of theorem [ fcr ] . to complete the proof , we need to show convergence in with skorohod topology .this is straight forward since the variance of is already free of and is bounded by a constant multiple of by the proof of lemma [ lemma1 ] . theorem [ manthm ] with condition ii . is theorem 3.1 of where it is noted that a fractional brownian motion with can be approximated if the pulse is continuous and has a compact support .condition i. above considers an effect function which is continuous , but with no compact support as an alternative .a process with stationary and independent increments is called a lvy process .the results of this section concerns a particular class of lvy processes , namely stable lvy motion .let , and let , and ] leading to estimates as in ( [ and ] ) and ( [ and2 ] ) .it follows that is an upper bound to when it is evaluated over where and are as above and for evaluating for smaller values of , we have a bound similar to ( [ onceki ] ) .therefore , is bounded by an integrable function uniformly over in view of the analogous computations in appendix b. by dominated convergence theorem , let .we find that \ , u^{-\delta-1}\lambda \ , ds \, du \ , \gamma(dr)\ ] ] by using the same approach for taking the limit of the characteristic function of the finite dimensional distributions above .then , we can write for sufficiently large , by ( [ chrbd ] ) , since is increasing in . simplifying further , we have \ , u^{-\delta-1 } du \ , \gamma(dr ) = \lambda \,t \,\xi^{\delta } \mathbb{e } |r|^{\delta } \int_0^{\infty } ( 1-\cos u ) \, u^{-\delta-1 } du\ ] ] where the second equality follows by a change of variable to .define the constant so that .now , substituting in ( [ nerdeyseson ] ) and changing to , we get which concludes the proof as . 
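a simple diagnostic for the stable regime obtained here is the tail index of simulated increments ; a hill - estimator sketch ( the truncation level k is a tuning choice , and the estimator presumes a clean power tail ) :

```python
import numpy as np

def hill_tail_index(sample, k=200):
    """Hill estimator of the tail index from the k largest absolute values;
    increments attracted to a delta-stable law should give an estimate close to
    delta in (1, 2)."""
    a = np.sort(np.abs(np.asarray(sample, dtype=float)))[::-1]
    if k + 1 > len(a):
        raise ValueError("k too large for the sample size")
    return k / np.sum(np.log(a[:k] / a[k]))

# sanity check on symmetric Pareto(1.5) noise, whose tail index is 1.5 by construction
rng = np.random.default_rng(1)
x = (1.0 - rng.uniform(size=100_000)) ** (-1.0 / 1.5) * rng.choice([-1.0, 1.0], size=100_000)
print(hill_tail_index(x))     # expected to be near 1.5
```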
note that the stable process obtained in the limit is stable with a skewness parameter that depends on the distribution of the rate .moreover , it has stationary and independent increments .therefore , it is also a -stable lvy motion ( * ? ? ?* def.7.5.1 ) , but with scale parameter and skewness parameter by ( * ? ? ?* pg.s 10,11 ) , where and , see also ( * ? ? ?* pg.217 ) . in the context of supply and demand, one can interpret as the skewness caused by demand and by supply since they are expected to increase and decrease the price , respectively .the weak convergence result given in theorem [ scr ] is proved with skorohod s topology . in ,the analogous result based on ( [ increasingf ] ) has been omitted .the convergence is shown with topology instead of in where the effect function is assumed to be monotone increasing in the context of workload input to the system .the authors heuristically argue that some of the individual loads are too large ( * ? ? ?* rmk.4.2 ) . on the other hand , proves weak convergence with topology considering that the limit process has jumps .however , topology also works as shown above . the interplay between and discussed in for sums of moving averages .it is proved that convergence can not hold because adjacent jumps of this process can coalesce in the limit .an intuitive explanation is given as the jump of the limiting process occurring from a staircase of several jumps .under certain conditions , convergence is shown instead .we have a simpler situation where each arrival of the scaled process generates a jump of the limit process as evident from ( [ limituon ] ) .[ scrmandelbrot ] suppose the function is lipschitz continuous with for all , for all and is also differentiable with satisfying a lipschitz condition a.e . , and for some with .let \tilde{n}_n(ds , du , dr)\ ] ] where and then , the process , for , converges in law to as , where and are independent -stable lvy motions with mean 0 , and skewness intensity and , respectively .* proof : * we will give only a sketch of the proof due to its similarities with the previous theorem . the characteristic function for the finite dimensional distributions of be written as with as in ( [ g ] ) . making a change of variable to , we get now , is similar to ( [ gsuon ] ) and we take a similar limit to ( [ limituon ] ) with replaced by .this is justified by dominated convergence theorem since the integrand in ( [ sonchr ] ) can be bounded as in the proof of theorem [ scr ] .convergence in follows along the same lines , this time with in of ( [ k ] ) . the simpler form of scalings in theorems 3 and 5 facilitate neat interpretations in terms of the parameters of the price process . in theorem 3 , is scaled as and is scaled as , which means that the trading occurs more frequently , but in smaller quantities and yields a fractional brownian motion limit .in contrast , a stable process is obtained if the rate of trading decreases while its effect rate increases since is scaled as and is scaled as in theorem 5 .g. iori , a microsimilation of traders activity in the stock market : the role of heterogeneity , agents interactions and trade frictions , journal of economic behavior and organization , 49 ( 2002 ) , 269 - 285 .i. kaj and m. s. taqqu , convergence to fractional brownian motion and to the telecom process : the integral representation approach , brazilian probability school , 10th anniversary volume , eds .vares , v. 
sidoravicius , 2007 , birkhauser .we show that the right hand side of ( [ bound ] ) is finite . when the right hand side of ( [ bound ] ) is splitted over different regions , checking the finiteness of the integrals over reduces to showing that is finite .this is indeed true when we choose such that in region , we have ^{-\delta } \max(u^{-\epsilon},u^\epsilon)\,du \ , ds\ , \gamma(dr)\ ] ] if , it can be observed from fig.2 that the integral reduces to that over region .if , then the integral over yields an upper bound .that is , we can replace by the constant function and get where denotes a cutoff value of such that for after changing the order of integration for and in ( [ i1 ] ) .then , the right hand side of ( [ i ] ) is finite if we choose such that which clearly satisfies ( [ ep2 ] ) since . in this part, we show that ( [ 42 ] ) is integrable with respect to . substituting the limits of integration in regions shown by , respectively , we have where we put .the integrals are finite for since for and , where . in , we have as in appendix a , we consider two intervals ] to evaluate this integral . over the first interval ,it is finite for , and over the latter , it is proportional to which is bounded by 1 . as a result , ( [ 42 ] ) is finite if we choose such that
|
we construct a general stochastic process and prove weak convergence results. it is scaled in space and through the parameters of its distribution. we show that our simplified scaling is equivalent to the time scaling that is frequently used. the process is constructed as an integral with respect to a poisson random measure which governs several parameters of the trading agents in the context of stock prices. when trading occurs more frequently and in smaller quantities, the limit is a fractional brownian motion. in contrast, a stable lévy motion is obtained if the rate of trading decreases while its effect rate increases. keywords: fractional brownian motion, arbitrage, stock price model, stable lévy motion, long-range dependence, self-similarity
|
in a prescient 1929 novel called _ lncszemek _ ( in english , _ chains _ ) , karinthy imagines that any two people can be connected by a small chain of personal links , using no more than _ five _ intermediaries .years later , milgram validates the concept by conducting real - life experiments .he asks volunteers to transmit a letter to an acquaintance with the objective to reach a target destination across the united states .while not all messages arrive , successful attempts reach destination after six hops in average , popularizing the notion of _ six degrees of separation_. yet , for a long time , no theoretical model could explain why and how this kind of _ small - world _ routing works .one of the first and most famous attempts to provide such a model is due to jon kleinberg .he proposes to abstract the social network by a grid augmented with _shortcuts_. if the shortcuts follow a heavy tail distribution with a specific exponent , then a simple greedy routing can reach any destination in a short time ( hops ) . on the other hand , if the exponent is wrong , then the time to reach destination becomes for some .this seminal work has led to multiple studies from both the theoretical and empirical social systems communities . in this paper , we propose a new way to numerically benchmark the greedy routing algorithm in the original model introduced by kleinberg .our approach uses dynamic rejection sampling , which gives a substantial speed improvement compared to previous attempts , without making any concession about the assumptions made in the original model , which is kept untouched . fueled by the capacity to obtain quick and accurate results even for very large grids ,we give a fresh look on kleinberg s grid , through three independent small studies .first , we show that the model is in practice more robust than expected : for grids of given size there is quite a large range of exponents that grant short routing paths .then we observe that the lower bounds proposed by kleinberg in are not tight and suggest new bounds .finally , we compare kleinberg s grid to milgram s experiment , and observe that when the grid parameters are correctly tuned , the performance of greedy routing is consistent with the _ six degrees of separation _ phenomenon .section [ sec : related - work ] presents the original augmented grid model introduced by kleinberg and the greedy routing algorithm .a brief overview of the main existing theoretical and experimental studies is provided , with a strong emphasis on the techniques that can be used for the numerical evaluation of kleinberg s model . in section [ sec : simulation - design ] , we give our algorithm for estimating the performance of greedy routing .we explain the principle of dynamic rejection sampling and detail why it allows to perfectly emulate kleinberg s grid with the same speed that can be achieved by toroidal approximations .we also give a performance evaluation of the simulator based on our solution . 
for readers interested in looking under the hood , a fully working code ( written in julia )is given in appendix [ sec : code ] .to show the algorithm benefits , we propose in section [ sec : applications ] three small studies that investigate kleinberg s model from three distinct perspectives : robustness of greedy routing with respect to the shortcut distribution ( section [ sec : efficient - enough - exponents ] ) ; tightness of the existing theoretical bounds ( section [ sec : asymptotic - behavior ] ) ; emulation of milgram s experiment within kleinberg s model ( section [ sec : six - degrees - of - separation ] ) .we present here the model and notation introduced by kleinberg in , some key results , and a brief overview of the subsequent work on the matter . in , kleinberg considers a model of directed random graph , where are positive integers and is a non - negative real number . a graph instance is built from a square lattice of nodes with manhattan distance : if and , then . represents some natural proximity ( geographic , social , ) between nodes .each node has some _ local _ neighbors and _ long range _ neighbors .the local neighbors of a node are the nodes such that .the long range neighbors of , also called _ shortcuts _ , are drawn independently and identically as follows : the probability that a given long edge starting from arrives in is proportional to .the problem of decentralized routing in a instance consists in delivering a message from node to node in a hop - by - hop basis . at each step , the message bearer needs to choose the next hop among its neighbors .the decision can only use the lattice coordinates of the neighbors and destination .the main example of decentralized algorithm is the _ greedy routing _, where at each step , the current node chooses the neighbor that is closest to destination based on ( in case of ties , an arbitrary breaking rule is used ) .the main metric to analyze the performance of a decentralized algorithm is the _ expected delivery time _ , which is the expected number of hops to transmit a message between two nodes chosen uniformly at random in the graph .this paper focuses on studying the performance of the greedy algorithm . unless stated otherwise, we assume ( each node has up to four local neighbors and one shortcut ) . let be the expected delivery time of the greedy algorithm in .the main theoretical results for the genuine model are provided in the original papers , where kleinberg proves the following : * ; * for , the expected delivery time of any decentralized algorithm is ; * for , the expected delivery time of any decentralized algorithm is .kleinberg s results are often interpreted as follows : short paths are easy to find only in the case .the fact that only one value of asymptotically works is sometimes seen as the sign that kleinberg s model is not robust enough to explain the small - world routing proposed by karinthy and experimented by milgram . 
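to make the model and the greedy routing rule concrete , here is a minimal , self - contained julia toy ( written for a current julia version , unlike the appendix listing which targets julia 0.4 , and not the simulator of this paper ) : it builds the shortcuts of a small grid with one inverse transform over all the other nodes per vertex , runs greedy routing between random pairs , and prints the average number of hops for a few exponents . it assumes p = q = 1 and a 50 x 50 grid ; the naive shortcut construction is quadratic per node and only viable for such toy sizes .
....
# toy illustration of the augmented grid and of greedy routing, with a naive
# shortcut construction (one inverse transform over all the other nodes per
# vertex).  assumes p = q = 1; only viable for small grids.

function build_shortcuts(n::Int, r::Float64)
    sc = Matrix{Tuple{Int,Int}}(undef, n, n)        # the shortcut of each node
    for x in 1:n, y in 1:n
        w = Float64[]                               # weight d^(-r) of every other node
        idx = Tuple{Int,Int}[]
        for a in 1:n, b in 1:n
            d = abs(a - x) + abs(b - y)
            if d > 0
                push!(w, float(d)^(-r))
                push!(idx, (a, b))
            end
        end
        c = cumsum(w)
        c ./= c[end]
        sc[x, y] = idx[searchsortedfirst(c, rand())]
    end
    return sc
end

function greedy_hops(n, sc, s, t)
    hops = 0
    while s != t
        # candidates: the four local neighbours plus the shortcut of s
        cand = [(s[1]+1, s[2]), (s[1]-1, s[2]), (s[1], s[2]+1), (s[1], s[2]-1),
                sc[s[1], s[2]]]
        cand = [c for c in cand if 1 <= c[1] <= n && 1 <= c[2] <= n]
        dists = [abs(c[1] - t[1]) + abs(c[2] - t[2]) for c in cand]
        s = cand[argmin(dists)]        # move to the candidate closest to the target
        hops += 1
    end
    return hops
end

n = 50
for r in (0.0, 2.0, 4.0)
    sc = build_shortcuts(n, r)
    runs = [greedy_hops(n, sc, (rand(1:n), rand(1:n)), (rand(1:n), rand(1:n)))
            for _ in 1:2000]
    println("r = ", r, "  average hops = ", sum(runs) / length(runs))
end
....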
however , as briefly discussed by kleinberg in , there is in fact some margin if one considers a grid of given .this tolerance will be investigated in more details in section [ sec : efficient - enough - exponents ] .while we focus here on the original model , let us give a brief , non - exhaustive , overview of the subsequent extensions that have been proposed since .most proposals refine the model by considering other graph models or other decentralized routing algorithms .new graph models are for example variants of the original model ( studying grid dimension or the number of shortcuts per node ) , graphs inspired by peer - to - peer overlay networks , or arbitrary graphs augmented with shortcuts . other proposals of routing algorithms usually try to enhance the performance of the greedy one by granting the current node additional knowledge of the topology .a large part of the work above aims at improving the bound of the greedy routing .for example , in the small - world percolation model , a variant of kleinberg s grid with shortcuts per node , greedy routing performs in .many empirical studies have been made to study how routing works in real - life social networks and the possible relation with kleinberg s model ( see for example and the references within ) .on the other hand , numerical evaluations of the theoretical models are more limited to the best of our knowledge .such evaluations are usually performed by averaging runs of the routing algorithm considered . in , kleinberg computes for and ] , with at least 10,000 runs per estimate .to explain such a gain , we first need to introduce the issue of shortcuts computation . as stated in , the main computational bottleneck for simulating kleinberg s model comes from the shortcuts .* there are shortcuts in the grid ( assuming ) ; * when one wants to a shortcut , any of the other nodes can be chosen with non - null probability .this can be made by inverse transform sampling , with a cost ; * the shortcut distribution depends on the node considered , even if one uses relative coordinates .for example , a corner node will have neighbors at distance for , against neighbors for inner nodes ( as long as the ball of radius stays inside the grid ) .this means that , up to symmetry , each node has a unique shortcut distribution distinct ( non - isomorphic ) distributions . ] .this prevents from mutualising shortcuts drawings between nodes . in the end ,building shortcuts as described above for each of the runs has a time complexity , which is unacceptable if one wants to evaluate on large grids .the first issue is easy to address : as observed in , we can use the _ principle of deferred decision _ and compute the shortcuts on - the - fly as the path is built , because they are drawn independently and a node is never used twice in a given path .this reduces the complexity to . to lower the complexity even more, one can approximate the grid by the torus .this is the approach adopted in .the toroidal topology brings two major features compared to a flat grid : * the distribution of the relative position of the shortcut does not depend on the originating node .this enables to draw shortcuts in advance ( in bulk ) ; * there is a strong radial symmetry , allowing to draw a `` radius '' and an `` angle '' separately . to illustrate the gain of using a torus instead of a grid , consider the drawing of shortcuts from distinct nodes . in a grid ,if one uses inverse transform sampling for each corresponding distribution , the cost is . 
in the torus, one can compute the probabilities to be at distance for between and ( the maximal distance in the torus ) , draw radii , then choose for each drawn radius a node uniformly chosen among those at distance . assuming drawing a float uniformly distributed over can be made in , the main bottleneck is the drawing of radii . using bulkinverse transform sampling , it can be performed in , by sorting random floats , matching then against the cumulative distribution of radii and reverse sorting the result .we now describe our approach for computing in the flat grid with the same complexity than for the torus approximation . in order to keep low computational complexity without making any approximation of the model, we propose to draw a shortcut of a node as follows : 1 .we embed the actual grid ( we use here to refer to the lattice nodes of ) in a virtual lattice made of points inside a ball of radius .note that the radius chosen ensures that is included in no matter the location of .we draw a node inside such that the probability to pick up a node is proportional to .this can be done in two steps ( radius and angle ) : * for the radius , we notice that the probability to draw a node at distance is proportional to , so we pick an integer between and such that the probability to draw is to .* for the angle , pick an integer uniformly chosen between 1 and 3 .this determines a unique point among the points at distance from in the virtual lattice , chosen with a probability proportional to .if belongs to the actual grid , it becomes the shortcut , otherwise we try again ( back to step # 2 ) .( -4.2,0 ) ( 0,4.2 ) ( 4.2,0 ) ( 0,-4.2 ) ( -4.2,0 ) ; in 1, ... ,6 iin 1 , ... , ( q1i ) at ( .7 * - .7*i , .7*i ) ; ( q2i ) at ( - .7*i,.7 * - .7*i ) ; ( q3i ) at ( .7*i- .7 * , -.7*i ) ; ( q4i ) at ( .7*i , .7*i- .7 * ) ; in 0, ... ,3 in 0, ... ,3 ( ) at ( .7 * - .7,.7 * - .7 ) ; in 0, ... ,3 in 0, ... ,2 ( ) ( ) ( ) ( ) ; \(u ) at ( 11 ) ; ( t1 ) at ( q232 ) # 1 ; ( u ) edge[bend angle = 10 , bend right , red ] node[red , below left ] ( t1 ) ; ( t2 ) at ( q462 ) # 2 ; ( u ) edge[bend angle = 10 , bend right , red ] node[red , left ] ( t2 ) ; ( t3 ) at ( 23 ) # 3 ; ( u ) edge[bend angle = 10 , bend right , green ] node[green , below ] ( t3 ) ; this technique , illustrated in figure [ fig : toyrejection ] , is inspired by the _ rejection sampling _method . by construction, it gives the correct distribution : the node that it eventually returns is in the actual grid and has been drawn with a probability proportional to .we call this _ dynamic _ rejection sampling because the sampled distribution changes with the current node . considering as a relative center , the actual grid moves with and acts like an acceptance mask . on the other hand ,the distribution over the virtual lattice remains constant .this enables to draw batches of relative shortcuts that can be used over multiple runs , exactly like for the torus approximation .the only possible drawback of this approach is the number of attempts required to draw a correct shortcut .luckily , this number is contained .[ lem ] the probability that a node drawn in belongs to is at least .we will prove that we use the fact that the probability decreases with the distance combined with some geometric arguments .let be a lattice that has as one of its corner .let the lattice centered in . 
in terms of probability of drawing a node in ,the worst case is when is at some corner : there is a bijection from to such that for all .such a bijection can be obtained by splitting into and three other sub - lattices that are flipped over ( see figure [ fig : eight ] ) .this gives then we observe that the four possible lattices obtained depending on the corner occupied by fully cover .in fact , axis nodes are covered redundantly .this gives , if one folds back into like the corners of a sheet of paper , we get a strict injection from to ( the diagonal nodes of are not covered ) . moreover , for all .this gives this concludes the proof , as we get [ [ remarks ] ] remarks + + + + + + + * when ( uniform shortcut distribution ) , the bound is asymptotically tight : the success probability is exactly the ratio between the number of nodes in and , which is . on the other hand ,as grows , the probability mass gets more and more concentrated around ( hence in ) , so we should expect better performance ( cf section [ sec : performance - analysis ] ) . *the dynamic rejection sampling approach can be used in other variants of kleinberg s model , like for other dimensions or when the number of shortcuts per node is a random variable ( like in ) .the only requirement is the existence of some _ root _ distribution ( like the distribution over here ) that can be carved to match any of the possible distributions with a simple acceptance test .* only the nodes from may belong to , so nodes from are always sampled for nothing . for example , in figure [ fig : toyrejection ] , this represents 36 nodes over 84 . by drawing shortcuts in instead of , we could increase the success rate lower bound to .however , this would make the algorithm implementation more complex ( the number of nodes at distance is not always ) , which is in practice not worth the factor 2 improvement of the lower bound .this may not be true for a higher dimension . adapting the proof from lemma [ lem ] ,we observe that using a ball of radius will lead to a bound , while a grid of side will lead to .the two bounds are asymptotically tight for . in that casethe grid approach is more efficient than the ball approach ( this grows exponentially with ) .table [ x = n , y = time ] ; table [ x = n , y = time ] ; table [ x = n , y = time ] ; we implemented our algorithm in julia 0.4.5 .a working code example is provided in appendix [ sec : code ] .simulations were executed on a low end device ( a dell tablet with 4 gb ram and intel m-5y10 processor ) , which was largely sufficient for getting fast and accurate results on very large grids , thanks to the dynamic rejection sampling .unless said otherwise , is obtained by averaging runs . for , we mainly consider powers of two ranging from ( about 16,000 nodes ) to ( about 280 trillions nodes ) .we focus on ] ; * the best value of is actually slightly lower than .we believe that these observations are very important as they show that routing in kleinberg s grid is more robust than predicted by theory : it is efficient as long as is _ close enough _ to 2 . using our simulator, we can perform the same experiment . as shown by figure [ fig : n20000 ] ,the results are quite similar to the ones observed in . table [ x = r , y = delivery ] ; yet , there is a small but _ essential _ difference between the two experiments : kleinberg approximated his grid by a torus while we stay true to the original model .why do we use the word _ essential _ ? 
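before moving on to the experiments , the following stand - alone julia sketch makes the drawing primitive described above concrete : it draws a single shortcut by dynamic rejection sampling , with one inverse transform per call instead of the bulk drawing used by the simulator of the appendix . the angle parametrisation below differs from the appendix code but enumerates the same 4d points at distance d ; as a quick check of the lemma , the observed acceptance rate for a corner node ( the worst case ) with r = 0 should be close to the grid - to - ball ratio , that is about one draw in eight .
....
# stand-alone sketch of dynamic rejection sampling for a single shortcut.
# the node sits at (ux, uy) in an n-by-n grid with coordinates 0..n-1.
# a radius d in 1..2(n-1) is drawn with probability proportional to d^(1-r)
# (4d candidate points at distance d, each with weight d^(-r)), a point at
# that distance is then picked uniformly, and the draw is rejected whenever
# it falls outside the grid.  illustration only: one inverse transform per call.

function radius_cdf(n::Int, r::Float64)
    w = [float(d)^(1 - r) for d in 1:(2n - 2)]
    c = cumsum(w)
    return c ./ c[end]
end

# returns the shortcut of (ux, uy) and the number of attempts that were needed
function draw_shortcut(ux::Int, uy::Int, n::Int, cdf::Vector{Float64})
    attempts = 0
    while true
        attempts += 1
        d = searchsortedfirst(cdf, rand())      # radius, probability ~ d^(1-r)
        a = rand(0:(4d - 1))                    # uniform among the 4d points
        i, side = rem(a, d), div(a, d)
        dx, dy = side == 0 ? (d - i, i) :
                 side == 1 ? (-i, d - i) :
                 side == 2 ? (-d + i, -i) : (i, -d + i)
        vx, vy = ux + dx, uy + dy
        if 0 <= vx < n && 0 <= vy < n           # acceptance test
            return (vx, vy), attempts
        end
    end
end

# acceptance rate for a corner node (worst case), n = 1000
for r in (0.0, 2.0)
    cdf = radius_cdf(1000, r)
    tries = sum(draw_shortcut(0, 0, 1000, cdf)[2] for _ in 1:10000)
    println("r = ", r, "  acceptance rate = ", 10000 / tries)
end
....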
both shapes have the same asymptotic behavior ( proofs in are straightforward to adapt ) , so why should we care ?it seems to us that practical robustness is an essential feature if one wants to accept kleinberg s grid as a reasonable model for routing in social networks . to the best of our knowledge ,no theoretical work gives quantitative results on this robustness , so we need to rely on numerical evaluation .but when kleinberg uses a torus approximation , we can not rule out that the observed robustness is a by - side effect of the toroidal topology .the observation of a similar phenomenon on a flat grid discards this hypothesis .in fact , it suggests ( without proving ) that the robustness with respect to the exponent for grids of finite size may be a general phenomenon .we propose now to investigate this robustness in deeper details .we have evaluated the following values , which outline , for a given , the values of that can be considered reasonable for performing greedy routing : * the value of that minimizes , denoted ; * the smallest value of such that , denoted ; * the smallest and largest values of such that , denoted and respectively .the results are displayed in figure [ fig : interval ] .all values but are computed by bisection . for , we use a golden section search . finding a minimum requires more accuracy , so the search of is set to use runs per estimation . luckily , as the computation operates by design through near - optimal values , we can increase the accuracy with reasonable running times .table [ x = n , y = rsup2d2 ] ; table [ x = n , y = rmin ] ; table [ x = n , y = rinfd2 ] ; table [ x = n , y = rinf2d2 ] ; + [ style = dashed , no markers ] coordinates ( 128,2 ) ( 16777216,2 ) ; besides confirming that is asymptotically the optimal value , figure [ fig : interval ] shows that the range of reasonable values for finite grids is quite comfortable .for example , considering the range of values where is less than twice , we observe that : * for ( less than four million nodes ) , any between 0 and 2.35 works ; * for ( less than 270 million nodes ) , the range is between 0.85 and 2.26 . * even for ( about 280 trillions nodes ), all values of between 1.58 and 2.16 can still be considered _efficient enough_. table [ x = n , y = delivery ] ; ; table [ x = n , y = delivery ] ; ; table [ x = n , y = delivery ] ; ; our simulator can be used to verify the theoretical bounds proposed in .for example , figure [ fig : varyingr ] shows for equal to , 2 , and .as predicted , seems to behave like for and like for the two other cases .yet , the exponents found differ from the ones proposed in . for both and , we observe , while the lower bound is .intrigued by the difference , we want to compute as a function of . however , we see in figure [ fig : varyingr ] that a curve appears to have a positive slope in a logarithmic scale , even for large values of . this may distort our estimations . 
to control the possible impact of this distortion , we estimate the exponent at two distinct scales : * ] , using the estimation .the results are displayed in figure [ fig : exponent ] .the range of was extended to ] .we propose to set , which corresponds to about 72,000,000 potential subjects .exponent : : in , kleinberg investigates how to relate the -harmonic distribution with real - life observations .he surveys multiple social experiments and discusses the correspondence with the exponent of his model , which gives estimates of between 1.75 and 2.2 .neighborhood : : the default value means that there are no more than five `` acquaintances '' per node .this is quite small compared to what is observed in real - life social networks . for example, the famous dunbar s number , which estimates the number of _ active _ relationships , is 150 .more recent studies seem to indicate that the average number of acquaintances is larger , ranging from 250 to 1500 ( see and references within ) .we propose to set and so that the neighborhood size is about 600 , the value reported in .regarding the partition between local links ( ) and shortcuts ( ) , we consider three typical scenarios : + * , ( shortcut scenario : the neighborhood is almost exclusively made of shortcuts , and local links are only here to ensure the termination of greedy routing ) .* , ( balanced scenario ) .* , ( local scenario , with a value of not too far from dunbar s number ) .having set all parameters , we can evaluate the performance of greedy routing .the results are displayed in figure [ fig : six ] .we observe that the expected delivery time roughly stands between five and six for a wide range of exponents .* ] for the balanced scenario . * $ ] for the local scenario .except for the local scenario , which leads to slightly higher routing times for , the six degrees of separation are achieved for all values of that are consistent with the observations surveyed in .this allows to answer our question : the augmented grid proposed by kleinberg is indeed a good model to explain the _ six degrees of separation _ phenomenon .table [ x = r , y = apl ] ; table [ x = r , y = apl ] ; table [ x = r , y = apl ] ; + [ style = dashed , no markers ] coordinates ( 0,6 ) ( 3,6 ) ;we proposed an algorithm to evaluate the performance of greedy routing in kleinberg s grid .fueled by a dynamic rejection sampling approach , the simulator based on our solution performs several orders of magnitude faster than previous attempts .it allowed us to investigate greedy routing under multiple perspective .* we noted that the performance of greedy routing is less sensitive to the choice of the exponent than predicted by the asymptotic behavior , even for very large grids . *we observed that the bounds proposed in are not tight except for and .we conjectured that the tight bounds are the 2-dimensional equivalent of bounds proposed in for the 1-dimensional ring .* we claimed that the model proposed by kleinberg in is a good model for the _ six degrees of separation _ , in the sense that it is very simple _ and _ accurate .our simulator is intended as a tool to suggest and evaluate theoretical results , and possibly to build another bridge between theoretical and empirical study of social systems .we hope that it will be useful for researchers from both communities . 
in a future work ,we plan to make our simulator more generic so it can handle other types of graph and augmenting schemes .10 stavros athanassopoulos , christos kaklamanis , ilias laftsidis , and evi papaioannou .an experimental study of greedy routing algorithms . in_ high performance computing and simulation ( hpcs ) , 2010 international conference on _ , pages 150156 , june 2010 .lali barrire , pierre fraigniaud , evangelos kranakis , and danny krizanc .efficient routing in networks with long range contacts . in_ proceedings of the 15th international conference on distributed computing _ , disc 01 , pages 270284 , london , uk , 2001 .ithiel de sola pool and manfred kochen .contacts and influence ., 1:551 , 1978 .r. i. m. dunbar . ., 22(6):469493 , june 1992 .david easley and jon kleinberg .the small - world phenomenon . in _ networks , crowds , and markets : reasoning about a highly connected world _ , chapter 20 , pages 611644 .cambridge university press , 2010 .pierre fraigniaud , cyril gavoille , adrian kosowski , emmanuelle lebhar , and zvi lotker . . ,410(21 - 23):19701981 , 2009 .pierre fraigniaud , cyril gavoille , and christophe paul .eclecticism shrinks even small worlds ., 18(4):279291 , 2006 .pierre fraigniaud and george giakkoupis . on the searchability of small - world networks with arbitrary underlying structure . in _ proceedings of the 42nd acm symposium on theory of computing ( stoc ) _ ,pages 389398 , june 68 2010 .pierre fraigniaud and george giakkoupis .greedy routing in small - world networks with power - law degrees ., 27(4):231253 , 2014 .frigyes karinthy .lncszemek , 1929 .jon kleinberg .navigation in a small world ., august 2000 .jon kleinberg .the small - world phenomenon : an algorithmic perspective .in _ in proceedings of the 32nd acm symposium on theory of computing _ , pages 163170 , 2000 .david liben - nowell , jasmine novak , ravi kumar , prabhakar raghavan , and andrew tomkins .geographic routing in social networks ., 102(33):1162311628 , 2005 .gurmeet singh manku , moni naor , and udi wieder .know thy neighbor s neighbor : the power of lookahead in randomized p2p networks . in _ proceedings of the thirty - sixth annual acm symposium on theory of computing ( stoc ) _ ,pages 5463 .acm , 2004 .chip martel and van nguyen . analyzing kleinberg s ( and other ) small - world models . in_ proceedings of the twenty - third annual acm symposium on principles of distributed computing _ , podc 04 , pages 179188 , new york , ny , usa , 2004 .tyler h. mccormick , matthew j. salganik , and tian zheng .how many people do you know ? : efficiently estimating personal network size ., 105(489):5970 , 2010 .stanley milgram .the small world problem ., 67(1):6167 , 1967 .michael mitzenmacher and eli upfal . .cambridge university press , new york , ny , usa , 2005 .william h. press , saul a. teukolsky , william t. vetterling , and brian p. flannery . minimization or maximization of functions . in _numerical recipes 3rd edition : the art of scientific computing _ , chapter 10 .cambridge university press , new york , ny , usa , 3 edition , 2007 .jeffrey travers and stanley milgram .an experimental study of the small world problem ., 32:425443 , 1969 .john von neumann .various techniques used in connection with random digits . , 12:3638 , 1951 .barry wellman .is dunbar s number up ? , 2011 ..... 
using StatsBase # for the julia built-in sample function

function shortcuts_bulk(n, probas, bulk_size)
    radii = sample(1:(2*n-2), probas, bulk_size)
    shortcuts = Tuple{Int,Int}[]
    for i = 1:bulk_size
        radius = radii[i]
        angle = floor(4*radius*rand()) - 2*radius
        push!(shortcuts, ((radius - abs(angle)),
                          (sign(angle)*(radius - abs(radius - abs(angle))))))
    end
    return shortcuts
end

# estimates the expected delivery time for g(n, r, p, q) over R runs
function edt(n, r, p, q, R)
    bulk_size = n
    probas = weights(1./(1:(2*n-2)).^(r-1))
    shortcuts = shortcuts_bulk(n, probas, bulk_size)
    steps = 0
    for i = 1:R
        # s: start / current node; a: target node; d: distance to target
        s_x, s_y, a_x, a_y = tuple(rand(0:(n-1), 4)...)
        d = abs(s_x - a_x) + abs(s_y - a_y)
        while d > 0
            sh_x, sh_y = -1, -1   # sh will be best shortcut node
            d_s = 2*n             # d_s will be distance from sh to a
            for j = 1:q           # draw q shortcuts
                ch_x, ch_y = -1, -1   # ch will be current shortcut
                c_s = 2*n             # c_s will be distance from ch to a
                # dynamic rejection sampling
                while (ch_x < 0 || ch_x >= n || ch_y < 0 || ch_y >= n)
                    r_x, r_y = pop!(shortcuts)
                    ch_x, ch_y = s_x + r_x, s_y + r_y
                    if isempty(shortcuts)
                        shortcuts = shortcuts_bulk(n, probas, bulk_size)
                    end
                end
                c_s = abs(a_x - ch_x) + abs(a_y - ch_y)
                if c_s < d_s   # maintain best shortcut found
                    d_s = c_s
                    sh_x, sh_y = ch_x, ch_y
                end
            end
            if d_s < d - p   # follow shortcut if efficient
                s_x, s_y = sh_x, sh_y
                d = d_s
            else             # follow local links
                d = d - p
                delta_x = min(p, abs(a_x - s_x))
                delta_y = p - delta_x
                s_x += delta_x*sign(a_x - s_x)
                s_y += delta_y*sign(a_y - s_y)
            end
            steps += 1
        end
    end
    steps /= R
    return steps
end
....
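as a usage illustration ( not part of the original listing ) , the call below estimates the expected delivery time on a grid of side n = 1024 with the optimal exponent r = 2 and the default p = q = 1 , averaged over 10000 runs ; the chosen side , exponent and number of runs are arbitrary .
....
println(edt(1024, 2.0, 1, 1, 10000))
....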
|
one of the key features of small-worlds is the ability to route messages with few hops using only local knowledge of the topology. in 2000, kleinberg proposed a model based on an augmented grid that asymptotically exhibits such a property. in this paper, we propose to revisit the original model from a simulation-based perspective. our approach is fueled by a new algorithm that uses dynamic rejection sampling to draw the augmenting links. the speed gain offered by the algorithm enables a detailed numerical evaluation. we show for example that in practice, the augmentation scheme proposed by kleinberg is more robust than predicted by the asymptotic behavior, even for very large finite grids. we also propose tighter bounds on the performance of kleinberg's routing algorithm. lastly, we show that, fed with realistic parameters, the model gives results in line with real-life experiments.
|
the reed - frost model is one of the simplest stochastic epidemic models .it was formulated by lowell reed and wade frost in 1928 ( in unpublished work ) and describes the evolution of an infection in generations .each infected individual in generation ( ) independently infects each susceptible individual in the population with some probability .the individuals that become infected by the individuals in generation then constitute generation and the individuals in generation are removed from the epidemic process .see for a description of the asymptotic ( as the population size grows to infinity ) behavior of the process . in the original version ,an infective individual infects each susceptible individual in the population with the same probability .realistically however an infective individual has the possibility to infect only those individuals with whom she actually has some kind of social contact .the reed - frost model is easily modified to capture this by introducing a graph to represent the social structure in the population and then let the infection spread on this graph .more precisely , an infective individual infects each neighbor in the graph independently with some probability .when analyzing epidemics on graphs , the graph is usually taken to be unweighted with respect to the infection , that is , transmission takes place along all edges with the same probability . in this paperhowever , inhomogeneity will be incorporated in the transmission probability by aid of weights on the edges .more precisely , each edge in the graph is assigned two weights and that are assumed to take values in [ 0,1 ] .the probability that infects if gets infected is then given by and vice versa .note that it may well be that .we shall mainly consider i.i.d .weights , although we briefly treat weights that are determined by the degrees of the vertices in section 3 . 
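before introducing the network model , note that the generation mechanism described above can , in its classical homogeneous - mixing form , be written as a chain - binomial recursion : given s_t susceptibles and i_t infectives , each susceptible escapes infection with probability ( 1-p)^{i_t } , so the next generation contains a binomial( s_t , 1-(1-p)^{i_t } ) number of new cases . the minimal julia sketch below simulates this recursion ; the population size , the value of p and the single initial case are illustration choices only .
....
# minimal sketch of the classical (homogeneous mixing) reed-frost chain:
# i_{t+1} ~ binomial(s_t, 1 - (1-p)^{i_t}),  s_{t+1} = s_t - i_{t+1}.
# population size, p and the single initial case are illustration choices.

function reed_frost(npop::Int, p::Float64)
    s, i = npop - 1, 1                    # one initial infective
    generations = [i]
    while i > 0 && s > 0
        infect_prob = 1 - (1 - p)^i
        new_i = count(_ -> rand() < infect_prob, 1:s)   # binomial(s, infect_prob) draw
        s -= new_i
        i = new_i
        push!(generations, i)
    end
    return generations                    # new cases per generation
end

println(reed_frost(1000, 0.003))          # roughly three expected cases at the start
....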
to describe the underlying network, we shall use the so called configuration model .once the graph has been generated , each edge is equipped with two weights as described above .basically , the configuration model takes a probability distribution with support on positive integers as input and generates a graph with this particular degree distribution ; see section 2 for further details .the degree distribution is indeed an important characteristic of a network with a large impact on the properties of the network and it is therefore desirable to be able to control this in a graph model .furthermore , the configuration model exhibits short distances between the vertices , which is in agreement with empirical findings ; see .epidemics on un - weighted graphs generated by the configuration model has previously been studied in .related results have also appeared in the physics literature .an important quantity in epidemic modeling is the epidemic threshold , commonly denoted by .it is defined as a function of the parameters of the model such that a large outbreak in the epidemic has positive probability if and only if .expressions for typically stems from branching process approximations of the initial stages of the epidemic .it is well - known from branching process theory that the process has a positive probability of exploding if and only if the expected number of children of an individual exceeds 1 .a natural candidate for is hence the expected number of new cases caused by a typical infective in the beginning of the time course .for this reason , the epidemic threshold is often referred to as the basic reproduction number .the main goal of the paper is to study how the epidemic threshold is affected by vaccination strategies based on the edges weights . to this end, we assume that a perfect vaccine is available that completely removes vaccinated individuals from the epidemic process .the simplest possible vaccination strategy , usually referred to as random vaccination , is to draw a random sample from the population and then vaccinate the corresponding individuals .an alternative , known as acquaintance vaccination , is to choose individuals randomly and then , for each chosen individual , vaccinate a random neighbor rather than the individual itself .the idea is that , by doing this , individuals with larger degrees are vaccinated .we shall study a version of acquaintance vaccination where , instead of vaccinating a random neighbor , the neighbor with the largest weight on its edge from the sampled vertex is vaccinated . in a human population , this correspond to asking individuals to name their _ closest _ friend ( in some respect ) instead of just naming a random friend .it is demonstrated that this is more efficient than standard acquaintance vaccination , in the sense that the basic reproduction number with the weight based strategy is smaller for a given vaccination coverage . throughout this paperwe shall use the term `` infection '' to refer to the phenomenon that is spreading on the network .we remark that this does not necessarily consist of an infectious disease spreading in a human population , but may also refer to other infectious phenomena such as a computer virus spreading in a computer network , information routed in a communication net or a rumor growing in a social media . 
in many of these situationsthe connections are indeed highly inhomogeneous .furthermore , depending on what type of spreading phenomenon that is at hand , the term vaccination can refer to different types of immunization .epidemics on weighted graphs have been very little studied so far and there are few theoretical results .see however for an approach based on generating function and for simulation studies .we mention also the recent work on first passage percolation on random graphs by bhamidi et al . . there ,each edge in a graph generated according to the configuration model is equipped with an exponential weight and the length and weight of the weight - minimizing path between two vertices are studied .interpreting the weights as the traversal times for an infection , this can be related to the time - dynamics of an epidemic .the rest of the paper is organized so that the graph model and the epidemic model are described in more detail in section 2 . in section 3 , expressions for the epidemic thresholds are given and calculated for some specific weight distributions .section 4 is devoted to vaccination : in section 4.1 , a weight based acquaintance vaccination strategy for weights with a continuous distribution is described and an expression for the epidemic threshold is derived .section 4.2 treats a strategy for a two - point weight distribution .the findings are summarized in section 5 , where also some directions for further work are given .we shall throughout refrain from giving rigorous details for the underlying branching process approximations , but instead focus on heuristic derivations of the epidemic quantities .indeed , what needs to be proved is basically that the branching process approximations hold long enough so that conclusions for the branching processes are valid also for the epidemic processes .this however is not affected by weights on edges ( as long as these are not functions of the structure of the graph ) and hence rigorous details can presumably be filled in by straightforward modifications of the arguments in ( the degree based weights mentioned in section 3 might however require some more work ) .we consider a population of size represented by vertices . the graph representing the connections in the population is generated by the configuration model . to produce the graph , a probability distribution with support on the non - negative integersis fixed and each vertex is independently equipped with a random number of half - edges according to this distribution .these half - edges are then paired randomly to create the edges in the graph , that is , first two half - edges are picked at random and joined , then another two half - edges are picked at random from the set of remaining half - edges and joined , etc. if the total number of half - edges is odd , a half - edge is added at a randomly chosen vertex to pair with the last half - edge .this procedure gives a multi - graph , that is , a graph where self - loops and multiple edges between vertices may occur .if has finite second moment however , there will not be very many of these imperfections .in particular , the probability that the resulting graph is simple will be bounded away from 0 as ; see ( * ? ? ?* lemma 5.5 ) or ( * ? ? 
?* theorem 7.10 ) .if has finite second moment we can hence condition on the event that the graph is simple , and work under this assumption .another option is to erase self - loops and merge multiple edges , which asymptotically does not affect the degree distribution if has finite second moment ; see ( * ? ? ?* theorem 7.9 ) .henceforth we shall hence assume that has finite second moment and ignore self - loops and multiple edges .when the graph has been generated , each edge is assigned two weights and that are assumed to take values in [ 0,1 ] .this can be thought of as if each one of the half - edges that is used to create the edge independently receives a weight .the epidemic spread is initiated in that one randomly chosen vertex is infected .this vertex constitutes generation 1 .the epidemic then propagates in that each vertex in generation ( ) infects each susceptible neighbor independently with probability .generation then consists of the vertices that are infected by the vertices in generation and the vertices in generation are removed from the epidemic process .we shall mainly restrict to the case where the weights are taken to be independent . however , we mention also the possibility to let them be functions of the degrees of the vertices : * independent weights . *the weights are taken to be i.i.d .copies of a random variable that takes values in [ 0,1 ] . the distribution of can be defined in many different ways : * as an intrinsic distribution on [ 0,1 ] , for instance a uniform distribution or , more generally , a beta distribution .* by letting be an integer valued random variable , indicating for instance how many times a given vertex contacts a given neighbor during some time period , and then setting , with ] .alternatively , could be interpreted as the resistance involved in a connection and modeled as a decreasing function of . * degree dependent weights . * the weights of an edge could also be modeled as functions of and .we shall consider the case when for some function that takes values in [ 0,1 ] .all outgoing edges from hence have the same weight , and independent trials with this success probability determine whether the edges are used to transmit infection . with increasing ,this setup means that vertices with large degree have a larger probability of infecting their neighbors , for instance in that they tend to be more active . with decreasing ,high degree vertices are instead less likely to infect their neighbors , which might be the case for instance in a situation where high degree vertices have weaker bonds to their acquaintances .as mentioned in the introduction , expressions for epidemic thresholds usually come from branching process approximations of the initial stages of an epidemic . as for epidemics on graphs ,branching process approximations are typically in force as soon as the graph is tree - like , that is , if with high probability the graph does not contain short cycles .this means that the neighbors of a given infective in the beginning of the time course are susceptible with high probability and hence the initial stages of the generation process of infectives is well approximated by a branching process . under the assumption that the degree distribution has finite second moment , the configuration model is indeed tree - like , allowing for such an approximation ; see e.g. 
for details .the epidemic threshold is then given by the reproduction mean in the approximating branching process , which in turn is given by the expected number of new cases generated by an infective vertex in the beginning of the epidemic .when calculating this , one should not consider the initial infective , since this vertex might be atypical , but rather an infective vertex in , say , the second generation .let be the probabilities defining the degree distribution in the configuration model .then the initial infective has degree distribution , while the neighbors of this vertex have the size biased degree distribution defined by where denotes the mean degree .the infective vertices in the second ( and later ) generations hence have degree distribution .denote by a random variable with this distribution .* independent weights . *consider an infected vertex in the second generation .one neighbor of this vertex must have transmitted the infection and can hence not get reinfected , while the other neighbors are with high probability susceptible .the number of new cases generated by the vertex is hence distributed as if the weights are i.i.d . copies of , then the mean of the indicators is ] . * degree dependent weights . *assume that .since does not depend on , the degree distribution of an infective in the second generation is .conditionally on its degree , the number of new cases generated by an infective in the second generation is bin()-distributed .it follows that .\ ] ] let denote the basic reproduction number for an epidemic with a homogeneous infection probability given by the transmission probability ] , that is , an epidemic where the infection probability for a vertex with degree is averaged over all possible degrees .the basic reproduction number in such an epidemic is given by {{\mathbb e}}[g(d)].\ ] ] * example 3.1 .* first take po( ) .it is not hard to see that then .take .if , this means that only vertices with degree at least transmit the infection .we have ={\mathbb p}(d\geq \theta-1)\ ] ] and =\sum_{k\geq \theta}(k-1)\frac{kp_k}{\mu}=\sum_{k\geq \theta}\frac{\mu^{k-1}}{(k-2)!}e^{-\mu}=\mu{\mathbb p}(d\geq \theta-2).\ ] ] hence with for , we get =e^{-\mu(1-\alpha)}\quad\mbox{and}\quad{{\mathbb e}}[\alpha^{\widetilde{d}}]=\alpha e^{-\mu(1-\alpha)}.\ ] ] furthermore =\sum_{k\geq 1}(k-1)\alpha^k\frac{kp_k}{\mu}= \alpha^2\sum_{k\geq 0}\alpha^kp_k=\alpha^2e^{-\mu(1-\alpha)}.\ ] ] hence * example 3.2 . * now take a distribution with . in this caseexact computations are out of reach but numerical values of the thresholds are easily obtained .we give an example with for ] ( below we use to denote the corresponding probability and to denote probability conditional only on that ) .it remains to quantify this expectation .clearly is affected by the number of times that is sampled to name a neighbor in the vaccination procedure , which in turn is affected by the information that was not used for vaccination . specifically , when we have , for , that let denote a random variable distributed as the :th smallest in a collection of independent weight variables . 
if ( ) , then the out - edges with the largest weights are used for vaccination .the remaining out - edges are dangerous with a probability given by the expectation of their weights .note however that we do not want to count the edge to , whose weight indeed belongs to the smallest since , by assumption , it is not used for vaccination .the ordering of the weight on the edge to among the remaining out - weights is uniform on .we obtain =\left(1-\frac{1}{k - i}\right)\sum_{j=1}^{k - i}{{\mathbb e}}[w^{(k)}_j].\ ] ] note that , if , then each one of the out - edges from to is dangerous independently with probability , and the above expression reduces to .if , then all out - edges ( except ) are used for vaccination meaning that there are no dangerous out - edges .hence =\sum_{i=0}^{k-2}{\mathbb p}_{a , k}(v_w = i){{\mathbb e}}_{a , k}[h|v_w = i].\ ] ] this concludes the derivation of the reproduction mean ( [ eq : rbw ] ) .calculating the reproduction mean involves calculating expectations of order statistics ( c.f .( [ eq : vvhk ] ) ) . finding analytical expressions for such expectationsis typically not possible .however , the density of the :th smallest observation in a collection of i.i.d .variables with density and distribution function is given by where denotes the gamma function .for a given weight distribution , the mean can hence be calculated by aid of numerical integration .a particularly easy case is when the weights are uniform on [ 0,1 ] . then so that =j/(k+1) ] .then and , using ( [ eq : uni_dan ] ) , the reproduction mean in ( [ eq : rbw ] ) is easily calculated for a given degree distribution .figure 2 shows the basic reproduction number plotted against the vaccination coverage when the degree distribution is po(6 ) .the plot also shows the reproduction number for standard acquaintance vaccination and for uniform vaccination . for a given vaccination coverage, we have , although the difference between standard acquaintance vaccination and the weight based strategy is quite small .note however that , in practical situations also a small gain could be valuable : the vaccination coverage required to push the epidemic threshold below 1 thereby preventing large outbreaks is referred to as the _ critical vaccination coverage_. clearly , when fighting an infectious disease in a large human population for instance , even a very small decrease in the critical vaccination coverage might imply large savings in terms of vaccination costs. * example 4.1.2 .* let the weights have a beta distribution with parameters 0.5 and 2.5 ; see figure 3 . in this caseit is not possible to write down analytical expressions in closed form for but it is easily computed numerically .figure 4 shows the basic reproduction numbers plotted against the vaccination coverage when the degree distribution is po(14 ) . in this casethe weight based strategy performs clearly better than the standard acquaintance vaccination .in particular , the critical vaccination coverage for uniform vaccination and standard acquaintance vaccination is 0.58 and 0.53 respectively , while for the weight based strategy it is decreased to 0.47 .the reason is that the weight distribution is right - skewed : most weights are small but there is a thick right - tail with large weights , and by getting rid of these large weights the mean in the weight distribution is decreased more than in the uniform case. * example 4.1.3 . 
* finally ,let the weights have the same beta distribution as in the previous example , but take the degree distribution to be a power - law with exponent 3.5 and the same mean 14 as in the poisson distribution .figure 5 displays the basic reproduction numbers in this case .again the weight based strategy performs better than standard acquaintance vaccination . in this case however , the most striking feature is the difference between the uniform vaccination and the acquaintance based strategies : when the degree distribution is a power law , the basic reproduction number is pushed down very effectively by targeting high degree vertices. the finding in the previous section that the weight based strategy performs well for right - skewed weight distributions in the continuous case might lead one to suspect that the strategy is particularly useful for a `` polarized '' discrete distribution . in this sectionwe analyze the simple case when the weights have a two - point distribution . as a motivation we can think of a network having two types of directed transmission links , one that spreads an infection with high probability and one that does so only with a very small probability .it would then be natural to design a strategy that targets vertices with highly infectious connections .assume that where and write and .the strategy is defined so that each vertex is sampled independently with probability and , for each sampled vertex , its neighbors with weight on their edge from are vaccinated : recall that denotes the set of neighbors of a vertex and let that is , is the set of neighbors of for which the weight on the edge attains the larger value .then , if is sampled , the vertices in are vaccinated .no action is taken if is empty . to derive the vaccination coverage ,note that the probability that a randomly chosen vertex in the graph is not chosen for vaccination by a given neighbor equals as in the previous section we obtain the vaccination coverage from the equation the derivation of the epidemic threshold is based on the same branching process as in the previous section , that is , an individual in the branching process consists of an unvaccinated vertex along with an outgoing edge that is not used for vaccination and that is open for transmission . to find an expression for the reproduction mean , which serves as the epidemic threshold , first note that in this case the degree distribution of vertex is not affected by the information that did not chose for vaccination ( recall that the latter event is denoted ) . indeed , whether is vaccinated or not if is sampled is determined only by .hence .conditionally on , the probability that is not vaccinated via any of its other neighbors ( apart from ) is given by .we also need to determine the expected number of dangerous edges from to vertices in conditionally on that and on ( note that , conditionally on the degree , the distribution of the number of dangerous edges from is not affected by the information that is not vaccinated ) . for thiswe need the corresponding probability that is sampled to name a neighbor for vaccination . with denoting the number of times that is sampled to name a neighbor , we get note that this probability does not depend on . if , then the expected number of dangerous edges from ( to other vertices than ) is . 
if on the other hand , then the neighbors reached by edges with the large weight are vaccinated .the expected number of remaining out - edges from ( to other vertices than ) is and each one of these is open for transmission with probability .the expected number of dangerous edges from is hence .write for the basic reproduction number with the current vaccination strategy .we get we now compare this to the epidemic threshold ( [ eq : rvu ] ) for uniform vaccination and , in particular , to the threshold ( [ eq : rba ] ) for the standard acquaintance vaccination strategy .* example 4.2.1 .* figure 6 shows the basic reproduction numbers when the degree distribution is po(14 ) and the weight distribution is specified by ( most edges hence have a very small weight , but a small fraction has weight 1 , implying almost sure transmission ) .the plot reveals that the weight based strategy clearly outperforms the other strategies in this case .the critical vaccination coverage is lowered from 0.58 with standard acquaintance vaccination to 0.48 with the weight based strategy. * example 4.2.2 . *figure 7 shows the basic reproduction numbers for the same weight distribution as in the previous example when the degree distribution is a power - law with exponent 3.5 and mean 14 .again we see that the weight based strategy is the most efficient .* example 4.2.3 .* finally , figure 8 displays the basic reproduction numbers for the same power - law degree distribution as in the previous example but for a weight distribution specified by . in this case almost nothing is gained by using the weight based strategy compared to standard acquaintance vaccination ( the lines are almost aligned ) .the explanation for this is that , although the weight based strategy targets highly infective links , it does so more `` locally '' in the graph : recall that _ all _ neighbors with large weight on their edges from a sampled vertex are vaccinated .this means that , to achieve a given vaccination coverage , a much smaller sample of vertices is required compared to standard acquaintance vaccination if the probability of the larger weight is reasonably large ; figure 9 shows a plot for the current example .thus the weight based strategy affects fewer parts of the graph and this cancels the positive effect that lies in securing high risk connections .however , the strategy does not perform worse than the standard acquaintance strategy .hence the strategy is still more effective in the sense that it requires a smaller sample of vertices to name neighbors for vaccination to obtain a given vaccination coverage . in situations when there are costs associated with selecting and communication with the sampled vertices , this might be important. have formulated and analyzed a model for epidemic spread on weighted graphs , where the weight of an edge indicates the probability that it is used for transmission .expressions have been derived for the epidemic threshold , specifying when there is a positive probability for an epidemic to take off .the case with independent weights is analogous to the case with a constant infection probability given by the mean weight . 
for degree-dependent out-weights, which for instance make it possible to model a situation where high-degree vertices infect their neighbors with a smaller probability, the behavior is however different from that of a homogeneous epidemic. furthermore, we have analyzed a version of the acquaintance vaccination strategy where neighbors of the sampled vertices reached by edges with large weights are vaccinated. the selected vertices hence impose vaccination on the neighbor(s) to which they have the strongest connection(s), instead of on a random neighbor. two versions of this strategy have been treated: one for continuous weight distributions and one for two-point distributions. in the examples we have looked at, these strategies have been seen to outperform standard acquaintance vaccination, the difference being largest in cases where the weight distribution is highly right-skewed. the reason why the weight-based acquaintance strategies perform better than standard acquaintance vaccination is that, in addition to removing the vaccinated neighbors, the ability to spread the epidemic is decreased also for the sampled vertices, in that their high-weight connections are secured. as for further work, there are numerous possibilities. in many situations it would be desirable to allow for (typically positive) correlations between the weights on a given edge; for instance, one might want to assign only one weight per edge, specifying the probability of transmission in either direction. this leads to complications in the current analysis, basically because the information that a vertex is unvaccinated then gives information on the weights on the edges of its neighbors. furthermore, the basic idea in acquaintance vaccination is that, by vaccinating neighbors of the sampled vertices, one reaches vertices with higher degree. a natural further development of this idea would be to vaccinate neighbors with maximal degree, that is, selected vertices are asked to identify their neighbor(s) with the largest degree among the neighbors (assuming that they have this information) and these neighbors are then vaccinated. unfortunately this seems to lead to complicated dependencies in the resulting epidemic process. we also mention that it would be interesting to investigate the final size of the epidemic. this is usually related to the probability of a large outbreak and quantified via an equation involving the generating function of the reproduction distribution. for the vaccination strategies that we have considered here, this equation would involve the distribution ( [ eq : ord_stat ] ) of order statistics and is hence presumably complicated. but it would be interesting to study the final size by means of simulation. other possible continuations include investigating how the results are affected by introducing clustering (triangles and other short cycles) in the underlying graph, incorporating time dynamics into the vaccination procedure, and generalizing the model for the epidemic spread.
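the comparisons above can also be explored by direct simulation. the sketch below (python, not part of the original analysis) builds a configuration-model graph with poisson(14) degrees, draws i.i.d. two-point directed edge weights, applies either standard acquaintance vaccination or the weight-based variant, and runs a reed-frost epidemic on the resulting partially vaccinated graph. the sampling probability, the weight values and their probabilities are illustrative assumptions; for a comparison at equal vaccination coverage, the sampling probability would have to be retuned separately for each strategy, so the realized coverage is printed alongside the outbreak sizes.

```python
import numpy as np

def configuration_graph(n, mean_deg, rng):
    """Configuration-model graph with Poisson(mean_deg) degrees;
    self-loops and multiple edges are discarded."""
    stubs = np.repeat(np.arange(n), rng.poisson(mean_deg, size=n))
    rng.shuffle(stubs)
    edges = set()
    for a, b in zip(stubs[0::2], stubs[1::2]):
        if a != b:
            edges.add((min(a, b), max(a, b)))
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    return adj

def vaccinate(adj, p, weight_based, w, rng):
    """Each vertex is sampled with probability p.  A sampled vertex names either
    one uniformly chosen neighbour (standard acquaintance vaccination) or all
    neighbours reached by an out-edge carrying the larger weight (weight-based
    variant); the named vertices are vaccinated."""
    vacc = np.zeros(len(adj), dtype=bool)
    for u in range(len(adj)):
        if adj[u] and rng.random() < p:
            if weight_based:
                named = [v for v in adj[u] if w[(u, v)] == 1.0]
            else:
                named = [adj[u][rng.integers(len(adj[u]))]]
            for v in named:
                vacc[v] = True
    return vacc

def outbreak_size(adj, w, vacc, rng):
    """Reed-Frost epidemic started at one unvaccinated vertex; the edge (u, v)
    transmits from u to v with probability w[(u, v)]."""
    status = np.zeros(len(adj), dtype=int)   # 0 susceptible, 1 infected, 2 removed
    seed = rng.choice(np.flatnonzero(~vacc))
    status[seed] = 1
    frontier = [seed]
    while frontier:
        new = []
        for u in frontier:
            for v in adj[u]:
                if status[v] == 0 and not vacc[v] and rng.random() < w[(u, v)]:
                    status[v] = 1
                    new.append(v)
            status[u] = 2
        frontier = new
    return int((status == 2).sum())

rng = np.random.default_rng(0)
n, p = 20000, 0.3
adj = configuration_graph(n, 14, rng)
# Two-point directed weights: transmission probability 1 with (assumed) probability
# 0.1, and 0.01 otherwise, drawn independently for the two directions of each edge.
w = {(u, v): (1.0 if rng.random() < 0.1 else 0.01)
     for u in range(n) for v in adj[u]}

for weight_based in (False, True):
    vacc = vaccinate(adj, p, weight_based, w, rng)
    sizes = [outbreak_size(adj, w, vacc, rng) for _ in range(20)]
    print("weight-based" if weight_based else "standard    ",
          "coverage %.3f" % vacc.mean(), " mean outbreak %.0f" % np.mean(sizes))
```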
|
a reed-frost epidemic with inhomogeneous infection probabilities on a graph with prescribed degree distribution is studied. each edge in the graph is equipped with two weights, one per direction, that represent the (subjective) strength of the connection and determine the probability that one endpoint infects the other in case the former is infected, and vice versa. expressions for the epidemic threshold are derived for i.i.d. weights and for weights that are functions of the degrees. for i.i.d. weights, a variation of the so-called acquaintance vaccination strategy is analyzed where vertices are chosen randomly and neighbors of these vertices with large edge weights are vaccinated. this strategy is shown to outperform the strategy where the neighbors are chosen randomly, in the sense that the basic reproduction number is smaller for a given vaccination coverage. _ keywords: _ reed-frost epidemic, weighted graph, degree distribution, epidemic threshold, vaccination. ams 2000 subject classification: 92d30, 05c80.
|
the hierarchical structure formation paradigm is based upon the simple premise that large scale structure in the universe results from the gravitational amplification of small , primordial density fluctuations .the origin of the fluctuations is uncertain , but one explanation is that they are quantum ripples boosted to macroscopic scales by inflation .many clear examples of interacting or merging galaxies , a key feature of hierarchical models , were presented during this meeting .further convincing evidence for the paradigm can be derived by comparing the relative amplitudes of density fluctuations in universe today with those present at some earlier epoch .the cosmic microwave background radiation is a snapshot of the distribution of photons and baryons just a few hundred thousand years after the big bang .fluctuations in the temperature of the background radiation can be related to fluctuations in the distribution of baryons at the epoch of recombination , .the inferred fluctuations are tiny , on the order of one part in a hundred thousand . however ,if an additional component to the mass density of the universe is included , weakly interacting cold dark matter , these fluctuations can subsequently develop into the large scale structure that we measure in the universe today ( peacock et al .2001 ) .an important challenge for theorists is to predict the formation and evolution of galaxies in a model universe in which the formation of structure in the dark matter proceeds in a hierarchical manner .two powerful simulation techniques have been developed to address this issue : direct n - body or grid codes that follow the dynamical evolution of dark matter and gas , and semi - analytic codes that use a set of simple , physically motivated rules to model the complex physics of galaxy formation .these techniques have their advantages and disadvantages ( e.g. limited resolution in case of n - body / grid based codes ; the assumption of spherical symmetry for cooling gas in the semi - analytics ) , and so are complementary tools with which to attack the problem of galaxy formation . a preliminary study comparing the cooling of gas and merging of `` galaxies '' in a smooth particle hydrodynamics simulation with the output of a semi - analytic code has shown that there is reassuringly good agreement between the results obtained using the two techniques ( benson et al .the past decade witnessed an explosion in observations of galaxies at high redshift , mainly as a result of new facilities such as the hubble space telescope and the keck telescopes in the optical , and the opening of other parts of the electromagnetic spectrum , e.g. the sub - millimetre , probed by the scuba instrument on ukirt . in order to interpret these exciting new data ,semi - analytic galaxy formation codes have been developed that model a wide range of physical processes .below , i will outline the scheme developed by the durham group and collaborators ( benson etal 2000a ; cole etal 2000 ; granato etal 2000 ) .similar codes have also been devised by other groups ( e.g. avila - reece & firmani 1998 ; kauffmann etal 1999 ; somerville & primack 1999 ) . *the formation and merging of dark matter haloes , driven by gravitational instability .this process is completely determined by the initial power spectrum of density fluctuations and by the values of the cosmological parameters , and hubble s constant . * the shock heating and virialisation of gas within the gravitational potential wells of dark matter haloes . 
*the cooling of gas in haloes . *the formation of stars from cooled gas .this process is regulated by the injection of energy into the cold gas by supernovae and stellar winds . *the mergers of galaxies after their host dark matter haloes have merged .there are a number of major improvements in the cole et al .( 2000 ) semi - analytic code over earlier versions : a more accurate technique is used in the monte - carlo generation of dark matter halo merger trees , the chemical enrichment of the ism is followed , disk and bulge scale lengths are computed using a prescription based on conservation of angular momentum and the obscuration of starlight by dust is computed in a self - consistent fashion . the semi - analytic model requires a number of physical parameters to be set . some of these describe the background cosmology and are gradually being pinned down , for example , by measurements of supernovae brightnesses at high redshift or through the production of high resolution maps of the microwave background radiation .other parameters refer to the prescriptions we adopt to model the physics of galaxy formation .their values are set by reference to a subset of data on the local galaxy population , as explained by cole etal . the observational constraint to which we attach the most weight is the field galaxy luminosity function .somewhat disappointingly , and in spite of much effort , this fundamental characterisation of the local galaxy distribution was not well known until this year .fig 5 . of coleet al . ( 2000 ) shows that any semblance of a consensus between the various determinations of this quantity prior to 2000 is lost even after moving just one magnitude faintwards of .however , this situation is now changing beyond recognition . the 2df galaxy redshift survey ( 2dfgrs ) andsloan digital sky survey are pinning down the field galaxy luminosity function to a high level of accuracy .the degree of improvement that is now possible with the 2dfgrs is readily apparent in fig .[ fig : lf ] . in this figure, we compare measurements obtained from the 2dfgrs with a representative determination of the luminosity function made from a redshift survey completed in the last millenium . for the first time, random errors in the luminosity function estimate are unimportant over a wide range of magnitudes . the solid lines in fig .[ fig : lf ] show the luminosity function of the cole etal model .the faint end is influenced by the strength of feedback in low mass haloes .the break at high luminosities is due to long cooling times in more massive dark matter haloes , which have higher virial temperatures and form more recently in hierarchical models . assuming a higher galaxy merger rate would depress the luminosity function at the faint end and weaken the break at the bright end . from a naive point of view , the model in fig .[ fig : lf ] would be incorrectly dismissed as an abject failure due to an unacceptably large value with reference to the 2dfgrs estimate of the luminosity function .however , it is important to appreciate that the parameters in the semi - analytic model are _ physical _ parameters . 
as such, they have a completely different meaning to the parameters that specify a schechter function fit to these data , which is merely a convenient mathematical shorthand to describe the data points .we are not at liberty to chose any _ ad hoc _ combination of the parameters in the semi - analytic model .for example , changing the strength of feedback in order to reduce the slope of the faint end of the luminosity function also has an impact on the shape of the tully - fisher relation and upon the size of galactic disks . in collaboration with alessandro bressan , gian - luigi granato and laura silva ,we have combined the semi - analytic model of cole etal with the spectro - photometric code of silva et al .( 1998 ) , which treats the reprocessing of radiation by dust .the range of wavelengths spanned by the spectral energy distribution of model galaxies now extends from the extreme ultra - violet through the optical to the far - infrared , sub - millimetre and on to the radio ( granato et al .one highlight of this work is the reproduction of the observed smooth attenuation law for starbursts , starting from a dust mixture that reproduces the milky way extinction law which has a strong bump at ; this implies that the observed attenuation is strongly dependent on the geometry of stars and dust . in fig .[ fig : lf60 ] , we show the model predictions for the luminosity function . above , the model luminosity functionis dominated by galaxies undergoing bursts driven by mergers .this agrees with observations of ultra - luminous iras galaxies , which are all identified as being at some stage of the interaction / merger process ( see sanders contribution ) .one of the key science goals of the 2df and sdss redshift surveys is to produce definitive measurements of galaxy clustering over a wide range of scales for samples selected by various galaxy properties . in order to interpret the information encoded in the measured clustering ,it is necessary to understand how galaxies illuminate the underlying distribution of dark matter .progress has been made towards this end by marrying the semi - analytic galaxy formation technique with high resolution n - body simulations of representative volumes of the universe ( kauffmann et al .1999 ; benson et al . 2000a , b , 2001b ) .in the approach of benson et al . , the masses and positions of dark matter haloes are extracted from an n - body simulation using a standard group finding algorithm .the semi - analytic machinery is then employed to populate the dark haloes with galaxies .the central galaxy is placed at the centre of mass of the dark matter halo and satellite galaxies are placed on random dark matter particles within the halo , resulting in a map of the spatial distribution of galaxies within the simulation volume . fig .[ fig : pk ] compares the power spectrum of bright , optically selected galaxies predicted by the semi - analytic model , with observational determinations and with the power spectrum of the dark matter .the left hand panel shows power spectra in real space . 
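before turning to the clustering comparison, the galaxy-placement step just described can be written down compactly: the central galaxy goes to the halo centre of mass and satellites are attached to randomly chosen member particles. in the sketch below the halo member-particle lists and the number of galaxies per halo are assumed to be supplied by the group finder and by the semi-analytic model; the toy haloes at the end are purely illustrative.

```python
import numpy as np

def place_galaxies(halo_particles, n_gals_per_halo, rng):
    """halo_particles: list of (N_i, 3) arrays of member-particle positions per halo.
    n_gals_per_halo: number of galaxies assigned to each halo (from the semi-analytic
    model).  Returns galaxy positions: one central at the centre of mass, the
    remaining galaxies placed on randomly chosen dark-matter particles."""
    positions = []
    for parts, n_gal in zip(halo_particles, n_gals_per_halo):
        if n_gal == 0:
            continue
        positions.append(parts.mean(axis=0))      # central galaxy at the centre of mass
        n_sat = n_gal - 1
        if n_sat > 0:
            idx = rng.integers(0, len(parts), size=n_sat)
            positions.extend(parts[idx])           # satellites trace random halo particles
    return np.array(positions)

# Illustrative use with fake haloes; in practice these come from the N-body
# simulation and the group-finding algorithm.
rng = np.random.default_rng(1)
halos = [rng.normal(c, 0.5, size=(400, 3)) for c in ([10, 10, 10], [40, 25, 5])]
galaxies = place_galaxies(halos, n_gals_per_halo=[3, 1], rng=rng)
print(galaxies.shape)
```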
for ,the measured galaxy power spectrum has a lower amplitude than that of the dark matter in the popular model ; the galaxies are said to be ` anti - biased ' with respect to the mass ( gaztaaga 1995 ) .the semi - analytic model provides an excellent match to the data .this is particularly noteworthy as no additional tuning of parameters was carried out to make this prediction once certain properties of the local galaxy population , such as the field galaxy luminosity function , had been reproduced ( see cole et al .2000 for a full explanation of how the model parameters are set ) .furthermore , this level of agreement is not found for the galaxy clustering predicted in cdm models with .the most important factor in shaping the predicted galaxy clustering amplitude is the way in which the efficiency of galaxy formation depends upon dark matter halo mass .this is illustrated by the variation in the mass to light ratio with halo mass shown by fig 8 of benson et al ( 2000a ) : for low mass haloes , galaxy formation is suppressed by feedback , whilst for the most massive haloes , gas cooling times are sufficiently long to suppress cooling .the power of the approach of combining semi - analytic models with n - body simulations is demonstrated on comparing the left hand panel ( real space ) of fig .[ fig : pk ] with the right hand panel , which shows power spectra in redshift space .again , the same model gives a very good match to the observed power spectrum when the effects of peculiar motions are included to infer galaxy positions .however , the impression that one would gain about the bias between dark matter and galaxy fluctuations is qualitatively different ; in redshift space galaxies appear to be unbiased tracers of the dark matter . the apparent contradiction between the implications for bias given by the panels of fig .[ fig : pk ] can be resolved by turning back once more to the models .the pairwise velocity dispersion of model galaxies is lower than that of the dark matter , and as a result is in much better agreement with the observational determination of pairwise motions .again , this difference is driven by a reduction in the efficiency of galaxy formation with increasing dark matter halo mass ( benson et al .2000b ) .once the parameters of the semi - analytic model have been set by comparing the model output with a subset of data for the local galaxy population , firm predictions can be made regarding the evolution of the galaxy distribution ( benson et al .2001b ) .the properties of the distribution of galaxies and the way in which these properties evolve with redshift are intimately connected to the growth of structure in the dark matter , as illustrated by a sequence of high resolution pictures in benson et al .( 2001b ) that show the evolution of galaxies and of the dark matter .an example of this is the morphology - density relation , namely the correlation of the fraction of early type galaxies with local galaxy density .the semi - analytic models reproduce the observed form of the morphology density relation at .remarkably , essentially the same strength of effect is also predicted at .the physical explanation for this result lies in the accelerated dynamical evolution experienced by galaxies that form in overdensities destined to become rich clusters by the present day .a generic prediction of hierarchical clustering models is that bright galaxies should be strongly clustered at high redshift compared to the underlying dark matter ( davis et al .[ fig : pkz ] shows the 
evolution of the power spectrum for galaxies and for dark matter in a universe .the amplitude of the dark matter power spectrum increases as fluctuations grow through gravitational instability . between and , the amplitude of the dark matter power spectrum increases by an order of magnitude on large scales .the shape of the dark matter power spectrum is significantly modified on small scales ( high ) through nonlinear evolution of the density fluctuations ` cross - talk ' between fluctuations on different spatial scales .however , the amplitude and shape of the galaxy power spectrum show little change over the same redshift interval ( pearce et al . 2000 ; benson et al .the amplitude of the galaxy power spectrum drops by around from to , and by it has been overtaken in amplitude by the mass power spectrum ( baugh et al .the clustering predictions can be readily explained . at ,bright galaxies are only found in the most massive haloes in place at this time .such haloes are much more strongly clustered than the underlying dark matter , hence the large difference in amplitude or bias between the galaxy and dark matter spectra at .the environment of bright galaxies becomes less exceptional as is approached .the similarity in the general evolution of the global star formation rate per unit volume and of the space density of luminous quasars suggests a connection between the physical processes that drive the formation and evolution of galaxies and those that power quasars ( see dunlop s contribution ) . spurred on by mounting dynamical evidence for the presence of massive black holes in galactic bulges ( e.g. magorrian et al .1998 ) , guinevere kauffmann and martin haehnelt have produced the first treatment to follow the properties of qsos within a fully fledged semi - analytic model for galaxy formation ( kauffmann & haehnelt 2000 ; haehnelt & kauffmann 2000 ) .the model of kauffmann & haehnelt assumes that black holes form during major mergers of galaxies , and that during the merger event , some fraction of the cold gas present is accreted onto the black hole to fuel a quasar .the qualitative properties of the observed quasar population are reproduced well by the model , including the rapid evolution in the space density of luminous quasars .there are three key features of the model responsible for the evolution in quasar space density between and : ( i ) a decrease in the merger rate of objects in a fixed mass range over this interval , ( ii ) a reduction in the supply of cold gas from mergers , and ( iii ) an increase in the time - scale for gas accretion onto the black hole .the mass of cold gas available in mergers is reduced at low redshift because the star formation timescale in the model is effectively independent of redshift ; at lower redshifts , gaseous disks have been in place for longer and a larger fraction of the gas has been consumed in quiescent star formation and so less gas is present in low redshift mergers ( see fig 6 of baugh , cole & frenk 1996 ) .if the star formation timescale is allowed to depend upon the dynamical time , gas is consumed more rapidly in the disk and less gas is present in mergers at all redshifts .the kauffmann & haehnelt model predicts strong evolution in the properties of qso hosts with redshift , suggesting that quasars of a given luminosity should be found in fainter hosts at high redshift .this issue is just beginning to be addressed observationally ( see , for example , the contributions of ridgway and kukula ) . 
at present , it is hard to reach any firm conclusions , though there is apparently little evidence for a strong trend in host luminosity with redshift .the 2df qso redshift survey has recently reported measurements of the clustering in a sample of qsos that is an order of magnitude larger than any previous sample ( hoyle et al .it should be relatively straight forward to obtain predictions for the clustering of quasars from the semi - analytic models to compare with these new data .cmb would like to thank the organisers of this enjoyable meeting for their hospitality and for providing financial support .we acknowledge the contribution of our grasil collaborators , alessandro bressan , gian - luigi granato and laura silva to the work presented in this review .we thank peder norberg and the 2df galaxy redshift survey team for communicating preliminary luminosity function results .< widest bib entry > avila - reese , v. , firmani , c. , 1998 , apj , 505 , 37 .baugh , c.m . ,benson , a.j . ,cole , s. , frenk , c.s . ,lacey , c.g ., 1999 , mnras , 305 , l21 .baugh , c.m . , cole , s. , frenk , c.s ., 1996 , mnras , 283 , 1361 .benson , a.j . , cole , s. , frenk , c.s . ,baugh , c.m . ,lacey , c.g ., 2000a , mnras , 311 , 793 .benson , a.j . ,baugh , c.m . , cole , s. , frenk , c.s . ,lacey , c.g ., 2000b , mnras , 316 , 107 .benson , a.j . , pearce , f.r ., frenk , c.s . ,baugh , c.m . , jenkins , a. , 2001a , mnras , 320 , 261 .benson , a.j . ,frenk , c.s . ,baugh , c.m . , cole , s. , lacey , c.g ., 2001b , mnras submitted , astro - ph/0103092 .davis , m. , efstathiou , g. , frenk , c.s . ,white , s.d.m ., 1985 , apj , 292 , 371 .granato , g.l . ,lacey , c.g . , silva , l. , bressan , a. , baugh , c.m . ,cole , s. , frenk , c.s . , 2000 , apj , 542 , 710 .cole , s. , lacey , c.g . ,baugh , c.m . ,frenk , c.s . , 2000 ,mnras , 319 , 168 .cole , s. , etal ( the 2dfgrs team ) , 2001 , mnras in press .gaztaaga , e. , 1995 , apj , 454 , 561 .gaztaaga , e. , baugh , c.m . , 1998 ,mnras , 294 , 229 .haehnelt , m. , kauffmann , g. , 2000 , mnras , 318 , l35 .hoyle , f. , baugh , c.m . , shanks , t. , ratcliffe , a. , 1999 , mnras , 309 , 659 .hoyle , f. , outram , p.j . , shanks , t. , croom , s.m . , boyle , b.j ., loaring , n.s . ,miller , l. , smith , r.j . , 2001 , mnras submitted , astro - ph/0102163 kauffmann , g. , colberg , j.m . ,diaferio , a. , white , s.d.m . , 1999 ,mnras , 303 , 188 .kauffmann , g. , haehnelt , m. , 2000 , mnras , 311 , 576 .magorrian j. , et al ., 1998 , aj , 115 , 2285 peacock , j.a . , et al . , ( the 2dfgrs team ) , 2001 , nature , 410 , 169 .pearce , f.r ., et al . , ( the virgo consortium ) , 1999 , apj , 521 , l99 .silva , l. , granato , g.l ., bressan , a. , danese , l. , 1998 , apj , 509 , 103 .somerville , r.s ., primack , j.r . , 1999 , mnras , 310 , 1087 .white , s.d.m . , rees , m.j ., 1978 , mnras , 183 , 341 .
|
there is now compelling evidence in favour of the hierarchical structure formation paradigm. semi-analytic modelling is a powerful tool which allows the formation and evolution of galaxies to be followed in a hierarchical framework. we review some of the latest developments in this area before discussing how such models can help us to interpret observations of the high-redshift universe.
|
this paper is on the study of blow up phenomena that occur in heterogeneous media consisting of a finite - conductivity matrix and perfectly conducting inhomogeneities ( particles or fibers ) close to touching .this investigation is motivated by the issue of material failure initiation where one has to assess the magnitude of local fields , including extreme electric or current fields , heat fluxes , and mechanical loads , in the zones of high field concentrations .such zones are normally created by large gradient flows confined in very thin regions between particles of different potentials , see e.g. .these media are described by elliptic or degenerate elliptic equations with discontinuous coefficients .the problem of analytical study of solution regularity for such problems has been actively studied since 1999 , and resulted in series of papers investigating different cases based on dimensions , shape of inclusions , applied boundary conditions , etc .the main result up to date can be summarized as follows : _ for two perfectly conducting particles of an arbitrary smooth shape located at distance from each other and away from the external boundary , typically there exists independent of such that _ and corresponding bounds for the case of particles and , see .it is important to note that even though in some referred studies it was mentioned on what parameters the constant in depends upon , the precise asymptotics have not been captured , only bounds for it have been established .moreover , methods in the aforementioned contributions have their limitations , e.g. some of them use methods that work only in , some deal with inhomogeneities of spherical shape only , and the developed techniques , except one by the author , were designed to treat _ linear _ problems only , with no direct extension or generalization to a nonlinear case . in the current paperan approach for gradient estimates for problems with particles of degenerate properties that works for any number of particles of arbitrary shape in any dimensions is presented .the advantage and novelty is that the rate of blow up of the electric field is captured * precisely * as opposed to the existing methods and allows for direct extensions to the nonlinear case ( e.g. -laplacian ) . in particular, it is shown that with explicitly computable constant that depends on dimension , particles array and their shapes , and an applied boundary field .the rest of the paper is organized as follows .chapter [ s : formulation ] provides the problem setting and formulation of main results , proof of which is presented in chapter [ s : proof ] .discussion of possible extensions is done in chapter [ s : extensions ] and conclusions are given in chapter [ s : conclusions ] .proofs of auxiliary facts are shown in appendices .* acknowledgements*. the author thank a. novikov for helpful discussions on the subject of the paper .the current paper focuses only on physically relevant dimensions . 
to that end , let , be an open bounded domain with ( ) boundary .it contains two particles and with smooth boundaries at the distances from each other ; see figure [ f : domain ] .we assume for some independent of .let model the matrix ( or the background medium ) of the composite , that is , , in which we consider u(x ) & = \mbox{const } , & \displaystyle x\in \partial \mathcal{b}_i,~ i=1,2\\[3pt ] \displaystyle \int_{\partial \mathcal{b}_i}\frac{\partial u}{\partial n } ~ds&=0 , & i=1,2\\[3pt ] u(x ) & = u(x ) , & \displaystyle x\in \gamma \end{array } \right.\ ] ] where a bounded weak solution represents the electric potential in , and is the given applied potential on the external boundary .note takes a constant value , that we denote , on the boundary of particle ( ) .this is a unique constant for which the zero - flux condition , that is the third equation of , is satisfied .the constants , are unknown apriori and should be found in the course of solving the problem .[ f : domain ] the goal is to derive the asymptotics of the solution gradient with respect to the small parameter that defines the close proximity of particles to each other . to formulate the main result of the paper , consider an auxiliary problem defined as follows . construct a line connecting the centers of mass of and and `` move '' particles toward each other along this line until they touch .denote now the newly obtained domain outside of particles by where we consider the following problem : v_o(x ) & = \mbox{const } , & \displaystyle x\in\partial \mathcal{b}_1\cup \partial\mathcal{b}_2\\[3pt ] \displaystyle \int_{\partial\mathcal{b}_1}\frac{\partial v_o}{\partial n}~ds+ \int_{\partial\mathcal{b}_2}\frac{\partial v_o}{\partial n}~ds & = 0 , \\[5pt ] v_o(x ) & = u(x ) , & x\in\gamma \end{array } \right.\ ] ] this problem differs from by that the potential takes the _ same _ constant value on the boundaries of _ both _ particles .we denote this potential by and introduce a number that depends on the external potential : :=%\mathcal{r}_0[u]:= \int_{\partial\mathcal{b}_1}\frac{\partial v_o}{\partial n}~ds.\ ] ] without loss of generality , we assume that particles potential in satisfy , which would mean that for sufficiently small .the following theorem summarizes the main result of this study .[ t : main ] the asymptotics of the electric field for problem is given by \begin{cases } \displaystyle \frac{\mathcal{r}_o}{\mathcal{c}_{12}}\frac{1}{\delta^{1/2 } } , & d=2\\[8pt ] \displaystyle \frac{\mathcal{r}_o}{\mathcal{c}_{12}}\frac{1 } { \delta |\ln \delta| } , & d=3 \end{cases } \quad \quad \mbox{for } ~\delta\ll 1,\ ] ] with defined above in and explicitly computable constant that depends on curvatures of particle boundaries and at the point of the closest distance and defined below in .the proof of theorem [ t : main ] consists of ingredients collected in the following facts . 
in using the method of barriersit was shown that the electric field of the system associated with the problem : \phi(x ) & = t_i , & \displaystyle x\in\partial \mathcal{b}_i,~i=1,2\\[3pt ] \phi(x ) & = u(x ) , & x\in\gamma \end{array } \right.\ ] ] stated on the same domain with the same boundary potential as in the above problem is given by , \quad \mbox{for } ~ \delta\ll 1.\ ] ] in contrast to , the constants and in are arbitrary , which implies the solution of may not satisfy the integral identities the flux of on as in .in particular , one has [ l : elfield ] the asymptotics of the electric field of is as follows : , \quad \mbox{for } ~ \delta\ll 1,\ ] ] where and are the potentials on and , respectively , for which the zero integral flux condition as in satisfied . with the problem is reduced to finding the asymptotics of the potential difference in terms of the distance parameter , given in the proposition .[ l : potdif ] the asymptotics of the potential difference is given by : , \quad \mbox{for } ~ \delta\ll 1,\ ] ] where is defined by and by : \displaystyle \mathcal{c}_{12 } |\ln \delta| , & d=3 \end{cases}\ ] ] with constant introduced below in that depends on curvatures of particles boundaries at the point of their closest distance . _proof of proposition [ l : potdif ] . _ + the method of proof is based on observation that asymptotics of can be derived by investigating the energy associated with the system and defined by : where solves .a remarkable feature of problem is that potentials and are minimizers of the energy quadratic form of the potentials : this observation is the essence of the so - called iterative minimization lemma , first introduced in .therefore , if we find an approximation of for sufficiently small , we would be able to derive an asymptotics for then .for the energy the following holds true .[ l : energy ] the energy of can be written as ,\ ] ] with asymptotics of coefficients of the quadratic form : , \quad b_1=-b_2=\mathcal{r}_o[1+o(1 ) ] , \quad c_{12}=-g_\delta[1+o(1 ) ] , \quad \mbox{for } ~ \delta\ll 1,\ ] ] and given by , and by .this lemma is proven in appendix [ a : en - lem ] . now substituting asymptotics of coefficients to and dropping the low order terms we define the quadratic form : whose minimizer provides asymptotics of the sought potential difference , namely , =\frac{\mathcal{r}_o}{g_\delta}[1+o(1 ) ] , \quad \mbox{for } ~ \delta\ll 1.\ ] ] this concludes the proof of proposition [ l : potdif ] . _ proof of theorem [ t : main ] . _+ asymptotic relations - and definition yield main result for sufficiently small . to the case of particles*. the presented above approach allows for an extension to any number of particles , where neighbors and are located at the distance from each other , see figure [ f : domainn ] .the notion of `` neighbors '' can be defined based onthe voronoi tesselation with respect to the particles centers of mass , namely , the neighbors are the nodes that share the same vonoroi face . 
in this case , similarly to above , one has to consider a `` limiting problem '' in the domain where the third condition is replaced to to obtain one can connect centers of mass of neighboring pairs , with lines , and `` move '' all particles alone those lines toward each other until touches at least one of its neighbor , where , , where is the set of indices of neighbors to particle .now , similarly to , introduce numbers := \int_{\partial\mathcal{b}_i}\frac{\partial v_o}{\partial n}~ds , \quad i\in \{1,\ldots , n\}.\ ] ] particles then minimize the energy quadratic form as in and derive asymptotics of its coefficients in terms of and using to obtain the potential difference asymptotics for the neighbors : in is similar to one of and given by \left|\ln \delta_{ij}\right| , & d=3 \end{cases } , % \quad \mathcal{c}_{ij}= \begin{cases } 2\pi \frac{\alpha_i\alpha_j}{\alpha_i+\alpha_j } , & d=2\\[3pt ] 4\pi \frac{a_ia_j}{a_i+a_j } \frac{b_ib_j}{b_i+b_j } , & d=3 \end{cases}\ ] ] with given by formula in appendix [ a : coeff ] where should be replaced by and by . finally , use , \quad \mbox{for } ~ \delta\ll 1,\ ] ] with asymtotics to obtain the blow up of electric field of the composite with more than two particles .extension to the nonlinear case*. one can also generalize the proposed methodology for high - contrast materials with the matrix described by nonlinear constitutive laws such as -laplacian .the system s energy in this case is given by , ( ) , where solves with first and third equations replaced by in and , respectively .note that for a successful application of the described approach , one needs to show that the energy function , whose minimal value is attained at the solution , is differentiable with respect to the potential on .the blow up of the electric field is then , \quad p>2 , \quad d=2,3 , \quad \mbox{for } ~\delta\ll 1,\ ] ] see also . *extension to dimensions *. the described above procedure remains the same if one needs to obtain asymptotics for in dimensions greater than three .for this , one has to derive asymptotics of for first , following method described in appendices [ a : g - lem ] and [ a : coeff ] .for simplicity of presentation we omit this case here .as observed in , in a composite consisting of a matrix of finite conductivity with perfectly conducting particles close to touching the electric field exhibits blow up .this blow up is , in fact , the main cause for a material failure which occurs in the thin gaps between neighboring particles of different potentials .the electric field of such composites is described by the gradient of the solution to the corresponding boundary value problem .the current paper provides a concise and elegant procedure for capturing the singular behavior of the solution gradient _ precisely _ that does not require employing a heavy analytical machinery developed in previous studies .this procedure relies on simple observations about energy of the corresponding system and its minimizers that were sufficient to acquire the sought asymtpotics exactly .the techniques developed and adapted here are independent of dimension , particles shape and their total number , whereas strict dependence on and particles shape was the main limitation of previous contributions on the subject .furthermore , the developed above procedure allows for a straightforward generalization to a _ nonlinear _ case ._ proof . 
_ consider a family of auxiliary problems defined on the same domain as : v(x ) & = \mbox{const } , & \displaystyle x\in\partial \mathcal{b}_1\cup \partial\mathcal{b}_2\\[3pt ] \displaystyle \int_{\partial\mathcal{b}_1}\frac{\partial v}{\partial n}~ds + \int_{\partial\mathcal{b}_2}\frac{\partial v}{\partial n}~ds & = 0 , \\[5pt ] v(x ) & = u(x ) , & \displaystyle x\in \gamma \end{array } \right.\ ] ] as in the constant value of the potential is the same on both particles that we denote by .however , in contrast to here particles are located at distance from each other while in particles touch at one point . with that , similarly to we introduce the number :=\int_{\partial\mathcal{b}_1}\frac{\partial v}{\partial n}~ds.\ ] ] in , it was shown that asymptotics of $ ] is given by =\mathcal{r}_o[1+o(1 ) ] , \qquad \mbox{for } ~ \delta\ll 1.\ ] ] using the linearity of problem we decompose its solution into with ( ) solving \psi_i(x ) & = \delta_{ij } , & \displaystyle x\in\partial \mathcal{b}_j,~~i , j\in\{1,2\}\\[3pt ] \psi_i(x ) & = 0 , & x\in\gamma \end{array } \right.\ ] ] where is the kroneker delta .invoking , we compute the energy of the system and obtain : (\mathcal{t}_1-\mathcal{t}_2)+ 2\mathcal{c}_{12}(\mathcal{t}_1-\mathcal{t}_\delta)](\mathcal{t}_2-\mathcal{t}_\delta),\ ] ] where are the energies of systems given by and , respectively , and trivial integration by parts yields that where constants depend on , and shape of the particles , but independent of . on the other hand , the problem is regular in the sense that its electric field does not exhibit blow up since there is no potential drop between the particles .hence , that depends on the same parameters as the above constants .finally , in appendix [ a : g - lem ] we show that for sufficiently small : .\ ] ] with notations introduced in , , , iterative minimization lemma , and asymptotics , , we have from : which with yields , where , .+ _ proof ._ to derive an asymptotics of we adopt the method of _ variational bounds _ that has become a classical tool in capturing the leading terms of asymptotics of the energy of the corresponding system .this method is based on two equivalent variational formulations of the corresponding problem that provide upper and lower bounds for the energy matching up the leading order of asymptotics . employing this method we use of a couple observations made in which are vital in capturing the sought asymtptotics .but before , we need to introduce a coordinate system in which the construction will be made .first , we write each point as where \bar{x}=(x , y),&x_d = z , & \mbox{when } ~d=3 \end{array } \right.\ ] ] then , connect the centers of mass of particles with a line and `` move '' and along this line toward each other until they touch , thus , producing domain as above in .the point of their touching defines the origin of our cylindrical coordinate system. the line connecting the centers will be the axis , see figure [ f : neck01 ] .when particles are `` moved back '' at the distance from each other along , we construct a `` cylinder '' of radius that contains this line .this `` cylinder '' is depicted as the red region in figure [ f : neck02 ] that we call a _ neck _ and denote by . 
also , introduce the distance between boundaries of and , which in the selected coordinate system is a function of .the mentioned above observations about energy estimates are as follows .first , the minimal value of the energy functional in the neck is attained on the system with insulating lateral boundary of the cylinder , that is , where function solves the problem \psi^i_{\pi}(x ) & = \delta_{ij } , & \displaystyle x\in\partial \mathcal{b}_j,~~i , j\in\{1,2\}\\[5pt ] \displaystyle \frac{\partial \psi^i_{\pi}}{\partial n}(x ) & = 0 , & x\in \partial\pi \end{array } \right.\ ] ] on the other hand , since energy is the minimal value of the energy functional attained at the minimizer , its upper bound is given by any test function from the set via hence , the variational bounds for are therefore , the problem is now reduced to construction of an approximation to and finding a function so that the integrals in match up to the leading order for .for this purpose , one can use the _ keller s _ functions defined in by with this , we define a test function by \phi_o^i(x ) & x\in \omega_\delta\setminus \pi \end{cases},\ ] ] where solves \phi_o^i(x ) & = \delta_{ij } , & \displaystyle x\in\partial \mathcal{b}_j,~~i , j\in\{1,2\}\\[5pt ] \phi_o^i(x ) & = \phi_\pi^i , & x\in \partial\pi\\[3pt ] \phi_o^i(x ) & = 0 , & x\in \gamma \end{array } \right.\ ] ] employing the method of barriers to this problem one can show that with constant depending on and but independent of .thus , the _ dual variational principle _ will help to estimate integral , namely , ,\\[12pt ] w_{\pi } & \displaystyle= \left\{j\in l^2(\pi;\mathbb{r}^d):~\nabla\cdot j=0~\mbox{in } \pi,~j\cdot n=0~\mbox{on } \partial\pi\right\}. \end{array}\ ] ] the test flux is chosen \displaystyle \left(0,0,\frac{1}{h(x , y)}\right ) , & d=3 \end{cases}\ ] ] therefore , hence , we have two - sided bounds for : with selected test functions and by and , respectively , it is trivial to show that the difference between the upper and lower bounds is simply this quantity is bounded , hence , the asymptotics of is given by , whose asymptotics in its turn is shown in appendix [ a : coeff ] , see also ) , and is given by : ,\quad \mbox{for } \delta\ll 1.\ ] ] in the cylindrical coordinate system introduced above , that is , the one with the axis coinciding with the line of the closest distance between and , and with the origin at the mid - point of this line , the boundaries and are approximated by parabolas ( ) and paraboloids ( ) : \partial\mathcal{b}_1 : & \displaystyle z=\frac{\delta}{2}+\frac{x^2}{2a_1}+\frac{y^2}{2b_1 } , & \partial\mathcal{b}_2 : & \displaystyle z=-\frac{\delta}{2}-\frac{x^2}{2a_2}-\frac{y^2}{2b_2 } , & d=3\\[5pt ] \end{array}\ ] ] the distance between these paraboloids is \displaystyle \delta+\frac{x^2}{a}+\frac{y^2}{b } , \quad a:=\frac{2a_1a_2}{a_1+a_2},~ b:=\frac{2b_1b_2}{b_1+b_2 } , & d=3 \end{cases } % \begin{array}{r r l l l } % h(\bar{x})=&h(x ) & \displaystyle = \delta+\frac{x^2}{\alpha } , & \displaystyle \alpha:=\frac{2\alpha_1\alpha_2}{\alpha_1+\alpha_2 } , & d=2\\[8pt ] % h(\bar{x})=&h(x , y ) & \displaystyle = \delta+\frac{x^2}{a}+\frac{y^2}{b } , & \displaystyle a:=\frac{2a_1a_2}{a_1+a_2},~ b:=\frac{2b_1b_2}{b_1+b_2 } , & d=3 % \end{array}\ ] ] for sufficiently small neck - width , this distance by is a `` good '' approximation for the actual distance between the boundaries and in the sense that , % \quad \mbox{for } ~w\ll 1,\ ] ] that is , provides the leading asymptotics of from appendix [ a : g - lem ] . 
going back to , we note that in 2d the parameter is the harmonic mean of the radii of curvatures of parabolas approximating and . similarly , in 3d quantities and are related to the gaussian and mean curvatures of the corresponding paraboloids at the points of the their closest distance via : finally , direct evaluating of the integral yields the main asymptotic term for as and defines of : \begin{cases } \pi \alpha \delta^{1/2 } , & d=2\\[3pt ] \pi a b |\ln \delta | , & d=3 \end{cases}%=g_\delta[1+o(1 ) ] \quad \mbox{for } ~ \delta\ll 1,\ ] ] where , , are defined in in terms of coefficients of the osculating paraboloids at the point of the closest distance between particles surfaces .thus , \pi a b , & d=3 \end{cases}\ ] ] h. ammari , h. kang , h. lee , m. lim , h. zribi : decomposition theorems and fine estimates for electrical fields in the presence of closely located circular inclusions , _ j. diff_ , * 247 * , ( 2009 ) , pp .28972912 .e. s. bao , y. y. li , y. y. , b. yin : gradient estimates for the perfect and insulated conductivity problems with multiple inclusions .partial differential equations _ , * 35:11 * , ( 2010 ) , pp . 19822006 .berlyand , l. , gorb , y. and novikov a. : discrete network approximation for highly - packed composites with irregular geometry in three dimensions , in _ multiscale methods in science and engineering _ , b. engquist , p. lotstedt , o. runborg , eds ., _ lecture notes in computational science and engineering _ * 44 * , springer , ( 2005 ) , pp . 2158 .berlyand , l. , gorb , y. and novikov a. : fictitious fluid approach and anomalous blow - up of the dissipation rate in a 2d model of concentrated suspensions , _ arch ._ , * 193:3 * , ( 2009 ) , pp . 585622. berlyand , l. , kolpakov , a. : network approximation in the limit of small interparticle distance of the effective properties of a high contrast random dispersed composite , _ arch ._ , * 159:3 * , ( 2001 ) , pp. 179227 .
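these rates are easy to check numerically. the sketch below evaluates the neck integral by quadrature for a shrinking sequence of gap widths; the curvature parameters, the neck half-width and the grid sizes are illustrative choices, and only the growth rates (not the constants) are being verified.

```python
import numpy as np

def g_2d(delta, alpha=1.0, w=0.5, n=400000):
    # Flux integral over the 2D neck, h(x) = delta + x^2/alpha (midpoint rule).
    dx = 2.0 * w / n
    x = -w + (np.arange(n) + 0.5) * dx
    return np.sum(dx / (delta + x**2 / alpha))

def g_3d(delta, a=1.0, b=1.0, w=0.5, n=2500):
    # Same integral over a square 3D neck, h(x, y) = delta + x^2/a + y^2/b.
    dx = 2.0 * w / n
    x = -w + (np.arange(n) + 0.5) * dx
    h = delta + (x**2 / a)[None, :] + (x**2 / b)[:, None]
    return np.sum(dx * dx / h)

# The first product should level off as delta -> 0 (rate delta**-0.5 in 2D);
# the second ratio changes only slowly, consistent with the |ln delta| rate in 3D.
for delta in (1e-2, 1e-3, 1e-4, 1e-5):
    print(f"delta={delta:.0e}  2d: g*sqrt(delta)={g_2d(delta)*np.sqrt(delta):.4f}"
          f"  3d: g/|ln delta|={g_3d(delta)/abs(np.log(delta)):.4f}")
```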
|
a heterogeneous medium of constituents with vastly different mechanical properties, whose inhomogeneities are in close proximity to each other, is considered. the gradient of the solution to the corresponding problem exhibits singular behavior (blow-up) with respect to the distance between the inhomogeneities. this paper introduces a concise procedure for capturing the leading term of the gradient's asymptotics precisely. this procedure is based on a thorough study of the system's energy. the developed methodology allows for a straightforward generalization to heterogeneous media with a nonlinear constitutive description.
|
there are many problems in science in which the state of a system must be identified from an uncertain equation supplemented by a stream of noisy data ( see e.g. ) .a natural model of this situation consists of a stochastic differential equation ( sde ) : where is an -dimensional vector , is -dimensional brownian motion , is an -dimensional vector function , and is a scalar ( i.e. , an by diagonal matrix of the form , where is a scalar and is the identity matrix ) .the brownian motion encapsulates all the uncertainty in this equation .the initial state is assumed given and may be random as well .as the experiment unfolds , it is observed , and the values of a measurement process are recorded at times ; for simplicity assume , where is a fixed time interval and is an integer .the measurements are related to the evolving state by where is a -dimensional , generally nonlinear , vector function with , is a diagonal matrix , , and is a vector whose components are independent gaussian variables of mean 0 and variance 1 , independent also of the brownian motion in equation ( [ eq : datass ] ) . the task is to estimate on the basis of equation ( [ eq : datass ] ) and the observations ( [ eq : observe ] ) .if the system ( [ eq : datass ] ) is linear and the data are gaussian , the solution can be found via the kalman - bucy filter . in the general case ,it is natural to try to estimate as the mean of its evolving probability density .the initial state is known and so is its probability density ; all one has to do is evaluate sequentially the density of given the probability density of and the data .this can be done by following particles " ( replicas of the system ) whose empirical distribution approximates . in a bayesian filter( see e.g , one uses the pdf and equation ( [ eq : datass ] ) to generate a prior density , and then one uses the new data to generate a posterior density .in addition , one may have to sample backward to take into account the information each measurement provides about the past and avoid having too many identical particles .evolving particles is typically expensive , and the backward sampling , usually done by markov chain monte carlo ( mcmc ) , can be expensive as well , because the number of particles needed can grow catastrophically ( see e.g. ) . in this paperwe offer an alternative to the standard approach , in which is sampled directly without recourse to bayes theorem and backward sampling , if needed , is done by chainless monte carlo .our direct sampling is based on a representation of a variable with density by a collection of functions of gaussian variables parametrized by the support of , with parameters found by iteration .the construction is related to chainless sampling as described in .the idea in chainless sampling is to produce a sample of a large set of variables by sequentially sampling a growing sequence of nested conditionally independent subsets .as observed in , chainless sampling for a sde reduces to interpolatory sampling , as explained below. 
our construction will be explained in the following sections through an example where the position of a ship is deduced from the measurements of an azimuth , already used as a test bed in .first we explain how to sample via interpolation and iteration in a simple example , related to the example and the construction in .consider the scalar sde we want to find sample paths , subject to the conditions .let denote a gaussian variable with mean and variance .we first discretize equation ( [ scalar ] ) on a regular mesh , where , , , with , and , following , use a balanced implicit discretization : where and is .the joint probability density of the variables is , where is the normalization constant and where are functions of the , and ( see ) .one can obtain sample solutions by sampling this density , e.g. by mcmc , or one can obtain them by interpolation ( chainless sampling ) , as follows .consider first the special case , so that in particular .each increment is now a variable , with the known explicitly .let be a power of .consider the variable . on one hand , where . on the other hand, so that with the pdf of is the product of the two pdfs ; one can check that where , , and ; is the probability of getting from the origin to , up to a normalization constant . pick a sample from the density ; one obtains a sample of by setting . given a sample of one can similarly sample , then , , etc ., until all the have been sampled .if we define , then for each choice of we find a sample such that where the factor on the left is the probability of the fixed end value up to a normalization constant . in this linear problem, this factor is the same for all the samples and therefore harmless .one can repeat this sampling process for multiple choices of the variables ; each sample of the corresponding set of is independent of any previous samples of this set .now return to the general case .the functions , are now functions of the .we obtain a sample of the probability density we want by iteration .first pick , where each is drawn independently from the density ( this vector remains fixed during the iteration ) .make a first guess ( for example , if , pick ) .evaluate the functions at ( note that now , and therefore the variances of the various increments are no longer constants ) .we are back in previous case , and can find values of the increments corresponding to the values of we have . repeat the process starting with the new iterate . if the vectors converge to a vector , we obtain , in the limit , equation ( [ palim ] ) , where now on the right side depends on so that , and both are functions of the final .the left hand side of ( [ palim ] ) becomes : note that now the factor is different from sample to sample , and changes the relative weights of the different samples . in averaging , one should take this factor as weight , or resample as described at the end of the following section . 
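in the drift-free special case the construction above amounts to recursive bisection of a brownian bridge, with each sample an explicit function of fixed n(0,1) reference variables. the sketch below implements this; the end value, the number of bisection levels and the check at an interior time are illustrative choices.

```python
import numpy as np

def bridge_path(X, levels, xi, T=1.0):
    """Sample a discrete Brownian path w_0 = 0, w_n = X (n = 2**levels) by recursive
    bisection: each midpoint, given endpoints a time dt apart, is N((left+right)/2, dt/4).
    xi is a vector of n-1 independent N(0,1) reference variables, so the path is an
    explicit function of (X, xi)."""
    n = 2 ** levels
    w = np.empty(n + 1)
    w[0], w[n] = 0.0, X
    k, step = 0, n
    while step > 1:
        half = step // 2
        for left in range(0, n, step):
            right = left + step
            dt = (right - left) * (T / n)
            w[left + half] = 0.5 * (w[left] + w[right]) + np.sqrt(dt / 4.0) * xi[k]
            k += 1
        step = half
    return w

rng = np.random.default_rng(0)
levels, X = 6, 1.3
n = 2 ** levels
paths = np.array([bridge_path(X, levels, rng.standard_normal(n - 1)) for _ in range(500)])
# Sanity check: at interior time t the bridge has mean t*X and variance t*(1-t); here t = 0.25.
j = n // 4
print("sample mean/var:", paths[:, j].mean(), paths[:, j].var(),
      "expected:", 0.25 * X, 0.25 * 0.75)
```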
in order to obtain more uniform weights ,one also can use the strategies in .one can readily see that the iteration converges if , where is the lipshitz constant of , is the length of the interval on which one works ( here ) , and is the maximum norm of the vectors .if this inequality is not satisfied for the iteration above , it can be re - established by a suitable underrelaxation .one should course choose large enough so that the results are converged in .we do not provide more details here because they are extraneous to our purpose , which is to explain chainless / interpolatory sampling and the use of reference variables in a simple context .the problem we focus on is discussed in , where it is used to demonstrate the capabilities of particular bayesian filters .a ship sets out from a point in the plane and undergoes a random walk , for , and with given , and , , i.e. , each displacement is a sample of a gaussian random variable whose variance does not change from step to step and whose mean is the value of the previous displacement .an observer makes noisy measurements of the azimuth , recording where the variance is also fixed ; here the observed quantity is scalar and is not be denoted by a boldfaced letter .the problem is to reconstruct the positions from equations ( [ eq1],[eq2 ] ) .we take the same parameters as : , , , .we follow numerically particles , all starting from , as described in the following sections , and we estimate the ship s position at time as the mean of the locations of the particles at that time .the authors of also show numerical results for runs with varying data and constants ; we discuss those refinements in section 6 below .assume we have a collection of particles at time whose empirical density approximates ; now we find increments such that the empirical density of approximates . is known implicitly : it is the product of the density that can be deduced from the sde and the one that comes from the observations , with the appropriate normalization .if the increments were known , their probability ( the density evaluated at the resulting positions ) would be known , so is a function of , . for each particle , we are going to sample a gaussian reference density , obtain a sample of probability , then solve ( by iteration ) the equation to obtain .define and .we are working on one particle at a time , so the index can be temporarily suppressed .pick two independent samples , from a density ( the reference density in the present calculation ) , and set ; the variables , remain unchanged until the end of the iteration .we are looking for displacements , , and parameters , such that : the first equality states what we wish to accomplish : find increments , , functions respectively of , whose probability with respect to is .the factor is needed to normalize this term ( is called below a phase " ) . the second equality says how the goal is reached : we are looking for parameters ( all functions of ) such that the increments are samples of gaussian variables with these parameters , with the assumed probability .one should remember that in our example the mean of is , and similarly for .we are not representing as a function of a single gaussian- there is a different gaussian for every value of . to satisfy the second equality we set up an iteration for vectors for brevity ) that converges to .start with .we now explain how to compute given .approximate the observation equation ( [ eq2 ] ) by where the derivatives are , like , evaluated at , i.e. 
, approximate the observation equation by its taylor series expansion around the previous iterate .define a variable .the approximate observation equation says that is a variable , with on the other hand , from the equations of motion one finds that is , with and .hence the pdf of is , up to normalization factors , where , , .we can also define a variable that is a linear combination of , and is uncorrelated with : the observations do not affect , so its mean and variance are known .given the means and variances of , one can easily invert the orthogonal matrix that connects them to , and find the means and variances of and of after their modification by the observation ( the subscripts on are labels , not differentiations ) .now one can produce values for : where , are the samples from chosen at the beginning of the iteration .this completes the iteration .this iteration converges to such that , and the phases converge to a limit , where the particle index has been restored .the time interval over which the solution is updated in each step is short , and we do not expect any problem with convergence , either here or in the next section , and indeed there is none ; in all cases the iteration converges in a small number of steps .note that after the iteration the variables are no longer independent- the observation creates a relation between them .do this for all the particles .the particles are now samples of , but they have been obtained by sampling different densities ( remember that the parameters in the gaussians in equation ( [ forward ] ) vary ) .one can get rid of this heterogeneity by viewing the factors as weights and resampling , i.e. , for each of random numbers drawn from the uniform distribution on $ ] , choose a new such that ( where ) , and then suppress the hat .we have traded the resampling of bayesian filters for a resampling based on the normalizing factors of the several gaussian densities ; this is a worthwhile trade because in a bayesian filter one gets a set of samples many of which may have low probability with respect to , and here we have a set of samples each one of which has high probability with respect to a pdf close to .note also that the resampling does not have to be done at every step- for example , one can add up the phases for a given particle and resample only when the ratio of the largest cumulative weight to the smallest such weight exceeds some limit ( the summation is over the weights accrued to a particular particle since the last resampling ) . if one is worried by too many particles being close to each other ( `` depletion '' in the bayesian terminology ), one can divide the set of particles into subsets of small size and resample only inside those subsets , creating a greater diversity . as will be seen in section 6, none of these strategies will be used here and we will resample fully at every step .the algorithm of the previous section is sufficient to create a filter , but accuracy may require an additional refinement .every observation provides information not only about the future but also about the past- it may , for example , tag as improbable earlier states that had seemed probable before the observation was made ; one may have to go back and correct the past after every observation ( this backward sampling is often misleadingly motivated solely by the need to create greater diversity among the particles in a bayesian filter ) . 
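before turning to the backward step, the forward step and the phase-based resampling described above can be illustrated in a stripped-down setting in which the observation is linear in the state, so that the parametrized gaussian and its normalizing factor are available in closed form and no iteration is needed; the nonlinear azimuth case additionally requires the linearization and iteration described above. the model and all numerical values below are illustrative assumptions, not the parameters of the azimuth runs.

```python
import numpy as np

rng = np.random.default_rng(2)
# Scalar random walk whose increment mean is the previous increment, observed
# directly: b_n = x_n + noise.  All values below are illustrative.
s, r, T, M = 0.05, 0.2, 50, 100      # motion std, observation std, steps, particles

# Synthetic truth and observations.
x_true, dx_true, obs = 0.0, 0.0, []
for n in range(T):
    dx_true = dx_true + s * rng.standard_normal()
    x_true += dx_true
    obs.append(x_true + r * rng.standard_normal())

x = np.zeros(M)      # particle positions
dx = np.zeros(M)     # previous increments (mean of the next increment)
for n in range(T):
    xi = rng.standard_normal(M)                          # reference variables
    var_post = 1.0 / (1.0 / s**2 + 1.0 / r**2)
    mean_post = var_post * (dx / s**2 + (obs[n] - x) / r**2)
    # Phase = minus log of the normalizing factor of the sampled Gaussian.
    phase = (obs[n] - x - dx) ** 2 / (2.0 * (s**2 + r**2))
    dx = mean_post + np.sqrt(var_post) * xi              # new increments
    x = x + dx
    # Resample according to the weights exp(-phase), as in the text.
    w = np.exp(-(phase - phase.min()))
    w /= w.sum()
    idx = rng.choice(M, size=M, p=w)
    x, dx = x[idx], dx[idx]

print("estimate %.3f   truth %.3f" % (x.mean(), x_true))
```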
as will be seen below, this backward sampling does not provide a significant boost to accuracy in the present problem , but it is described here for the sake a completeness . given a set of particles at time , after a forward step and maybe a subsequent resampling , one can figure out where each particle was in the previous two steps , and have a partial history for each particle : ( if resamples had occurred , some parts of that history may be shared among several current particles ) .knowing the first and the last member of this sequence , one can interpolate for the middle term as in section 2 , thus projecting information backward .this requires that one recompute . let ; in the present section this quantity is assumed known and remains fixed . in the azimuth problem discussed here , one has to deal with the slight complication due to the fact that the mean of each increment is the value of the previous one , so that two successive increments are related in a slightly more complicated way than usual .the displacement is a variable , and is a variable , so that one goes from to by sampling first a variable that takes us from to an intermediate point , with a correction by the observation half way up this first leg , and then one samples a variable to reach , and similarly for . let the variable that connects to be , so that what replaces is .accordingly , we are looking for a new displacement , and for parameters such that where and , are independent gaussian variables . as in equation ( [ forward ] ) , the first equality embodies what we wish to accomplish- find increments , functions of the reference variables , that sample the new pdf at time defined by the forward motion , the constraint imposed by the observation , and by knowledge of the position at time .the second equality states that this is done by finding particle - dependent parameters for a gaussian density .we again find these parameters as well as the increments by iteration .much of the work is separate for the and components of the equations of motion , so we write some of the equations for the component only .again set up an iteration for variables which converge to .start with . to find given ,approximate the observation equation ( [ eq2 ] ) , as before , by equation ( [ obs ] ) ; define again variables , one in the direction of the approximate constraint and one orthogonal to it ; in the direction of the constraint multiply the pdfs as in the previous section ; construct new means and new variances for at time , taking into account the observation at time , again as before .this also produces a phase .now take into account that the location of the boat at time is known ; this creates a new mean , a new variance , and a new phase , by , , , where . finally , find a new interpolated position ( the calculation for is similar , with a phase ) , and we are done .the total phase for in this iteration is . asthe iterates converge to , the phases converge to a limit .the probability of a particle arriving at the given position at time having been determined in the forward step , there is no need to resample before comparing samples . 
once one has the values of , a forward step gives corrected values of ; one can use this interpolation process to correct estimates of by subsequent observations for , as many as are useful .before presenting examples of numerical results for the azimuth problem , we discuss the accuracy one can expect .a single set of observations for our problem relies on 160 samples of a variable .the maximum likelihood estimate of given these samples is a random variable with mean and standard deviation .we estimate the uncertainty in the position of the boat by picking a set of observations , then making multiple runs of the boat where the random components of the motion in the direction of the constraint are frozen while the ones orthogonal to it are sampled over and over from the suitable gaussian density , then computing the distances to the fixed observations , estimating the standard deviation of these differences , and accepting the trajectory if the estimated standard deviation is within one standard deviation of the nominal value of .this process generates a family of boat trajectories compatible with the given observations . in tablei we display the standard deviations of the differences between the resulting paths and the original path that produced the observations after the number of steps indicated there ( the means of these differences are statistically indistinguishable from zero ) .this table provides an estimate of the accuracy we can expect .it is fair to assume that these standard deviations are underestimates of the uncertainty- a variation of a single standard deviation in is a strict constraint , and we allowed no variation in .table i + intrinsic uncertainty in the azimuth problem [ cols="^,^,^",options="header " , ]we have exhibited a non - bayesian filtering method , related to recent work on chainless sampling , designed to focus particle paths more sharply and thus require fewer of them , at the cost of an added complexity in the evaluation of each path .the main features of the algorithm are a representation of a new pdf by means of a set of functions of gaussian variables and a resampling based on normalization factors .the construction was demonstrated on a standard ill - conditioned test problem .further applications will be published elsewhere .we would like to thank prof .r. kupferman , prof .r. miller , and dr .j. weare for asking searching questions and providing good advice .this work was supported in part by the director , office of science , computational and technology research , u.s .department of energy under contract no .de - ac02 - 05ch11231 , and by the national science foundation under grant dms-0705910 .
|
particle filters for data assimilation in nonlinear problems use `` particles '' ( replicas of the underlying system ) to generate a sequence of probability density functions ( pdfs ) through a bayesian process . this can be expensive because a significant number of particles has to be used to maintain accuracy . we offer here an alternative , in which the relevant pdfs are sampled directly by an iteration . an example is discussed in detail .

* non - bayesian particle filters *
* alexandre j. chorin and xuemin tu *
department of mathematics , university of california at berkeley
and
lawrence berkeley national laboratory , berkeley , ca , 94720
* keywords * particle filter , chainless sampling , normalization factor , iteration , non - bayesian
|
the conventional metropolis method simulates the gibbs canonical ensemble at a fixed temperature and allows for easy calculations of the ( internal ) energy and functions thereof .however , some of the most important quantities of statistical physics , free energy and entropy , can only be obtained by tedious integrations .one way to overcome this problem is by multicanonical ( muca ) simulations , which calculate canonical expectation values over a temperature range in a single simulation by using the weight factor where is the number of states with energy and the microcanonical entropy . in an extension of the microcanonical terminology onmay call microcanonical inverse temperature ( in natural units with boltzmann constant ) and microcanonical , dimensionless free energy , see appendix [ recursion ] .muca simulations became popular with the interface tension calculation of the 10-state potts model , when the method emerged as the winner of largely disagreeing estimates , which after their publication became resolved by exact values .similar simulation concepts can actually be traced back to the work by torrie and valleau in the 1970s . in recent yearsthe muca method has found many applications , besides for first order phase transitions mainly for complex systems including spin glasses and peptides , see for a brief review and a summary of related methods .the scope of this article is limited to the ising model and its generalization in form of -state potts models , for a review see .fortran routines which work in arbitrary integer dimensions are provided , but we confine our demonstrations to , to allow for comparison with rigorous analytical calculations .the ising model simulation is seen to match the exact finite lattice results of ferdinand and fisher , while for the potts model one finds agreement with the rigorously known transition temperature and latent heat of baxter .details of the model are summarized in the first part of section [ preliminaries ] . in the second part of this section the downloading of the fortran code and its use are explained . in section [ muca ]muca simulations are treated .the temperature dependence of the standard thermodynamic quantities energy , specific heat , free energy and entropy is calculated for the ising model as well as for the 10-state potts model and the canonically re - weighted histograms are shown .special attention is given to the analysis procedure , which has to be able to handle sums of very large numbers .this is done by using only the logarithms until finally the quotient of two such numbers is obtained .jackknife binning is used to minimize bias problems which occur in the re - weighting of the simulation data to canonical ensembles .using the provided fortran code and following the instructions allows for a step by step reproduction of the figures and all other numerical results presented in this article .this could be a desirable standard for more involved simulations too .some conclusions are given in the final section [ conclusions ] .we introduce the potts models on -dimensional hypercubic lattices with periodic boundary conditions . for this paperwe stay close to the notation used in the accompanying computer programs and define the energy via the action variable where is the kronecker delta .the sum is over the nearest neighbor lattice sites and is the _ potts state _ of configuration at site . 
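as a concrete illustration of the action variable just defined , the following python sketch counts equal nearest - neighbour pairs on a hypercubic lattice with periodic boundary conditions . it is only a minimal stand - in for the fortran routines distributed with this article , and the function and variable names are ours .

```python
import numpy as np

def potts_action(lattice):
    """Action variable iact: the number of equal nearest-neighbour pairs of
    Potts states on a hypercubic lattice with periodic boundary conditions."""
    iact = 0
    for axis in range(lattice.ndim):
        # np.roll implements the periodic boundary condition along this axis,
        # so every nearest-neighbour link is counted exactly once.
        iact += int(np.count_nonzero(lattice == np.roll(lattice, 1, axis=axis)))
    return iact

# Example: a random 20 x 20 configuration of a 10-state model.
rng = np.random.default_rng(1)
config = rng.integers(0, 10, size=(20, 20))
print(potts_action(config))   # between 0 and 2 * 20 * 20 in two dimensions
```

with this convention the action variable is an integer between zero and the total number of nearest - neighbour links , which is what makes histogramming during the updating process convenient .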
forthe -state potts model takes on the values .as the variable iact takes on integer values , it allows for convenient histograming of its values during the updating process .occasionally , we use the related mean values each configuration ( microstate of the system ) defines a particular arrangements of all states at the sites and , vice versa , each arrangement of the states at the sites determines uniquely a configuration : the expectation value of an observable is defined by where the sum is over all microstates and the partition function normalizes the expectation value of the unit operator to . as there are possible potts states at each site , the total number of microstates is including in a muca simulation allows for the normalization of the partition function necessary to calculate the canonical free energy and the entropy as a function of the temperature .our definition([o ] ) of agrees with the one commonly used for the ising model , but disagrees by a factor of two with the one used for the potts model in : for the potts models a number of exact results are known in the infinite volume limit .the critical temperature is the phase transition is second order for and first order for . at the average energy per potts state is where , by reasons of consistency with the ising model notation , also our definition ( [ actm ] ) of differs by factor of two from the one used in most potts model literature .for the first order transitions at equation ( [ potts_es ] ) gives the average of the limiting energies from the ordered and the disordered phase .the exact infinite volume latent heat and the entropy jumps were also calculated by baxter , whereas the interfacial tensions were derived more recently .( 100,100 ) ( 0 , 0 ) figure [ fig_fort ] shows the directory tree in which the fortran routines are stored .muca is the parent directory and on the first level we have the directories exercises , forlib , forprog and work .the master code is provided in the directories forlib and forprog .forlib contains the source code of a library of functions and subroutines .the library is closed in the sense that no reference to non - standard functions or subroutines outside the library is ever made .the master versions of the main programs and certain routines ( which need input from the parameter files discussed below ) are contained in the subdirectory forprog .the demonstrations of this article are contained in the subdirectories of exercises . to download the code , start with the url www.hep.fsu.edu/~ and click the research link , then the link multicanonical simulations . on this pagefollow the link fortran code and get either the file muca.tar ( kb ) or the file muca.tgz ( kb ) . on most unix platforms you obtain the directory structure of figure [ fig_fort ] from muca.tar by typing alternatively from muca.tgz by typing either tar -zxvf muca.tgz or gunzip muca.tgz followed by ( [ tar ] ) .you obtain the results of this paper by compiling and running the code prepared in subdirectories of exercises , e.g. f77 -o program.f followed by ./a.out . 
due to the include and parameter file structure used ,the programs and associated routines of forprog compile only in subdirectories which are two levels down from the muca parent directory .this organization should be kept , unless you have strong reasons to change the dependencies .the present structure allows to create work directories for various projects , with the actual runs done in the work subdirectories .note that under ms windows with the ( no longer marketed ) ms fortran compiler a modification of the include structure of the code turned out to be necessary .if such problems are encountered , one solution is to copy all needed files to the subdirectory in question and to modify all include statements accordingly .each exercises subdirectory contains a file with instructions , which should be followed .the subdirectories e1 prepare some code to check for the correctness of the conventional metropolis code and the subdirectories e2 prepare examples of multicanonical simulations , which are discussed in the next section .the simulation parameters are set in the files or a subset thereof , which are also kept in each of the subdirectories of exercises .the dimension nd of the system and the maximum lattice length ml are defined in lat.par . in lat.datthe lattice lengths for all directions are assigned to the array nla , which is of dimension nd .this allows for asymmetric lattices .the number of potts states , nq , is defined in potts.par .the parameters of the conventional metropolis simulation are defined in mc.par : these are the value beta , which defines the initial weights in case of a muca simulation , the number of equilibrium sweeps nequi , the number of measurement repetitions nrpt and the number of measurement sweeps nmeas .additional parameters of the muca recursion are defined in muca.par : the maximum number of recursions nrec_max , the number of sweeps between recursions nmucasw and the maximum number of tunnelings ( [ muca_tunneling ] ) maxtun , which terminates the recursion unless nrec_max is reached first . whenever data for a graphical presentation are generated by our code , it is in a form suitable for _ gnuplot _ , which is freely available .gnuplot driver files fln.plt are provides in the solution directories , such that one obtains the the plot by typing gnuplot fln.plt on unix and linux platforms ( under ms windows follow the gnuplot menu ) .a conventional , canonical simulation calculates expectation values at a fixed temperature and can , by re - weighting techniques , only be extrapolated to a vicinity of this temperature .in contrast , a single muca simulation allows to obtain equilibrium properties of the gibbs ensemble over a range of temperatures , which would require many canonical simulations .this coined the name multicanonical .the muca method requires two steps : 1 .obtain a _ working estimate _ of the weights .working estimate means that the approximation to ( [ wspectral ] ) has to be good enough to ensure movement in the desired energy range , but deviations of from ( [ wspectral ] ) by a factor of , say , ten are tolerable .2 . perform a markov chain mc simulation with the final , fixed weights .canonical expectation values are found by re - weighting to the gibbs ensemble . to obtain working estimates of the weight factors ( [ wspectral ] ) , a slightly modified version of the recursion of ref . 
is used .as the analytical derivation of the modified recursion has so far only been published in conference proceedings , it is for the sake of completeness included in appendix [ recursion ] of this paper .the fortran implementation is given by the subroutine of forprog .one subtlety is that two histogram arrays hup and hdn are introduced to keep separately track of the use of upper and lower entries of nearest neighbor pairs .detailed explanations of the code will be part of a book .the question whether more efficient recursions exists is far from being settled .for instance , f. wang and landau made recently an interesting proposal .exploratory comparisons with the recursion used in the present paper reveal similar efficiencies . in between the recursionsteps the metropolis updating routine is called , which implements the standard metropolis algorithm for general weights .the random number generator of marsaglia et al. is integrated to ensure identical results on distinct platforms .first , we illustrate the muca recursion for the ising model .we run the recursion in the range these values of namin and namax are chosen to cover the entire range of temperatures , from the completely disordered ( ) region to the groundstate ( ) . in many applicationsthe actual range of physical interest is smaller , namin and namax should correspondingly be adjusted , because the recursion time increases quickly with the range .the recursion is completed after maxtun tunneling events have been performed .tunneling event _ is defined as an updating process which finds its way from this notation comes from the applications of the method to first order phase transitions , for which namin and namax are separated by a free energy barrier in the canonical ensemble .although the muca method removes this barrier , the terminus tunneling was kept .the requirement that the process tunnels also back is included in the definition , because a one way tunneling is not indicative for the convergence of the recursion .most important , the process has still to tunnel when the weights are frozen for the second stage of the simulations note that things work differently for the wang - landau recursion .it has no problems to tunnel in its initial stage , but its estimates of the spectral density are still bad , such that the tunneling process gets stuck as soon as the weights are fixed .for most applications ten tunnelings during our recursion part lead to acceptable weights .if the requested number of tunnelings is not reached after a certain maximum number of recursion steps , the problem will disappear in most cases by rerunning ( eventually several times ) with different random numbers .otherwise , the number of sweeps between recursions should be enlarged , because our recursion is strictly only valid when the system is in equilibrium .one may even consider to discard some sweeps after each recursion step to reach equilibrium , but empirical evidence indicates that the improvement ( if any ) does not warrant the additional cpu time .the disturbance of the equilibrium is weak when the weight function approaches its fixed point . in the default setting of our programswe take the number of sweeps between recursion steps inversely proportional to the acceptance rate , because equal number of accepted moves is a better relaxation criterion than an equal number of sweeps . 
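the central ingredient of both the recursion and the production run , a metropolis update for general weights , can be sketched as follows ( the distributed package implements this in fortran ; the python version below is only an illustration , and its interface is our own choice ) . the array ln_w stands for the logarithm of the weight as a function of the action variable and must cover the whole range between namin and namax .

```python
import numpy as np

def delta_action(lattice, site, new_state):
    """Change of the action variable when the Potts state at `site` is replaced
    by `new_state` (hypercubic lattice, periodic boundary conditions)."""
    old_state = lattice[site]
    d = 0
    for axis in range(lattice.ndim):
        for shift in (-1, 1):
            nb = list(site)
            nb[axis] = (nb[axis] + shift) % lattice.shape[axis]
            s = lattice[tuple(nb)]
            d += int(s == new_state) - int(s == old_state)
    return d

def metropolis_hit(lattice, q, ln_w, iact, rng):
    """One Metropolis update with general weights w = exp(ln_w[iact]); ln_w must
    cover the whole range of the action variable (0 ... number of links)."""
    site = tuple(int(rng.integers(0, n)) for n in lattice.shape)
    new_state = int(rng.integers(0, q))
    d = delta_action(lattice, site, new_state)
    dlnw = ln_w[iact + d] - ln_w[iact]
    if dlnw >= 0 or rng.random() < np.exp(dlnw):   # accept with prob min(1, w_new/w_old)
        lattice[site] = new_state
        iact += d
    return iact
```

a tunneling event in the sense of ( [ muca_tunneling ] ) can then be counted by monitoring whether the running value of iact has travelled from one end of the range ( namin , namax ) to the other and back since the last event .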
in the subdirectory e2_01 of exercises a lattice ising model simulation is prepared for which we requested ten tunneling events . we find them after 787 recursions and 64,138 sweeps , corresponding to an average acceptance rate of 20 * 787/64138 = 0.245 ( the acceptance rate can be calculated this way , because the number of accepted sweeps triggers the recursion ) . almost half of the sweeps are spent to achieve the first tunneling event . subsequently , a muca production run of 10,000 equilibrium and sweeps with measurements is carried out . on a ghz linux pc the entire runtime ( recursion plus production ) is about thirty seconds . in the subdirectory e2_02 a similar simulation is prepared for the 10-state potts model on a lattice .

let us assume that we have performed a muca simulation which covers the action histogram needed for a temperature range . in practice this means that the parameters namax and namin in muca.par have to be chosen such that holds , where is the canonical expectation value of the action variable ( [ iact ] ) . the conditions may be relaxed to equal signs , if is used for all action values and for all action values . given the muca time series , where labels the generated configurations , the definition ( [ o ] ) of the canonical expectation values leads to the muca estimator

\[ \overline{\mathcal{O}} \;=\; { \sum_{i=1}^n \mathcal{O}^{(i)}\,\exp \left[ -\beta\,e^{(i)} + b(e^{(i)})\,e^{(i)} - a(e^{(i)}) \right] \over \sum_{i=1}^n \exp \left[ -\beta\,e^{(i)} + b(e^{(i)})\,e^{(i)} - a(e^{(i)}) \right] } \ . \]

this formula replaces the muca weighting of the simulation by the boltzmann factor of equation ( [ o ] ) . the denominator differs from by a constant factor , which drops out because the numerator differs by the same constant factor from the numerator of ( [ o ] ) . if only functions of the energy ( in our computer programs the action variable ) are calculated , it is sufficient to keep histograms instead of the entire time series . for an operator that depends only on the energy , say f(e) , equation ( [ o_muca ] ) then simplifies to

\[ \overline{f} \;=\; { \sum_e f(e)\,h_{mu}(e)\,\exp \left[ -\beta\,e + b(e)\,e - a(e) \right] \over \sum_e h_{mu}(e)\,\exp \left[ -\beta\,e + b(e)\,e - a(e) \right] } \ , \]

where h_{mu}(e) is the histogram sampled during the muca production run and the sums are over all energy values for which h_{mu}(e) has entries . when calculating error bars for estimates from equations ( [ o_muca ] ) or ( [ f_muca ] ) , we employ jackknife estimators to reduce bias problems .

a computer implementation of equations ( [ o_muca ] ) and ( [ f_muca ] ) requires care . the differences between the largest and the smallest numbers encountered in the exponents can be really large . to give one example , for the ising model on a lattice and the groundstate configuration contributes , whereas for a disordered configuration is possible . clearly , overflow disasters will result if we ask fortran to calculate numbers like . when the large terms in the numerator and denominator take on similar orders of magnitude , one can avoid them by subtracting a sufficiently large number in all exponents of the numerator as well as the denominator , resulting in a common factor which divides out . instead of overflows one then encounters harmless underflows of the type . we implement the idea in a more general fashion , which remains valid when the magnitudes of the numerator and the denominator disagree . we avoid calculating large numbers altogether and deal only with the logarithms of sums and partial sums . we first consider sums of positive numbers and discuss the straightforward generalization to arbitrary signs afterwards .
for with and we calculate from the values and , without ever storing either or or .the basic observation is that \\ \nonumber & = & \max \left ( \ln a,\ln b \right ) + \\\nonumber \ln \ { \ , 1 & + & \exp \left [ \min ( \ln a,\ln b ) - \max ( \ln a,\ln b ) \right]\ , \}\end{aligned}\ ] ] holds . by constructionthe argument of the exponential function is negative , such that an underflow occurs when the difference between and becomes too big , whereas it becomes calculable when this difference is small enough .to handle alternating signs one needs in addition to equation ( [ lnc ] ) an equation for where and still holds .assuming , equation ( [ lnc ] ) converts for into \}\end{aligned}\ ] ] and , because the logarithm is a strictly monotone function , the sign of is positive for and negative for .the computer implementation of equations ( [ lnc ] ) and ( [ lnabsc ] ) is provided by the fortran function addln.f and the fortran subroutine addln2.f of forlib , respectively .the subroutines potts_zln.f and potts_zln0.f of forlib rely on this to perform the jackknife re - weighting analysis for various physical observables .( 150,155 ) ( 0 , 0 ) ( 150,155 ) ( 0 , 0 ) ( 150,155 ) ( 0 , 0 ) ( 150,155 ) ( 0 , 0 ) ( 150,155 ) ( 0 , 0 ) we are now ready to analyze the muca data for the energy per spin of the ising model on a lattice , which we compare in figure [ fig_2di_es ] with the exact results of ferdinand and fisher .the code is prepared in the subdirectory e2_03 .the same numerical technique allows us to to calculate the _ specific heat _, which is defined by figure [ fig_2di_c ] compares the thus obtained muca data with the exact results of ferdinand and fischer .the code is also prepared in subdirectory e2_03 .figure [ fig_2di_muh ] shows the energy histogram of the muca simulation together with its canonically re - weighted descendants at , and .the fortran code is prepared in the subdirectory e2_04 .the normalization of the muca histogram is adjusted such that it fits reasonably well into one figure with the three re - weighted histograms . in figure [ fig_2di_muh ]it is remarkable that the error bars of the canonically re - weighted histograms are not just the scale factors times the error bars of the muca histogram , but in fact much smaller .this can be traced to be an effect of the normalization .the sum of each canonical jackknife histogram is normalized to the same number and this reduces the spread .relying on the 10-state potts model data of the run prepared in subdirectory e2_02 , we reproduce in subdirectory e2_05 the action variable actm results plotted in figure [ fig_2d10q_act ] .around we observe a sharp increase of actm from 0.433 at to 0.864 at , which signals the first order phase transition of the model . in figure [ fig_2d10q_muh ]we plot the canonically re - weighted histogram at together with the muca histogram using suitable normalizations , as prepared in subdirectory e2_06 .the ordinate of figure [ fig_2d10q_muh ] is on a logarithmic scale and the canonically re - weighted histogram exhibits the double peak structure which is characteristic for first order phase transitions .the muca method allows then to estimate the interface tension of the transition by calculating the minimum to maximum ratio on larger lattices , see . 
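the log - domain addition that underlies addln.f can be written in a few lines ; the python version below is a direct transcription of equation ( [ lnc ] ) and is given only as an illustration ( the alternating - sign variant of equation ( [ lnabsc ] ) is analogous and is omitted ) .

```python
import math

def add_ln(ln_a, ln_b):
    """Return ln(a + b) given ln(a) and ln(b), never forming a or b themselves
    (the rule implemented by addln.f).  The argument of the exponential is
    always <= 0, so at worst it underflows harmlessly to zero."""
    hi, lo = (ln_a, ln_b) if ln_a >= ln_b else (ln_b, ln_a)
    return hi + math.log1p(math.exp(lo - hi))

def ln_sum(ln_terms):
    """ln of a sum of positive terms that are given by their logarithms."""
    acc = ln_terms[0]
    for t in ln_terms[1:]:
        acc = add_ln(acc, t)
    return acc

# Example: ln(e^1000 + e^999) = 1000 + ln(1 + e^-1), although e^1000 itself overflows.
print(ln_sum([1000.0, 999.0]))   # ~ 1000.3133
```

the re - weighting sums of equations ( [ o_muca ] ) and ( [ f_muca ] ) can then be accumulated with ln_sum applied separately to numerator and denominator , so that only the difference of two logarithms is ever exponentiated .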
at the potts partition function is given by equation ( [ k ] ) .muca simulations allow for proper normalization of the partition function by including in the temperature range ( [ beta_range ] ) .the normalized partition function yields important quantities of the canonical ensemble , the _ helmholtz free energy _ and the _ entropy _ where is the internal energy ( [ iact ] ) .( 150,155 ) ( 0 , 0 ) ( 150,155 ) ( 0 , 0 ) figure [ fig_free_energy ] shows the free energy density per site for the ising model as well as for the 10-state potts model .the ising model analysis is prepared in the subdirectory e2_03 and the analysis in e2_05 . as in previous figures ,the ising model data are presented together with the exact results , whereas we compare the 10-state potts model data with the asymptotic behavior . for large the partition function of our -state potts models approaches and , therefore , finally , in figure [ fig_entropy ] we plot for the entropy density the ising model the muca results together with the exact curve .further , entropy data for the 10-state potts model are included in this figure . in that casewe use instead of , such that both graphs fit into the same figure .the analysis code for the entropy is contained in the same subdirectories as used for the free energy . in all the figures of this section excellent agreement between the numerical and the analytical results is found .there are many ways to extend the multicanonical simulations of this paper .the interested reader is simply referred to the literature .the purpose of this article is to serve a start - up kit for the computer implementation of some of the relevant steps .there appears to be need for this , because to get the first program up and running appears to be a major stumbling block in the way of using the method .in the opinion of the author , multicanonical simulations have the potential to replace canonical simulations as the method of first choice for studies of small to medium - sized systems .as seen here , once the recursion necessary for the first part of a muca simulation is programmed , the entire thermodynamics of the system follows from the second part of the simulation . however , the slowing down for larger system sizes is rather severe .quite a number of similar methods exists , see for a summary .a sound comparison would require that the goals of the simulations and their benchmarks are defined first .so far the community has not set such standards .* acknowledgments : * i would like to thank alexander velytsky for useful discussions and for contributing figure [ fig_fort ] .this work was in part supported by the u.s .department of energy under the contract de - fg02 - 97er41022 .we first discuss the weights ( [ wspectral ] ) . 
by definition ,the microcanonical temperature is and we define the dimensionless , microcanonical free energy by it is determined by relation ( [ t_micro ] ) up to an ( irrelevant ) additive constant .we consider the case of a discrete minimal energy and choose / \epsilon\ ] ] as the definition of .the identity implies inserting yields \ , e\ ] ] and is fixed by defining .once is given , follows .a convenient starting condition for the initial simulation is because the system moves freely in the disordered phase .other choices are of course possible .our fortran implementation allows for with defined in the parameter file mc.par .the energy histogram of the simulation is given by .to avoid we replace for the moment \ , , \ ] ] where is a number .our final equations allow for the limit . in the following subscripts used to indicate quantities which are are not yet our final estimators from the simulation .we define where the ( otherwise irrelevant ) constant is introduced to ensure that is an estimator of the microcanonical entropy inserting this relation into ( [ be ] ) gives / \epsilon\ , .\ ] ] the estimator of the variance of is obtained from = \sigma^2 [ b^n ( e ) ] + \ ] ] / \epsilon + \sigma^2 [ \ln \hat{h}^n(e ) ] / \epsilon\ .\ ] ] now =0 $ ] as is the fixed function used in the simulation and the fluctuations are governed by the sampled histogram = \ ] ] = \left [ \ln ( h^n + \triangle h^n ) - \ln ( h^n ) \right]^2\ ] ] where is the fluctuation of the histogram , which is known to grow with the square root of the number of entries .hence , under the assumption that autocorrelation times of neighboring histogram entries are identical , the equation = { c'\over h^n(e+\epsilon)}+{c'\over h^n(e)}\ ] ] holds , where is an unknown constant .the assumption would be less strong if it were made for the energy - dependent acceptance rate histogram instead of the energy histogram . in the present models the energy dependence of the acceptance rate is rather smooth between nearest neighbors andthere is less programming effort when using only energy histograms .equation ( [ sigma2_b ] ) shows that the variance is infinite when there is zero statistics for either histogram , or .the statistical weight for is inversely proportional to its variance and the over - all constant is irrelevant .we define } \\ \nonumber & = & { h^n ( e+\epsilon)\ h^n ( e ) \over h^n ( e+\epsilon ) + h^n ( e ) } \end{aligned}\ ] ] which is zero for or .the simulation is carried out using .it is now straightforward to combine and according to their respective statistical weights into the desired estimator : where the normalized weights and are determined by the recursion we can eliminate from equation ( [ bn ] ) by inserting its definition ( [ b0 ] ) and get / \epsilon\ .\ ] ] notice that it is now save to perform the limit .finally , equation ( [ recursion ] ) can be converted into a direct recursion for ratios of the weight factor neighbors .we define and get ^{\hat{g}^n_0(e)}\ , .\ ] ] t.nagasima , y.sugita , a. mitsutake , and y. okamoto , invited talk given at the 2002 taiwan statphys conference , to be submitted to physica a. g. marsaglia , a. zaman and w.w .tsang , stat .* 8 * , 35 ( 1990 ) .
|
the purpose of this article is to provide a starter kit for multicanonical simulations in statistical physics . fortran code for the -state potts model in dimensions can be downloaded from the web , and this paper describes simulation results which are reproducible in all details by running the prepared programs . to allow for comparison with exact results , the internal energy , the specific heat , the free energy and the entropy are calculated for the ising ( ) and the potts model . analysis programs , relying on an all - log jackknife technique which is suitable for handling sums of very large numbers , are introduced to calculate our final estimators .
keywords : multicanonical algorithm , fortran code , all - log jackknife technique , internal energy , free energy , entropy .
pacs : 05.10.ln , 05.50.+q .
|
big data analysis , a frontier field in science and engineering , has broad applications ranging from biomedicine and smart health to social behavior quantification and energy optimization in civil infrastructures .for example , in biomedicine , vast electroencephalogram ( eeg ) or electrocorticogram ( ecog ) data are available for the analysis , detection , and possibly prediction of epileptic seizures ( e.g. , refs . ) . in a modern infrastructure viewed as a complex dynamical system, large scale sensor networks can be deployed to measure a number of physical signals to monitor the behaviors of the system in continuous time . in a modern city , smart camerasare placed in every main street to monitor the traffic flow at all time . in a community, data collected from a large number of users carrying various mobile and networked devices can be used for community activity prediction . in wireless communication ,big data sets are ubiquitous . in all these cases ,monitoring , sensing , or measurements typically result in big data sets , and it is of considerable interest to detect behaviors that deviate from the norm or the expected . in this paper, we develop a general and systematic framework to detect hidden and anomalous dynamical events , or simply anomalies , from big data sets .the mathematical foundation of our framework is hilbert transform and instantaneous frequency analysis .the reason for this choice lies in the fact that complex dynamical systems are typically nonlinear and non - stationary . for such systems , the traditional fourier analysis is limited because , fundamentally , they are designed for linear and stationary systems .windowed fourier analysis may be feasible to generate patterns in the two - dimensional frequency - time plane pertinent to characteristic events , but two - dimensional feature identification is difficult .in contrast , the features generated by the emd methodology are one - dimensional , which are easier to be identified computationally. the hilbert transform and instantaneous frequency - based analysis have proven to be especially suited for data from complex , nonlinear , and non - stationary dynamical systems .the challenge is to develop a mathematically justified and computationally reasonable framework to uncover and characterize `` unusual '' dynamical indicators that may potentially be precursors to a large scale , catastrophic dynamical event of the system .the general principle underlying the development of our big data - based detection framework is as follows .first , we develop an efficient procedure for pre - processing big data sets to exclude erroneous data segments and statistical outliers .next , we exploit a method based on a separation of time scales , the empirical mode decomposition ( emd ) method , to detect anomalous dynamical features of the system . due to its built - in ability to obtain from a complex , seemingly random time series a number of dominant components with distinct time scales , the method is anticipated to be especially effective for anomaly detection .we pay particular attention to the challenges associated with big data sets .finally , we perform statistical analysis to identify and characterize the anomalies and articulate their implications . 
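to make the first step of this program concrete , a minimal python sketch of a per - file quality screen is given below ; the two statistics ( the fraction of exactly - zero samples and the fraction of saturated samples ) and the thresholds are illustrative assumptions and not the specific criteria developed later in the paper .

```python
import numpy as np

def screen_file(samples, zero_tol=0.05, sat_level=32767, sat_tol=0.01):
    """Flag one recording file as usable or not from two cheap statistics:
    the fraction of exactly-zero samples and the fraction of saturated samples.
    The thresholds and the saturation level are illustrative placeholders."""
    x = np.asarray(samples)
    frac_zero = float(np.mean(x == 0))
    frac_sat = float(np.mean(np.abs(x) >= sat_level))
    usable = frac_zero < zero_tol and frac_sat < sat_tol
    return usable, {"frac_zero": frac_zero, "frac_sat": frac_sat}
```

statistics of this kind , computed for every channel and every file , can be assembled into the channel - versus - file overview maps used below to survey an entire recording .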
as a concrete example to illustrate the general principle of our big data analysis framework , we address the detection of high frequency oscillations ( hfos ) , which are local oscillatory field potentials of frequencies greater than 100 hz and usually have a duration less than one second .oscillations between 100 and 200 hz are called ripples and occur most frequently during episodes of awake immobility and slow wave sleep . the hfos in this range are believed to play an important role in information processing and consolidation of memory .pathologic hfos ( with frequency larger than 200 hz , or fast ripples ) reflect fields of hyper - synchronized action potentials within small discrete neuronal clusters responsible for seizure generation .they can be recorded in association with interictal spikes only in areas capable of generating recurrent spontaneous seizures .thus detecting fast ripple can be useful in locating the seizure onset zone in the epileptic network , and this was verified previously using data sets from a wide variety of patients . in particular , it was found that almost all fast - ripple hfos were recorded in seizure - generating structures of patients suffering from medial or polar temporal - lobe epilepsy , indicating that the ripples are a specific , intrinsic property of seizure - generating networks in these brain areas .the pathologic hfos and their spatial extent can potentially be used as biomarkers of the seizure onset zone , facilitating decisions as to whether surgical treatment would be necessary .besides their role in locating the seizure onset zone , hfos may also reflect the primary neuronal disturbances responsible for epilepsy and provide insights into the fundamental mechanisms of epileptogenesis and epileptogenicity .traditional methods such as the fourier transform and spectral analysis assume stationarity and/or approximate the physical phenomena with linear models. these approximations may lead to spurious components in their time - frequency distribution diagrams if the underlying signal is non - stationary and nonlinear . empirical mode decomposition ( emd )is a technique to specifically deal with non - stationary and nonlinear signals .given such a signal , emd decomposes it into a small number of modes , the intrinsic mode functions ( imfs ) , each having a distinct time or frequency scale and preserving the amplitude of the oscillations in the frequency range .the decomposed modes are orthogonal to each other , and the sum of all modes gives the original data .the ease and accuracy with which the emd method processes non - stationary and nonlinear signals have led to its widespread use in various applications such as seismic data analysis , chaotic time series analysis , neural signal processing in biomedical sciences , meteorological data analysis , and image analysis .we develop an emd based method to detect hfos . due to its built - in ability to pick out from a complex , seemingly random time series a number of dominant components of distinct time scales ,the method is especially effective for the detection of hfos .we finally perform a statistical analysis and find a striking phenomenon : hfos occur in an on - off intermittent manner with algebraic scaling .in addition to hfos , our framework can detect population spikes , oscillations in the frequency range from 10 to 50 hz , as well as distinct and independent imfs . 
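a minimal python sketch of the detection step is given below . it assumes that the intrinsic mode functions have already been obtained from some emd implementation ( for instance the open - source pyemd package ) , and it uses the hilbert transform to obtain the instantaneous amplitude and frequency of a single imf . the ripple / fast - ripple boundaries at 100 hz and 200 hz follow the definitions above , while the amplitude threshold , the merging gap and the other numerical choices are ours and are not the exact criteria of the paper .

```python
import numpy as np
from scipy.signal import hilbert

FS = 12207.0   # sampling rate in Hz; roughly 12 kHz is quoted for these recordings

def hfo_candidates(imf, fs=FS, thresh_factor=4.0, min_gap=0.01):
    """Locate candidate HFOs in a single IMF: threshold the Hilbert amplitude,
    merge nearby on-intervals and classify each event by its mean instantaneous
    frequency.  The threshold rule and the merging gap are illustrative choices."""
    x = np.asarray(imf, dtype=float)
    analytic = hilbert(x)
    amp = np.abs(analytic)
    inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2.0 * np.pi)

    on = amp > thresh_factor * np.median(amp)

    # collect contiguous on-intervals as [start, stop) sample indices
    events, start = [], None
    for i, flag in enumerate(on):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            events.append([start, i])
            start = None
    if start is not None:
        events.append([start, len(on)])

    # merge events separated by less than min_gap seconds
    merged = []
    for ev in events:
        if merged and (ev[0] - merged[-1][1]) / fs < min_gap:
            merged[-1][1] = ev[1]
        else:
            merged.append(ev)

    out = []
    for a, b in merged:
        seg = inst_freq[a:b]
        f = float(np.mean(seg)) if seg.size else float("nan")
        kind = "fast ripple" if f > 200.0 else ("ripple" if f >= 100.0 else "other")
        out.append((a / fs, b / fs, f, kind))
    return out
```

in the full procedure , candidates found in different imfs that overlap in time are further combined into single events .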
since pathologic hfos reveal dynamical coherence within small discrete neuronal clusters responsible for seizure generation , a good understanding and accurate detection of hfos may bring the grand goal of early seizure prediction one step closer to reality and would also improve the localization of the seizure onset zone to facilitate decision making with regard to surgical treatment .not only does our method illustrate , in a detailed and concrete way , an effective way to analyze big data sets , our finding also has potential impact in biomedicine and human health .there were existing works on applying the emd / hilbert transform method to neural systems .earlier the method was applied to analyzing biological signals and performing curve fitting , and a combination of emd , hilbert transform , and smoothed nonlinear energy operator was proposed to detect spikes hidden in human eeg data .subsequently , it was demonstrated that the methodology can be used to analyze neuronal oscillations in the hippocampus of epileptic rats in vivo with the result that the oscillations are characteristically different during the pre - ictal , seizure onset and ictal periods of the epileptic eeg in different frequency bands . in another work ,the emd / hilbert transform method was applied to detecting synchrony episodes in both time and frequency domains .the method was demonstrated to be useful for decomposing neuronal population oscillations to gain insights into epileptic seizures , and emd was used for extracting single - trial cortical beta oscillatory activities in eeg signals .the outputs of emd , i.e. , the imfs , were demonstrated to be useful for eeg signal classification .our work differs from these previous works in that we address the issue of detecting hfos and uncovering the underlying scaling law .high sampling ( 12khz ) , multichannel ( 32 - 64 channels ) , continuous recordings of local field potentials in freely moving rodents presents unique technical challenges .although most channels continue to record over a 4 - 6 week periods , over time the integrity of the signal degrades and electrode recording may come off and on line . to this end, it is important to pre - process data files to exclude gaps in data .this in itself is challenging due to the large size of each dataset ( about 5 terabytes ) , variability during recordings of local field potentials , and gaps in data . here , we develop a fully automated statistical method .the resulting `` data - mining '' algorithm is general and we expect it to be useful for dealing with other massive data sets . for our studywe examine eeg data taken from a rat model of the approach to epilepsy .the typical size of a binary file in our database is about mb .each file belongs to a certain channel ( specified by a channel number ) and a specific time duration ( specified by a file number ) .we regard the channel and file numbers as two orthogonal dimensions and plot the contour of a suitable statistical quantity ( to be discussed below ) in the two dimensional plane , so the data of one rat ( terabytes ) can be represented by a single contour plot .the whole process can be programmed to be highly parallelized , providing a global overview of the raw eeg data .let , be the value of the eeg signal for a single sample , where is the number of samples in a binary file . 
in the experiment ,each value is recorded as a -bit integer , so ] and ] ) , there are six imfs and their frequencies are about 5 khz , 2 khz , 1 khz , 500 hz , 200 hz , and 100 hz , respectively ( for imfs 1 - 6 ) , we will need to add the following small sinusoidal signals : =y[i ] + 0.9 \times \sin{(2\pi i/(12207/100 ) ) } + 0.5 \times \sin{(2\pi i/(12207/200 ) ) } + 0.25 \times \sin{(2\pi i/(12207/500 ) ) } + 0.125 \times \sin{(2\pi i/(12207/1000 ) ) } + 0.0625 \times \sin{(2\pi i/(12207/2000 ) ) } + 0.03 \times \sin{(2\pi i/(12207/5000))}$ ] , where the amplitudes of the imfs are typically larger than 10 .the perturbation signals thus will not have any practical influences on the imf results for normal data .however , when there is a discontinuity with a linear relaxation in time , the corresponding imfs will contain the added small sinusoidal oscillations instead of generating divergence or large anomalies [ fig . [ fig : emdtrick](g - l ) ] .in addition , when the original data is contaminated by a small segment of zeros , without adding the small oscillations , the resulting imfs will oscillate wildly in this region with amplitudes orders of magnitude larger than those of the normal data sets [ fig . [fig : emdtrick](a - f ) ] .this is because , for obtaining each imf , emd looks for the local maxima and local minima and then approximate the data with cubic spline connecting the maxima or minima . when a segment of zeros is encountered , there are no local maxima or minima so that the emd extrapolates with cubic spline using the maxima or minima outside this region . for the first imf ,since the frequency is the highest ( about 5 khz ) , even a zero segment of about 0.1 second would correspond to about 500 maxima or minima .thus the extrapolation will generate extremely large , artificial oscillations .the remainder obtained by subtracting imf 1 from the original data will compensate the large oscillations in imf 1 , but they will propagate to subsequent imfs .the conclusion is that , adding the small sinusoidal perturbing signals causes essentially no difference in the original signal ( about 1 part in 1000 ) , but the artificial anomalies can be effectively eliminated .human datasets were not used .experiments were performed on 2-month old male sprague dawley rats .this study was conducted in accordance with federal and university of florida institutional animal care and use committee policies regarding the ethical use of animals in research .( iacuc protocol d710 ) .no fieldwork was performedall data used in the study have been uploaded onto google drive and are publicly available .the link is https://drive.google.com/drive/folders/0b7s5nqou?usp=sharing .we have no competing interests .ycl , lh , prc , wld and mls conceived and designed the research .the data was acquired in prc s lab .lh and xn developed the computational method and performed simulations . all analyzed data .ycl and lh drafted the manuscript .all authors gave final approval for publication .the national institutes of biomedical imaging and bioengineering ( nibib ) through collaborative research in computational neuroscience ( crcns ) grant numbers r01 eb004752 and eb007082 , the wilder center of excellence for epilepsy research , and the children s miracle network supported this research .this work was also supported by the us army research office under grant no .w911nf-14 - 1 - 0504 .lh was supported by nnsf of china under grants no . 11135001 , no .11375074 , and no .y.c.l . 
would like to acknowledge support from the vannevar bush faculty fellowship program sponsored by the basic research office of the assistant secretary of defense for research and engineering and funded by the office of naval research through grant no .n00014 - 16 - 1 - 2828 .talathi ss , hwang du , spano ml , simonotto j , furman md , myers sm , winters j , ditto wl , carney pr .2008 non - parametric early seizure detection in an animal model of temporal lobe epilepsy . _j. neural eng . _ * 5 * , 8598 .komalapriya c , romano m , thiel m , schwarz u , kurths j , simonotto j , furman m , ditto wl , carney pr .2009 analysis of high - resolution microelectrode eeg recordings in an animal model of spontaneous limbic seizures .. chaos _ * 19 * , 605617 .talathi ss , hwang du , ditto wl , spano m , carney pr .2009 circadian phase - induced imbalance in the excitability of population spikes during epileptogenesis in an animal model of spontaneous limbic epilepsy ._ neurosci .lett . _ * 455 * , 145149 .cadotte aj , demarse tb , mareci th , parekh m , talathi ss , hwang du , ditto wl , ding m , carney pr .2010 granger causality relationships between local field potentials in an animal model of temporal lobe epilepsy . _ j. neurosci .* 189*. an n , zhao wg , wang jz , shang d , zhao ed .2013 using multi - output feedforward neural network with empirical mode decomposition based signal filtering for electricity demand forecasting ._ energy _ * 49 * , 279288 .huang ne , shen z , long sr , wu mc , shih hh , zheng q , yen nc , tung cc , liu hh . 1998 the empirical mode decomposition and the hilbert spectrum for nonlinear and non - stationary time series analysis .lond . a _ * 454 * , 903995 .staba rj , wilson cl , bragin a , fried i , jr je .2002 quantitative analysis of high - frequency oscillations ( 80500 hz ) recorded in human epileptic hippocampus and entorhinal cortex ._ j. neurophysio . _ * 88 * , 17431752 .worrell ga , gardner ab , stead sm , hu s , goerss s , cascino gj , meyer fb , marsh r , litt b. 2008 high - frequency oscillations in human temporal lobe : simultaneous microwire and clinical macroelectrode recordings ._ brain _ * 131 * , 928937 .crpon b , navarro v , hasboun d , clemenceau s , martinerie j , baulac m , adam c , quyen mlv .2010 mapping interictal oscillations greater than 200 hz recorded with intracranial macroelectrodes in human epilepsy ._ brain _ * 133 * , 3345 .haegelen c , perucca p , chatillon ce , andrade - valenca l , zelmann r , jacobs j , collins dl , dubeau f , olivier a , gotman j. 2013 high - frequency oscillations , extent of surgical resection , and surgical outcome in drug - resistant focal epilepsy ._ epilepsia _ * 54 * , 848857 .liang h , lin q , chen jdz .2005 application of the empirical mode decomposition to the analysis of esophageal manometric data in gastroesophageal reflux disease ._ ieee trans . biomed ._ * 52 * , 16921701 .bajaj v , pachori rb .2012 eeg signal classification using empirical mode decomposition and support vector machine . in _ proceedings of the international conference on soft computing for problem solving ( socpros 2011 ) _ , india .-axis is normalized by the maximum of .the four panels correspond to : ( a ) a corrupted file with a large number of zeros ( file ) , ( b ) a bad recording with repetitions of oscillating patterns ( file ) , ( c ) a normal files without transitions ( file ) , and ( d ) a file containing a seizure ( file ) . 
] for rat004 channel 02 .red circles denote the normal files ; green squares are the files with large numbers of zeros ; blue crosses are corrupted files ; pink diamonds are small files ; cyan triangles are small files with many zeros ; black star is small corrupted files .the arrow marks the file which has the first seizure .the inset shows the enlarged area around file on a linear scale . ] for rat001 .left panel shows the whole range .different types of data are classified according to the value of . in the right panel , the contour is for values of ( good data ) , and the remaining values of are set to so that the dark blue area marks all abnormal data . ]( in arbitrary units ) varying in time of a particular emd mode of interest ( imf5 , around 200 hz ) for channel 11 in ca1 of eeg recording of a rat over a 2-month period .the all - blue region indicates corrupted files .each file is a 7 hours recording at the sampling frequency khz .thus the vertical axis `` file # '' indicates time .the distribution is calculated and then normalized by the maximum value for each file .the rat underwent surgery between file 28 and file 29 , and the first seizure occurred in file 99 , as indicated by the red arrows .the comb - like structure indicates the circadian periodicity .( b ) normalized distribution of the frequency of the mode . ] second segment of normalized eeg data containing an hfo and a population spike .( b)-(e ) are the imfs in the frequency range of interest .the hfo is revealed in imf 2 and the population spike is revealed in imf 3 and imf 4 . ]periods ( indicated by the blue dashed boxes ) .the time step for the moving window is .( b ) for each imf , we locate the on - intervals , find hfos , and combine adjacent hfos if they are too close to each other .the blue dashed line is the threshold chosen for the segment of the amplitude function .( c ) classifying hfos in terms of their frequencies , e.g. , ripples ( solid blue triangles ) , fast - ripples ( open magenta triangles ) , and then combining overlapping hfos across different imfs , as shown in the blue dashed box . ] . *normalized distribution of the amplitude for files 1 - 28 for the same mode as in fig .[ fig : distribution ] , where is the value of amplitude at the peak of the distribution . for example , by setting and assuming that , can be determined to be 61 .if takes a smaller value , then will be larger . ] of imf 5 ( channel 11 , fig .[ fig : distribution ] ) : ( a - e ) for files 1 - 28 , 29 - 51 , 52 - 94 , 100 - 172 , 175 - 223 respectively .the numbers of on - intervals are 344310 , 314698 , 431674 , 498947 , and 510096 for ( a - e ) , respectively .an algebraic distribution is observed with different exponents for different segments .the exponent for the solid , dotted , and dash - dotted lines are -3.7 , -4.5 , and -5.5 , respectively .the threshold is chosen such that for all the segments . 
of imf 5 ( channel 6 of rat 9 ) : ( a - f ) for files 1 - 19 , 32 - 57 , 63 - 72 , 78 - 97 , 98 - 118 , and 119 - 149 , corresponding to the pre - stimulation state , post - stimulation state , evolving towards seizure , status epilepticus phase , epilepsy latent period , and spontaneous / recurrent seizure period , respectively . the numbers of on - intervals are 354499 , 561669 , 300291 , 458649 , 293118 , and 438919 for ( a - f ) , respectively . an algebraic distribution is observed with different exponents for different segments , where the exponents are , , for the dash - dotted line , solid line , and dotted line , respectively . the criterion for choosing the threshold is the same as in fig . [ fig : onoff ]
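as a rough indication of how an algebraic exponent such as those quoted in the captions above could be extracted from a set of on - interval lengths , the following python sketch fits a straight line to a log - log histogram ; this is purely illustrative , the binning choices are arbitrary , and maximum - likelihood estimators would normally be preferable .

```python
import numpy as np

def algebraic_exponent(durations, bins=30):
    """Crude estimate of the exponent of an algebraic (power-law) distribution
    of on-interval lengths: least-squares slope of a log-log histogram.
    `durations` must be positive."""
    d = np.asarray(durations, dtype=float)
    edges = np.logspace(np.log10(d.min()), np.log10(d.max()), bins + 1)
    counts, _ = np.histogram(d, bins=edges, density=True)
    centers = np.sqrt(edges[:-1] * edges[1:])
    keep = counts > 0
    slope, _ = np.polyfit(np.log10(centers[keep]), np.log10(counts[keep]), 1)
    return float(slope)
```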
|
we develop a framework to uncover and analyze dynamical anomalies from massive , nonlinear and non - stationary time series data . the framework consists of three steps : preprocessing of massive data sets to eliminate erroneous data segments , application of the empirical mode decomposition and hilbert transform paradigm to obtain the fundamental components embedded in the time series at distinct time scales , and statistical / scaling analysis of the components . as a case study , we apply our framework to detecting and characterizing high frequency oscillations ( hfos ) from a big database of rat eeg recordings . we find a striking phenomenon : hfos exhibit on - off intermittency that can be quantified by algebraic scaling laws . our framework can be generalized to big data - related problems in other fields such as large - scale sensor data and seismic data analysis .
|
we consider the following stochastic pde , in , \,dt\\ & & \qquad { } + h_t(x , u_t(x),\nabla u_t(x))\cdot\overleftarrow { db}_t = 0,\nonumber\end{aligned}\ ] ] over the time interval ] is a given function such that , we may roughly say that the solution of the obstacle problem for ( [ lpde1 ] ) is a function ; h^1 ( \mathbb{r}^d ) ) ] .then the obstacle problem for the equation ( [ spde1 ] ) is defined as a pair , where is a random regular measure and ; h^1 ( \mathbb r^d ) ) ] and the norm for a function \times\mathbb{{r}}^d ) ] consisting of all functions such that is continuous in .the natural norm on is the lebesgue measure in will be sometimes denoted by .the space of test functions which we employ in the definition of weak solutions of the evolution equations ( [ spde1 ] ) or ( [ lpde1 ] ) is ) \otimes \mathcal{c}_c^{\infty } ( \mathbb{r}^d ) ] denotes the space of real functions which can be extended as infinite differentiable functions in the neighborhood of ] , is a backward local martingale under .let . if is such that < \infty ] . moreover , for each , and , one has = \mathbb{e}^m [ w^i_t ; a \cap b ] .\ ] ] we note that is uniformly distributed , and consequently for each , the set satisfies < \infty.\ ] ] this shows that the class of the sets to which applies the statement is rather large .the vector has the distribution , under the measure then one deduce that has the distribution and we may write , for , & = & \int _ { \mathbb{r}^d}\int_{\mathbb{r}^d } \varphi_1 ( y ) \varphi_2 ( x+y ) q_{t - s } ( y ) \,dy \,dx\\ & = & \biggl(\int_{\mathbb{r}^d } \varphi_2 ( x ) \,dx \biggr ) \biggl ( \int_{\mathbb{r}^d } \varphi_1 ( y ) q_{t - s } ( y)\,dy \biggr).\end{aligned}\ ] ] this relation shows that the vector has the distribution , under .then the obvious inequality latexmath:[ ] and is a solution of the deterministic equation ( [ lpde1 ] ) .let us denote by then one has the following representation ( theorem 3.2 in ) .the following relation holds -a.s . for each : in , one uses the backward martingale defined under an arbitrary , with a probability measure in , in order to express the integral .though formally the definition looks different , one easily sees that it is the same object . in this section, we shall be concerned with some facts related to the time space brownian motion , with the state space corresponding to the generator its associated semigroup will be denoted by we may express it in terms of the gaussian density of the semigroup in the following way : where is a bounded borel measurable function , and so we may also write if . the corresponding resolvent has a density expressed in terms of the density too , as follows : or in particular , this ensures that the excessive functions with respect to the time space brownian motion are lower semicontinuous .in fact , we will not use directly the time space process , but only its semigroup and resolvent . for related facts concerning excessive functions ,the reader is referred to or .some further properties of this semigroup are presented in the next lemma .the semigroup acts as a strongly continuous semigroup of contractions on the spaces and obviously , it is enough to check the following relations : first , we note that for each function and one has this property is obvious for a function and then it is obtained by approximation for any function in then the relation easily follows . 
from it ,one deduces the strong continuity of on in order to prove the same property in the space , one should start with the relation which holds for each and then repeat , with obvious modifications , the previous reasoning .the next definition restricts our attention to potentials belonging to which is the class of potentials appearing in our parabolic case of the obstacle problem .[ potential ] ( i ) a function \times\mathbb{r}^{d}\rightarrow\overline{\mathbb{r}} ] such that is finite and continuous on and \mbox { s.t . }( t , w_{t } ( \omega ) ) \in d_{\varepsilon } \ } \bigr ) < \varepsilon.\ ] ] \(ii ) a function \times\mathbb{r}^{d}\rightarrow [ 0,\infty ] ] is continuous .next , we will present the basic properties of the regular potentials .do to the expression of the semigroup in terms of the density , it follows that two excessive functions which represent the same element in should coincide .[ potentielregulier ] let then has a version which is a regular potential if and only if there exists a continuous increasing process }]-adapted and such that , < \infty ] the process is uniquely determined by these properties .moreover , the following relations hold : for each test function where is the measure defined by \times\mathbb{r}^{d } ) . \ ] ] we first remark that the uniqueness of the increasing process in the representation ( i ) follows from the uniqueness in the doob meyer decomposition .let us now assume that is a regular potential which is a version of we will use an approximation of constructed with the resolvent . by the resolvent equation , onehas let us set and since is excessive , one has and is an increasing sequence of excessive functions with limit in fact are potentials and their trajectories are continuous .on the other hand , the trajectories are continuous on by the quasi - continuity of the process is a super - martingale , and because in , it is a potential and the trajectories have null limits at .therefore , this approximation also holds uniformly on the trajectories , on the closed interval , ] the martingales given by the conditional expectations , m_{t}=\mathbb{e}^{m } [ a_{t}/\mathcal{f}_{t } ] . ] the inequality ensures the conditions to pass to the limit and get passing to the limit in the relations ( [ ast ] ) and ( [ astast ] ) one deduces the relations ( i ) , ( ii ) and ( iii ) . in order to check the relation ( iv ) from the statement, we observe that the relation is fulfilled by the functions where is arbitrary in in order to get the relation ( iv ) , it would suffice to pass to the limit with in this relation .the only term which poses problems is the last one .the uniform convergence on the trajectories implies that , -a.s . , the measures weakly converge to therefore , one has on the other hand , one has by it s formula and doob s inequality, one has the preceding estimate ensures the possibility of passing to the limit and deducing that and thus we obtain the relation ( iv ) .let us now consider the converse .assume that and is a continuous increasing process adapted to } ] has a continuous version in \times\mathbb{r}^{d} ] is a cdlg supermartingale , and more precisely a potential . 
by the relation ( i ) , this process admits a continuous version .it follows that itself is continuous and , as a consequence , one has the following convergence , uniformly on the trajectories: on the other hand , by the representation ( i ) one has which leads to this relation implies that is quasicontinuous , and hence it is a regular potential , completing the proof . it is known in the probabilistic potential theory that the regular potentials are associated to continous additive functionals ( see , section iv.3 or , theorem 5.4.2 ) . in the above theorem , the additive aspect is not evident .in fact , it is hidden in the relation ( i ) of theorem [ potentielregulier ] .this relation implies that , for is measurable with respect to the completion of ) . ] a natural question now is whether one radon measure on \times\mathbb{r}^{d} ] such that relation holds .then one has for each and . ] clearly , it is sufficient to prove the lemma for such that then we set for ] and let be a decreasing function such that on the interval ] is called regular provided that there exists a regular potential such that the relation ( iv ) from the above theorem is satisfied . as a consequence of the preceding lemma, we see that the regular measures are always represented as in the relation ( v ) of the theorem , with a certain increasing process .we also note the following properties of a regular measure , with the notation from the theorem . 1 .a set \times\mathbb{r}^{d } ) ] is polar , in the sense that , ( t , w_{t } ( \omega ) ) \in b \ }\bigr ) = 0,\ ] ] then 3 . if \times\mathbb{r}^{d}\rightarrow\overline{\mathbb{r}} ] , are a.s .continuous , then one has be a standard -dimensional brownian motion on a probability space .so takes values in . over the time interval ] where is the completion in of .we denote by the space of -valued predictable and -adapted processes such that the trajectories are in a.s . 
and in the remainder of this paper , we assume that the final condition is a given function in and the functions appearing in equation ( [ spde1 ] ) , \times\omega\times\mathbb{r}^d \times\mathbb{r}\times \mathbb{r}^{d } \rightarrow\mathbb{r } , \\ & & g= ( g_1,\ldots , g_d ) \dvtx [ 0,t]\times\omega\times\mathbb{r}^d \times \mathbb{r}\times\mathbb{r}^{d } \rightarrow\mathbb{r}^{d } , \\ & & h= ( h_1,\ldots , h_{d^1 } ) \dvtx [ 0,t]\times\omega\times\mathbb{r}^d \times\mathbb{r}\times\mathbb{r}^{d } \rightarrow\mathbb{r}^{d^1},\end{aligned}\ ] ] are random functions predictable with respect to the backward filtration} ] and satisfies we recall that a usual solution ( nonreflected one ) of the equation ( [ spde1 ] ) with final condition , is a processus such that for each test function and any ] and the solutions satisfying the equation\ , dt+\widehat{h}_{t } ( \widehat{u}_{t},\nabla\widehat{u}_{t } ) \cdot \overleftarrow{d\widehat{b}}_{t}=0,\ ] ] over the interval , ] this can be checked just by direct calculations using the above definition of a solution .moreover , if one writes in the form where is a matrix with the entries , then one has in the sense of the order induced by the cone of nonnegative definite matrices .this implies that one has for any then it easy to deduce that fulfils condition ( iii ) of assumption [ assh ] with a constant on the other hand , one can see that satisfies condition ( ii ) with so that the condition ensures which is condition ( iv ) of our assumption [ assh ] .therefore , we conclude that our framework covers the case of an equation that involves an elliptic operator like because the properties of the solution are immediately obtained from those of the solution in this section , we are going to prove the quasi - continuity of the solution of the linear equation , that is , when do not depend of and . to this end, we first extend the double stochastic it s formula to our framework .we start by recalling the following result from ( stated for linear spde ) .[ fk ] let be a solution of the equation where are predictable processes such that \,dt < \infty\quad\mbox{and}\quad \|\phi\|_2 ^ 2 < \infty.\ ] ] then , for any , one has the following stochastic representation , -a.s . , \\[-8pt ] & & { } - \frac{1}{2}\int_{s}^{t}g*dw -\int_{s}^{t}h_r ( w_{r } ) \cdot\overleftarrow{db}_r.\nonumber\end{aligned}\ ] ] we remark that and are independent under and therefore in the above formula the stochastic integrals with respect to and act independently of and similarly the integral with respect to acts independently of . in particular , the process } ] and we introduce the notation . 
as a consequence of this theorem , we have the following result .[ fk2 ] under the hypothesis of the preceding theorem , one has the following stochastic representation for , -a.e ., for any , \ , ds \nonumber\\[-8pt]\\[-8pt ] & & { } + \int_{t}^{t } ( u_r g_r ) ( w_r ) * dw_r - 2 \sum_{i}\int_{t}^{t } ( u_r \partial_{i}u_r ) ( w_{r } ) \,dw_{r}^{i}\nonumber\\ & & { } + 2 \int_{t}^{t } ( u_r h_r ) ( w_{r } ) \cdot\overleftarrow { db}_r.\nonumber\end{aligned}\ ] ] moreover , one has the estimate \nonumber\\[-8pt]\\[-8pt ] & & \qquad \leq c \biggl [ \|\phi\|_2 ^ 2 + \mathbb { e } \int_t^t [ \|f_s\|_2 ^ 2 + \|g_s\|_2 ^ 2 + \|h_s\|_2 ^ 2 ] \,ds \biggr]\nonumber\end{aligned}\ ] ] for each ] which is a quasicontinuous version of , in the sense that for each there exits a predictable random set \times\omega\times\mathbb{r}^d ] has continuous trajectories , -a.s .let us choose with , so that the sobolev space is continuously imbedded in the space of hlder continuous functions , with - \frac{d}{2} ] . by theorem 8 in , applied with respect to the hilbert space , one deduces that the solution has the trajectories continuous in which implies that they are in \times \mathbb r^d) ] with values in the set of regular measures on \times\mathbb{r}^d ] -predictable random set \times\omega\times\mathbb{r}^{d } ] has continuous trajectories , -a.s .there exists a continuous increasing process } ] , for any , and such that the following relations are fulfilled a.s ., with any and ] , , , .the proof of this proposition results from the approximation procedure used in the proof of theorem [ potentielregulier ] .let .the process } ] , \ , ds - ( \phi , \varphi_t ) + ( u_t , \varphi_t ) \nonumber\\ & & \qquad = \int_{t}^{t } [ ( f_s ( u_{s},\nabla u_s ) , \varphi_s ) - ( g_s ( u_{s},\nabla u_s ) , \nabla\varphi_s ) ] \,ds \\ & & \quad\qquad { } + \int_{t}^{t } ( h_s ( u_{s},\nabla u_s ) , \varphi_s ) \cdot \overleftarrow{db}_{s } + \int_{t}^{t}\int_{\mathbb{r}^d}\varphi_s(x ) \nu(ds , dx),\nonumber\end{aligned}\ ] ] if is a quasicontinuous version of then one has we note that a given solution can be written as a sum where satisfies a linear equation with determined by , while is the random regular potential corresponding to the measure . by propositions [ quasicontinuit : edps ] and [ quasicontinuit : bis ] ,the conditions ( ii ) and ( iii ) imply that the process always admits a quasicontinuous version , so that the condition ( iv ) makes sense .we also note that if is a quasicontinuous version of , then the trajectories of do not visit the set , -a.s .here is the main result of our paper .[ maintheorem ] assume that the assumptions [ assh ] , [ asshd2 ] and [ assho ] hold. then there exists a unique weak solution of the obstacle problem for the spde ( [ spde1 ] ) associated to . in order to solve the problem, we will use the backward stochastic differential equation technics .in fact , we shall follow the main steps of the second proof in , based on the penalization procedure .the uniqueness assertion of theorem [ maintheorem ] results from the following comparison result .[ comparaison ] let be similar to and let be the solution of the obstacle problem corresponding to and the solution corresponding to assume that the following conditions hold : , -a.e . , -a.e . , -a.e .then one has , -a.e . 
the proof is identical to that of the similar result of el karoui et al .( , theorem 4.1 ) .one starts with the following version of it s formula , written with some quasicontinuous versions of the solutions in the term involving the regular measures we remark that the inclusion and the fact that the set is not visited by , imply that , a.s .therefore , and then one concludes the proof by gronwall s lemma . for ,let be a solution of the following spde with final condition .now set and .clearly for each , is lipschitz continuous in uniformly in with lipschitz coefficient .for each , theorem 8 in ensures the existence and uniqueness of a weak solution of the spde ( [ spde : n ] ) associated with the data .we denote by , and .we shall also assume that is quasi - continuous , so that is -a.e .then solves the bsde associated to the data \\[-8pt ] & & { } + \int_{t}^{t}h_r ( w_{r},y^n_{r},z^n_{r } ) \cdot\overleftarrow{db}_r\nonumber\\ & & { } -\sum_{i}\int _ { t}^{t}z^n_{i , r}\,dw_{r}^{i } .\nonumber\end{aligned}\ ] ] we define and establish the following lemmas .[ penalization : estimate1 ] the triple satisfies the following estimates where , are a positive constants and can be chosen small enough such that . by using it s formula ( [ ito : bsde ] ) for get using assumption [ assh ] and taking the expectation in the above equation under , we get +\gamma \mathbb{e}\mathbb{e}^m [ ( k_{t}^{n}-k_{t}^{n})^{2}],\end{aligned}\ ] ] where , are a arbitrary constants and is a constant which can be from line to line .we have used the inequality and then we have applied schwartz s inequality .we also have used the fact that the measure the forward backward integral as well the other stochastic integrals with respect to the brownian terms have null expectation under .finally , gronwall s lemma leads to the desired inequality.=1 & \leq & c ' [ \mathbb { e } \mathbb{e}^m | y_{t}^{n } | ^{2 } + \|\phi\|_2 ^ 2 ] \nonumber\\ & & { } + c_{\varepsilon } \biggl [ \mathbb { e } \mathbb{e}^m \int_{t}^{t } [ | y_{s}^{n } | ^{2 } + | z_{s}^{n } | ^{2 } ] \ , ds\\ & & \hspace*{25pt } { } + \mathbb{e }\int_{t}^{t } [ \|f_s^0\|_2^{2}+ \|g_s^0\|_2^{2 } + \| h_s^0\|_2^{2 } ] \ , ds \biggr].\nonumber\end{aligned}\ ] ] let now be the weak solutions of the following linear type equations with final condition set and . then by the estimate ( [ estimationyz ] ) , one has \,ds ] .it then follows from gronwall s lemma that \,dr \biggr].\end{aligned}\ ] ] coming back to the equation ( [ bsde : n ] ) and using bukholder gundy inequality and the last estimates , we get our statement . in order to prove the strong convergence of the sequence , we shall need the following result .[ essentiel ] = 0.\ ] ] let be the sequence of solutions of the penalized spde defined in ( [ spde : n ] ) . from lemma [ mainestimate ], it follows that the sequence is bounded in \times\omega\times\mathbb{r}^d ; \mathbb{r}^{1+d + d^1 } ) ] of the process } \lim_{k\to\infty } \sup_{0\leq t \leq t } \(c ) in the case where , , , the representation of is given by now the proof is similar to that of the preceding case .we treat only the second term in the last expression .we set .integration by parts formula gives on the other hand , the convergence implies that the backward martingale } ] in .the other terms in the above expression of may be handled similarly by integration by parts and taking into account corollary [ coefficientgn ] . 
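the displayed penalized equation introduced above was lost in extraction . purely as a reference point , a standard form of such a penalization scheme , written as our reconstruction ( assuming the second - order operator is $\tfrac{1}{2}\Delta$ , consistent with the gaussian semigroup used earlier , and the paper 's own conventions may differ ) , is :

```latex
% penalized SPDE (our reconstruction of the standard scheme, not the paper's display)
du^n_t(x) + \Big[\tfrac{1}{2}\Delta u^n_t(x)
   + f_t\big(x,u^n_t(x),\nabla u^n_t(x)\big)
   + \operatorname{div}\, g_t\big(x,u^n_t(x),\nabla u^n_t(x)\big)
   + n\big(u^n_t(x)-S_t(x)\big)^{-}\Big]\,dt
   + h_t\big(x,u^n_t(x),\nabla u^n_t(x)\big)\cdot \overleftarrow{dB}_t = 0,
\qquad u^n_T=\phi .
```

under this reading , the penalization term generates the approximating measures $\nu^n(dt,dx)=n\,(u^n_t(x)-S_t(x))^{-}\,dt\,dx$ , whose weak limit is the regular measure of the obstacle problem .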
using again lemma [ mc ] , as in the preceding case, we get the relation ( [ ychapeau ] ) in the form -a.s .\(d ) in the case where , , , the representation of is given by on account of lemma [ coefficienthn ] , the same arguments used in the previous cases work again .now it is easy to see that the relation ( [ ychapeau ] ) holds for the general case . on the other hand ,( [ monotone ] ) and ( [ ychapeau ] ) clearly imply the relation and then , since is bounded in , one gets the relation of our statement .we have also the following result . [convergence : yzk ] there exists a progressively measurable triple of processes } ] satisfies ] , -a.s .thus , there exists a predictable real valued process } ] a.s . and by lemma [ mainestimate ] and fatou s lemma , one gets moreover , from the dominated convergence theorem one has the relation ( [ ito : bsde ] ) gives , for , \,ds \nonumber\\ & & \quad\qquad { } + 2 \int_t^t ( y_s^n - y_s^p ) \,d ( k_s^n - k_s^p ) \\ & & \quad\qquad { } - 2 \int_t^t \langle z_s^n - z_s^p , g_s ( w_s , y_s^n , z_s^n ) - g_s ( w_s , y_s^p , z_s^p ) \rangle\,ds \nonumber\\ & & \quad\qquad { } + \int_t^{t } ( y_s^n - y_s^p ) [ g_s ( x_s , y_s^n , z_s^n ) - g_s ( w_s , y_s^p , z_s^p ) ] * dw\nonumber \\ & & \quad\qquad { } - 2 \sum_{i}\int_{t}^{t } ( y_s^n - y_s^p ) ( z_{i , s}^n - z_{i , s}^p ) \,dw_{s}^{i}\nonumber\\ & & \quad\qquad { } + 2 \int_{t}^{t } ( y_s^n - y_s^p ) [ h_s(w_s , y_s^n , z_s^n ) - h_s(w_s , y_s^p , z_s^p ) ] \cdot\overleftarrow{db}_s \nonumber\\ & & \quad\qquad { } + \int_t^t |h_s(w_s , y_s^n , z_s^n ) - h_s(w_s , y_s^p , z_s^p)|^2 \,ds .\nonumber\end{aligned}\ ] ] by standard calculation , one deduces that therefore from lemma [ essentiel ] , ( [ convergence : y ] ) and ( [ cauchy : z ] ) one gets \\[-8pt ] & & \qquad { } + \mathbb{e } \mathbb{e}^m \int_0^t |z_t^n - z_t^p|^2 \,dt \longrightarrow0 \qquad \mbox{as } n , p \to\infty.\nonumber\end{aligned}\ ] ] the rest of the proof is the same as in el karoui et al .( , pages 721722 ) , in particular we get that there exists a pair of progressively measurable processes with values in such that \longrightarrow0 \qquad \mbox{as } n \to\infty.\nonumber\end{aligned}\ ] ] it is obvious that } ] , then , -a.s . , ,\ ] ] which yields that .finally , we also have since on the other hand the sequences and converge uniformly ( at least for a subsequence ) , respectively , to and and as a consequence of the last proof , we obtain the following generalization of the rbsde introduced in .[ rbdsde : definition ] the limiting triple of processes } ] , } ] and hence has a limit in this space . also from the preceding lemma , it follows that weakly converges to , -a.e .this implies that where is the regular measure defined by writing the equation ( [ spde : n ] ) in the weak form and passing to the limit one obtains the equation ( [ weak : rspde ] ) with and this .the arguments we have explained after definition [ o - spde ] ensure that admits a quasicontinuous version .then one deduces that } ] , -a.e .therefore , the inequality implies , -a.e . 
and the relation implies the relation ( iv ) of definition [ o - spde ][ coefficientf ] let \times\mathbb{r}^d ; \mathbb{r } ) ] be such that .then the solutions of the equations with final condition , satisfy the relation .[ coefficientgn ] let \times\mathbb { r}^d ; \mathbb { r}^d ) ] .then and is in \times\mathbb { r}^d ; \mathbb{r } ) ] with respect to and such that and let be the solutions of the equations \ , dt + h_t^n \cdot \overleftarrow{db}_t = 0,\ ] ] with final condition , for each .then one has we regularize the process by setting for , ] be a function such that the process } ] with continuous trajectories on ] .let be the solution of the equation with the terminal condition .let } ] , for each .then the following holds : = 0.\ ] ] let us set and observe that this function is a solution of the equation with and terminal condition . writing the representation of theorem [ fk ] with for , one obtains and this leads to the representation of our process given by .\ ] ] then one has .\ ] ]let us denote by obviously , one has . on the other hand, one has for any fixed , this follows from lemma [ mc ] . from the inequality ( [ vn ] ) , one deduces that -a.s . , and hence from the dominated convergence theorem , one gets ^ 2= 0. ] and , . then one has and the first inequality follows from the relation . in order to check the second relation , one dominates the expression of the left - hand side by and then apply the first relation to dominate the first term .the next lemma is a classical result in convex analysis , known as mazur s theorem ( see , remark 5 , page 38 ) . we state here the result with some notation that is useful for our proof .let be a banach space and a sequence of elements in .we call finite family of coefficients of a convex combination a family where is a finite subset of , for each and .the convex combination that corresponds to such a family of coefficients is the point expressed in terms of our sequence by .[ mazur ] let be a weakly convergent sequence of elements in with limit . then there exits a sequence of families of coefficients of convex combinations , , such that the corresponding convex combinations , converge strongly to
|
we prove an existence and uniqueness result for the obstacle problem of quasilinear parabolic stochastic pdes. the method is based on the probabilistic interpretation of the solution through backward doubly stochastic differential equations.
|
there have been many successful attempts to extend some crucial ideas and techniques of statistical mechanics to plenty of different fields , including several economic topics .the most studied subject in this context is the financial risk , related to the fluctuations of the prices of stocks and other products ( see ref . ) , while only more recently new kinds of risks like credit risk and operational risk have been investigated . in particular the rise of interest in operational risk has started after that the new basel capital accord , also known as basel ii , has prescribed banks to cope with it .operational risk is defined by basel ii as `` the risk of [ money ] loss [ in banks ] resulting from inadequate or failed internal processes , people and systems or from external events '' . in this contextthe main goal is to determine the _ capital requirement _ , i. e. the capital that a bank has to put aside every year to cover operational losses .the capital requirement is usually identified with the value - at - risk ( var ) over the time horizon of one year with level of confidence , defined as the percentile of the yearly loss distribution , meaning that a loss larger than the var occurs with probability in one year .perhaps the most widespread approach to operational risk is the loss distribution approach ( lda ) . in the context of the lda lossesare classified by the business line in which the loss occurs and by its cause in couples ; the loss distribution of each couple is fitted from data of historical losses assuming that no correlations exist between the losses occurred in different couples .however it is easy to provide an example to show that such an hypothesis is not realistic ; let us suppose that a failure occurs in the transaction control system at the time and repaired at the time , generating a loss equal to the cost of reparation ; however from the time to the time some transactions may fail or may be wrongly authorized , resulting in other losses .the example shows that a crucial mechanism for the generation of losses is given by interactions which are non local in time ( the time interval $ ] may last months ) and non symmetrical ( a failed transaction does not cause a failure in the transaction control system ) .there are several proposal to include the correlations among different couples in the lda ( see refs . ) , but no one has reached a general consensus .moreover the lda is limited to give a static and purely statistical description of the losses , independent from the dynamical mechanisms behind their generation .the approach presented in this contribution is based on a totally different framework : the bank is regarded as a dynamical system whose degrees of freedom are variables representing the losses occurred in different processes ( that can be thought as abstractions of the couples of the lda ) whose state is updated according to an equation of motion that includes several mechanisms for the generation of losses .the model can be considered a generalization of the one introduced in refs . and consists of positive real variables , , representing the amount of the monetary loss occurred at the time in the -th process .the reason for defining positive is that the databases of operational losses collected by banks have only positive entries : in other words the observable quantity is intrinsically positive . 
in the context of operational riskthe most important quantity is the cumulative loss up to the time : , since it can be taken as a measure of the capital requirement over the time horizon .the values of the variables are updated according to a discrete time equation of motion that includes two different mechanisms for the generation of losses : spontaneous generation via a noise term and interaction with other processes ; the possibility that the banks invest a fixed amount of money for unit of time to keep a process working is also taken into account .the equation of motion is : \ ; + \ ; \theta_i \ ; + \ ; \xi_i(t ) \right ] , \ ] ] where for and it is equal to zero elsewhere , while for and it is equal to zero elsewhere ; the ramp function ensures that stays positive at all the times . as it can be seen from eq .( [ eq : motion ] ) , the value of depends on the interplay among the terms of the argument of the ramp function : the positive terms tend to generate a loss , while the negative terms tend to avoid the occurrence of a loss .the first term accounts for _ potential _ losses generated from the interaction with other processes and it is build in the following way : if , each loss occurred between the time steps and in the -th process generates a _ potential _ loss of amount in the -th process at time ; measures how much non - local in time the coupling between the -th and the -th process is ; in general both and are not symmetrical .the noise term accounts for the spontaneous generation of losses ( like those generated by failures or human errors ) and thus must have a positive support ; it is -correlated in time , does not depend on time and its distribution is exponential : ; provided that the variance of is finite , the qualitative results do not depend on its distribution and the quantitative results in ref . can be easily extended .if , it can be interpreted as the amount of money per unit of time invested on the -th process to keep it running : in fact the sum of the the interaction term and the noise has to be greater than a threshold equal to to effectively generate a loss . in ref . it is shown that the model can be exactly solved , in the sense that all the moments of the distribution of can be calculated , provided that the matrix of couplings satisfies the following hypothesis .let us associate to each process a node in a graph and , if , i. e. if the state of the -th process is influenced by the state of the -th process , let us draw a directed edge starting from the -th node and ending to -th node ; if such graph has no loops , i. e. if it is a directed acyclic graph ( see ref . for some basic definitions about graphs ) , then the matrix of couplings is said to have no casual loops , and all the moments of can be calculated . if has no causal loops it is also true that is the sum of independent and identically distributed variables of finite variance and thus , via the central limit theorem , the asymptotic distribution of is gaussian with and is one of the results still holding for any distribution of , provided that its variance is finite .in ref . it is shown that , in principle , some of the model parameters can be estimated from real data , i. e. from a database of operational losses . such a database is a collection of the past losses registered inside the bank and , in order to be suitable for the estimation of the model parameters , for each loss both the process in which it has occurred and the time at which it has occurred must have been recorded . 
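the dynamics just described can be sketched in a few lines of code . the snippet below is an illustration of the mechanism , not the authors ' implementation : the interaction kernel `memory` , the control parameters `theta` , the noise scale and the precise weighting of past losses are assumptions of ours , transcribed from the verbal description ( interaction term plus control plus exponential noise , passed through a ramp function ) .

```python
import numpy as np

def simulate_losses(J, memory, theta, noise_scale, T, rng=None):
    """Illustrative sketch of the loss dynamics described above.

    J[i, j]      -- coupling: a loss in process j can induce a loss in i.
    memory[i, j] -- number of past time steps over which j influences i
                    (how non-local in time the coupling is).
    theta[i]     -- control/investment term; typically negative, so it
                    acts as a threshold the other terms must overcome.
    noise_scale  -- mean of the exponential spontaneous-loss noise.
    """
    rng = np.random.default_rng() if rng is None else rng
    N = len(theta)
    u = np.zeros((T, N))          # u[t, i]: loss in process i at time t
    for t in range(1, T):
        xi = rng.exponential(noise_scale, size=N)   # spontaneous generation
        interaction = np.zeros(N)
        for i in range(N):
            for j in range(N):
                lag = int(memory[i, j])
                if J[i, j] != 0.0 and lag > 0:
                    # potential losses induced by recent losses in process j
                    interaction[i] += J[i, j] * u[max(0, t - lag):t, j].sum()
        # ramp function keeps losses non-negative
        u[t] = np.maximum(0.0, interaction + theta + xi)
    return u, u.cumsum(axis=0)    # losses and cumulative losses

# made-up example: three processes, short memories, no causal loops
J = np.array([[0, .1, 0], [0, 0, .2], [0, 0, 0]])
memory = np.array([[0, 3, 0], [0, 0, 5], [0, 0, 0]])
losses, Z = simulate_losses(J, memory, theta=np.array([-1., -1., -1.]),
                            noise_scale=0.8, T=1000)
```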
in the estimation procedure, the inverse of the frequency with which the losses have been recorded in the database is taken as the length of a time step of the model, so that the database of historical losses can be interpreted as a realization of eq. . there are two possible approaches to the estimation, and in both of them the matrix must be known; in the first, must also be known and and are estimated; in the second, must have no causal loops and, in addition to and to the non-zero elements of, the exact solution is exploited to estimate also; it has to be stressed that knowledge of the graph associated with implies knowledge only of which elements of are equal to zero. let us point out that some constraints on the possible values of the estimated parameters exist: for both estimation approaches, must be negative, which is precisely the case we are interested in. additional bounds on the values of the elements of exist (see ref. ) and their interpretation is that the control exerted by the bank on the processes via is so strong that the interactions alone (without the noise) are not sufficient to generate a loss. the forecasting power of the model is investigated using a simulated database of operational losses; the first step is to generate a trajectory (which will be called the original trajectory) of time steps from eq. ([eq:motion]), to interpret it as a database of operational losses and to estimate the parameters only from the first time steps ( ); the second step is to use the estimated parameters to calculate and by means of the exact solution (in the case in which has no causal loops) or by sampling a large number of trajectories from eq. ([eq:motion]), and to compare the result with the original trajectory, including the part not used to estimate the parameters. if , the procedure reduces to a validation test for the estimation of the parameters. the whole procedure is carried out in ref. for , using the first approach to the estimation of the parameters: we suggest consulting that reference for all the details, including the parameters used to generate the original trajectory, which have been chosen to be compatible with the bounds illustrated in section [sec:estimation]. here we present only the results relative to the processes and discussed in ref. , the processes with the most complicated interactions. in fig. [fig:cumul] it is shown that the cumulative loss relative to the original trajectory is indistinguishable from within an error smaller than , both for and for , showing that the model has a remarkable forecasting power. analogous results are obtained for the other processes. [figure: the average of obtained by estimating the parameters from the original trajectory, for (dashed line) and (dash-dotted line); the limits of the semi-transparent regions are , for (dark grey) and (light grey); is reproduced with an uncertainty far smaller than and the error regions overlap almost completely.] as briefly discussed in section [sec:intro], the most widely used measure of the capital requirement is the var with level of confidence over the time horizon of one year.
for our model, the var over the time horizon is the percentile of the distribution of and, since the distribution of is gaussian, for large the var is approximately equal to ; once the link between the length of a time step and real time has been established, as pointed out in section [sec:estimation], the var over the desired time horizon can be calculated. however, in our case the estimation procedure has been carried out with simulated data and, since no real time scale is available, it is reasonable to calculate the var over the time horizon . in ref. it is shown that the relative error between the vars over the time horizon relative to and is for all the processes, showing that the capital requirement, too, can be reliably forecast.
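since the cumulative loss is asymptotically gaussian in this setting , the var reduces to a percentile of a normal distribution . the sketch below makes the computation explicit ; the moments are placeholders to be supplied from the exact solution or from sampled trajectories , and the 99.9% default simply mirrors the usual regulatory choice .

```python
from scipy.stats import norm

def gaussian_var(mean_loss, std_loss, confidence=0.999):
    """Value-at-Risk of a Gaussian cumulative loss: the percentile of
    the loss distribution at the given confidence level (sketch)."""
    return mean_loss + norm.ppf(confidence) * std_loss

# example with made-up numbers for the mean and standard deviation
capital_requirement = gaussian_var(mean_loss=1.0e6, std_loss=2.5e5)
```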
|
operational risk is the risk of monetary losses caused by failures of internal bank processes due to heterogeneous causes. a dynamical model including both spontaneous generation of losses and generation via interactions between different processes is presented; the effort made by the bank to avoid the occurrence of losses is also taken into account. under certain hypotheses, the model can be solved exactly and, in principle, the solution can be exploited to estimate most of the model parameters from real data. the forecasting power of the model is also investigated and shown to be remarkable.
|
branching networks are an important category of all networks with river networks being a paradigmatic example . probably as much as any other natural phenomena , river networks are a rich source of scaling laws .central quantities such as drainage basin area and stream lengths are reported to closely obey power - law statistics .the origin of this scaling has been attributed to a variety of mechanisms including , among others : principles of optimality , self - organized criticality , invasion percolation , and random fluctuations .one of the difficulties in establishing any theory is that the reported values of scaling exponents show some variation . with this variation in mind ,we have in extensively examined hack s law , the scaling relationship between basin shape and stream length .such scaling laws are inherently broad - brushed in their descriptive content . in an effort to further improve comparisons between theory and data and , more importantly , between networks themselves , we consider here a generalization of horton s laws .defined fully in the following section , horton s laws describe how average values of network parameters change with a certain discrete renormalization of the network . the introduction of these laws by horton may be seen as one of many examples that presaged the theory of fractal geometry .in essence , they express the relative frequency and size of network components such as stream segments and drainage basins . here , we extend horton s laws to functional relationships between probability distributions rather than simply average values .the recent work of peckham and gupta was the first to address this natural generalization of horton s laws .our work agrees with their findings but goes further to characterize the distributions and develop theoretical links between the distributions of several different parameters .we also present empirical studies that reveal underlying scaling functions with a focus on fluctuations and further consider deviations due to finite - size effects .we examine continent - scale networks : the mississippi , amazon , congo , nile and kansas river basins . as in , we also examine scheidegger s model of directed , random networks . both real and model networks provide important tests and motivations for our generalizations of horton s laws .we begin with definitions of stream ordering and horton s laws .thereafter , the paper is divided into two main sections . in section [ sec : horton.postform ] , we first sketch the theoretical generalization of horton s laws . estimates of the horton ratios are carried out in section [ sec : horton.hortonratios ] and these provide basic parameters of the generalized laws . empirical evidence from real continent - scale networksis then provided along with data from scheidegger s random network model in section [ sec : horton.generalization ] . in section [ sec : horton.moments ] we derive the higher order moments for stream length distributions and in section [ sec : horton.devs ] , we consider deviations from horton s laws for large basins . in the appendix[ sec : horton.theory ] , we expand on some of the connections outlined in section [ sec : horton.generalization ] , presenting a number of mathematical considerations on these generalized horton distributions . this paper is the second in a series of three on the geometry of river networks . in the first address issues of scaling and universality and provide further motivation for our general investigation . 
in the third article of the series we extend the work of the present paper by examining how the detailed architecture of river networks , i.e. , how network components fit together .stream ordering was first introduced by horton in an effort to quantify the features of river networks .the method was later improved by strahler to give the present technique of horton - strahler stream ordering .stream ordering is a method applicable to any field where branching , hierarchical networks are important . indeed ,much use of stream ordering has been made outside of the context of river networks , a good example being the study of venous and arterial blood networks in biology .we describe two conceptions of the method and then discuss empirical laws defined with in the context of stream ordering .a network s constituent stream segments are ordered by an iterative pruning .an example of stream ordering for the mississippi basin is shown in figure [ fig : horton.order_paths_mispi10 ] .a source stream is defined as a section of stream that runs from a channel head to a junction with another stream ( for an arboreal analogy , think of the leaves of a tree ) .these source streams are classified as the first order stream segments of the network .next , remove these source streams and identify the new source streams of the remaining network .these are the second order stream segments .the process is repeated until one stream segment is left of order .the order of the network is then defined to be .once stream ordering on a network has been done , a number of natural quantities arise .these include , the number of basins ( or equivalently stream segments ) for a given order ; , the average main stream length ; , the average stream segment length ; , the average basin area ; and the variation in these numbers from order to order .horton and later schumm observed that the following ratios are generally independent of order : since the main stream length averages are combinations of stream segment lengths we have that the horton ratio for stream segment lengths is equivalent to . because our theory will start with the distributions of , we will generally use the ratio in place of .horton s laws have remained something of a mystery in geomorphology the study of earth surface processes and form due to their apparent robustness and hence perceived lack of physical ( or geological ) content .however , statements that horton s laws are `` statistically inevitable '' , while possibly true , have not yet been based on reasonable assumptions .furthermore , many other scaling laws can be shown to follow in part from horton s laws .thus , horton s laws being without content would imply the same is true for those scaling laws that follow from them .other sufficient assumptions include uniform drainage density ( i.e. 
, networks are space - filling ) and self - affinity of single channels .the latter can be expressed as the relation where is the longitudinal diameter of a basin .scaling relations may be derived and the set of relevant scaling exponents can be reduced to just two : as given above and the ratio .note that one obtains so that only the two horton ratios and are independent .horton ratios are thus of central importance in the full theory of scaling for river networks .horton s laws relate quantities which are indexed by a discrete set of numbers , namely the stream orders .they also algebraically relate mean quantities such as .hence we may consider a generalization to functional relationships between probability distributions .in other words , for stream lengths and drainage areas we can explore the relationships between probability distributions defined for each order . furthermore , as we have noted , horton s laws can be used to derive power laws of continuous variables such as the probability distributions of drainage area and main stream length : these derivations necessarily only give discrete points of power laws .in other words , the derivations give points as functions of the discrete stream order and are uniformly spaced logarithmically and we interpolate the power law from there . the distributions for stream lengths and areas must therefore have structures that when combined across orders produce smooth power laws .for the example of the stream segment length , horton s laws state that the mean grows by a factor of with each integer step in order . in considering , the underlying probability distribution function for , we postulate that horton s laws apply for every moment of the distribution and not just the mean .this generalization of horton s laws may be encapsulated in a statement about the distribution as the factor of indicates the that , i.e. , the frequency of stream segments of order decays according to horton s law of stream number given in equation ( [ eq : horton.hortonratios ] ) .similarly , for , and , we write and where constants , , and are appropriate normalizations .we have used the subscripted versions of the lengths and areas , , , and , to reinforce that these parameters are for points at the outlets of order basins only .the quantity is the number of streams of order within a basin of order .this will help with some notational issues later on .the form of the distribution functions , , and and their interrelationships become the focus of our investigations . since scaling is inherent in each of these postulated generalizations of horton s laws , we will often refer to these distribution functions as _ scaling functions_. we further postulate that distributions of stream segment lengths are best approximated by exponential distributions .empirical evidence for this will be provided later on in section [ sec : horton.generalization ] .the normalized scaling function of equation ( [ eq : horton.ellwfreq ] ) then has the form where we have introduced a new length scale and stated its appearance with the notation .the value of is potentially network dependent .as we will show , distributions of main stream lengths , areas and stream number are all dependent on and this is the only additional parameter necessary for their description . note that is both the mean and standard deviation of , i.e. , for exponential distributions , fluctuations of a variable are on the order of its mean value .we may therefore think of as a _ fluctuation length scale_. 
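the iterative pruning defined earlier ( removing source streams order by order ) is equivalent to a simple junction rule , which is the form most convenient to code . the sketch below assumes the network is stored as a dictionary mapping each node to its upstream children ; the data structure and the names are ours .

```python
def strahler_order(children, node):
    """Horton-Strahler order of `node` in a tree given as a dict mapping
    each node to the list of its upstream children.
    Sources (no children) have order 1; at a junction the order is the
    maximum child order, incremented when that maximum is shared."""
    kids = children.get(node, [])
    if not kids:
        return 1
    orders = sorted((strahler_order(children, c) for c in kids), reverse=True)
    if len(orders) >= 2 and orders[0] == orders[1]:
        return orders[0] + 1
    return orders[0]

# small example: node 0 is the outlet
children = {0: [1, 2], 1: [3, 4], 2: []}
print(strahler_order(children, 0))   # -> 2
```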
note that the presence of exponential distributions indicates a randomness in the physical distribution of streams themselves and this is largely the topic of our third paper . since main stream lengths are combinations of stream segment lengths , i.e. , we have that the distributions of main stream lengths of order basins are approximated by convolutions of the stream segment length distributions . for this step ,it is more appropriate to use conditional probabilities such as where the basin order is taken to be fixed .we thus write where denotes convolution .details of the form obtained are given in appendix [ subsec : horton.mainstreamdist ] .the next step takes us to the power law distribution for main stream lengths .summing over all stream orders and integrating over we have where we have returned to the joint probability for this calculation .the integral over is replaced by a sum when networks are considered on discrete lattices .note that the probability of finding a main stream of length is independent of any sort of stream ordering since it is defined on an unordered network .the details of this calculation may be found in appendix [ subsec : horton.powerlawdists ] where it is shown that a power law follows from the deduced form of the with .we now examine the usual horton s laws in order to estimate the horton ratios .these ratios are seen as intrinsic parameters in the probability distribution functions given above in equations ( [ eq : horton.ellwfreq ] ) , ( [ eq : horton.lwfreq ] ) , ( [ eq : horton.awfreq ] ) and ( [ eq : horton.nwfreq ] ) . .horton ratios for the mississippi river . for each range of orders , estimates of the ratios are obtained via simple regression analysis . for each quantity , a mean , standard deviation andnormalized deviation are calculated .all ranges with are used in these estimates but not all are shown the values obtained for are especially robust while some variation is observed for the estimates of and .good agreement is observed between the ratios and and also between and . [ cols="^,^,^,^,^,^,^ " , ]figure [ fig : horton.nalomega_mispi10](a ) shows the stream order averages of , , and for the mississippi basin .deviations from exponential trends of horton s laws are evident and indicated by deviations from straight lines on the semi - logarithmic axis .such deviations are to be expected for the smallest and largest orders within a basin . for the smallest orders , the scale of the grid used becomes an issue but even with infinite resolution , the scaling of lengths , areas and number for low orders can not all hold at the same time . 
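the convolution structure of the main stream length distributions is easy to probe by simulation . in the sketch below , the mean segment length of order grows geometrically with order ; the normalization convention ( a constant times a power of the length ratio ) and all numbers are illustrative choices of ours .

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_main_stream_lengths(order, R_ell=2.4, xi=400.0, n=100_000):
    """Monte Carlo for main stream lengths of basins of a given order:
    a main stream is the sum of one segment per order, each exponentially
    distributed with a mean growing geometrically in order
    (illustrative convention: mean of order-w segments = xi * R_ell**(w-1))."""
    segments = [rng.exponential(xi * R_ell ** (w - 1), size=n)
                for w in range(1, order + 1)]
    return np.sum(segments, axis=0)

l5 = sample_main_stream_lengths(5)
print(l5.mean(), l5.std())   # single-peaked distribution with an exponential tail
```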
for large orders ,the decrease in sample space contributes to these fluctuations since the number of samples of order streams decays exponentially with order as .furthermore , correlations with overall basin shape provide another source of deviations .nevertheless , in our theoretical investigations below we will presume exact scaling .note also that the equivalence of and is supported by figure [ fig : horton.nalomega_mispi10](b ) where the stream numbers have been inverted for comparison .similar agreement is found for the amazon and nile as shown in tables [ tab : horton.mispi10orders ] , [ tab : horton.amazonorders ] , and [ tab : horton.nileorders ] which we now discuss .table [ tab : horton.mispi10orders ] shows the results of regression on the mississippi data for various ranges of stream orders for stream number , area and lengths .tables [ tab : horton.amazonorders ] and [ tab : horton.nileorders ] show the same results carried out for the amazon and nile .each table presents estimates of the four ratios , , and .also included are the comparisons and , both of which we expect to be close to unity . for each quantity , we calculate the mean , standard deviation and normalized deviation .note the variation of exponents with choice of order range .this is the largest source of error in the calculation of the horton ratios .therefore , rather than taking a single range of stream orders for the regression , we examine a collection of ranges . also , the deviations for high and low orders observed in figures [ fig : horton.nalomega_mispi10](a ) and [ fig : horton.nalomega_mispi10](b ) do of course affect measurements of the horton ratios . in all cases, we have avoided using data for the smallest and largest orders .for the three example networks given here , the statements and are well supported .the majority of ranges give and very close to unity .the averages are also close to one and are different from unity mostly by within 1.0 and uniformly by within 1.5 standard deviations . the normalized deviations , ie ., , for the four ratios are all below .no systematic ordering of the is observed .of all the data , the values for in the case of the mississippi are the most notably uniform having . throughoutthere is a slight trend for regression on lower orders to overestimate and on higher orders to underestimate the average ratios , while reasonable consistency is found at intermediate orders .thus , overall the ranges chosen in the tables give a reasonably even set of estimates of the horton ratios and we will use these averages as our estimates of the ratios .we now present horton distributions for the mississippi , amazon , and nile river basins as well as the scheidegger model .scheidegger networks may be thought of as collections of random - walker streams and are fully defined in and extensively studied in .the forms of all distributions are observed to be the same in the real data and in the model . 
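the regression used in the tables amounts to a log - linear fit over a chosen range of orders . the snippet below is a generic version with made - up inputs ; it recovers the number ratio from stream counts , and the length or area ratios from the corresponding order averages in the same way .

```python
import numpy as np

def horton_ratio(orders, values, decreasing=False):
    """Fit log(values) linearly against order and return the Horton ratio.
    For stream numbers (which decrease with order) set decreasing=True so
    the ratio is exp(-slope); for mean lengths or areas it is exp(+slope)."""
    slope, _ = np.polyfit(np.asarray(orders), np.log(np.asarray(values)), 1)
    return np.exp(-slope) if decreasing else np.exp(slope)

# made-up example: stream numbers over orders 3..7
orders = [3, 4, 5, 6, 7]
n_omega = [11000, 2400, 520, 110, 24]
print(horton_ratio(orders, n_omega, decreasing=True))   # estimate of the number ratio
```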
the first distribution is shown in figure [ fig : horton.ellw_collapse_mispi2](a ) .this is the probability density function of , fourth order stream segment lengths , for the mississippi river .distributions for different orders can be rescaled to show satisfactory agreement .this is done using the postulated horton distribution of stream segment lengths given in equation ( [ eq : horton.ellwfreq ] ) .the rescaling is shown in figure [ fig : horton.ellw_collapse_mispi2](b ) and is for orders .note the effect of the exponential decrease in number of samples with order is evident for since is considerably scattered .nevertheless , the figure shows the form of these distributions to be most closely approximated by exponentials .we observe similar exponential distributions for the amazon , the nile and the scheidegger model .the fluctuation length scale is found to be approximately meters for the mississippi , meters for the amazon and meters for the nile . since is based on the definition of stream ordering , comparisons of are only sensible for networks that are measured on topographies with the same resolution .the above values of are approximate and our confidence in them would be improved with higher resolution data .nevertheless they do suggest that fluctuations in network structure increase as we move from the mississippi through to the nile and then the amazon .the distributions of main stream lengths for the amazon river is shown in figure [ fig : horton.lw_collapse_amazon2](a ) .since main stream lengths are sums of stream segment lengths , their distribution has a single peak away from the origin .however , these distributions will not tend towards a gaussian because the individual stream length distributions do not satisfy the requirements of the central limit theorem .this is because the moments of the stream segment length distributions grow exponentially with stream order .as the semi - logarithmic axes indicate , the tail may be reasonably well ( but not exactly ) modeled by exponentials .there is some variation in the distribution tails from region to region .for example , corresponding distributions for the mississippi data do exhibit tails that are closer to exponentials .however , for the present work where we are attempting to characterize the basic forms of the horton distributions , we consider these deviations to be of a higher order nature and belonging to the realm of further research . in accord with equation ( [ eq : horton.lwfreq ] ) , figure [ fig : horton.lw_collapse_amazon2](b ) shows the rescaling of the main stream length distributions for .the ratios used , and are taken from table [ tab : horton.amazonorders ] .given the scatter of the distributions , it is unreasonable to perform minimization techniques on the rescaled data itself in order to estimate and .this is best done by examining means , as we have done , and higher order moments which we discuss below .furthermore , varying and from the above values by , say , does not greatly distort the visual quality of the `` data collapse . ''similar results for the scheidegger model are shown in figure [ fig : horton.figsche_lengthomega ]. 
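the rescaling behind these `` data collapses '' is mechanical and can be sketched as follows ; the inputs ( samples of lengths per order and the ratio ) are placeholders , and the normalization convention is ours : if raw frequencies rather than normalized densities are collapsed , the ordinate carries an extra factor involving the number ratio .

```python
import numpy as np

def collapse(lengths_by_order, R_ell):
    """Rescale per-order (normalized) length distributions onto one curve:
    abscissa x = l / R_ell**w, ordinate y = R_ell**w * P_w(l).
    `lengths_by_order` maps order w -> array of sampled lengths."""
    curves = {}
    for w, samples in lengths_by_order.items():
        density, edges = np.histogram(samples, bins=50, density=True)
        centers = 0.5 * (edges[1:] + edges[:-1])
        curves[w] = (centers / R_ell ** w, density * R_ell ** w)
    return curves
```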
the scheidegger model may be thought of as a network defined on a triangular lattice where at each lattice site one of two directions is chosen as the stream path .figure [ fig : horton.figsche_lengthomega](a ) gives a single example distribution for main stream lengths of order basins .the tail is exponential as per the real world data .figure [ fig : horton.figsche_lengthomega](b ) shows a collapse of main stream length distributions for orders through .in contrast to the real data where an overall basin order is fixed ( ) , there is no maximum basin order here .the distributions in figure [ fig : horton.figsche_lengthomega](b ) have an arbitrary normalization meaning the absolute values of the ordinate are also arbitrary . otherwise , this is the same collapse as given in equation ( [ eq : horton.lwfreq ] ) . for the scheidegger model ,our simulations yield and .for all distributions , we observe similar functional forms for real networks and the scheidegger model , the only difference lying in parameters such as the horton ratios .figure [ fig : horton.aw_collapse_nile2 ] shows more horton distributions , this time for drainage area as calculated for the nile river basin . in figure [ fig : horton.aw_collapse_nile2 ] , an example distribution for sub - basins is presented .the distribution is similar in form to those of main stream lengths of figure [ fig : horton.lw_collapse_amazon2 ] , again showing a reasonably clear exponential tail .rescaled drainage area distributions for are presented in figure [ fig : horton.aw_collapse_nile2](b ) .the rescaling now follows equation ( [ eq : horton.awfreq ] ) .note that if and were not equivalent , the rescaling would be of the form since we have asserted that , equation ( [ eq : horton.awfreq2 ] ) reduces to equation ( [ eq : horton.awfreq ] ) .the horton ratio used here is which is in good agreement with , the respective standard deviations being and .both figures are taken from the data of table [ tab : horton.nileorders ] . as stated in section [ sec : horton.postform ] , the horton distributions of and must combine to form power law distributions for and ( see equations [ eq : horton.pa ] and [ eq : horton.l - lom ] ) .figure [ fig : lw_powerlawsum2_mispi ] provides empirical support for this observation for the example main stream lengths of the mississippi network .the distributions for , 4 and 5 main stream lengths are individually shown .their combination together with the distribution of gives the reasonable approximation of a power law as shown .the area distributions combine in the same way .note that the distributions do not greatly overlap .each point of the power law is therefore the addition of significant contributions from only two or three of the separate distributions .the challenge here then is to understand how rescaled versions of , being the basic form of the , fit together in such a clean fashion .the details of this connection are established in appendix [ subsec : horton.powerlawdists ] . in considering the generalized horton distributions for number and area ,we observe two main points : a calculation in the vein of what we are able to do for main stream lengths is difficult ; and , the horton distributions for area and number are equivalent . 
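the scheidegger model described at the start of this passage is simple enough to simulate directly . the sketch below accumulates drainage areas on a directed lattice in which every site forwards its flow to one of its two downstream neighbours with probability 1/2 ; the lattice sizes and the periodic transverse boundary are arbitrary choices of ours .

```python
import numpy as np

def scheidegger_areas(rows=500, cols=512, seed=0):
    """Drainage areas for a Scheidegger-type directed lattice: every site
    receives unit rainfall and passes its accumulated area to one of its
    two downstream neighbours chosen at random."""
    rng = np.random.default_rng(seed)
    areas = []
    incoming = np.zeros(cols)                    # area arriving at this row
    for _ in range(rows):
        row_area = incoming + 1.0                # each site drains itself
        areas.append(row_area)
        step = rng.integers(0, 2, size=cols)     # 0: straight down, 1: down-right
        incoming = np.zeros(cols)
        targets = (np.arange(cols) + step) % cols
        np.add.at(incoming, targets, row_area)   # forward the flow downstream
    return np.concatenate(areas)

a = scheidegger_areas()
# the tail of the sampled area distribution approaches a power law
```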
in principle , horton area distributions may be derived from stream segment length distributions .this follows from an assumption of statistically uniform drainage density which means that the typical drainage area drained per unit length of any stream is invariant in space .apart from the possibility of changing with space which we will preclude by assumption , drainage density does naturally fluctuate as well .thus , we can write where the sum is over all orders and all stream segments and is the average drainage density .however , we need to know for an example basin , how many instances of each stream segment occur as a function of order .for example , the number of first order streams in an order basins is .given the distribution of this number , we can then calculate the distribution of the total contribution of drainage area due to first order streams .but the distributions of are not independent so we can not proceed in this direction .we could potentially use the typical number of order streams , .then the distribution of total area drained due to order streams would approach gaussian because the individual distribution are identical and the central limit theorem would apply . however , because the fluctuations in total number of stream segments are so great , we lose too much information with this approach .indeed , the distribution of area drained by order stream segments in a basin reflects variations in their number rather than length .again , we meet up with the problem of the numbers of distinct orders of stream segment lengths being dependent. one final way would be to use tokunaga s law .tokunaga s law states that the number of order side branches along an ( absorbing ) stream segment of order is given by where .the parameter is the average number of side streams having order for every order absorbing stream .this gives a picture of how a network fits together and may be seen to be equivalent to horton s laws .now , even though we also understand the distributions underlying tokunaga s law , similar technical problems arise . on descending into a network , we find the number of stream segments at each level to be dependent on all of the above. nevertheless , we can understand the relationship between the distributions for area and number .what follows is a generalization of the finding that .the postulated forms for these distributions were given in equations ( [ eq : horton.awfreq ] ) and ( [ eq : horton.nwfreq ] ) .consider , the number of first order streams in an order basin .assuming that , on average , first order streams are distributed evenly throughout a network , then this number is simply proportional to .as an example , figure [ fig : horton.figsche_number_area ] shows data obtained for the scheidegger model . for the scheidegger model ,first order streams are initiated with a probability when the flow at the two upstream sites is randomly directed away , each with probability .thus , for an area , we expect and find .for higher internal orders , we can apply a simple renormalization .assuming a system with exact scaling , the number of streams is statistically equivalent to .since the latter is proportional to we have that where the constant of proportionality is the density of order streams , clearly , this equivalence improves as number increases , i.e. 
, the difference increases .while we do not have exact forms for the area or number distributions , we note that they are similar to the main stream length distributions .since source streams are linear basins with the width of a grid cell , the distribution of is the same as the distribution of and , a pure exponential .hence , is also an exponential . for increasing ,the distribution of becomes single peaked with an exponential tail , qualitatively the same as the main stream length distributions .finally , we discuss the higher order moments for the generalized horton distributions .figure [ fig : horton.figmoments_l_mispi ] presents moments for distributions of main stream lengths for the case of the mississippi .these moments are calculated directly from the main stream length distributions .a regular logarithmic spacing is apparent in moments for orders ranging from to . to see whether or not this is expected , we detail a few small calculations concerning moments starting from the exponential form of stream segment lengths given in equation ( [ eq : horton.elldist ] ) . as noted previously , for an exponential distribution , ,the mean is simply . in general , the moment of an exponential distribution is assuming scaling holds exactly for across all orders , the above is precisely . note that . since the characteristic length of order streams is , we therefore have since main stream lengths are sums of stream segment lengths , so are their respective moments .hence , we can now determine the log - space separation of moments of main stream length .using stirling s approximation that we have + c,\ ] ] where is a constant .the term inside the square brackets in equation [ eq : horton.logsp_mom ] creates small deviations from linearity for .thus , in agreement with figure [ fig : horton.figmoments_l_mispi ] , we expect approximately linear growth of moments in log - space .in this last section , we briefly examine deviations from scaling within this generalized picture of horton s laws . the basic question is given an approximate scaling for quantities measured at intermediate stream orders , what can we say about the features of the overall basin ? as noted in the previous section , all moments of the generalized horton distributions grow exponentially with order . coupling this with the fact that , i.e. , the number of samples of order basins decreases exponentially with , we observe that a basin s and will potentially differ greatly from values predicted by horton s laws . to illustrate this , figure [ fig : horton.lw_blownup_congo ] specifically shows the distributions and scaled up to give for the congo river .the actual congo s length measured at this 1000 meter resolution is represented by the solid line and is around 57% of the distribution s mean as indicated by the dashed line .nevertheless , we see that the measured length is within a standard deviation of the predicted value . 
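The geometric growth of the moments can be checked with a short numeric sketch. Here an order-ω main stream is modelled, as above, as a sum of independent exponential stream segment lengths whose characteristic scales grow geometrically with order; the values of ξ and R_ℓ below are arbitrary illustrative choices, not fitted to any of the basins discussed.
....
import math
import random

# Illustrative sketch: an order-w main stream is modelled as a sum of
# independent exponential stream segment lengths with characteristic scales
# xi * R_l**k, k = 1..w.  For large w the q-th moment is dominated by the
# largest scale, so successive log-moments should be separated by roughly
# q * ln(R_l), i.e. grow approximately linearly with order.
random.seed(2)
xi, R_l, N = 1.0, 2.4, 100_000

def main_stream_length(w):
    return sum(random.expovariate(1.0 / (xi * R_l ** k)) for k in range(1, w + 1))

moments = {q: [] for q in (1, 2, 3)}
for w in range(2, 8):
    xs = [main_stream_length(w) for _ in range(N)]
    for q in moments:
        moments[q].append(math.log(sum(x ** q for x in xs) / N))

for q, logm in moments.items():
    gaps = [round(b - a, 2) for a, b in zip(logm, logm[1:])]
    print(f"q = {q}: successive log-moment gaps {gaps}; q*ln(R_l) = {q * math.log(R_l):.2f}")
....
The small deviations of the gaps from q ln R_ℓ at low orders echo the correction term discussed above.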
in table[ tab : horton.maxlvariation ] , we provide a comparison of predicted versus measured main stream lengths and areas for the basins studied here .the mean for the scaled up distributions overestimates the actual values in all cases except for the nile .also , apart from the nile , all values are within a standard deviation of the predicted mean .the coefficients of variation , and , all indicate that fluctuations are on the order of the expected values of stream lengths and areas .thus , we see that by using a probabilistic point of view , this generalized notion of horton s laws provides a way of discerning the strength of deviations about the expected mean . in general, stronger deviations would imply that geologic conditions play a more significant role in setting the structure of the network .the objective of this work has been to explore the underlying distributions of river network quantities defined with stream ordering .we have shown that functional relationships generalize all cases of horton s laws .we have identified the basic forms of the distributions for stream segment lengths ( exponential ) and main stream lengths ( convolutions of exponentials ) and shown a link between number and area distributions .data from the continent - scale networks of the mississippi , amazon , and nile river basins as well as from scheidegger s model of directed random networks provide both agreement with and inspiration for the generalizations of horton s laws .finally , we have identified a fluctuation length scale which is a reinterpretation of what was previously identified as only a mean value .we see the study of the generalized horton distributions as integral to increasing our understanding of river network structure .we also suggest that practical network analysis be extended to measurements of distributions and the length scale with the aim of refining our ability to distinguish and compare network structure . by taking account of fluctuations inherent in network scaling laws ,we are able to see how measuring horton s laws on low - order networks is unavoidably problematic .moreover , as we have observed , the measurement of the horton ratios is in general a delicate operation suggesting that many previous measurements are not without error .the theoretical understanding of the growth and evolution of river networks requires a more thorough approach to measurement and a concurrent improvement in the statistical description of river network geometry .the present consideration of a generalization of horton s laws is a necessary step in this process giving rise to stronger tests of both real and synthetic data . in the following paper ,we round out this expanded picture of network structure by consdering the spatial distribution of network components .the work was supported in part by nsf grant ear-9706220 and the department of energy grant de fg02 - 99er 15004 .the authors would like to express their gratitude to h. 
cheng for enlightening and enabling discussions .in this appendix we consider a series of analytic calculations .these concern the connections between the distributions of stream segment lengths , ordered basin main stream lengths and main stream lengths .we will idealize the problem in places , assuming perfect scaling and infinite networks while making an occasional salubrious approximation .also , we will treat the problem of lengths fully noting that derivations of distributions for areas follow similar but more complicated lines .we begin by rescaling the form of stream segment length distributions the normalization stems from the requirement that which is made purely for aesthetic purposes . as we have suggested in equation ( [ eq : horton.elldist ] ) and demonstrated empirical support for , is well approximated by the exponential distribution . for low and also we have noted that deviations do of course occur but they are sufficiently insubstantial as to be negligible for a first order treatment of the problem .we now derive a form for the distribution of main stream lengths .as we have discussed , since , we have the convolution ( [ eq : horton.l - ellconv ] ) .the right - hand side of equation ( [ eq : horton.l - ellconv ] ) consists of exponentials as per equation ( [ eq : horton.elldist ] ) so we now consider the function given by where .we are specifically interested in the case when no two of the are equal , i.e. , for all . to compute this -fold convolution, we simply examine the for and and identify the emerging pattern .for we have , omitting the prefactors for the time being , providing .convolving this with we obtain note that we have added in a factor of for the appropriate normalization .in addition , one observes that for all since all convolutions of pairs of exponentials vanish at the origin .furthermore , the tail of the distribution is dominated by the exponential corresponding to the largest stream segment .the next step is to connect to the power law distribution of main stream lengths , ( see figure [ fig : lw_powerlawsum2_mispi ] and the accompanying discussion ) . on considering equation ( [ eq : horton.l - lom ] )we see that the problem can possibly be addressed with some form of asymptotic analysis . before attacking this calculation however, we will simplify the notation keeping only the important details of the .our main interest is to see how equation ( [ eq : horton.l - lom ] ) gives rise to a power law .we transform the outcome of equation ( [ eq : horton.pl_om , omdist ] ) by using , , , and , neglecting multiplicative constants and then summing over stream orders to obtain the integration over has been omitted meaning that the result will be a power law with one power lower than expected .we now show that this sum of exponentials in equation ( [ eq : horton.nexpconvgeomratio2 ] ) does in fact asymptotically tend to a power law .we first interchange the order of summation replacing with to give we thus simply have a sum of exponentials to contend with .the coefficients appear unwieldy at first but do yield a simple expression after some algebra which we now perform : in reaching the last line we have shifted the indices in several places . in the last bracketed term we have set and then while in the first bracketed term , we have used .immediately of note is that the last term is independent of and may thus be ignored .the first bracketed term does depend on but converges rapidly .writing we have that . 
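The two-fold convolution obtained above is the standard hypoexponential form; with the prefactors restored so the density is normalized, it can be checked directly by Monte Carlo. The scales used below are arbitrary illustrative values.
....
import math
import random

# Sketch: for independent exponential stream segment lengths with distinct
# characteristic scales xi1 != xi2, the density of their sum is
#     p2(l) = (exp(-l / xi1) - exp(-l / xi2)) / (xi1 - xi2),
# which vanishes at l = 0 and whose tail is set by the larger scale, as noted
# above.  A simple Monte Carlo histogram is compared against the closed form.
random.seed(3)
xi1, xi2, N = 3.0, 1.0, 500_000

def p2(l):
    return (math.exp(-l / xi1) - math.exp(-l / xi2)) / (xi1 - xi2)

sums = [random.expovariate(1 / xi1) + random.expovariate(1 / xi2) for _ in range(N)]
width = 0.5
for l in (0.5, 2.0, 5.0, 10.0):
    est = sum(1 for s in sums if abs(s - l) < width / 2) / (N * width)
    print(f"l = {l:4.1f}:  histogram {est:.4f}   closed form {p2(l):.4f}")
....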
taking to be fixed and large enough such that is approximated well by for , we then have as , clearly approaches a product of and a constant .therefore , the first bracketed term in equation ( [ eq : horton.ci ] ) may also be neglected in an asymptotic analysis .hence , as , the coefficients are simply given by and we can approximate as , boldly using the equality sign , where comprises the constant part of the and factors picked up by shifting the lower limit of the index from to .we have also used here the identification we turn now to the asymptotic behavior of , this being the final stretch of our analysis there are several directions one may take at this point .we will proceed by employing a transformation of that is sometimes referred to as the sommerfeld - watson transformation and also as watson s lemma .given a sum over any set of integers , say , it can be written as the following integral where is a contour that contains the points on the real axis where and none of the points of the same form with .calculation of the residues of the simple poles of the integrand return us to the original sum .we first make a change of variables , . substituting this and into equation ( [ eq : horton.s(u)-transform ] )we have the transformed contour is depicted in figure [ fig : horton.contour - swtf2 ] . as ,the contribution to integral from the neighborhood of dominates .the introduction of the sin and cos terms has created an interesting oscillation that has to be handled with with some care .we now deform the integration contour into the contour of figure [ fig : horton.contour - swtf3 ] focusing on the interval along the imaginary axis $ ] .choosing this path will simplify the cos and sin expressions which at present have logs in their arguments .the integral is now given by where writing with , we have and the following for the cos and sin terms : and the term in the integrand becomes where .the integral now becomes now , since ( taking ) , we can expand the expression as follows the integral in turn becomes the basic -th integral in this expansion is substituting and replacing the upper limit with we have here , we have rotated the contour along the imaginary -axis to the real -axis and identified the integral with the gamma function .the integral can now be expressed as .\ ] ] we now need to show that the higher order terms are negligible . note that their magnitudes do no vanish with increasing but instead are highly oscillatory terms . using the asymptotic form of the gamma function we can estimate as follows for large that hence , vanishes exponentially with .for the first few values of taking and , we have and showing that these corrections are negligible . hence we are able estimate to first order as thus we have determined that a power law follows from the initial assumption that stream segment lengths follow exponential distributions .this equivalence has been drawn as an asymptotic one , albeit one where convergences have been shown to be rapid .the calculation is clearly not the entire picture as the solution does contain small rapidly - oscillating corrections that do not vanish with increasing argument . 
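The asymptotic power law obtained above can also be checked numerically. The sketch below evaluates the plain geometric sum of exponential tails, S(x) = Σ_k R_n^{-k} exp(-x/(ξ R_ℓ^k)), and measures its local log-log slope; the prefactors and the additional integration discussed above are omitted, and the values of ξ, R_ℓ and R_n are illustrative only.
....
import math

# Sketch: the geometric sum of exponential tails
#     S(x) = sum_k  R_n**(-k) * exp(-x / (xi * R_l**k)),   k = 0..K,
# decays asymptotically as a power law with exponent ln(R_n) / ln(R_l), up to
# the small log-periodic oscillations noted in the text.
xi, R_l, R_n, K = 1.0, 2.3, 4.5, 60

def S(x):
    return sum(R_n ** (-k) * math.exp(-x / (xi * R_l ** k)) for k in range(K + 1))

expected = math.log(R_n) / math.log(R_l)
xs = [10.0 ** e for e in range(1, 7)]
for x0, x1 in zip(xs, xs[1:]):
    slope = (math.log(S(x1)) - math.log(S(x0))) / (math.log(x1) - math.log(x0))
    print(f"local log-log slope on [{x0:.0e}, {x1:.0e}]: {slope:+.3f}")
print(f"predicted exponent: -{expected:.3f}")
....
The measured slopes wobble slightly around the predicted exponent; these are the small rapidly-oscillating corrections that, as noted above, do not vanish with increasing argument.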
a possible remaining problem and one for further investigation is to understand how the distributions for main stream lengths fit together over a range that is not to be considered asymptotic .nevertheless , the preceding is one attempt at demonstrating this rather intriguing breakup of a smooth power law into a discrete family of functions built up from one fundamental scaling function .
|
the structure of a river network may be seen as a discrete set of nested sub - networks built out of individual stream segments . these network components are assigned an integral stream order via a hierarchical and discrete ordering method . exponential relationships , known as horton s laws , between stream order and ensemble - averaged quantities pertaining to network components are observed . we extend these observations to incorporate fluctuations and all higher moments by developing functional relationships between distributions . the relationships determined are drawn from a combination of theoretical analysis , analysis of real river networks including the mississippi , amazon and nile , and numerical simulations on a model of directed , random networks . underlying distributions of stream segment lengths are identified as exponential . combinations of these distributions form single - humped distributions with exponential tails , the sums of which are in turn shown to give power law distributions of stream lengths . distributions of basin area and stream segment frequency are also addressed . the calculations identify a single length - scale as a measure of size fluctuations in network components . this article is the second in a series of three addressing the geometry of river networks .
|
following spectacular advances made over the last years , the satsolving technology has many successful applications nowadays there is a wide range of problems being solved by reducing them to sat . very often solving based on reduction to satis more efficient than using a problem - specific solution .therefore , satsolvers are already considered to be a _swiss army knife _ for solving many hard cspand np - complete problems and in many areas including software and hardware verification , model checking , termination analysis , planning , scheduling , cryptanalysis , electronic design automation , etc . .typically , translations into satare performed by specialized , problem - specific tools .however , using a general - purpose system capable of reducing a wide range of problems to satcan simplify this task , make it less error prone , and make this approach more easily accessible and more widely accepted .there are already a number of approaches for solving combinatorial and related problems by general - purpose systems that reduce problems to underlying theories and domains ( instead of developing special purpose algorithms and implementations ) .a common motivation is that it is much easier to develop a problem specification for a general system than a new , special - purpose solver .the general problem solving systems include libraries for general purpose programming languages , but also modelling and programming languages built on top of specific solvers .most modelling languages are highly descriptive and have specific language constructs for certain sorts of constraints .specific constraints are translated to underlying theories by specific reduction techniques .some modelling systems use satas the target problem and some of them focus on solving np - complete problems by reduction to sat .in this paper we present a novel approach for solving problems by reducing them to sat .the approach can be seen also as a general - purpose constraint programming system ( for finite domains ) .the approach consists of a new specification / modelling language ursa(from _ uniform reduction to sat _ ) and an associated interpreter .in contrast to other modelling languages , the proposed language combines features of declarative and imperative programming paradigms .what makes the language declarative is not how the constraints are expressed , but only the fact that a procedure for finding solutions does not need to be explicitly given . on the other hand , the system has features of imperative languages and may be seen as an extension of the imperative programming paradigm , similarly as some constraint programming systems are extensions of logic programming paradigm .in contrast to other modelling languages , in the proposed specification language loops are represented in the imperative way ( not by , generally more powerful , recursion ) , destructive updates are allowed , there is support for constraints involving bitwise operators and operators for arithmetic modulo .there are problems for which , thanks to these features , the modelling process is simpler and specifications are more readable and easier to maintain than for other languages and constraint systems . 
however , of course , the presented system does not aim to replace other constraint systems and languages , but rather to provide a new alternative with some distinctive features .the used uniform approach enables a simple syntax and semantics of the ursalanguage , a simple , uniform reduction to satand , consequently , a simple architecture of the whole system .this enables a straightforward implementation of the proposed system and a rather straightforward verification of its correctness .this is very important because , although it is often easier for a declarative program than for a corresponding imperative program to verify that it meets a given specification , this still does not lead to a high confidence if the constraint solving system itself can not be trusted .the presented approach is accompanied with an open - source implementation , publicly available on the internet .a limited experimental comparison suggest that the system ( combined with state - of - the - art satsolvers ) yields good performance , competitive to other modern approaches . [ [ overview - of - the - paper . ] ] overview of the paper .+ + + + + + + + + + + + + + + + + + + + + + in section [ sec : background ] we give relevant definitions ; in section [ sec : representation ] we provide motivation and basic ideas of the proposed approach . in section [ sec : ursa_language ] we describe the specification language ursa , in section [ sec : ursa_semantics ] its semantics , in section [ sec : ursa_interpreter ] the corresponding interpreter , and in section [ sec : pragmatics ] pragmatics of the language . in section [ sec : related_work ] we discuss related techniques , languages and tools . in section [ sec : future_work ] we discuss directions for future work and in section [ sec : conclusions ] we draw final conclusions .in this section we give a brief account of the satand cspproblems and related notions .[ [ propositional - logic . ] ] propositional logic .+ + + + + + + + + + + + + + + + + + + + we assume standard notions of propositional logic : _ literal _ , _ clause _ , _ propositional formula _ , _ conjunctive normal form _ ( cnf ) , _ valuation _ ( or _ assignment _ ) , _ interpretation _ , _ model _ , _ satisfiable formula _ , etc .we denote by and the boolean constants _ true _ and _ false _ and the logical connectives by ( _ negation _ ) , ( _ conjunction _ ) , ( _ disjunction _ ) , ( _ exclusive disjunction _ ) , ( _ implication _ ) , ( _ equivalence _ ) .two formulae and are said to be _ equivalent _ if and have the same truth value in any valuation .two formulae and are said to be _ weakly equivalent _ ( or _ equisatisfiable _ ) if whenever is satisfiable then is satisfiable and vice versa . [[ constraint - satisfaction - problem . ] ] constraint satisfaction problem .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + a constraint satisfaction problem ( csp ) is defined as a triple , where is a finite set of variables , , , , is a set of domains , , , for these variables , and is a set of constraints , , , . in a finite - domain csp ,all sets from are finite .constraints from may define combinations of values assigned to variables that are _ allowed _ or that are _prohibited_. 
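As a toy illustration of these definitions, the sketch below writes down a two-variable finite-domain CSP and one standard reduction of it to SAT, the so-called direct encoding (one boolean variable per variable/value pair), which also reappears in the experimental section. The encoding and the brute-force model count are for illustration only; in practice the clause set would be handed to a SAT solver.
....
from itertools import product

# Toy sketch of a finite-domain CSP and its reduction to SAT by the direct
# encoding: variables x, y with domain {0, 1, 2} and the single constraint
# x != y.  The boolean variable indexed by (v, d) means "v takes value d".
values = range(3)
names = ["x", "y"]
idx = {(v, d): i for i, (v, d) in enumerate((v, d) for v in names for d in values)}

clauses = []
for v in names:
    clauses.append([(idx[(v, d)], True) for d in values])            # at least one value
    for d1 in values:
        for d2 in values:
            if d1 < d2:                                              # at most one value
                clauses.append([(idx[(v, d1)], False), (idx[(v, d2)], False)])
for d in values:                                                     # x != y
    clauses.append([(idx[("x", d)], False), (idx[("y", d)], False)])

def satisfies(assignment):
    return all(any(assignment[i] == sign for i, sign in clause) for clause in clauses)

models = [a for a in product([False, True], repeat=len(idx)) if satisfies(a)]
print(len(idx), "boolean variables,", len(clauses), "clauses,", len(models), "models")
# prints: 6 boolean variables, 11 clauses, 6 models  (the 6 pairs with x != y)
....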
a problem instance is satisfiable if there is an assignment to variables such that all constraints are satisfied .such assignment is called a _ solution_.a constraint optimization problem is a cspin which the goal is to find a solution maximizing ( or minimizing ) a given _ objective function _ over all allowed values of the given variables .[ [ satproblem - and - satsolvers . ] ] satproblem and satsolvers .+ + + + + + + + + + + + + + + + + + + + + + + + + + satis the problem of deciding if a given propositional formula in cnfis satisfiable , i.e. , if there is any assignment to variables such that all clauses are true .obviously , satis a special case of csp , with all variables ranging over the domain and with constraints given as clauses .satwas the first problem shown to be np - complete , and it still holds a central position in the field of computational complexity .stochastic satsolvers can not prove the input instance to be unsatisfiable , but may find a solution ( i.e. , a satisfying variable assignment ) for huge satisfiable instances quickly . on the other hand , for a given satinstance , a complete satsolver always finds a satisfying variable assignment or shows that there is no such assignment .most of the state - of - the - art complete satsolvers are cdcl ( conflict - driven , clause - learning ) based extensions of the davis - putnam - logemann - loveland algorithm ( dpll ) . in recent years , a tremendous advance has been made in satsolving technology .these improvements involve both high - level and low - level algorithmic techniques .the advances in satsolving make possible deciding satisfiability of some industrial satproblems with tens of thousands of variables and millions of clauses .there are two basic components of the presented approach : problem specification : a problem is specified by a test ( expressed in an imperative form ) that given values of relevant variables are indeed a solution to the problem .problem solving : all relevant variables of the problem are represented by finite vectors of propositional formulae ( corresponding to vectors of bits and to binary representation , in case of numerical values ) ; the specification is symbolically executed over such representation and the assertion that given values make a solution is transformed to an instance of the satproblem and passed to a satsolver . if the formula is satisfiable , its model is transformed back to variables describing the problem , giving a solution to the problem .let us consider problems of the following general form : _ find ( if it exists ) a set of values such that given constraints are met _( variations of this form include : only checking if such values exists , and finding all values that meet the given conditions ) .a problem of this form can be specified by a test that checks if a given set meets the given constraints ( with one assertion that combines all the constraints ) .the test can be formulated in a language designed in the style of imperative programming languages and such a test is often easy to formulate .[ ex : trivial ] let us consider a trivial problem : if equals , find a value for such that equals .a simple check in an imperative form can be specified for this problem if a value of is given in advance , one could easily check whether it is a solution of the problem .indeed , one would assign to and finally check whether equals .such test can be written in the form of an imperative c - like code ( where assert(b ) checks whether b is true ) as follows : .... 
v = u+1 ; assert(v==2 ) ; .... the example above is trivial , but specifications may involve more variables and more complex operations , including conditional operations and loops , as illustrated by the following example . the most popular way of generating pseudorandom numbers is based on linear congruential generators .a generator is defined by a recurrence relation of the form : and is the _ seed _ value ( ) .one example of such relation is : it is trivial to compute elements of this sequence .the check that is indeed equal to the given value if the seed is equal to can be simply written in the form of an imperative c - like code as follows ( assuming that numbers are represented by bits ) : .... nx = nseed ; for(ni=1;ni<=100;ni++ ) nx = nx*1664525 + 1013904223 ; assert(nx==3998113695 ) ; .... however , the following problem , realistic in simulation and testing tasks , is a non - trivial programming problem ( unless problem - specific , algebraic knowledge is used ) : given , for example , the value compute .still , the very same test shown above can serve as a specification of this problem .this example illustrates one large family of problems that can be simply specified using the proposed approach problems that are naturally expressed in terms of imperative computations and that involve destructive assignments .such problems are often difficult to express using other languages and systems . for the above specification , since in constraint programming systems the destructive assignment is not allowed , in most specification languages one would have to introduce variables for all elements of the sequence from to and the constraints between any succeeding two . also , other systems typically do not support modular arithmetic constraints and integers of arbitrary length .note that the specifications given above also cover the information on what variables are unknown and have to be determined so that the constraints are satisfied those are variables that appear within commands before they were defined .so , the above code is a full and precise specification of the problem , up to the domains of the variables . for boolean variables ,the domain is , while for numerical variables a common domain interval ( e.g. 
, ] ( for a given ) .then , for all admissible values for all unknowns , the specification can be executed .all sets of values satisfying the constraints should be returned as solutions .if there are unknown numerical variables and unknown boolean variables , then the search space would be of the size .let us consider the specification given in example [ ex : trivial ] .if the domain for and is the interval ] is assumed as the domain for numerical values ) .of course , instead of a brute - force search over this set of valuations , a satsolver should be used ( and it will typically perform many cut - offs and search over just a part of the whole search space ) .representation of numerical variables by propositional formulae corresponds to their binary representation .each formula corresponds to one bit of the binary representation .if the range of a numerical variable is ] ( where the last position corresponds to the least significant bit ) .if a bit of the number is not known , then it is represented by a propositional variable , or , if it depends on some conditions , by a propositional formula .we will discuss only representations of unsigned integers , but representations of signed integers can be treated in full analogy ( moreover , floating point numbers can also be modelled in an analogous way ) .boolean variables are represented by unary vectors of propositional formulae .results of arithmetic and bitwise logical operations over numbers represented by vectors of formulae can be again represented by propositional formulae in terms of formulae occurring in the input arguments .if the numbers are treated as unsigned , all arithmetic operations are performed modulo .for instance , if is represented by ] , then ( modulo ) is represented by ] and is represented by ] .note that representations of all standard arithmetic , boolean , and relational operations produce polynomial size formulae .if a problem specification is executed over the variables represented by vectors of propositional variables and using the corresponding interpretation of involved operations , then the assertion of the specification generates a propositional formula . any satisfying valuation ( if it exists ) for that formula would yield ( ground ) values for numerical and boolean unknowns that meet the specification , i.e. , a solution to the problem .let us again consider example [ ex : trivial ] .if ` u ` is represented by ] ) , then , by the condition ` v = u+1 ` , ` v ` is represented by ] . from the assertion ` v==2 ` , it follows that ] . in other words ,the formula should be checked for satisfiability .it is satisfiable , and in its only model maps to and maps to .hence , the representation for a required value of ` u ` is $ ] , i.e. , ` u ` equals 1 . a system based on the ideas presented above , could be used not only for combinatorial problems , but for a very wide range of problems it can be used for computing such that , given and a computable function with a finite domain and a finite range ( i.e. , for computing inverse of ) .a definition of in an imperative form can serve as a specification of the problem in verbatim .having such a specification of the function is a weak and realistic assumption as it is easy to make such specification for many interesting problems , including np - problems .if is a function such that when is a witness for some instance of an np - problem , can serve as a specification for this problem and the required answer is _ yes _ if and only if there is such that . 
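The representation just described can be mimicked in a few lines of Python; the sketch below is only an illustration of the idea, not URSA's implementation. Propositional formulae are encoded as closures over a valuation, the unknown u is a little-endian vector of two propositional variables, v = u + 1 is computed by a symbolic ripple-carry adder, and the assertion v == 2 is checked by brute-force enumeration in place of a SAT solver.
....
from itertools import product

# Minimal sketch of the representation described above: each numerical
# unknown is a little-endian vector of propositional formulae, arithmetic is
# performed symbolically modulo 2**n, and the final assertion is a
# propositional formula whose models are the solutions.

def var(name):   return lambda v: v[name]        # propositional variable
def const(b):    return lambda v: b              # propositional constant
def xor(f, g):   return lambda v: f(v) != g(v)
def and_(f, g):  return lambda v: f(v) and g(v)
def or_(f, g):   return lambda v: f(v) or g(v)

def add(xs, ys):
    """Ripple-carry addition of two formula vectors (modulo 2**len(xs))."""
    out, carry = [], const(False)
    for x, y in zip(xs, ys):
        out.append(xor(xor(x, y), carry))
        carry = or_(and_(x, y), and_(carry, xor(x, y)))
    return out

def number(value, n):
    return [const(bool((value >> i) & 1)) for i in range(n)]

n = 2
u = [var(f"u{i}") for i in range(n)]      # the unknown u, bit i <-> variable u_i
v = add(u, number(1, n))                  # v = u + 1

def assertion(val):                       # assert(v == 2)
    return all(v[i](val) == bool((2 >> i) & 1) for i in range(n))

for bits in product([False, True], repeat=n):
    val = {f"u{i}": b for i, b in enumerate(bits)}
    if assertion(val):
        print("u =", sum(b << i for i, b in enumerate(bits)))   # -> u = 1
....
URSA builds explicit formula objects rather than closures, converts the resulting assertion to CNF, and passes it to a complete SAT solver; every model found is decoded back into values of the unknowns, here u = 1.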
concerning the type of numbers involved, the approach can be applied for any finite representation of signed or unsigned , integer or floating point numbers . in the proposed approach , all computations ( over integers )are performed modulo . in the case of non - modular constraints , the base can be set to a sufficiently large value .the approach ( in the presented form ) can not be used for computing such that , for arbitrary computable function .the first limitation is a finite representation of variables .the second is that conditional commands in the specification could involve only conditions with ground values at the time when the condition is evaluated . however, this restriction is not relevant for many ( or most of ) interesting problems .overall , the domain of the proposed approach covers all problems with boolean and numerical unknowns , over boolean parameters and numerical parameters with finite domains , that can be stated in the specification language that makes the part of the approach .in this section we describe the language ursathat serves as a specification language in the spirit of the approach presented above .a description of the syntax of the ursalanguage is given , in ebnf representation , in table [ tab : ursasyntax ] ( var denotes the syntactical class of numerical variables , expr denotes the syntactical class of numerical expressions , expr denotes the syntactical class of boolean expressions , etc ) .an ursaprogram is a sequence of statements ( and procedure definitions ) .there are two types of variables numerical , with identifiers starting with ` n ` and boolean , with identifiers starting with ` b ` .the same convention holds for identifiers of arrays .variables are not declared , but introduced dynamically .there are functions ( ` bool2num ` and ` num2bool ` ) for converting boolean values to numerical values and vice versa , and the ` sgn ` function corresponding to signum function .arithmetic , bitwise , relational and compound assignment operators , applied over arithmetic variables / expressions , are written in the c - style .for example , bitwise conjunction over numerical variables ` n1 ` and ` n2 ` is written ` n1 & n2 ` , bitwise left shift of ` n1 ` for ` n2 ` is written ` n1 < < n2 ` , and ` n1 + = n2 ` is equivalent to ` n1 = n1+n2 ` .logical operators , applied over boolean variables / expressions , are written in the c - style , with additional operator ` ^^ ` for logical exclusive disjunction , in the spirit of other c logical operators .there are also compound assignment operators for logical operators , such as ` & & = ` ( added for symmetry and convenience , although they do not exist in c ) .the operator ite is the conditional operator : ite(b , n1,n2 ) equals n1 if b is true , and equals n2 otherwise .there are no user - defined functions , but only user - defined procedures ..ebnf description of ursalanguage [ cols="<,^,<",options="header " , ] the above result do nt include the systems that translate problem specifications to satand which are the systems closest to ursa . namely , these systems translate inputs to sat(so it can be considered that they share the solving mechanism ) , but they use different satsolvers. 
a fair comparison would be thus to use these systems only as translators to satand then use the same satsolver ( for instance , clasp ) for finding all models of the generated satformulae .it is interesting to consider size of generated formulae and solving times ( of course , smaller formulae does not necessarily lead to shorter solving times ) .fzntini was used with flatzinc specifications obtained from the minizinc specification used by g12/fd ( with integers encoded with 5 bits ) and with flatzinc specifications obtained from a minizinc specification made in the style of the direct encoding , we will denote them by 1 and 2 .sugar was used only with a specification that employs the order encoding .ursawas used with the specifications ` queens-1 ` ( with integers encoded with 5 bits ) , ` queens-2 ` ( with the number of bits equal the instance dimension ) , and ` queens-3 ` , ( with integers encoded with 4 bits ) .table [ table : sugar_ursa ] presents the obtained experimental results .all recorded times were obtained for the `` quiet '' mode of the satsolver ( without printing the models ) .times for generating formulae were negligible ( compared to the solving phase ) for all systems , so we do nt report them here .for related specifications , ursa s ` queens-1 ` gave much smaller formulae ( probably thanks to techniques mentioned in section [ subsec : translation_to_sat ] ) and somewhat better performance than fzntini 1 , which suggests that fzntini does not benefit much from information about the global structure of the problem .the formulae generated by sugar were significantly smaller than in the above two cases , and led to much better solving efficiency .however , it was outperformed by the remaining entrants .the ursa s specifications ` queens-2 ` and ` queens-3 ` gave similar results .the specification ` queens-3 ` produced formulae with the smallest number of clauses .fzntini 2 produced formulae with the smallest number of variables .the best results in terms of the solving times were obtained also by fzntini 2 . it can be concluded that ursacan produce , with suitable problem specifications , propositional formulae comparable in size and in solving times with formulae produced by related state - of - the - art systems .+ variables & 3012 & 3825 & 4735 & 5742 & 6846 & 8047 & 9345 + clauses & 9128 & 11628 & 14460 & 17567 & 21000 & 24713 & 28770 + all solutions & 0.15 & 0.79 & 3.20 & 14.53 & 111.78 & & + + variables & 841 & 1052 & 1286 & 1560 & 1819 & 2139 & 2468 + clauses & 3352 & 4217 & 5179 & 6295 & 7390 & 8712 & 10089 + all solutions & 0.08 & 0.53 & 2.83 & 17.60 & 98.04 & & + + variables & 220 & 284 & 356 & 436 & 524 & 620 & 724 + clauses & 1138 & 1653 & 2253 & 3012 & 3924 & 5003 & 6263 + all solutions & 0.02 & 0.06 & 0.31 & 1.58 & 9.59 & 68.07 & 411.15 + + variables & 542 & 739 & 978 & 1263 & 1598 & 1987 & 2434 + clauses & 3319 & 5008 & 7280 & 10258 & 14077 & 18884 & 24838 + all solutions & 0.01 & 0.04 & 0.12 & 0.70 & 4.01 & 23.17 & 138.55 + + variables & 176 & 225 & 280 & 341 & 408 & 481 & 560 + clauses & 800 & 1110 & 1490 & 1947 & 2488 & 3120 & 3850 + all solutions & 0.01 & 0.03 & 0.12 & 0.69 & 3.82 & 21.09 & 136.45 + + variables & 128 & 162 & 200 & 242 & 288 & 338 & 392 + clauses & 872 & 1236 & 1690 & 2244 & 2908 & 3692 & 4606 + all solutions & 0.01 & 0.02 & 0.07 & 0.33 & 1.52 & 8.39 & 52.12 + [ [ additional - experiments . ] ] additional experiments. 
+ + + + + + + + + + + + + + + + + + + + + + + in additional experiments , only the systems that performed the best on the queens problem were considered ( with only one constraint logic programming system kept ) : b - prolog , clasp , fzntini , g12/fd , and ursa .the following problems were considered ( for all problems all solutions were sought ) :-12 pt**golomb ruler:**the problem ( actually , one of its variation ) is as follows , given a value check if there are numbers , , , such that and all the differences , are distinct ( problem 6 at csplib ) .the experiments were performed for , with the largest that make the problem unsatisfiable and with the smallest that make the problem satisfiable . -12pt**magic square:**a magic square of order is a matrix containing the numbers from to , with each row , column and main diagonal equal the same sum .the problem is to find all magic squares of order ( problem 19 at csplib ) .the experiments were performed for and .-12 pt**linear recurrence relations:**linear homogeneous recurrence relations of degree are of the form : for .given , , and , can be simply calculated , but finding explicit formula for requires solving a nonlinear characteristic equation of degree , which is not always possible .so , the following problem is nontrivial : given , , and , find .for the experiment , we used the relation , .we generated instances with the ( only ) solution and the systems were required to seek all possible values for .additional constraints ( used explicitly or implicitly ) for all considered systems ) were and , for .ursaand fzntini were used with 32 bit length for numerical values .-12 pt**non - linear recurrence relations:**in non - linear homogenous recurrence relations of degree k , the link between and , , , , is not necessarily linear . for the experiment we used the relation , .we generated instances with the ( only ) solution and the systems were required to seek all possible values for .additional constraints ( used explicitly of implicitly ) for all considered systems ) were and , for . for all problem instances ,the size of all relevant values were smaller than .ursaand fzntini were used with 32 bit length for numerical values .the ursawas used with the following specification for the golomb ruler problem ( for the instance , ) : .... nm=7 ; nl=25 ; brulerendpoints = num2bool(nruler & nruler > > nl & 1 ) ; nmarks=2 ; bdistancediff = true ; for(ni=1 ; ni<=nl-1 ; ni++ ) { nmarks + = ( nruler > >ni ) & 1 ; n = ( nruler & ( nruler < < ni ) ) ; bdistancediff & & = ( n & ( n-1))==0 ; } assert_all(brulerendpoints & & nmarks==nm & & bdistancediff ) ; .... the above specification employs a binary representation of the ruler ( ` nruler ` ) where each bit set denotes a mark .the value ` nruler & nruler > > nl & 1 ` equals ` 1 ` if and only if the first and ` nl`th bit are set ( as the ruler endpoints ) .the value ` nmarks ` counts the bits set ( e.g. , the marks ) and it should equal ` nm ` . 
if the ruler ` nruler ` is a golomb ruler , then whenever it is shifted left ( for values ` 1 ` , , ` nl-1 ` ) and bitwise conjunction is performed with the original ruler giving the value `n ` , there will be at most one bit set in ` n ` ( since all the differences between the marks are distinct ) .there is at most one bit set in ` n ` if and only if the value ` n & ( n-1 ) ` equals 0 .this specification , employing a single loop , illustrate the expressive power of bitwise operations supported in the ursalanguage .for the magic square problem , ursawas used with the following specification : .... ndim=4 ; nn = ndim*ndim ; bcorrectsum = ( 2*nsum*ndim = = nn*(nn-1 ) ) ; bdomain = true ; bdistinct = true ; for(ni=0;ni < ndim;ni++ ) { for(nj=0;nj < ndim;nj++ ) { bdomain & & = ( nt[ni][nj]<nn ) ; for(nk=0;nk < ndim;nk++ ) for(nl=0;nl <ndim;nl++ ) bdistinct & & = ( ( ni==nk & & nj==nl ) || nt[ni][nj]!=nt[nk][nl ] ) ; } } bsum = true ; nsum1=0 ; nsum2=0 ; for(ni=0;ni < ndim;ni++ ) { nsum1 + = nt[ni][ni ] ; nsum2 + = nt[ni][ndim - ni-1 ] ; nsum3=0 ; nsum4=0 ; for(nj=0;nj <ndim;nj++ ) { nsum3 + = nt[ni][nj ] ; nsum4 + = nt[nj][ni ] ; } bsum & & = ( nsum3==nsum ) ; bsum & & = ( nsum4==nsum ) ; } bsum & & = ( nsum1==nsum ) ; bsum & & = ( nsum2==nsum ) ; assert_all(bcorrectsum & & bdomain & & bdistinct & & bsum ) ; .... for the linear recursive relation , ursawas used with the following specification ( the specification for the non - recursive relations is analogous ) : .... n = 30 ; ny = 20603361 ; n1=1 ; n2=1 ; n3=nx ; for(ni=4 ; ni< = n ; ni++ ) { ntmp = n1+n2+n3 ; n1=n2 ; n2=n3 ; n3=ntmp ; bdomain & & = ( n3<=ny ) ; } assert_all(bdomain & & n3==ny ) ; .... table [ table : additional_experiments ] shows experimental results ( with translation times included ) .fzntini was used with clasp as an underlying solver ( the built - in solver gave poorer results ) .the number of variables and clauses generated by ursawere , in these benchmarks , smaller than for fzntini .for lparse / clasp , the translation time was significant , and some of the poor results for some benchmarks are due to large domains ( while the system works with relations rather than functions ) . 
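Before turning to the remaining results, the bit-twiddling test at the heart of the Golomb ruler specification above can be exercised on a small known ruler. The sketch below is an ordinary Python transcription for checking ground values (the inverse of what URSA does, which is to leave nruler unknown); the example ruler with marks {0, 1, 4, 6} is a standard perfect Golomb ruler.
....
# Ground-value check of the bitwise Golomb test used above: nruler has bit i
# set iff there is a mark at position i; "n & (n - 1) == 0" tests that at
# most one bit of n is set, i.e. that each difference occurs at most once.
def is_golomb(nruler, nl, nm):
    endpoints = bool(nruler & (nruler >> nl) & 1)
    marks = sum((nruler >> i) & 1 for i in range(nl + 1))
    distinct = True
    for i in range(1, nl):
        n = nruler & (nruler << i)
        distinct = distinct and (n & (n - 1)) == 0
    return endpoints and marks == nm and distinct

ruler = (1 << 0) | (1 << 1) | (1 << 4) | (1 << 6)     # marks at 0, 1, 4, 6
print(is_golomb(ruler, 6, 4))                         # True
print(is_golomb(ruler | (1 << 3), 6, 5))              # False: difference 1 repeats
....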
for recurrence relations ,g12/fd reported model inconsistency when it approached its limit for integers , while b - prolog just failed to find a solution for larger instances .overall , on this set of benchmarks , ursagave better results than clasp and fzntini and on some benchmarks outperformed all other tools .+ 5/10 & 0.01 & 5.36 & 0.20 & 0.06 & 0.01 + 6/16 & 0.01 & 44.68 & 1.16 & 0.08 & 0.02 + 7/24 & 0.01 & 350.11 & 9.53 & 0.10 & 0.10 + 8/33 & 0.08 & & 111.90 & 0.24 & 0.69 + 9/43 & 0.69 & & & 1.18 & 4.89 + 10/54 & 5.34 & & & 7.84 & 35.55 + 11/71 & 105.54 & & & 93.2 & 571.90 + 5/11 & 0.01 & 6.33 & 0.32 & 0.07 & 0.01 + 6/17 & 0.01 & 57.40 & 1.43 & 0.08 & 0.01 + 7/25 & 0.01 & 429.98 & 14.15 & 0.10 & 0.13 + 8/34 & 0.09 & & 106.26 & 0.26 & 0.87 + 9/44 & 0.78 & & & 1.27 & 5.87 + 10/55 & 6.86 & & & 6.44 & 37.28 + 11/72 & 115.81 & & & 125.50 & 450.41 + + 3 & 0.01 & 0.05 & 0.04 & 0.01 & 0.01 + 4 & 4.74 & 462.21 & & 10.26 & 93.01 + + 4 & 0.00 & 0.01 & 0.01 & 0.01 & 0.00 + 5 & 0.00 & 0.06 & 0.01 & 0.02 & 0.00 + 6 & 0.00 & 1.18 & 0.01 & 0.02 & 0.01 + 7 & 0.00 & 25.49 & 0.01 & 0.02 & 0.01 + & & & & & + 28 & 43.83 & & 0.17 & 143.54 & 0.31 + 29 & 84.92 & & 0.68 & incons & 0.33 + 30 & 158.78 & & 1.02 & incons & 0.47 + + 4 & 0.00 & 0.00 & 0.01 & 0.01 & 0.00 + 5 & 0.00 & 0.01 & 0.01 & 0.01 & 0.02 + 6 & 0.00 & 0.42 & 0.22 & 0.02 & 0.05 + 7 & 0.00 & 126.05 & 0.36 & 0.02 & 0.08 + 8 & 0.00 & & 0.51 & 0.02 & 0.13 + 9 & fail & & 0.76 & 0.02 & 0.28 + 10 & fail & & 0.88 & 0.03 & 0.53 + 11 & fail & & 0.97 & incons & 0.77 + [ [ discussion . ] ] discussion .+ + + + + + + + + + + the described limited experiments can not give definite conclusions or ranking of the considered systems , as discussed above .in particular , one may raise the following concerns , that can be confronted with the following arguments : _ ursawas used with a good problem specifications , and there may be specifications for other systems that lead to better efficiency ._ however , almost all specifications were taken from the system distributions , given there to illustrate the modelling and solving power of the systems . also ,the problem specifications used for ursaare also probably not the best possible , but are rather straightforward , as specifications for other systems . in addition , in contrast to ursa , other modelling systems typically aim at liberating the user of thinking of details of internal representations and are free to perform internal reformulations of the input problem . making specifications in ursamay be somewhat more demanding than for some other systems , but gives to the user a fuller control of problem representation . _ some specifications used for ursaare related to the direct encoding ( known to be efficient , for example , for the queens problem ) , while this is not the case with other systems ._ what is suitable for sat - based systems is not necessarily suitable for other systems .for instance , g12/fd gives significantly poorer results with the specification of the queens problem based on the direct encoding , than with the one used in the experiment .this is not surprising , because systems that are not based on satdoes not necessarily handle efficiently large numbers of boolean variables and constraints ( in contrast to satsolvers ) and lessons from the satworld ( e.g. , that for some sort of problems , some encoding scheme is the most efficient ) can not be _ a priori _ applied to other solving paradigms . 
_ursauses bit - wise logical operators , while other systems do not ( as they do nt have support for them ) ._ bit - wise logical operators make one of advantages of ursa , while in the same time , some other systems use their good weapons ( e.g. , global constraints such as all - different ) . in summary ,the presented experiments suggest that the ursasystem ( although it is not primarily a cspsolver but a general system for reducing problems to sat ) is competitive to the state - of - the - art , both academic and industrial , modelling systems even if they can encode high - level structural information about the input problem and even if they involve specialized underlying solvers ( such as support for global constraints like all - different ) . a wider and deeper comparison between these ( and some other ) constraint solvers ( not sharing input language ) and with different encodings of considered problems , would give a better overall picture but is out of scope of this paper .the current system ( with the presented semantics and the corresponding implementation ) uses one way ( binary representation ) for representing ( unsigned ) integers but ( as shown in the given examples ) , it still enables using different encoding styles in specifications . for further convenience ,we are planning to natively support other representations for integers , so the user could choose among several encodings. also , signed integers and floating point numbers could be supported .the language ursa(and the interpreter ) can be extended by new language constructs ( e.g. , by division ) .a new form of the ` assert ` can be added , such that it propagates intermediate solutions to subsequent commands of the same sort .support for global constraints can also be developed , but primarily only as a , , syntactic sugar the user could express global constraints more easily , but internally they would be expanded as if they were expressed using loops ( i.e. , as in the current version of the system ) .alternative forms of support for global constraints would require substantial changes in the sat - reduction mechanism . on the lower algorithmic and implementation level, we are planning to further improve the current version of transformation to cnf . in the current version ,ground integers are represented by built - in fixed - precision integers , which is typically sufficient .however , in order to match symbolic integers , ground integers should be represented by arbitrary ( but fixed ) length integers and we are planning to implement that .concerning the underlying satsolvers , currently only two complete satsolvers are used .we are planning to integrate additional solvers , since some solvers are better suited to some sorts of input instances , as the satcompetitions show . within this direction of work, we will also analyze performance of stochastic solvers within ursa .in addition , we are exploring potentials of using non - cnf satsolvers within ursa , which avoids the need for transformation to cnf . choosing among available solverscan be automated by using machine learning techniques for analysis of the generated satinstances ( or even input specifications ) . for solving optimization problems , instead of the existing naive implementation, we are planning to implement more advanced approaches and to explore the use of maxsat and pseudo - boolean solvers . 
on the theoretical side ,the full operational semantics outlined in this paper can be formally defined and it could be proved that solutions produced by the ursasystem indeed meet the specifications and if there are no produced solutions , then the specifications is inconsistent . along with the formal verification ( i.e. , verification within a proof assistant ) of the satsolver argosatused , that would provide a formal correctness proof of the ursasystem ( which would make it , probably , the first _ trusted _ constraint solver ) . in the presented version of ursa , reducing to satis tightly integrated ( and defined by the semantics of the system ) in the program execution phase .an alternative would be as follows : during the program execution phase , a first - order formula is generated and only before the solving phase it is translated to a propositional formula .moreover , the generated formula would not need to be translated to a propositional formula , but could be tested for satisfiability by using smt(satisfiability modulo theory ) solvers ( e.g. , for linear arithmetic , equality theory , alldifferent theory etc . ) . in particular , symbolic computations employed by the ursasystem are closely related to the theory of bit - vector arithmetic and to decision procedures for this theory based on `` bit - blasting '' . since solvers for this theory typically cover all the operators used in ursa , the theory of bit - vector arithmetic can be used as an underlying theory ( instead of propositional logic ) and any solver for bit - vectors arithmetic can be used as a solving engine . generally , the reduction could be adaptable to smtsolvers available if some solver is available , then its power can be used , otherwise all generated constraints are exported to propositional logic .this would make the approach more powerful and such development is the subject of our current work the system ursa major will be able to reduce constraints not only to satbut to different smttheories .that system will be a general constraint solver and also a high - level front - end to the low level smt - lib interchange format , and , further , to all smtsolvers that supports it .reduction to the theory of bit - vector arithmetic is firstly explored in this context and it shows that reducing to bit - vector arithmetic does not necessarily lead to more efficient solving process than reducing to sat(and the same holds for other smttheories ) . with the increased power of the presented system ( by using both satand smtsolvers ) , we are planning to further consider a wide range of combinatorial , np - complete problems , and potential one - way functions and also to apply the ursasystem to real - world problems ( e.g. , the ones that are already being solved by translating them to sat ) .some applications in synthesis of programs are already the subject of our current work . 
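As a concrete pointer to the bit-vector route sketched above, the seed-recovery problem from the linear congruential generator example given earlier can be handed directly to an SMT solver for bit-vector arithmetic. The sketch below assumes the Z3 Python bindings (the z3-solver package) and is not part of URSA; it is included only to illustrate how naturally the modular-arithmetic constraints map onto that theory.
....
from z3 import BitVec, Solver, sat

# Sketch (assumes the z3-solver package; not part of URSA): recover the seed
# of the linear congruential generator example, with all arithmetic performed
# over 32-bit bit-vectors, i.e. modulo 2**32, as in the specification.
seed = BitVec("seed", 32)
x = seed
for _ in range(100):
    x = x * 1664525 + 1013904223

s = Solver()
s.add(x == 3998113695)
if s.check() == sat:
    print("seed =", s.model()[seed])
....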
for the sake of easier practical usability of the ursasystem, we are planning to develop a support for integration of ursawith popular imperative languages ( c , c++ , java ) .in this paper we described a novel approach for uniform representation and solving of a wide class of problems by reducing them to sat .the approach relies on : a novel specification language that combines features of imperative and declarative languages a problem is specified by a test , expressed in an imperative form , that some values indeed make a solution to the problem ; symbolic computations over unknowns represented by ( finite ) vectors of propositional formulae .the approach is general and elegant , with precisely defined syntax and induced ( by the concept of symbolic execution ) semantics of the specification language .this enables straightforward implementation of the system and it works as a `` clear box '' . the proposed language is a novel mixture of imperative and declarative paradigms , leading to a new programming paradigm . thanks to the language s declarative aspects the problem is described by what makes a solution and not by describing how to find it using the system does not require human expertise on the specific problem being solved . on the other hand ,specifications are written in imperative form and this gives the following advantages compared to other modelling languages ( all of them are declarative ) : problem specifications can involve destructive assignments , which is not possible in declarative languages and this can be essential for many sorts of problems ( e.g. , from software verification ) ; modelling problems that naturally involve loops ( and nested loops ) is simple and the translation is straightforward ; for users familiar with imperative programming paradigm , it should be trivial to acquire the specification language ursa(since there are no specific commands or flow - controls aimed at constraint solving ) ; the user has a fuller control of internal representation of the problem , so can influence the efficiency of the solving phase .a specification can be taken , almost as - is , from and to languages such as c ( within c , such code would check if some given concrete values are indeed a solution of the problem ) . the system can smoothly extend imperative languages like c / c++ or java , ( as constraint programming extends logic programming ) .the system can be verified to be correct in a straightforward manner .in addition , the system ursa , in contrast to most of ( or all ) other modelling systems , supports bit - wise logical operators and constraints involving modular arithmetic ( for the base of the form ) , which is essential for many applications , and can also enable efficient problem representation and problem solving .the proposed approach can be used for solving all problems with boolean and numerical unknowns , over boolean parameters and numerical parameters with finite domains that can be stated in the specification language ( e.g. 
, the domain of the system is precisely determined by expressiveness of its specification language ) .the search for required solutions of the given problem is performed by modern satsolvers that implement very efficient techniques not directly applicable to other domains .while satis already used for solving a wide range of various problems , the proposed system makes these reductions much easier and can replace a range of problem - specific tools for translating problems to sat .the tool ursacan be used not only as a powerful general problem solver ( for finite domains ) , but also as a tool for generating satbenchmarks ( both satisfiable and unsatisfiable ) . concerning weaknesses , ursais not suitable for problems where knowledge of the domain and problem structureare critical and can be efficiently tackled only by specialized solvers , and this holds for reduction to satgenerally . due to its nature , by interfacing ursawith standard specification languages like xcsp or minizinc , most of the ursamodelling features and power would be lost ( e.g. , bit - wise logical operators and destructive updates ) , while global constraints supported by these languages would be translated in an inefficient way . therefore , it makes no much sense to enable conversion from these standard languages to ursaand this makes ursaa bit isolated system in the world of constraint solvers or related systems . in this paper we do not propose : a new sat - encoding technique rather , the ursaspecification language can be used for different encoding styles ; a technique for transforming a satformula to conjunctive normal form this step is not a part of the core of the ursalanguage and is not covered by its semantics , so any approach ( meeting the simple specification ) can be used ; the current technique seem to work well in practice , while it can still be a subject of improvements .a satsolver rather , ursacan use any satsolver ( that can generate all models for satisfiable input formulae ) ; moreover , having several satsolver would be beneficial , since some solvers are better suited to some sorts of problems . in this paperwe also described the system ursathat implements the proposed approach and provided some experimental results and comparison with related systems .they suggest that , although ursais not primarily a cspsolver ( but a general system for reducing problems to sat ) , the system is , concerning efficiency , competitive to state - of - the - art academic and industrial csptools .ursais also competitive to other system that translate problem specifications to sat .in contrast to most of other constraint solvers , the system ursais open - source and publicly available on the internet . for future work, we are planning to extend the system so it can use not only complete satsolvers , but also stochastic satsolvers , non - cnf solvers , and smtsolvers .we will also work on formal ( machine - checkable by a proof assistant ) verification of the system ( i.e. 
, show that the ursasystem always gives correct results ) and on extensions of the system relevant for practical applications .this work was partially supported by the serbian ministry of science grant 174021 and by snf grant scopes iz73z0_127979/1 .the author is grateful to dejan jovanovi whose ideas on overloading c++ operators influenced development of the system presented in this paper ; to filip mari for valuable discussions on the presented system and for using portions of his code for shared expressions ; to milan eum for portions of his code for performing operations over vectors of propositional formulae ; to ralph - johan back and to the anonymous reviewers for very detailed and useful comments on a previous version of this paper .josep argelich , alba cabiscol , ins lynce , and felip many .regular encodings from max - csp into partial max - sat . in _ismvl 2009 , 39th international symposium on multiple - valued logic _ , pages 196202 .ieee computer society , 2009 .clark barrett , roberto sebastiani , sanjit a. seshia , and cesare tinelli ., chapter 26 , pages 825885 . in arminbiere , marijn heule , hans van maaren , and toby walsh , editors , _ handbook of satisfiability _ , pages 7597 , 2009 .miquel bofill , josep suy , and mateu villaret .a system for solving constraint satisfaction problems with smt . in _ theory and applications of satisfiability testing - sat 2010 _ , lncs 6175 , pages 300305 .springer , 2010 .angelo brillout , daniel kroening , and thomas wahl .mixed abstractions for floating - point arithmetic . in _ proceedings of 9th international conference on formal methods in computer - aided design , fmcad 2009 _ , pages 6976 .ieee , 2009 .tristan denmat , arnaud gotlieb , and mireille ducass . an abstract interpretation based combinator for modelling while loops in constraint programming . in _cp07 : proceedings of the 13th international conference on principles and practice of constraint programming _ , lncs 4741 .springer , 2007 .carsten fuhs , jrgen giesl , aart middeldorp , peter schneider - kamp , ren thiemann , and harald zankl . sat solving for termination analysis with polynomial interpretations . in _ theory andapplications of satisfiability testing - sat 2007 _ , lncs 4501 , pages 340354 .springer , 2007 .holger h. hoos. sat - encodings , search space structure , and local search performance . in _ proceedings of the sixteenth international joint conference on artificial intelligence ,ijcai 99 _ , pages 296303 .morgan kaufmann , 1999 .ali sinan kksal , viktor kuncak , and philippe suter .scala to the power of z3 : integrating smt and programming . in _23rd international conference on automated deduction cade-23 _ , lncs 6803 , pages 400406 .springer , 2011 .ilya mironov and lintao zhang .applications of sat solvers to cryptanalysis of hash functions . in _ theory andapplications of satisfiability testing - sat 2006 _ , lncs 4121 , pages 102115 .springer , 2006 .david g. mitchell and eugenia ternovska .a framework for representing and solving np search problems . in _ national conference on artificial intelligence aaai 2005 _ , pages 430435 . aaai press / the mit press , 2005 .matthew w. moskewicz , conor f. madigan , ying zhao , lintao zhang , and sharad malik .chaff : engineering an efficient sat solver . in _dac 01 : proceedings of the 38th conference on design automation _ , pages 530535 .acm press , 2001 .nicholas nethercote , peter j. stuckey , ralph becket , sebastian brand , gregory j. 
duck , and guido tack .minizinc : towards a standard cp modelling language . in _ principles and practice of constraint programming cp 2007 _ , lncs 4741 , pages 529543 .springer , 2007 .mladen nikoli , filip mari , and predrag janii .instance - based selection of policies for sat solvers . in _ theory andapplications of satisfiability testing - sat 2009 _ , lncs 5584 , pages 326340 .springer , 2009 .nikolay pelov and eugenia ternovska .reducing inductive definitions to propositional satisfiability . in _ international conference on logic programming iclp 2005 _ , lncs 3668 , pages 221234 .springer , 2005 .peter j. stuckey , maria j. garca de la banda , michael j. maher , kim marriott , john k. slaney , zoltan somogyi , mark wallace , and toby walsh .the g12 project : mapping solver independent models to efficient solutions . in _ principles and practice of constraint programming - cp 2005 _ , lncs 3709 , pages 1316 .springer , 2005 . g. s. tseitin . on the complexity of derivations in propositional calculus .in _ studies in constructive mathematics and mathematical logic ( part ii ) _ , pages 115125. consultants bureau , 1968 .( also in _ the automation of reasoning _ , springer - verlag , 1983 . ) .hantao zhang , dapeng li , and haiou shen . a sat based scheduler for tournament schedules . in _the seventh international conference on theory and applications of satisfiability testing - sat 2004 _ , 2004 .online proceedings .
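to make the reduction sketched in the conclusions above more tangible, the following is a minimal, purely illustrative python toy (it is not part of the ursa distribution, and all names in it are our own): an unknown is represented by a fixed-width vector of propositional formulae, the bit-wise operators of the host language are overloaded so that an imperative "test" of a candidate solution is executed symbolically, and the resulting constraint is handed to a solver. here the solver is replaced by brute-force enumeration of assignments; a real system would perform a tseitin-style translation to cnf and call an external sat solver.

from itertools import product

class Sym:
    # a propositional formula over named boolean variables, stored as a
    # closure that evaluates the formula under a concrete assignment
    def __init__(self, fn, names):
        self.fn, self.vars = fn, frozenset(names)
    @staticmethod
    def var(name):
        return Sym(lambda a, n=name: a[n], {name})
    @staticmethod
    def const(b):
        return Sym(lambda a, b=b: b, set())
    def __and__(self, other):
        return Sym(lambda a: self.fn(a) and other.fn(a), self.vars | other.vars)
    def __xor__(self, other):
        return Sym(lambda a: self.fn(a) != other.fn(a), self.vars | other.vars)
    def __invert__(self):
        return Sym(lambda a: not self.fn(a), self.vars)

class SymWord:
    # an unknown modelled as a fixed-width vector of propositional formulae
    # (least significant bit first), with overloaded bit-wise operators
    def __init__(self, bits):
        self.bits = bits
    @staticmethod
    def unknown(name, n):
        return SymWord([Sym.var("%s%d" % (name, i)) for i in range(n)])
    @staticmethod
    def const(value, n):
        return SymWord([Sym.const(bool((value >> i) & 1)) for i in range(n)])
    def __xor__(self, other):
        return SymWord([a ^ b for a, b in zip(self.bits, other.bits)])
    def __and__(self, other):
        return SymWord([a & b for a, b in zip(self.bits, other.bits)])
    def eq(self, other):
        formula = Sym.const(True)
        for a, b in zip(self.bits, other.bits):
            formula = formula & ~(a ^ b)
        return formula

def solve(formula):
    # stand-in for the sat back end: brute-force enumeration of assignments;
    # a real system would tseitin-encode the formula to cnf and call a solver
    names = sorted(formula.vars)
    for values in product([False, True], repeat=len(names)):
        assignment = dict(zip(names, values))
        if formula.fn(assignment):
            return assignment
    return None

# the "specification as a test": find 8-bit x, y with (x ^ y) & 0xf0 == 0xa0 and x odd
x, y = SymWord.unknown("x", 8), SymWord.unknown("y", 8)
spec = ((x ^ y) & SymWord.const(0xf0, 8)).eq(SymWord.const(0xa0, 8)) \
       & (x & SymWord.const(0x01, 8)).eq(SymWord.const(0x01, 8))
print(solve(spec))

the same specification, written for concrete values in c, would simply be the condition inside the if-statement; executing it over symbolic bit vectors instead of machine integers is what turns the test into a constraint.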
|
a huge number of problems from various areas are solved by reducing them to sat . however , for many applications , translation into sat is performed by specialized , problem - specific tools . in this paper we describe a new system for uniform solving of a wide class of problems by reducing them to sat . the system uses a new specification language , ursa , that combines the imperative and declarative programming paradigms . the reduction to sat is defined precisely by the semantics of the specification language . the domain of the approach is wide ( e.g. , many np - complete problems can be simply specified and then solved by the system ) , and there are problems easily solvable by the proposed system that can hardly be solved using other programming languages or constraint programming systems . the system can therefore be seen not only as a tool for solving problems by reducing them to sat , but also as a general - purpose constraint solving system ( for finite domains ) . we also describe an open - source implementation of the described approach . the experiments performed suggest that the system is competitive with state - of - the - art modelling systems .
|
in recent years , complex networks attract special attentions from diverse fields of research [ 1 ] .though several novel measurements , such as degree distribution , shortest connecting paths and clustering coefficients , have been used to characterize complex networks , we are still far from complete understanding of all peculiarities of their topological structures .finding new characteristics is still an essential role at present time .describe the structure of a complex network with the associated adjacency matrix . map this complex network with nodes to a large molecule , the nodes as atoms and the edges as couplings between the atoms .denote the states and the corresponding site energies of the atoms with and , repsectively .consider a simple condition where the hamiltonian of the molecule reads , where , by this way a complex network is mapped to a quantum system , and the corresponding associated adjacency matrix to the hamiltonian of this quantum system .the structure of a complex network determines its spectrum .the characteristics of this spectrum can reveal the structure symmetries , which can be employed as global measurements of the corresponding complex network [ 2 - 12 ] . in our recent papers [ 13 - 16 ] ,several temporal series analysis methods are used to extract characteristic features embedded in spectra of complex networks . in the present paper , a new concept , called diffusion factorial moment ( dfm ) ,is proposed to obtain scale features in spectra of complex networks .it is found that these spectra display scale invariance , which can be employed as a global measurement of complex networks in a unified way .it may also be helpful for us to construct a unified model of complex networks .represent a complex network with its adjacency matrix : .the main algebraic tool that we will use for the analysis of complex networks will be the spectrum , i.e. , the set of eigenvalues of the complex network s adjacency matrix , called the spectrum of the complex network , denoted with . connecting the beginning and the end of this spectrum , we can obtain a set of delay register vectors as [ 13 ] , considering each vector as a trajectory of a particle during time units , all the above vectors can be regarded as a diffusion process for a system with particles [ 17 ] .accordingly , for each time denoted with we can reckon the distribution of the displacements of all the particles as the state of the system at time . dividing the possible range of displacements into bins , the probability distribution function ( pdf )can be approximated with , where is the number of particles whose displacements fall in the bin at time . to obtain a suitable ,the size of a bin is chosen to be a fraction of the variance , . if the series constructed with the nearest neighbor level spacings , , is a set of homogeneous random values without correlations with each other , the pdf should tend to be a gaussian form when the time becomes large enough .deviations of the pdf from the gaussian form reflect the correlations in the time series . 
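a minimal numpy sketch of this diffusion construction is given below; the surrogate spectrum, the bin-width convention (a fixed fraction of the spread of the displacements) and the comparison against a gaussian of equal mean and variance are our own illustrative choices standing in for the paper's exact conventions.

import numpy as np

def diffusion_pdf(eigenvalues, t, bin_fraction=0.25):
    # sort the spectrum, take nearest-neighbour level spacings, wrap the series
    # periodically ("connecting the beginning and the end"), and treat the sum
    # of t consecutive spacings starting at each index as the displacement of
    # one particle; the bin width is a fixed fraction of the displacement spread
    lam = np.sort(np.asarray(eigenvalues, dtype=float))
    s = np.diff(lam)
    s = np.concatenate([s, s[:t]])
    disp = np.array([s[i:i + t].sum() for i in range(len(lam) - 1)])
    width = bin_fraction * disp.std()
    edges = np.arange(disp.min(), disp.max() + width, width)
    counts, edges = np.histogram(disp, bins=edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, counts / (counts.sum() * width), disp

# deviation from the gaussian of equal mean and variance, for a surrogate spectrum
rng = np.random.default_rng(0)
x, p, d = diffusion_pdf(rng.uniform(-1.0, 1.0, 400), t=20)
gauss = np.exp(-(x - d.mean()) ** 2 / (2.0 * d.var())) / np.sqrt(2.0 * np.pi * d.var())
print("max |pdf - gaussian| =", np.abs(p - gauss).max())

for an uncorrelated surrogate the deviation is dominated by counting noise; for a correlated spacing series the departure from the gaussian form carries the dynamical information discussed above.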
here, we are specially interested in the scale features in spectra of complex networks .generally , the scale features in spectra of complex networks can be described with the concept of probability moment ( pm ) defined as [ 18 ] , where is the probability for a particle occurring in the bin .assume the pdf takes the form , an easy algebra leads to , if the considered series is completely uncorrelated , the resulting diffusion process will be very close to the condition of ordinary diffusion , where and the function in the pdf is a gaussian function of . can reflect the departure of the diffusion process from this ordinary diffusion condition [ 19 ] .the extreme condition is the ballistic diffusions , whose pdfs read .the values of at this condition are .but the approximation of pdf , , in the above computational procedure will induce statistical fluctuations due to the finite number of particles , which may become a fatal problem when we deal with the spectrum of a complex network .the dynamical information may be merged by the strong statistical fluctuations completely . capturing the dynamical information from a finite number of casesis a non - trivial task .this problem is firstly considered by a. bialas and r. peschanski in analyzing the process of high energy collisions , where only a small number of cases can be available . a concept called factorial moment ( fm )is proposed to find the intermittency ( self - similar ) structures embedded in the pdf of states [ 18,20 - 24 ] .the definition of fm reads , where is the number of the bins the displacement range being divided into and the number of particles whose displacements fall in the bin . stimulated by the concept of fm , we propose in this paper a new concept called diffusion factorial moment ( dfm ) , which reads , we present a simple argument for the ability of dfm to filter out the statistical fluctuations due to finite number of cases [ 20,21].the statistical fluctuations will obey bernoulli and poisson distributions for a system containing uncertain and certain total number of particles , respectively . for a system containing uncertain total number of particles ,the distribution of particles in the bins can be expressed as , where .hence , that is to say , and consequently , becomes , therefore , dfm can reveal the strong dynamical fluctuations embedded in a time series and filter out the statistical fluctuations effectively .we will use the dfm instead of the pm to obtain the scale features in spectrum of a complex network .it should be pointed out that the scale features in our dfm is completely different from that in fm .the fm reveals the self - similar structures with respect to the number of the bins the possible range of the displacements being divided into , i.e. , the scale is the displacement . in dfm ,the considered scale is the time . at time , the state of the system is , . in one of our recent works [ 13 ] ,joint use of the detrended fluctuation approach ( dfa ) and the diffusion entropy ( de ) is employed to find the correlation features embedded in spectra of complex networks . in that paperwe review briefly the relation between the scale invariance exponent , , and the long - range correlation exponent . for fractional brownian motions ( fbm ) andlevy walk processes , we have and , respectively . generally , we can not derive a relation between these two exponents .herein , we present the relation between the concepts of dfm and de . 
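before turning to the relation with the diffusion entropy, here is a small self-contained sketch of the dfm itself, computed from the spacing series over a range of diffusion times and summarized by a log-log slope; since the explicit formula is not reproduced in this copy, the normalisation used below (the sum of falling factorials of the bin counts divided by the q-th power of the total particle number) is an assumed, standard factorial-moment convention and may differ from the paper's.

import numpy as np

def bin_counts(spacings, t, bin_fraction=0.25):
    # occupation numbers n_j(t): displacements are sums of t consecutive
    # spacings (periodically wrapped), binned with width proportional to
    # their spread, as in the diffusion construction described above
    s = np.concatenate([spacings, spacings[:t]])
    disp = np.array([s[i:i + t].sum() for i in range(len(spacings))])
    width = bin_fraction * disp.std()
    edges = np.arange(disp.min(), disp.max() + width, width)
    return np.histogram(disp, bins=edges)[0]

def dfm(spacings, t, q=2, bin_fraction=0.25):
    # diffusion factorial moment of order q at diffusion time t; the falling
    # factorials n (n-1) ... (n-q+1) suppress the purely statistical
    # (poissonian / bernoullian) fluctuations of the finite particle number
    n = bin_counts(spacings, t, bin_fraction).astype(float)
    ff = np.ones_like(n)
    for k in range(q):
        ff *= np.clip(n - k, 0, None)
    return ff.sum() / n.sum() ** q

# scaling exponent of G_2(t) from a log-log fit, on surrogate level spacings
rng = np.random.default_rng(1)
spacings = rng.exponential(1.0, 500)
times = np.arange(4, 64, 4)
g2 = np.array([dfm(spacings, t) for t in times])
print("G_2 scaling exponent ~", np.polyfit(np.log(times), np.log(g2), 1)[0])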
from the probability moment in can reach the corresponding tsallis entropy , , which reads , a trivial computation leads to the relation between the de , ( denoted with ) , the pm and the tsallis entropy , as follows , hence dfm can detect multi - fractal features in spectra of complex networks by adjusting the value of .the de is just a special condition of dfm with .what is more , the dfm can filter out the statistical fluctuations due to finite number of eigenvalues in the spectrum of a network .the adjacency matrices are diagonalized with the matlab version of the software package propack [ 25 ] .[ 0.8 ] four typical results for the erdos - renyi network model.theparameter .denote the size of a network with .( a) , .we have , which is consistent with the random behavior of the spectrum .the corresponding pdf is gaussian .( b ) , .we have , a slight deviation from random . for ( c ) and( d ) and , respectively.,title="fig : " ] consider firstly the erdos - renyi model [ 26,27 ] .starting with nodes and no edges , connect each pair with probability .for the network is broken into many small clusters , while for a large cluster can be formed , which in the asymptotic limit contains all nodes [ 27 ] . is a critical point for this kind of random networks .fig.1 presents four typical results for erdos - renyi networks . for , the scaling exponent is, , which is consistent with the random behavior of the spectrum . with the increase of , becomes larger and larger .the spectrum tends to display a significant scale invariance . as one of the most widely accepted models to capture the clustering effects in real world networks, the ws small world model has been investigated in detail [ 1,28 - 31 ] . herewe adopt the one - dimensional lattice model .take a one - dimensional lattice of nodes with periodic boundary conditions , and join each node with its right - handed nearest neighbors .going through each edge in turn and with probability rewiring one end of this edge to a new node chosen randomly . during the rewiring proceduredouble edges and self - edges are forbidden .[ 0.8 ] dfm for the two extreme conditions of the ws network model , i.e. , regular networks ( ) and the corresponding completely random networks ( ) .the size of a network . and .for these generated networks , when the number of right - handed neighbors is small ( ) the dfms obey a power - law.,title="fig : " ] [ 0.8 ] the values of the exponent for the two extreme conditions of the ws network model , i.e , the regular networks ( ) and the corresponding completely random networks ( ) .the size of a network . and .the values of for the regular networks are in the range of , a slight deviation from that corresponds to a gaussian distribution .the values of for the completely random networks are significantly larger than that of the corresponding regular networks ( with a few exceptions ) ., title="fig : " ] fig.2 and fig.3 show the results for two extreme conditions of the ws network model , i.e. 
, the regular networks with different right - handed neighbors ( ) and the corresponding completely rewired networks ( ) .when the value of is unreasonable large ( ) , the dfm will not obey a power - law .the scaling exponents for the regular networks are basically in the range of , a slight deviation from that of the gaussion distribution .the scaling exponents for the completely rewired networks with are significantly larger than that of the corresponding regular networks .four typical results for the networks generated with the ws model with different rewiring probability values , as shown in fig.4 , illustrate the significant scale invariance in spectra of these ws networks.the values of for these generated networks with and are presented in fig5 and fig.6 , respectively .we are specially interested in the rough range of ] we have . and in the condition of , is in the range of ] ,where the ws small world network model can capture the characteristics of real world complex networks , we have .,title="fig : " ] [ 0.8]the values of for generated ws networks with different rewiring probabilities .the parameters . in the special range of ] , we have . and the value of oscillates around abruptly . in the range of ] , we have . and the value of oscillates around abruptly . in the range of ] , where the ws model can capture the properties of real world networks ,the spectra display a typical scale invariance .two critical points are found for grn ( growing random network ) networks at and , at which we have two minimum values of , respectively . in the range of $ ], we have basically .hence we find self - similar structures in all the spectra of the considered three complex network models .this common feature may be used as a new measurement of complex networks in a unified way .comparison with the regular networks and the erdos - renyi networks with tells us that this self - similarity is non - trivial .the self - similar structures in spectra shed light on the scale symmetries embedded in the topological structures of complex networks , which can be used to obtain the possible generating mechanism of complex networks .quasicrystal theory tells us that the aperiodic structure of lattice will induce a fractal structure in the corresponding spectrum .the most possible candidate feature sharing by all the complex networks constructed with the three models may be fractal characteristic , which has been proved in a very recent paper [ 33 ] . based upon this feature, we may construct a unified model of complex networks .this work is supported by the innovation fund of nankai university .one of the authors ( h. yang ) would like to thank prof .yizhong zhuo , prof .jianzhong gu in china institute of atomic energy for stimulating discussions .
|
a new method called the diffusion factorial moment ( dfm ) is used to obtain scaling features embedded in the spectra of complex networks . for an erdos - renyi network at small connecting probability the scaling parameter is consistent with random behavior , while at larger connecting probability it deviates from this value significantly . for ws small - world networks , in the special region of rewiring probability where the model captures real - world networks , the scaling parameter takes a characteristic value , while outside this region it oscillates abruptly ; over a further range of the model parameters the same basic scaling holds . scale invariance is a common feature of the three kinds of networks considered , which can be employed as a global measurement of complex networks in a unified way .
|
recent advances in sparse inverse problems , often referred to as compressive sensing , have led to new methods and technologies for compressive image reconstruction .the central idea is to design encoded , subsampled linear measurements which allow for accurate optimization - based decoding which assumes sparsity in some transform domain such as wavelets or curvelets .applications of interest include fluorescence microscopy , infrared microscopy and astronomy , for example the single - pixel telescopic system .intensity measurements , in which a pixel array is sampled using a binary ` on - off ' mask , are often considered an attractive choice .for example , in the case of the single - pixel camera , a spatial light modulator ( slm ) in the form of a digital micromirror device is typically used for image acquisition , and binary measurements are simplest to encode into hardware . meanwhile, it is important for the computational efficiency of the optimization - based reconstruction that the measurement matrix can be implemented as a fast transform .the walsh - hadamard transform ( wht ) , which involves a measurement matrix consisting of entries , is therefore an attractive solution , and so measurement schemes are often based upon it ( see for example each of the references given above ) .one of the central tenets of compressive sensing is that uniform , random , subsampling is optimal for reconstructing signals whose sparsity pattern is unstructured .however , wavelet transforms have an inherent multi - scale structure , and the sparsity patterns associated with the wavelet transforms of natural images are in consequence highly structured , with signal energy being concentrated in the coarse ( low frequency ) scales .in fact , the wht itself has a multi - scale interpretation , and it has now been established that improvements in reconstruction accuracy result from subsampling the wht at a variable rate ; and in particular , more aggressively at finer ( higher frequency ) scales .practical schemes incorporating the above principles typically involve randomly subsampling the wht coefficients , while varying the subsampling rate with scale .implementation of this approach requires the locations of the selected random coefficients to be stored and the full wht to be repeatedly computed within the reconstruction algorithm . in this paper, we consider an entirely deterministic approach to variable , multi - scale , sampling .the scheme leads to comparable reconstruction performance , but without the need to rely upon randomness . as a result, no locations of randomly selected coefficients need to be stored , and full whts can be replaced with more efficient transforms which directly compute the subsampled coefficients . the sampling scheme is based upon a family of matrices that have been shown both theoretically and empirically to be effective measurement matrices for compressive sampling : delsarte - goethals frames .we will consider one particular instantiation of delsarte - goethals frames : _ real kerdock matrices_. the contributions of this paper can be summarized as follows . * we show that applying a kerdock matrix to a signal results in multi - scale measurements , where the measurements at a given scale sample only a single scale of the _ haar wavelet transform _ of the signal . 
in other words , the kerdock transform is scale - preserving .* based on these new insights , we propose a new deterministic strategy for variable , multi - scale , sampling .we present experimental evidence that the new strategy leads to improved image reconstruction performance compared to the uniformly subsampled wht .the structure of the rest of the paper is as follows . in section [ kerdock ] ,we give details on the real kerdock transform , and in section [ scale ] , we establish the scale - preserving properties of the transform and make the connection to the haar wavelet transform .we then propose our new deterministic multi - scale sampling strategy in section [ strategy ] .numerical image reconstruction experiments can be found in section [ experiments ] , comparing the deterministic approach with other subsampling strategies .in this section , we give the essential form of the kerdock transform and refer the interested reader to for further technical details . we define a kerdock matrix as where is a normalized hadamard matrix and , for , is a diagonal matrix with entries on the diagonal .these diagonal entries are determined by binary quadratic codes known as kerdock codes ( see for further details ) .the kerdock matrix is a union of orthobases , and it is instructive to consider how it operates on a vector divided into subvectors of length . writing we have in other words , it is a weighted sum of the whts of the subvectors .it follows immediately that there exists an transform for computing a matrix - vector product with : divide the input vector into subvectors , apply the wht to each , then apply the necessary componentwise sign changes to the resulting outputs and sum them .we will call this transform the 1d kerdock transform , and denote it by .the parameter controls the compression factor of the transform , which is . the ability to vary the parameter for different wavelet scales will be essential for designing a variable , multi - scale sampling strategy .we note that there is an upper bound on imposed by the underlying kerdock codes : we must have ( see ) . the extension to a 2d kerdock transform is achieved using the usual cartesian product . given a image ,define the 2d kerdock transform to be aim of this section is to show that applying a kerdock matrix to a signal results in multi - scale measurements , where the measurements at a given scale sample only a single scale of the haar wavelet transform of the signal . in other words ,the kerdock transform is scale - preserving .we restrict our focus in this section to the 1d kerdock transform , though the analysis extends naturally to the 2d kerdock transform and images .our starting point is a striking result about the relationship between the walsh - hadamard transfrom ( wht ) and the haar wavelet transform .given , let be the hadamard matrix with columns in dyadic / paley order , and let be the matrix whose rows are the haar wavelet basis elements ordered in the usual way from coarse to fine scales , so that multiplication by gives the discrete haar wavelet transform .the matrix product has the following block - diagonal decomposition .* theorem 1 .* the result appears implicitly in the literature , for example in , but appears explicitly here for the first time , to the authors best knowledge . 
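as a quick numerical sanity check of this block-diagonal structure, the product can be formed explicitly from the two recursive constructions (stated formally in the appendix); the 1/sqrt(2) normalisation below is our choice, made so that both matrices are orthogonal, and does not affect the zero pattern.

import numpy as np

def hadamard_paley(m):
    # 2^m x 2^m hadamard matrix in dyadic (paley) order, built recursively as
    # phi_{r+1} = [ phi_r kron (1, 1) ; phi_r kron (1, -1) ] / sqrt(2)
    phi = np.array([[1.0]])
    for _ in range(m):
        phi = np.vstack([np.kron(phi, [1.0, 1.0]),
                         np.kron(phi, [1.0, -1.0])]) / np.sqrt(2.0)
    return phi

def haar_matrix(m):
    # 2^m x 2^m haar transform matrix, rows ordered from coarse to fine scales:
    # psi_{r+1} = [ psi_r kron (1, 1) ; identity kron (1, -1) ] / sqrt(2)
    psi = np.array([[1.0]])
    for _ in range(m):
        psi = np.vstack([np.kron(psi, [1.0, 1.0]),
                         np.kron(np.eye(psi.shape[0]), [1.0, -1.0])]) / np.sqrt(2.0)
    return psi

m = 5
prod = hadamard_paley(m) @ haar_matrix(m).T
# the diagonal blocks have sizes 1, 1, 2, 4, ..., 2^(m-1): one per wavelet scale
bounds = [0] + [2 ** j for j in range(m + 1)]
off_block = prod.copy()
for a, b in zip(bounds[:-1], bounds[1:]):
    off_block[a:b, a:b] = 0.0
print("largest off-block entry:", np.abs(off_block).max())

the printed value is numerically zero, confirming that each walsh-hadamard row in dyadic order only overlaps haar wavelets of a single scale.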
a proof is given in appendix [ proof ] .the result can also be viewed as a statement about the mutual coherence of the hadamard and haar bases .since the magnitude of the coefficients decays with scale , it is an example of the asymptotic incoherence described in .now consider the kerdock transform of a signal ( see section [ kerdock ] ) , and write for the decomposition of into subsignals , so that let be the haar wavelet transform of , and write for its haar wavelet decomposition by scales .write for the haar wavelet transform of each subsignal , , and for their respective haar wavelet decompositions by scales .also write for each diagonal matrix , , where the submatrices correspond in size to the wavelet scales. then we have but haar wavelets have a nested property : if we divide a signal into subsignals , the haar wavelet coefficients of at scale are nothing other than the haar wavelet coefficients of each of the subsignals at scale , that is , it follows that , writing for the decomposition of the output measurements by scale , we have , for , the output measurements at a given scale are thus seen to be nothing other than a weighted sum of the haar wavelet coefficients at scale , we replaced these subcodes with actual kerdock codes for each scale , which is a departure from . ] .we can summarize the implications of the previous section as follows . 1 .the kerdock transform performs uniform sampling of the finest haar wavelet scales and gathers no information from the first wavelet scales .it can therefore be combined with direct sampling of the coarsest wavelet scales to give a two - level sampling scheme .note the desirable property that the coarsest scales , for which full samples are computed , are not further sampled unnecessarily by the kerdock transform .the subsampling factor can be chosen for each scale independently by computing measurements at each scale using whichever transform is desirable .an important remaining question is how to efficiently implement this multi - scale sampling scheme , and we leave this question as future work . based on these observations , we propose a deterministic multi - scale sampling scheme for 1d signals .* inputs : * signal ; sampling strategy . * outputs : * measurements .note that the required user tuning is straightforward and intuitive : simply provide a vector of integers , the entries of which give the power of by which the signal will be subsampled at each scale .let us write for this 1d multi - scale kerdock transform , and for the corresponding measurement matrix .the extension to 2d images immediately follows as in section [ kerdock ] . given a image ,define the 2d multi - scale kerdock transform to be present a comparison of image reconstruction using different schemes for sampling .we sample the 1024x1024 ` man ' test image by taking samples ( a subsampling factor of ) . 
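for concreteness, here is a simplified one-dimensional sketch of the deterministic per-scale strategy of the kind used in the experiments below. it operates directly on the haar coefficients for clarity (the scheme described in the text obtains the same per-scale measurements by applying kerdock matrices to the signal itself), and the +-1 sign patterns are placeholders rather than true kerdock codes, so it illustrates the structure of the sampling strategy, not the actual delsarte-goethals construction.

import numpy as np

def fwht(v):
    # fast walsh-hadamard transform (sylvester ordering), length a power of two
    v = np.array(v, dtype=float)
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            a, b = v[i:i + h].copy(), v[i + h:i + 2 * h].copy()
            v[i:i + h], v[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return v / np.sqrt(len(v))

def haar_coeffs(x):
    # orthonormal haar coefficients ordered coarse to fine:
    # [scaling, scale 1 (1 coeff), scale 2 (2 coeffs), ..., finest scale]
    x = np.array(x, dtype=float)
    details = []
    while len(x) > 1:
        details.append((x[0::2] - x[1::2]) / np.sqrt(2.0))
        x = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    return np.concatenate([x] + details[::-1])

def subsample_scale(coeffs, k):
    # kerdock-style compression of one scale by a factor 2^k: split into 2^k
    # subvectors, wht each, apply a +-1 sign pattern, and sum the results.
    # the sign patterns below are simple placeholders; the actual scheme takes
    # them from kerdock codes, which are not reproduced in this sketch
    parts = coeffs.reshape(2 ** k, -1)
    out = np.zeros(parts.shape[1])
    for i, p in enumerate(parts):
        signs = (-1.0) ** ((np.arange(len(p)) * (i + 1) // 2) % 2)
        out += signs * fwht(p)
    return out

def multiscale_measure(x, strategy):
    # strategy[j] is the power of two by which wavelet scale j is subsampled
    # (0 keeps that scale in full); scale 0 is the single scaling coefficient
    c = haar_coeffs(x)
    m = int(np.log2(len(x)))
    sizes = [1] + [2 ** j for j in range(m)]
    pieces, start = [], 0
    for size, k in zip(sizes, strategy):
        block = c[start:start + size]
        pieces.append(block if k == 0 else subsample_scale(block, k))
        start += size
    return np.concatenate(pieces)

# keep the eight coarsest scales in full, subsample the three finest by 2, 4, 8
x = np.random.default_rng(0).standard_normal(1024)
y = multiscale_measure(x, strategy=[0] * 8 + [1, 2, 3])
print(len(x), "samples ->", len(y), "measurements")

with the strategy shown, a length-1024 signal is reduced to 320 measurements, the three finest scales being compressed by factors 2, 4 and 8; the per-scale powers of two play the role of the user-supplied sampling strategy described above.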
writing ( ) for the vectorized 2d haar wavelet coefficients of the image , we can represent the linear measurements in the form , where is an matrix .we then use the ` spgl1 ` code to solve the optimization problem we measure accuracy of reconstruction using the signal - to - noise ratio ( snr ) , defined as we present results for three sampling schemes in figure [ image_plots ] .schemes 1 and 2 are subsamplings of the 2d wht , taking randomly chosen coefficients ( scheme 1 ) and the first ( lowest frequency ) coefficients ( scheme 2 ) respectively .we see that the performance of random subsampling ( scheme 1 ) is , as the research consensus would now expect , disastrous , while the low frequency sampling ( scheme 2 ) represents a baseline with which to compare .+ + scheme 3 is a version of the kerdock sampling scheme described in this paper , in which we directly sample the first scales of the wht , and then use kerdock matrices to subsample scales to , using for scale , for scale , and for scale ( see section [ kerdock ] ) , again giving .we observe that the reconstruction accuracy using scheme 3 represents a significant improvement over scheme 2 , demonstrating the effectiveness of a deterministic multi - scale sampling strategy with a number of different subsampling factors across scales ( in this case four ) .given , the hadamard matrix with columns in dyadic / paley order , , is defined by the recursion \;\mbox{for}\;m\geq 0,\ ] ] where denotes the usual kronecker product .given , the haar transform matrix , , may be defined by the recursion \;\mbox{for}\;m\geq 0,\ ] ] where is the identity matrix .the proof of theorem 1 is by induction on .noting that ( [ coherence ] ) holds trivially for , assume ( [ coherence ] ) holds for . using ( [ walsh_dyadic ] ) and ( [ haar ] ) , and by symmetry of , we have \left[\begin{array}{ll}\psi_r^t\otimes\left(\begin{array}{c}1\\1\end{array}\right)&i_r\otimes\left(\begin{array}{c}1\\-1\end{array}\right)\end{array}\right]=\left[\begin{array}{cc}\phi_r\psi_r^t&0\\0&\phi_r\end{array}\right],\ ] ] and ( [ coherence ] ) now follows for , and hence for all by induction. adcock , b. , hansen , a. and roman , b. _ the quest for optimal sampling : computationally efficient , structure - exploiting measurements for compressed sensing_. in _ compressed sensing and its applications _ ; editors : boche , h. , calderbank , r. , kutyniok , g. and vybral , j. springer , pp . 143167 , 2015 . calderbank , r. , howard , s. and jafarpour , s. _ construction of a large class of deterministic sensing matrices that satisfy a statistical isometry property_. ieee journal of selected topics in signal processing 4(2 ) , pp .358374 , 2010 .cands , e. , romberg , j. and tao , t. _ robust uncertainty principles : exact signal reconstruction from highly incomplete frequency information_. communications on pure and applied mathematics 59(8):12071223 , 2006 .studer , v. , bobin , j. , chahid , m. , mousavi , h. , cands , e. and dahan , m. _ compressive fluorescence microscopy for biological and hyperspectral imaging_. proceedings of the national academy of sciences 109(26 ) , pp .e1679e1687 , 2012 .
|
we propose deterministic sampling strategies for compressive imaging based on delsarte - goethals frames . we show that these sampling strategies result in multi - scale measurements which can be related to the 2d haar wavelet transform . we demonstrate the effectiveness of our proposed strategies through numerical experiments .
|
cell division in _ escherichia coli _ is initiated by the formation of a ring of the protein ftsz on the bacterial inner membrane .this ftsz ring shrinks as the growing septum restricts the cytoplasmic channel connecting the two daughter cells .ftsz ring formation is targeted to the mid - cell by two independent processes .nucleoid occlusion prevents ftsz ring formation over the nucleoids , while polar ftsz ring formation is prevented due to the oscillatory dynamics of the min family of proteins .the pole - to - pole oscillation of mind and mine targets minc to the polar inner membrane where it inhibits polar ftsz ring formation and prevents minicelling . several deterministic and stochastic models have been developed to explain the pole - to - pole oscillation pattern of the min proteins .all these quantitative models have recovered oscillatory behavior , though they differ in their detailed interactions .the ftsz ring is the first element of the divisome to localize .induced disassembly of the ftsz ring can occur within a minute , and subsequent relocalization occurs within minutes .ftsz can localize around potential division sites of daughter cells even before septation is complete .min oscillations must persist or be quickly regenerated after septation to ensure that polar ftsz ring formation is blocked in newly formed daughters .the experimental phenomenology of min dynamics during septation has not yet been well characterized .early experiments indicate that min oscillations are qualitatively unaffected by partially constricted cells .significantly , minicelling rates in wild - type _e. coli _ cells are insignificant , and no non - oscillating daughter cells have been reported .these observations suggest that min oscillations persist or regenerate quickly in all daughter cells and , as a result , block ftsz ring formation at the poles of newly formed daughter cells . in a pioneering study , tostevin andhoward addressed min oscillations after cell division with a stochastic model .their model exhibited significant asymmetry in the distribution of min proteins between the two daughter cells after division .approximately of their daughter cells did not oscillate due to such partitioning errors . while systematic studies of partitioning errors have not been done ,large asymmetries of concentrations between daughter cells have not been reported .tostevin and howard suggested that rapid regeneration of min proteins could quickly recover oscillations in non - oscillating daughters .however , no such cell - cycle dependent signal is seen in translation or , for the _ min _ operon , in transcription .moreover , min oscillations continue even when protein synthesis is stopped by chloramphenicol .this indicates that proteolysis rates are small , so that fast unregulated turnover of min proteins ( independent of the cell - cycle ) is also not expected . 
in model systems ,the mind::mine densities must be above a threshold or `` stability boundary '' for stable oscillations to be observed .experimentally , the position of the stability boundary is not precisely known though large overexpression of mine does lead to minicelling .we have explored how the distance of the parent cell from the stability boundary affects the partitioning and hence the percentage of daughter cells that oscillate .while _ in vivo _ quantification of min concentration has been done , we have primarily varied the distance from the stability boundary by varying the concentration of min proteins in the parent cell within a reasonable range .this has the advantage of keeping the stability boundary fixed . for completeness, we have also varied the model interaction parameters .these are generally under - determined by experiment though diffusivities have now been measured _ in vivo _ .in addition to varying existing parameters , we have also explored heterogeneous interactions along the bacterial length following .the different phospholipids that comprise the _ e. coli _ inner - membrane exhibit variable affinity for mind .cardiolipin ( cl ) is preferentially localized to polar and septal membranes in _ e. coli _ .the differences in mind affinity for anionic phospholipids like cl implies enhanced mind binding to the poles and the growing septum .we explore the implications of this midcell and polar enhancement on min partitioning after septation .we study septation within the context of the model by huang _ , which is a deterministic , model without explicit mind polymerization .this model is significantly different from the stochastic , , polymerizing model of tostevin and howard .strikingly , we find that our partitioning errors are comparable to those seen by tostevin and howard despite the differences in the models . the model of huang _et al . _ still appears to be the best current model at recovering the min oscillation phenomenology , though mind polymerization appears to be called for experimentally and has been used in several quantitative models of min oscillation .our aim is to understand how asymmetric partitioning result from the dynamics of min oscillations and explore possible ways of achieving adequate partitioning of min between daughter cells .we analyze the origins of the partitioning error , and speculate about plausible partitioning mechanisms for min proteins during the septation of _the model developed by huang _ et al . _ includes many of the interactions observed experimentally : \rho_{d : atp } , \label{datp } \\\frac{\partial \rho_{e}}{\partial t } & = d_e\nabla^2\rho_{e } & + \delta_{mem}\sigma_{de}\rho_{de } - \delta_{mem}\sigma_{e}\rho_{d}\rho_e , \label{e } \\\frac{\partial \rho_{d}}{\partial t } & = & -\sigma_e\rho_d\rho_e(m ) + [ \sigma_{d } + \sigma_{dd}(\rho_d + \rho_{de})]\rho_{d}\rho_e \label{d}\\ \frac{\partial \rho_{de}}{\partial t } & = & -\sigma_{de}\rho_{de } + \sigma_{e}\rho_{d}\rho_{e}(m ) , \label{de}\end{aligned}\ ] ] where , and , are the cytoplasmic densities of mind : adp , mind : atp and mine respectively and , are the densities of membrane - bound mind and minde complex , respectively . 
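the displayed equations are garbled in this copy; as a structural illustration only, the sketch below integrates a one-dimensional caricature of the same five-species scheme (nucleotide exchange, bare and cooperative membrane binding of mind:atp, capture of mine by membrane-bound mind, and mine-stimulated hydrolysis and release) with an explicit euler step. the rate constants, grid and initial conditions are illustrative assumptions, the geometry is not the three-dimensional cylinder used in the paper, and the sketch is not expected to reproduce the oscillations quantitatively.

import numpy as np

def laplacian_1d(f, dx):
    # second derivative with no-flux (reflecting) boundaries
    g = np.empty_like(f)
    g[1:-1] = f[2:] - 2 * f[1:-1] + f[:-2]
    g[0] = f[1] - f[0]
    g[-1] = f[-2] - f[-1]
    return g / dx ** 2

def step(state, dx, dt, D_D=2.5, D_E=2.5,
         k_exch=1.0, s_D=0.03, s_dD=0.03, s_E=0.3, s_de=0.5):
    # one explicit-euler step: cytosolic minD:ADP (dADP), minD:ATP (dATP) and
    # minE (E) diffuse; membrane-bound minD (d) and minDE (de) do not.
    # all rate constants here are illustrative, not the fitted values
    dADP, dATP, E, d, de = state
    attach = (s_D + s_dD * (d + de)) * dATP     # bare + cooperative binding
    bindE = s_E * d * E                         # minE capture by membrane minD
    release = s_de * de                         # hydrolysis and release
    new = [
        dADP + dt * (D_D * laplacian_1d(dADP, dx) - k_exch * dADP + release),
        dATP + dt * (D_D * laplacian_1d(dATP, dx) + k_exch * dADP - attach),
        E + dt * (D_E * laplacian_1d(E, dx) - bindE + release),
        d + dt * (attach - bindE),
        de + dt * (bindE - release),
    ]
    return [np.clip(f, 0.0, None) for f in new]   # crude guard against negatives

# 4 micron cell on a coarse grid; an initial polar patch of membrane minD
L, nx = 4.0, 80
dx, dt = L / nx, 2e-4
x = np.linspace(0, L, nx)
state = [np.full(nx, 300.0), np.full(nx, 300.0), np.full(nx, 200.0),
         np.zeros(nx), np.zeros(nx)]
state[3][: nx // 4] = 50.0
for _ in range(20000):
    state = step(state, dx, dt)
print("membrane minD maximum is at x =", round(x[np.argmax(state[3])], 2), "um")

note that the reaction terms conserve total mind (dADP + dATP + d + de) and total mine (E + de) up to the clipping guard, mirroring the conservation built into the full model.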
the rates of binding of mind : atp to the bare membrane , the cooperative binding of mind : atp to membrane bound mind : atp , the binding of cytoplasmic mine to membrane bound mind : atp , and the hydrolysis rate of mind : atp from the membrane under activation by mine are given by , , , and , respectively . the bacterium was modeled as a cylinder of length and radius , with longitudinal interval and radial interval , and with poles represented by flat , circular end - caps . lateral growth is significantly reduced during septation , so we accordingly keep constant . the density of cytoplasmic mine at the membrane surface is , while limits reactions to the bacterial inner membrane . the last term denotes the growing septum at mid - cell , with ] ( for while for ) . this process of septal closure mimics the process of septal growth discussed by burdett and murray . since mind : atp has a greater affinity for anionic phospholipids such as cl and since cl domains are found to be localized around the cell poles and septal regions , we also considered the case in which the rate of attachment of mind : atp ( ) was enhanced at the polar and septal membranes ( by an amount ) compared to the attachment rate elsewhere on the curved surface of the cylindrical cell ( ) . fig . [ sep](a ) and 1(b ) show oscillations in the parent cell during the process of septation while fig . [ sep](c ) shows oscillations in both daughters after septation . a septation duration seconds was chosen to be consistent with the proportion of septating cells observed in culture . significantly faster septation ( s ) does not affect our results .
[ figure [ sep ] caption : a total of s is shown , while the bacterial length runs from left to right for each of mind and minde . ( a ) oscillations in the parent cell starting from s before and ending s after septation . membrane - bound mind and minde are shown in the first and second columns respectively , as indicated . the arrowhead marks the beginning of the septation process and the emerging white bar at midcell corresponds to the growing septum . ( b ) oscillations just before and after the end of septation . the arrowhead marks the end of the septation process and the formation of two independent daughter cells . oscillations continue in the left daughter cell through septation but are disrupted and then regenerated in the right daughter cell after septation is complete . ( c ) oscillations in both daughters after completion of septation . a significant asymmetry of min partitioning between the two daughter cells is apparent . the three subfigures are contiguous in time . ]
for parent cell densities close to the oscillation threshold ,a large fraction of daughter cells do not a oscillate .away from the threshold a smaller fraction do not oscillate .varying the duration of septation by moderate amounts does not change the partitioning , as illustrated by the nearly identical donuts for ( open stars ) and ( black circles ) . vs. of mind and mine , respectively , in oscillating as well as nonoscillating daughters for all parent cells .black indicates non - oscillating daughter cells , while grey filled circles indicate oscillating daughter cells in cases where both daughter oscillates .filled upper triangles correspond to fractions in the oscillating daughter for cases where only one daughter oscillates .the two daughter cells of a given parent are symmetrically placed around .( b ) a plot of the scaled relative fractions of mind vs. mine in the two daughter cells.,title="fig:",width=377 ] vs. of mind and mine , respectively , in oscillating as well as nonoscillating daughters for all parent cells .black indicates non - oscillating daughter cells , while grey filled circles indicate oscillating daughter cells in cases where both daughter oscillates . filled upper triangles correspond to fractions in the oscillating daughter for cases where only one daughter oscillates .the two daughter cells of a given parent are symmetrically placed around .( b ) a plot of the scaled relative fractions of mind vs. mine in the two daughter cells.,title="fig:",width=377 ] in fig .[ scal1](a ) we show all of the partitioning donuts on one plot , where and are the fraction of mind and mine in the two daughter cells , respectively .the absence of any daughter cells in the central region , near , shows that simultaneous equipartitioning of both mind and mine is never observed .while there is always a septation start - time , relative to the parent cell oscillation , that leads to perfect partitioning of mind _ or _ mine , there is no phase that leads to perfect partitioning of _ both _ mind and mine .this `` donut hole '' is a manifestation of the phase lag between mind and mine oscillations , i.e. the timing of maximal mind at midcell is ahead of the timing of maximal mine . to make this clear , in fig .[ scal1](b ) we have scaled all of the partitioning donuts by their rms radius , , and plotted the scaled densities vs. .relative to , there are no phases that approach symmetric partitioning of both mind and mine .we also plotted against the oscillation period of the parent cell in fig .[ scal2 ] to determine if the rms radius scales with the period of oscillation of the parent cell .we do not see perfect collapse but increases with period away from the stability boundary , indicating that the two partitioning donuts ( formed by the small black or grey filled circles ) shown in fig .[ noen ] are representative . vs. 
the oscillation period of the mother cell .the period increases as the mother cell min concentrations are moved away from the stability boundary shown in fig .while there is no precise scaling collapse , the trend is for less accurate partitioning as distance from the stability boundary ( and hence ) increases maintaining the non - oscillating daughters shown in fig .we do not find a significant dependence of on the septation duration .,width=604 ] to see whether a distinct phospholipid composition of the closing septum could affect the partitioning , we enhanced mind : atp binding ( ) at the cell poles and the growing mid - cell septum .the degree of enhancement was constrained by the practical requirement that it did not disrupt steady oscillations in the parent cell before .this restricted the polar enhancement to less than ten times the base value of .this is consistent with the affinity of mind : atp for anionic phospholipids like cardiolipin , which is nine times higher than its affinity for zwitterionic phospholipids .the enhancement of mind : atp binding at the poles and septum slightly increased the oscillation period in the parent cell by increasing the time for dissociation of membrane - bound mind : atp by mine . to analyze the effect of enhanced mind binding at the poles and growing septum on the number of oscillating daughters , we compared the results from 50 parameter sets with and without septal and polar enhancement . in this comparison , the concentrations of mind and mine were varied while all other parameters were kept fixed and or .the overall percentage of oscillating daughter cells increased by a small amount ( 2% ) when enhanced polar and septal mind : atp attachment rates were used .more specifically , for parent cell density close to the stability threshold ( large grey filled circle in fig . [ noen ] ) , the enhancement of mind : atp binding at the poles and septum led to a modest increase ( at most ) in the number of daughters which restart oscillations after septation .however , for parent cell densities far from the stability threshold ( large black filled circle in fig . [ noen ] ) no significant increase in the number of oscillating daughters was obtained with enhanced mind : atp binding at the poles and growing septum . in another attempt to increase the fraction of oscillating daughter cells after septation, we explored the parameter space of interactions in the huang _et al . _ model .since most of the parameters are experimentally under - determined , some flexibility is possible in the choice of parameters while insisting upon stable oscillations . in this context, the min concentration , diffusivities , reaction rates , and were all independently varied over plausible ranges for a fixed cell length .the parameter space was explored to move towards symmetric partitioning of mind and mine in non - oscillating daughter cells .each parameter was varied over a range spanning almost an order of magnitude relative to the benchmark values which were chosen to be the parameters specified in huang __ however , no improvement upon the best figure ( obtained with or without polar and septal enhancement of the mind binding rate ) was obtained .why do we never see of the daughter cells oscillating ? the pattern of end - to - end oscillation of mind continues largely unchanged throughout septation ( see , e.g. 
, fig .[ sep ] ) , even as the period lengthens somewhat , so that when the _ closure _ of the septum coincides with mind being localized predominately at one pole then the mind will be badly partitioned between the two daughter cells . in fig .[ phase ] we plot the longitudinal position of the radially integrated mind and mine peaks away from the cell poles at the end of septation when .[ phase](a ) shows parent cells that lead to two oscillating daughters , while fig .[ phase](b ) shows parent cells that lead to only one oscillating daughter .we see that two oscillating daughters typically result from septation events where both mind and mine have a substantial peak at the mid - cell .when two oscillating daughters result despite polar maxima of mind and mine , a substantial midcell accumulation of mind is also present .a non - oscillating daughter cell is typically produced when mind has a large peak near one pole . , , seconds , and .,width=604 ] fig .[ prfl ] illustrates the spatial profile of radially integrated mind and mine for three different phases at the end of septation .[ prfl](a ) corresponds to a phase where oscillation restarts in both daughters after septation .adequate partitioning is reflected in large peaks of radially integrated mind ( solid line ) and mine ( dashed line ) near the midpoint of the cell .[ prfl](b ) corresponds to a phase where inadequate partitioning is manifest in the large peaks of radially integrated mind ( solid line ) and mine ( dashed line ) near one pole of the cell .only one oscillating daughter results .[ prfl](c ) shows a peak in the radially integrated mind and mine near the midcell and pole respectively .the resulting inadequate partitioning of mine between the two daughters ensures that the ratio of mind : mine falls below the threshold required to regenerate oscillations in one of the daughters .this leads to a non - oscillating daughter and corresponds to the points with a large midcell mind peak in fig .[ phase](b ) . of radially integrated ( linear )densities of mind ( solid line ) and mine ( dashed line ) for three different phases of septation , at the time of septal closure .( a ) leads to two oscillating daughter cells and exhibits strong central mind and mine peaks , ( b ) leads to only one oscillating daughter cell and exhibits a strong polar peak of both mind and mine , and ( c ) leads to only one oscillating daughter cell and exhibits a strong polar peak of mine .the parameters used are the same as in the previous figure.,width=604 ]we have explored the impact of mind and mine concentration , interaction parameters , and end - cap and septal cardiolipin patches on the partitioning of min proteins between daughter cells after septation in the model of huang __ . while concentration close to the stability threshold for oscillations led to less than of daughter cells oscillating after septation , no combination of concentration , interaction parameters , and/or cardiolipin patches led to more than of daughter cells oscillating after septation .these results are comparable to those of tostevin and howard , despite significant differences in the min models that were used .they studied a stochastic one - dimensional model with explicit mind polymerization , while we used a deterministic three - dimensional model without filamentous mind structures .we do not expect that the inclusion of stochastic effects would significantly change our results , following .we found that plotting the mind vs. 
mine densities in the daughter cells leads to a donut structure around the parent cell densities , and that varying the phase of the septal closure with respect to the end - to - end min oscillation of the parent cell leads to daughter min densities varying around the donut .the `` missing hole '' of the donut , i.e. the absence of daughter cells with the same min densities as the parent cell , arises from the phase - difference between the leading mind cap - forming and lagging mine ring - forming oscillations .furthermore , we find that there is always a phase of septation timing that leads to non - oscillating daughters .we believe that this is a fundamental aspect of end - to - end min oscillation : when the mind cap is at one pole , the distal pole is stable .this should be a generic feature of all min oscillation models .the robustness of the best percentage of oscillating daughters under changes in concentration , parameter variation , heterogeneous perturbations , model variation , dimensionality , and stochastic effects support this conclusion .how might _ e. coli _ achieve its ( observed ) negligible level of minicelling ?we see four basic possibilities .as suggested by tostevin and howard , the non - oscillating daughters could be rescued by rapid regeneration of min concentration .this would require min synthesis to be regulated in a cell - cycle dependent manner .because the average concentration of the two daughter cells equals their parent cell , rapid synthesis leading to recovery in one daughter cell would lead to a spike in min concentration right after septation .however , there is no evidence of such fine - tuned regulatory control , or cell - cycle dependence , of min concentration .moreover , lack of adequate partitioning would give rise to substantial asymmetry of min proteins in the two daughter cells that should be apparent in experimental studies especially with the simple inducible promoters ( not actively regulated ) typically used in min - gfp fusion studies .in our simulations we found that the fraction of the parent mind and mine in daughter cells can be as low and respectively .the lack of any reports of such large visible asymmetries argues against rapid min regeneration .the partitioning problem can be avoided if the min oscillations `` double - up '' before septation , leading to two symmetric oscillations in the two halves of the parent cell .a closing septum would then maintain symmetric min distributions in the daughter cells .indeed , we were hoping to promote this effect with the introduction of cardiolipin patches at poles and septum without success .while there has been one experimental report of a doubling of oscillation for deeply constricted cells , this must be approached with caution due to the difficulty of distinguishing partial from full septation .we never found any evidence for doubling up of oscillations in our simulations . 
in all cases , we found that oscillations continue until just before the end of septation .indeed , the min oscillation wavelength of seen in filamentous cells would suggest that it is difficult to spontaneously generate oscillations while significant connection between the two ends of the parent cell remains .distortion and/or disruption of the min oscillation by the growing septum before septal closure might also lead to symmetric partitioning of min between the daughter cells .we do find that mind binding to the sides of the growing septum improves partitioning .this was evident by comparing the partitioning for a finite septation time ( seconds ) with instantaneous septation ( ) .in the latter case , no mind can accumulate on the septum before the daughter cells are separated .this resulted in highly skewed min distributions between the two daughter cells ( results not shown ) .however , significant partitioning errors still occur with gradual septal growth .moreover , no significant improvement in partitioning was observed when the mind binding was enhanced at the midcell .we also found that min oscillation was often temporarily disrupted in one daughter cell despite acceptable partitioning for oscillation in both daughters . the time required for recovery of steady oscillations was sometimes as large as 15 minutes .this is much larger than the dynamical time - scale of ftsz rings , though , as shown by tostevin and howard , stochastic effects may eliminate or significantly decrease the regeneration time of oscillations .disruption of the min oscillation in both daughter cells by the late stages of septation may therefore be a viable partitioning mechanism _ in vivo _ especially if the resulting uniform distribution of min is sufficient to block septation in the face of fast ftsz dynamics while the min oscillation is being regenerated .however , in our model we did not observe disruption in both daughter cells even with enhanced mind binding at the growing septum .finally , the cell may coordinate the septal closure with the min oscillation . as seen in fig .[ sep ] there are a number of phases where _ both _daughter cells oscillate after septation . as shown in fig .[ phase](a ) , and illustrated in fig .[ prfl](a ) , most of those phases correspond to midcell mind and mine peaks .triggered septal closure that occurs only at these phases would always recover min oscillation in both daughters .such triggered septal closure could result from the participation of the c - terminal domain of minc in ftsz ring _disassembly _ towards the end of of septation .since septation occurs in mutants , any such effect would have to accelerate septation rather than cause it .narrow constrictions have been observed in cryoelectron tomography studies of _ caulobacter crescentus _ , though too infrequently to indicate a significant septation pause . in _e. coli _ ,mutations of the n - terminal domain of ftsk lead to the stalling of septation at a very late stage with deep constrictions , leading to speculation about pores between the daughter cells before septal closure .the triggered septal closure discussed here would only require a pause ( or speed - up ) of at most one half period of the min oscillation that could be lifted ( or imposed ) by the minc at midcell .the challenge lies in understanding how min oscillations can persist or be regenerated in both daughter cells after septation , in the face of partitioning errors due to the end - to - end oscillation of the min proteins . 
without one or more of the additional mechanisms discussed above , we expect significant partitioning errors , leading to non - oscillating daughters , in all min models .experimental characterization of the min oscillations during and after septation , and quantitative assessment of min partitioning between the daughter cells will be invaluable in sorting out which of these four partitioning mechanisms , or what combination of these four mechanisms , plays a role in _e. coli_. we believe that the last mechanism , of triggered septal closure , is most likely the dominant mechanism _ in vivo_. reproducing fig .[ scal1 ] from experimental images of newly septated cells should be straightforward if both mind and mine have distinct fluorescent tags ( see , e.g. ) .the average of each fluorescent signal of the two daughter cells can be used to independently scale the corresponding mind or mine signal , without the need for calibration even in the face of photo - bleaching .non - regenerating mechanisms of partitioning , such as septal triggering , would lead to a `` double - bar '' pattern of mind vs mine densities in the daughter cells ( looking like ) rather than the connected donuts seen in fig .[ scal1 ] .we thank benjamin downing and manfred jericho for useful discussions .this work was supported by the canadian institute of health research ._ _ : one - dimensional + _ _ : three - dimensional + _ rms _ : root - mean - square + _ cl _ : cardiolipin10 url # 1#1urlprefix[2][]#2 burdett i d and murray r g 1974 electron microscope study of septum formation in _ escherichia coli _ strains b and b - r during synchronous growth ._ j. bacteriol . _ * 119 * 1039 - 1056 . hu z and lutkenhaus j 1999 topological regulation of cell division in _ escherichia coli _ involves rapid pole to pole oscillation of the division inhibitor minc under the control of mind and mine .microbiol . _ * 34 * 8290 hu z , mukherjee a , pichoff s and lutkenhaus j 1999 the minc component of the division site selection system in _ escherichia coli _ interacts with ftsz to prevent polymerization _ proc .sci . usa_. * 96 * 1481914824 meinhardt h and de boer p a j 2001 pattern formation in _ escherichia coli _ : a model for the pole - to - pole oscillations of min proteins and the localization of the division site .usa _ * 98 * 1420214207 drew d a , osborn m j and rothfield l i 2005 a polymerization - depolymerization model that accurately generates the self - sustained oscillatory system involved in bacterial division site placement . _ proc .102 * 61146118 addinall s g , cao c and lutkenhaus j 1997 temperature shift experiments with an ftsz84(ts ) strain reveals rapid dynamics of ftsz localization and indicate that the z ring is required throughout septation and can not reoccupy division sites once constriction has initiated . _j. bacteriol . _* 179 * 4277 - 4284 pichoff s , vollrath b , touriol c , and bouch j - p 1995 deletion analysis of gene _ mine _ which encodes the topological specificity factor of cell division in _ escherichia coli _micro . _ * 18 * 321329 zhao , c - r , de boer , p a j , and rothfield l i 1995 proper placement of the _ escherichia coli _ division site requires two functions that are associated with different domains of the mine protein _ proc .( usa ) _ *92 * 43134317 shih y l , fu x , king g f , let and rothfield l i 2002 division site placement in _e. 
coli _ : mutations that prevent formation of the mine ring lead to loss of the normal midcell arrest of growth of polar mind membrane domains _ embo j. _ * 21 * 33473357 mileykovskaya e and dowhan w 2000 visualization of phospholipid domains in _ escherichia coli _ by using the cardiolipin - specific fluorescent dye 10-n - nonyl acridine orange ._ j. bacteriol . _* 182 * 11721175 shih y l , let and rothfield l 2003 division site selection in _ escherichia coli _ involves dynamic redistribution of min proteins within coiled structures that extend between the two cell poles .usa _ * 100 * 78657870 hu z and lutkenhaus j 2001 topological regulation of cell division in _e. coli _ : spatiotemporal oscillation of mind requires stimulation of its atpase by mine and phospholipid .cell . _ * 7 * 1337 - 1343 woldringh cl , huls p , pas e , brakenhoff g j and nanninga n 1987 topography of peptidoglycan synthesis during elongation and polar cap formation in a cell division mutant of _ escherichia coli _ mc4100 .microbiol . _ * 133 * 575586 .de boer p a j , crossley r e , and rothfield l i 1989 a division inhibitor and a topological specificity factor coded for by the minicell locus determine proper placement of the division septum in __ cell _ * 56 * 641649 judd e m , comolli l r , chen j c , downing k h , mowerner w e , mcadams h h 2005 distinct constrictive processes , separated in time and space , divide caulobacter inner and outer membranes _ j. bacteriol . _ * 187 * 68746882
|
ongoing sub - cellular oscillation of min proteins is required to block minicelling in _ e. coli_. experimentally , min oscillations are seen in newly divided cells and no minicells are produced . in model min systems many daughter cells do not oscillate following septation because of unequal partitioning of min proteins between the daughter cells . using the 3d model of huang _ et al . _ , we investigate the septation process in detail to determine the cause of the asymmetric partitioning of min proteins between daughter cells . we find that this partitioning problem arises at certain phases of the mind and mine oscillations with respect to septal closure and it persists independently of parameter variation . at most of the daughter cells exhibit min oscillation following septation . enhanced mind binding at the static polar and dynamic septal regions , consistent with cardiolipin domains , does not substantially increase this fraction of oscillating daughters . we believe that this problem will be shared among all existing min models and discuss possible biological mechanisms that may minimize partitioning errors of min proteins following septation . _ keywords _ : septation , _ escherichia coli _ , mind , mine , protein partitioning , oscillation , spatiotemporal pattern , subcellular localization , reaction diffusion , modeling . + dated : +
|
zoonotic diseases are a major source of human morbidity world wide . in 2010 , there were an estimated 600 million cases globally , of which 96 million were _ campylobacter spp . _ ( resulting in 21 thousand deaths ) . attributing cases of food - borne disease to putative sources of infection is crucial to identify and prioritise food safety interventions .traditional approaches to source attribution include full risk assessments , analysis and extrapolation of surveillance or outbreak data , and analytical epidemiological studies . however , their results can be highly uncertain due to long and variable disease incubation times , and many and various exposures of an individual to potential sources of infection . given this difficulty , quantitative methods using pathogen strain type frequency have shown promise for statistically identifying important sources of food - borne illness .for a given disease , quantitative source attribution uses typing of pathogen isolates from human cases and suspected sources of infection ( food and environmental ) .samples are screened for the presence of the pathogen , with isolates then categorised using a typing methodology .multilocus sequence typing ( mlst ) is a commonly used genotyping method providing a relatively coarse characterisation of isolates of bacterial species .an mlst sequence type is defined as a unique combination of alleles at several gene loci , typically located in conserved regions of the genome .routine surveillance for food - borne pathogens is now commonplace in many countries and is performed by national authorities , for example foodnet in the us , the danish zoonosis centre ( food.dtu.dk ) , and the ministry for primary industries in new zealand ( foodsafety.govt.nz ) . despite this availability of data we are unaware of any previous implementations in standard statistical software for source attribution modelling , with past analyses being performed using a variety of _ ad hoc _ methodologies .moreover , current statistical source attribution models have strong assumptions , computational approximations or inherent identifiability problems ( discussed further in the ` review of models and notation ' section ) .this paper presents an ` r ` package ` sourcer ` , which implements a flexible bayesian non - parametric model , designed for use by epidemiologists and public health decision makers to attribute cases of zoonotic infection to putative sources of infection .we first describe a motivating example and review previous source attribution models before describing our model innovations , demonstrating the software , and discussing results and future directions . 
in 2006 ,new zealand had one of the highest incidences of campylobacteriosis in the developed world , with an annual incidence in excess of 400 cases per 100,000 people .the data set was first published in , with a detailed description of the data ( and data collection methods ) available in and .a campaign to change poultry processing procedures , supported in part by results from previous quantitative source attribution approaches , was successful in leading to a sharp decline in campylobacteriosis incidence after 2007 .the data consists of mlst - genotyped _campylobacter _ isolates ( from both human cases of campylobacteriosis and potential food and environmental sources ) collected between 2005 and 2008 in the manawatu region of new zealand .these data are included in our ` sourcer ` package ( named ` campy ` ) .we use this data set as a case study , and compare our results with previously published statistical approaches . in this section we define our notation , and briefly review the approaches that have been used previously to analyse ` campy ` . for a given time period ,we denote by the number of human cases of a disease caused by pathogen type . for the same time period ,we let denote the total number of source samples collected from source , for which are positive for pathogen type .the approach of hald _ et al . _ was to compare the number of human cases caused by different pathogen types with their prevalence in different food sources ( whilst accounting for type and source specific effects ) .this requires a heterogenous distribution of pathogen types among the food sources .the number of human cases for each type is modelled as a poisson random variable with mean given by a linear combination of source specific effects , type specific effects and source sample _contamination prevalences . where for source is the annual exposure , is the absolute prevalence of each pathogen type with the prevalence of positive samples and the relative prevalence of each pathogen type .the unknown parameters in the model are the vectors and .here , represents the characteristics that determine a type s capacity to cause an infection ( such as survivability during food processing , pathogenicity and virulence ) , and accounts for the ability of a particular source to act as a vehicle of infection .these parameters are interpreted further in .inference is performed in a bayesian framework allowing the model to explicitly include and quantify the uncertainty surrounding each of the parameters .equation [ eq : haldmodel ] over - specifies the model , with parameters ( the source and type effects ) but only independent observations ( the observed human case totals ) . in the original approach , identifiability was obtained by _ a priori _ clustering of the elements of and . 
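to make the structure of eq [ eq : haldmodel ] concrete , the following short r sketch ( our own illustration , not code from the ` sourcer ` package ; the object names and toy numbers are assumptions ) computes the poisson mean for each type as a sum over sources of exposure , source effect , type effect and relative prevalence .

....
# illustrative only: expected human cases per type under the hald model,
# lambda_i = q_i * sum_j ( m_j * a_j * p_ij ), for a toy problem with
# 3 types and 2 sources. all numbers below are made up.
hald_mean <- function(q, a, m, p) {
  # q: type effects (length n); a: source effects (length m)
  # m: source exposures (length m); p: n x m relative prevalence matrix,
  #    with each column (source) summing to 1
  q * as.vector(p %*% (m * a))
}
p <- matrix(c(0.5, 0.3, 0.2,   # source 1
              0.1, 0.1, 0.8),  # source 2
            nrow = 3)
lambda <- hald_mean(q = c(0.4, 1.0, 0.1), a = c(2, 1), m = c(10, 5), p = p)
# observed counts would then be modelled as y_i ~ poisson(lambda_i)
....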
in contrast , the modified hald model prefers to reduce the effective number of parameters by treating as a log normal distributed random effect . however , a strong prior is needed on to shrink towards 0 sufficiently to avoid overfitting the model , the choice of which is arbitrary . the modified hald model introduces uncertainty into the relative prevalence matrix by modelling the source sampling process . this model was fitted in winbugs using an approximate two stage process . first , a posterior distribution was estimated for the absolute prevalence of source types , using the model specified in eqs [ eq : mhrij ] and [ eq : mhpij ] : the marginal posterior for each element of was then approximated by a beta distribution using the method of moments to calculate and . these were used as independent priors for each which removes the constraint that they sum to over each type . thus , the absolute prevalence for source ( ) is no longer constrained to be a probability ( as it may be larger than 1 ) . the asymmetric island model takes a different approach to the models described above . here , the evolutionary processes ( mutation , migration and recombination ) of the sequence types are modelled to infer probabilistically the source of each human infection using genetic data from each subtype . the extra information in the genetic typing allows the model to attribute human cases from a type not observed in any sources to a likely source of infection by comparing the genetic similarity to other types that are observed in the sources . this is not possible with the hald or modified hald models , however , they are much simpler with fewer assumptions and a wider range of suitable data ( for example , phenotypic typing can be used ) . we include results from this model as a comparison in the ` results ' section . our approach addresses the problems inherent in both the hald and modified hald models . we introduce a fully joint model for both source and human case sampling allowing us to integrate over uncertainty in the source sampling process , estimating both the prevalence of contaminated source samples and the relative prevalence of each identified type ( without resorting to an approximate marginal probability distribution on ) . furthermore , we introduce non - parametric clustering of pathogen types using a dirichlet process ( dp ) model on the type effect vector , providing an automatic data - driven way of reducing the effective number of parameters to aid model identifiability . we are able , therefore , to circumvent the hald model requirement for heuristically grouping pathogen types , as well as avoiding an arbitrary prior distribution specification for the random effect precision parameter ( ) required by the modified hald model . often , human case data is associated with location such as urban / rural , or even gps coordinates . on the other hand , food samples are likely to be less spatially constrained due to distances between production and sale locations . also , both human and source data may exist for multiple time - periods . we therefore denote the number of human cases of type occurring in time - period at location by , the number of samples of source in time - period by , with the type counts . we allow for different exposures of humans to sources in different locations , by allowing the source effects to vary between times and locations , .
as with the hald model, we assume the number of human cases identified by isolation of subtype in time - period at location is poisson distributed for each source , we model the number of positive source samples where denotes the vector of type - counts in source in time - period , denotes the number of positive samples obtained , and denotes a vector of relative prevalences .the advantage of this model is that it automatically places the constraint , avoiding the approximation made in where independent beta - distributed priors were assigned marginally to components of .the source case model is then coupled to the human case model through the simple relationship where is the prevalence of any isolate in source in time - period . in principle , a beta distribution could be used to model , arising as the conjugate posterior distribution of a binomial sampling model for positive samples from tested , and a beta prior on .we instead choose to fix the source prevalences at their empirical estimates ( ) because the number of source samples is typically high .the type effects , which are assumed invariant across time or location , are drawn from a dp with base distribution and a concentration parameter the dp groups the elements of into a finite set of clusters ( unknown _ a priori _ ) with values meaning bacterial types are clustered into groups with similar epidemiological behaviour .heterogeneity in the source matrix is absolutely required to identify clusters from sources , which may not be guaranteed _ a priori _ due to the observational nature of the data collection .this section describes how the model is fitted in a bayesian context by first describing the mcmc algorithm used to fit this model , then developing the prior model . the joint model over all unobserved and observed quantitiesis fitted using markov chain monte carlo ( mcmc , full details in ) .the source effects and relative prevalence parameters are updated using independent adaptive metropolis - hastings updates .the type effects are modelled using a dp ( eq [ eq : qdp ] ) with a gamma base distribution . as the gamma distribution is conjugate with respect to the poisson likelihood ( eq [ eq : likelihood ] ) , it is possible to use a marginal gibbs sampler within a polya urn , or `` chinese restaurant process '' construction ( see ) .this was chosen over the more general `` stick breaking process '' because it allows sampling from the conditional posterior of .this is particularly important when the elements of values are highly dispersed : a base distribution with little mass near the locations of some of the true values for the groups results in poor mixing for the group allocations using the stick breaking algorithm ( as it is difficult for a type to change group when no other groups have a suitable value ) .in contrast , the marginal scheme allows an element of to move into a new cluster , then samples a value directly from the conditional posterior for that group , improving group mixing dramatically .the source and type parameters ( for all and , and respectively ) account for a multitude of source and type specific factors which are difficult to quantify _ a priori_. therefore , with no single real - world interpretation , the distributional form of the priors were chosen for their flexibility . a dirichlet prior is placed on each which suitably constrains its l1 norm , i.e. . 
a dirichlet prior is also placed on each , with the constrained l1 norm aiding identifiability between the mean of the source and type effect parameters . for more detail on specifying parameters for the dirichlet process and priors see the . standard mcmc packages ( e.g. winbugs , stan , pymc3 ) all lack the capability to implement marginal gibbs sampling for dirichlet processes , necessitating a custom mcmc framework ( see section ` extensibility ' ) . we chose r as a platform because of its ubiquity in epidemiology , and advanced support for post - processing of mcmc samples . minimal dependencies on other r packages are required , and are installed automatically . ` sourcer ` uses an object - oriented design , which allows separation of the model from the mcmc algorithm . internally , the model is represented as a directed acyclic graph ( dag , see ) in which nodes are represented by an r6 class hierarchy . generic adaptive metropolis hastings algorithms are attached to each parameter node , with the conditional independence properties of the dag allowing automatic computation of the required ( log ) conditional posterior densities . a difficulty with the dag setup is the representation of the dirichlet process model on the type effects , since each update of the marginal gibbs sampler requires structural alterations . therefore , we subsume the entire dirichlet process into a single node , with a bespoke marginal gibbs sampling algorithm written for our gamma base - distribution and poisson likelihood model . the case study below ( using the ` campy ` ( campylobacteriosis ) data set described in the ` motivation ' section ) illustrates how the ` sourcer ` package is used in practice to identify important sources of infection . we compare the results of our bayesian non - parametric approach with results from the modified hald and asymmetric island models , and additionally the historical ` dutch ' model ( see and ) . the priors for our model were selected to be non - informative . the prevalence is calculated by dividing the number of positive samples by the total number of samples for each source . in the data below , we note that for several samples the mlst typing failed , with the number of positive samples exceeding the apparent total number of mlst - typed isolates . however , assuming mlst typing fails independently of pathogen type , this does not bias our results . the work flow for fitting the model begins with removing types with no source cases and calculating the prevalences .

....
data(campy)
zero_rows <- which(apply(campy[, c(2:7)], 1, sum) == 0)
campy <- campy[-zero_rows, ]
total_samples = c(239, 196, 127, 595, 552, 524)
positive_samples = c(181, 113, 109, 97, 165, 86)
k <- data.frame(value = positive_samples / total_samples,
                source = colnames(campy[, 2:7]),
                time = rep("1", 6),
                location = rep("a", 6))
....

the data and model parameters are set using the model constructor

....
my_model <- HaldDP$new(data = campy, k = k, priors = priors, a_q = 0.1)
....

mcmc control parameters are passed via ` fit_params ` , and the model is then fitted with

....
my_model$update()
# my_model$extract()
....
the ` extract ` function returns the posterior for the selected parameters as a list with a multidimensional array for each of ` alpha ` , ` r ` , ` q ` , ` s ` , ` lambda_j ` and ` lambda_i ` . trace and autocorrelation plots for the parameters indicate that the markov chain is mixing well and has converged , and that thinning by 500 is adequate ( figure [ fig : trace_params_real ] ) . the residual plots for the ( figure [ fig : lambda_i_residuals_real ] ) show that the model fits well .

....
## plot the marginal posteriors for the following parameters
## source effect for chicken supplier c
plot(my_model$alpha, type = "l")
## type effect 25
plot(my_model$q, type = "l")
## relative prevalence for source effect ovine , type 354
plot(my_model$r, type = "l")
## number of cases attributed to chicken supplier b
plot(my_model$lambda_j, type = "l")
## number of cases attributed to sub type 42
plot(my_model$lambda_i, type = "l")
....

the ` summary ( ) ` function calculates medians and credible intervals using one of three possible methods ( percentile , spin , or chen - shao ) .

....
my_model$summary(alpha = 0.05, params = "lambda_i", time = "1", location = "a")
my_model$print_data()
my_model$plot_heatmap()
....

the violin plots of the marginal posterior distributions for each type effect ( figure [ fig : type_effect_violinplots_real ] ) show the largest group of types has very small type effects . these correspond to types observed in few source samples and no human cases . consequently , there is very little information for their type effects which results in very wide credible intervals . the other three groups have much larger type effects . the clustering results identify clusters of strains having particular traits that could be explored using further genotyping or phenotyping assays . figure [ fig : lambda_j_real ] shows the proportion of cases attributed to each source for the halddp model and three commonly used source attribution models . the median values are similar between all models except the dutch method . the dutch model confidence intervals are very narrow because there are far fewer parameters in the model ; however , the lack of source and type effects in the model biases the results . the credible intervals produced by the island model may be narrow due to more accuracy ( as additional genetic information is used ) . the wide credible intervals for the halddp and modified hald models may be due to c. jejuni 's complex epidemiology resulting in relatively large uncertainty for the disease origin , and posterior correlations between some parameters .
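such posterior correlations can be examined directly from the mcmc draws . the sketch below is our own illustration rather than documented ` sourcer ` usage : it assumes that , for a single time and location , the ` lambda_j ` element returned by ` extract ` is an iterations - by - sources matrix of attributed counts .

....
# assumed layout: lambda_j is iterations x sources for one time and location.
post <- my_model$extract()                      # list of posterior arrays
lambda_j <- post$lambda_j
prop <- lambda_j / rowSums(lambda_j)            # attributed proportions per draw
round(cor(prop), 2)                             # pairwise pearson correlations
apply(prop, 2, quantile, probs = c(0.025, 0.5, 0.975))  # medians and 95% intervals
....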
in particular , the new model shows that the proportion of cases attributed to poultry supplier a is negatively correlated with the proportion of cases attributed to both ovine and poultry supplier b sources ( pearson correlation coefficients of -0.60 and -0.65 respectively , see fig [ fig : cor_plots ] in ) .the halddp model gives a more accurate representation of the uncertainty inherent in source attribution .some of this non - identifiability is not fully explored in the modified hald model as fitting the model in two stages does not allow full propagation of the uncertainty .in particular , when calculating the hyper - parameters for the beta priors for each from the first stage model , the authors imposed a minimum of 1 .this prevents bath tub shaped beta priors for any which makes the model easier to fit at the expense of discouraging full exploration of the marginal posteriors for that truly have a bath - tub shape .the stable release version of ` sourcer ` is available from the comprehensive r archive network , released under a gpl-3 licence .the development version is available at http://fhm-chicas-code.lancs.ac.uk/millerp/sourcer .as this package develops , we intend ` sourcer ` to become a platform for new source attribution model development , providing a central analytic resource for public health professionals .the establishment of a standard package with a familiar interface will therefore lead to improved repeatability and reusability of source attribution analyses , supporting national public health and hygiene policy decisions . with increased interest in source attribution models for both food - borne pathogens , and `sourcer ` has been written with extensibility in mind , with the dag representation allowing for rapid construction of modified or new models .the package routines are written in r ( as opposed to c or c++ ) to aid readability , with the node class hierarchy and three stage workflow designed to aid the addition of new model classes .all internal classes and methods are documented to enable prospective developers to familiarise themselves with the source code quickly .we note that the dag framework is not limited solely to source attribution models and may used for other bayesian applications , particularly those for which a dirichlet process is required. the main focus of extending ` sourcer ` will be on modelling spatiotemporal correlation in the time- and location- dependent parameters . 
with the trend in collecting precise geolocation data with human cases , and improved traceability of food , a spatiotemporal correlation model on these parameters could be used to identify particular foci of source contamination , therefore enabling targeted investigation of particular food supply regions . implementation of time varying type effects may be appropriate , particularly in the face of evidence that _ campylobacter _ can evolve quickly , with genetic variation conferring virulence not apparent from coarse - scale mlst typing . interaction terms between some sources and types would allow for the biologically plausible possibility that certain types are more or less likely to survive and cause disease , dependent on the food source they appear in . this would occur if a specific type was particularly well adapted to a certain food source . however , including interaction terms would significantly increase the number of parameters and reduce identifiability of the model . testing has revealed that the current metropolis - hastings based fitting algorithm suffers a loss of efficiency if the source matrix is sparse or highly unbalanced , imbuing negative correlations between certain type / source effect combinations . gradient - based fitting algorithms such as hamiltonian monte carlo ( hmc ) are designed to converge to high - dimensional , non - orthogonal target distributions much more quickly , and are a target of future development . in particular , the no u - turn sampler ( nuts ) presents an attractive method for tuning hmc adaptively , a quality which we consider necessary to minimise user intervention and maximise research productivity . we have presented a novel source attribution model which builds upon , and unites , the hald and modified hald approaches . it is widely applicable , fully joint , and does not require approximations or a large number of assumptions . mixing times and _ a posteriori _ correlations are significantly decreased in comparison to the modified hald model . furthermore , it allows the data to inform type effect clustering using a bayesian non - parametric model which identifies groups of bacterial sub types with similar putative virulence , pathogenicity and survivability . this is a significant improvement over the previous attempts to improve model identifiability ( fixing some source and type effects , or modelling the type effects as random using a 2 stage model ) .
like the modified hald model , the new model incorporates uncertainty in the prevalence matrix into the model , however , it does this by fitting a fully joint model rather than a 2 step model . this has the advantage of allowing the human cases to influence the uncertainty in the source cases and preserves the restriction on the sum of the prevalences for each source . the ` sourcer ` package implements this model to enable straightforward attribution of cases of zoonotic infection to putative sources of infection by epidemiologists and public health decision makers . [ [ s1_dutch ] ] s1 appendix . + + + + + + + + + + + + * dutch model overview * the dutch method is one of the simplest models for source attribution . it compares the number of reported human cases caused by a particular bacterial subtype with the relative occurrence of that subtype in each source . the number of reported cases per subtype and reservoir is estimated by : where is the relative occurrence of bacterial subtype in source , is the estimated number of human cases of type per year , is the expected number of cases per year of type from source . a summation across types gives the total number of cases attributed to source , denoted by : as the dutch model has no inherent statistical noise model , confidence intervals for the estimated total attributed cases are obtained by bootstrap sampling over the data set . this model implicitly assumes that there are no source or type specific effects ( such as differing virulence of types , or differing consumption of food sources ) which is not plausible for most zoonoses . [ [ s1_table ] ] s1 table . + + + + + + + + + * summary of model parameters . * the following table gives a list of the model parameters for easy reference . [ table : haldmodelparams ] description and definition of the model parameters . [ [ s2_dp ] ] s2 appendix . + + + + + + + + + + + + * dirichlet priors and process details * the dirichlet process is a random probability measure defined by a base distribution and a concentration parameter . the base distribution constitutes a prior distribution on the values of each element of the type effects whilst the concentration parameter encodes prior information on the number of groups to which the pathogen types are assigned . for small values of , samples from the dp are likely to have a small number of atomic measures with large weights . for large values , most samples are likely to be distinct , and hence , concentrated on . a value of 1 implies that , _ a priori _ , two randomly selected types have probability 0.5 of belonging to the same cluster . [ [ specifying - the - dirichlet - process - base - distribution - and - concentration - parameters ] ] specifying the dirichlet process base distribution and concentration parameters : + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the concentration parameter of the dp is specified by the analyst as a modelling decision . the concentration parameter specifies how strong the prior grouping is .
in the limit , all types will be assigned to one group , increasing makes a larger number of groups increasingly likely . the gamma base distribution induces a prior for the cluster locations . this prior should not be too diffuse because if these locations are too spread out , the penalty in the marginal likelihood for allocating individuals to different clusters will be large , hence the tendency will be to overly favour allocation to a single cluster . however , the prior parameters may have a stronger effect than anticipated due to the small size of the relative prevalence and source effect parameters . this can be seen by considering the marginal posterior for : the term is very small ( due to the dirichlet priors on and ) , which can result in even a fairly small rate parameter ( ) dominating . [ [ specifying - dirichlet - priors ] ] specifying dirichlet priors : + + + + + + + + + + + + + + + + + + + + + + + + + + + + the simplest dirichlet priors for the source effects and relative prevalences are symmetric ( meaning all of the elements making up the parameter vector have the same value , called the concentration parameter ) . symmetric dirichlet distributions are used as priors when there is no prior knowledge favouring one component over another . when is equal to one , the symmetric dirichlet distribution is uniform over all points in its support . values of the concentration parameter above one prefer variates that are dense and evenly distributed , whilst values of the concentration parameter below 1 prefer sparse distributions . note , a prior of 1 for the relative prevalences is too strong ( if a relatively non - informative prior is preferred ) when there are many observed zeros in the source data ; a prior value of 0.1 is more suitable ( a short simulation illustrating this behaviour is sketched below , after s4 appendix ) . a more informative prior can be specified by using a non - symmetric dirichlet distribution . the magnitude of the vector of parameters corresponds to the strength of the prior . the relative values of the vector correspond to prior information on the comparative sizes of the parameters . [ [ s3_dag ] ] s3 appendix . + + + + + + + + + + + + * directed acyclic graph of the model * see s1 table for a concise description of the parameters . [ [ s4_type_source_effects ] ] s4 appendix . + + + + + + + + + + + + * further details about the interpretation of the source and type effects . * the interpretation of source and type effects depends on the quality and type of data collected , the model specification , and the characteristics of the organism of interest . source effects account for factors such as the amount of the food source consumed , the physical properties of the source and the environment provided for the bacteria through storage and preparation . including an environmental source in the model can be thought of as grouping the ( individually ) unmeasured wildlife sources into one . it may also be a transmission pathway for pathogens present in livestock sources ( for example , through the contamination of waterways ) which complicates the interpretation meaning the source effects no longer directly summarise the ability of the source to act as a vehicle for food - borne infections . future work could involve attributing the water / environmental samples to the other sources of infection ( such as contamination from bovine , ovine , poultry , or other animal sources ) . therefore , it would be possible to estimate the proportion of cases attributed to a sample directly , and via the environment .
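the behaviour of the symmetric dirichlet concentration parameter described in s2 appendix can be illustrated with a few lines of base r . this is a standalone sketch , not part of the ` sourcer ` package : a symmetric dirichlet draw is generated by normalising independent gamma variates .

....
# draw one sample from a symmetric dirichlet of dimension k with the given
# concentration parameter, by normalising independent gamma variates.
rdirichlet_sym <- function(k, conc) {
  g <- rgamma(k, shape = conc, rate = 1)
  g / sum(g)
}
set.seed(1)
round(rdirichlet_sym(10, conc = 0.1), 3)  # sparse: mass on a few components
round(rdirichlet_sym(10, conc = 10), 3)   # dense: mass spread evenly
....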
[[ s5_running_package ] ] s5 appendix .+ + + + + + + + + + + + * helpful details regarding use of ` sourcer ` * the ` sourcer ` package currently allows the relative prevalence matrix to be fixed at the maximum likelihood estimates , which includes zero values where a particular type was not detected in any samples from a source . fixing the relative prevalence matrix increases the posterior precision ( and significantly reduces run time ) , but the results may be biased if the source data is not of high quality .reducing the number of elements in the relative prevalence matrix that get updated at each iteration can significantly reduce computation time , however , the chains will converge more slowly .care must be taken in performing marginal interpretations of the number of type parameters .it is much easier to split a group into two ( with similar group means ) than it is to merge two groups with clearly different means .hence , a histogram of the number of groups per iteration is positively skewed compared to the true number of groups . when fitting the model with simulated data , visually assessing the dendrogram and heatmap to determine the number of groups usually provides a closer value to the true number of groups than looking at a histogram , particularly when the group means are well separated .[ [ s6_mcmc_alg ] ] s6 appendix. + + + + + + + + + + + + * full mcmc algorithm . *this section gives the full details of the algorithm used to fit our fully joint non - parametric source attribution model .the outline mcmc is shown in algorithm [ alg : mcmc ] .the dirichlet distributed source effects across times and locations ( step 1 ) , and the relative prevalences across sources and times ( step 2 ) are updated using a constrained adaptive multisite logarithmic metropolis - hastings update step for 95% of proposals , and a constrained adaptive multisite metropolis - hastings update step for the remainder to prevent the chain getting stuck at very low values .the adaptive algorithm updates the tuning value every 50 updates of the parameter .this is further explained in algorithm [ alg : constrainedmrw ] .initialize all parameters let for the dirichlet process prior on , a marginal gibbs sampler is constructed , as described in algorithm [ alg : dpmarginalgibbs ] .let denote a set of cluster identifiers , with the -dimensional group assignment vector associating elements of with clusters , such that assigns to cluster .furthermore , each cluster assumes a value such that . in step 1 of algorithm[ alg : dpmarginalgibbs ] , conjugacy between the gamma - distributed base distribution and the poisson data likelihood permits the calculation of multinomial conditional posteriors for elements of arising from the chinese restaurant process construction . here, the conditional posterior probability of type being assigned to group is as shown in algorithm [ alg : dpmarginalgibbs ] , with conjugacy permitting marginalisation with respect to the base distribution in order to calculate the probability of being assigned to a new group with and if a type is assigned to a new group , the set is augmented and a corresponding cluster value is drawn from the posterior of .conversely , is shrunk if a particular group becomes empty . 
in step 2 , the group values are drawn from the posterior , conditional on . the algorithm therefore alternates between updating group assignments and group values . hence , it explores the number of groups present , the type effects assigned to each group , and the values of each group . [ [ s7_cor_plot ] ] s7 appendix . + + + + + + + + + + + + * posterior correlations and non - identifiability of source attribution . * [ [ s8_sim_study ] ] s8 appendix . + + + + + + + + + + + + * worked example showing features of package using simulated data . * in this section , we provide a worked example using simulated data with multiple times and locations for source attribution data generated from the model in section [ sec : model ] ( available in the ` sourcer ` data sets ) . there are two times ( 1 , 2 ) and two locations ( a , b ) over which the human cases vary . the data must be in long format , with columns giving the number of human cases for each type , a column for each of the sources giving the number of positive samples for each type , and columns giving the time , location and type ids for each observation . note , the source data is the same for all locations within a time . the algorithm is run for a total of 500,000 iterations ( with a burn in of 10000 iterations and thinning 500 ) . the acceptance rates for all parameters ( except those updated using a gibbs sampler ) can be accessed from the fitted model object . the model is constructed with

....
my_model <- HaldDP$new(data = sim_sa_data, k = sim_sa_prev, priors = priors, a_q = 0.1)
....

fitting parameters for the mcmc are passed via ` fit_params ` , and the model is fitted with

....
my_model$update()
....

trace and autocorrelation plots for the parameters ( figure [ fig : trace_acf_sim_data_plots ] ) indicate that the markov chain is mixing well and has converged , and that thinning by 500 is adequate . the following r code demonstrates how to access and plot the marginal posteriors for some parameters .

....
## plot the marginal posterior for source effect 2 , time 1 , location a
plot(my_model$alpha, type = "l")
## plot the marginal posterior for the type effect 21
plot(my_model$q, type = "l")
## plot the marginal posterior for the relative prevalence of
## source effect 5 , type 17 , at time 2
plot(my_model$r, type = "l")
## plot the marginal posterior for lambda_j source 1 , time 1 , location a
plot(my_model$lambda_j, type = "l")
## plot the marginal posterior for lambda_i 10 , time 2 , location b
plot(my_model$lambda_i, type = "l")
....
medians and credible intervals can be obtained for each parameter using ` res$summary ( ) ` .the marginal density plots of the number of cases attributed to each source at each time and location ( ) show that the true values ( shown by a red horizontal line on the graph ) are being estimated well ( figure [ fig : sim_pois_lambdaj_plots ] ) .the violin plots of the number of cases attributed to each type ( residual plot ) for ( figure [ fig : lambda_i_residuals ] ) shows that the model is fitting well .the heatmap shows the grouping of the type effects ( figure [ fig : type_effect_heatmap ] ) computed using a dissimilarity matrix from the clustering output of the mcmc .the coloured bar under the dendrogram gives the correct grouping from the simulated data .this shows that the majority of types have been classified correctly .the research for this paper was financially supported by the ministry for primary industries , the institute of fundamental sciences ( massey university ) , the mepilab ( massey university ) , and chicas ( lancaster university ) .we acknowledge the following individuals and groups : mepilab ( massey university ) , midcentral public health services and petra mullner ( for the manawatu data set ) and geoff jones ( for his helpful input on automatic clustering methods ) .havelaar ah , kirk md , torgerson pr , gibb hj , hald t , lake rj , et al .world health organization global estimates and regional comparisons of the burden of foodborne disease in 2010 .doi:10.1371/journal.pmed.1001923 . . who estimates of the global burden of foodborne diseases : foodborne disease burden epidemiology reference group 2007 - 2015 ; 2015available on the who web site ( www.who.int ) or can be purchased from who press , world health organization , 20 avenue appia , 1211 geneva 27 , switzerland .available from : http://apps.who.int/iris/bitstream/10665/199350/1/9789241565165_eng.pdf?ua=1 .allos bm , moore mr , griffin pm , tauxe rv .surveillance for sporadic foodborne disease in the 21st century : the foodnet perspective . clinical infectious diseases .2004;38(supplement 3):s115s120 .doi:10.1086/381577 .baker m , wilson r , ikram r , chambers s , shoemack s , cook g. regulation of chicken contamination urgently needed to control new zealand s serious campylobacteriosis epidemic . the new zealand medical journal .2006;. mullner p , collins - emerson j , midwinter a , carter p , spencer s , van der logt p , et al .molecular epidemiology of campylobacter jejuni in a geographically isolated country with a uniquely structured poultry industry . applied and environmental microbiology .2010;76(7):21452154 .van pelt w , van de giessen a , van leeuwen w , wannet w , henken a , evers e. oorsprong , omvang en kosten van humane salmonellose .oorsprong van humane salmonellose met betrekking tot varken , rund , kip , ei en overige bronnen .infectieziekten bull .1999;. wilson dj , gabriel e , leatherbarrow ajh , cheesbrough j , gee s , bolton e , et al . rapid evolution and the importance of recombination to the gastroenteric pathogen campylobacter jejuni . molecular biology and evolution .
|
zoonotic diseases are a major cause of morbidity , and productivity losses in both humans and animal populations . identifying the source of food - borne zoonoses ( e.g. an animal reservoir or food product ) is crucial for the identification and prioritisation of food safety interventions . for many zoonotic diseases it is difficult to attribute human cases to sources of infection because there is little epidemiological information on the cases . however , microbial strain typing allows zoonotic pathogens to be categorised , and the relative frequencies of the strain types among the sources and in human cases allows inference on the likely source of each infection . we introduce ` sourcer ` , an ` r ` package for quantitative source attribution , aimed at food - borne diseases . it implements a fully joint bayesian model using strain - typed surveillance data from both human cases and source samples , capable of identifying important sources of infection . the model measures the force of infection from each source , allowing for varying survivability , pathogenicity and virulence of pathogen strains , and varying abilities of the sources to act as vehicles of infection . a bayesian non - parametric ( dirichlet process ) approach is used to cluster pathogen strain types by epidemiological behaviour , avoiding model overfitting and allowing detection of strain types associated with potentially high virulence. ` sourcer ` is demonstrated using _ campylobacter jejuni _ isolate data collected in new zealand between 2005 and 2008 . chicken from a particular poultry supplier was identified as the major source of campylobacteriosis which is qualitatively similar to results of previous studies using the same dataset . additionally , the software identifies a cluster of 9 mlsts with abnormally high virulence in humans . ` sourcer ` enables straightforward attribution of cases of zoonotic infection to putative sources of infection by epidemiologists and public health decision makers . as ` sourcer ` develops , we intend it to become an important and flexible resource for food - borne disease attribution studies .
|
a central problem in quantum information theory is to characterize entanglement in quantum states shared by two or more parties . a bipartite density matrix , or _ state _ ,is a positive semidefinite matrix on the tensor product of finite dimensional complex vector spaces that is _ normalized _ , meaning .such a state is _ separable _ if it can be written as , for local states and and probabilities . any separable state can be created by local quantum operations and classical communication ( locc ) by alice and bob and thus only contains classical correlations .quantum states that are not separable are called _ entangled_. as the normalized hermitian matrices on form a real vector space of dimension ( we abbreviate ) , the set of all states can be viewed as a compact , convex subset of containing the convex subset of separable states . a fundamental question is to decide , given a description of ( say , as a rational vector in ) whether or not it is separable , i.e. whether or not it is contained in .this can be formalized as a decision problem via the weak membership problem .given a norm on and a closed subset , let be the distance from to . ( weak membership problem for separability ) : given a density matrix with the promise that either ( i ) or ( ii ) , decide which is the case .this problem has been intensely studied in recent years ( see e.g. ) with the norm given either by the euclidean norm or by the trace norm . the best - known algorithms for ( with the norm equal either to euclidean or trace norm ) have worst - case complexity . on the hardness side, gurvits proved that is -hard for , with ; the dependence on was later improved to .the same results apply to the trace norm , since for every matrix , .a second problem closely related to the weak - membership problem for separability is the following : bss( ) ( best separable state ) : given a hermitian matrix on , estimate with additive error .the bss( ) problem thus consists of optimizing a linear function over the convex set of separable states .it is a standard fact in convex optimization that linear optimization and weak - membership over a convex set are equivalent tasks , which implies that bss( ) can be used to solve and vice - versa , up to a loss in the error parameters and ( see for a detailed analysis ) .the best known algorithm for bss( ) has worst - case complexity .is the _ operator norm _ of , given by the maximum eigenvalue of . ] with , one considers -nets ( * ? ? ?* lemma ii.4 ) and for the and systems of sizes and , respectively , and minimize over . ]the -hardness of the weak - membership problem for separability implies that bss( ) is -hard for .conditioned on the stronger assumption that there is no subexponential - time algorithm for 3- , harrow and montanaro , building on work by aaronson et al . , recently ruled out even quasipolynomial - time algorithms for bss( ) of complexity up to for _ constant _ and any .more specifically , they showed one could solve 3- with clauses by solving bss( ) , with constant , for a matrix on with .indeed , this shows that an algorithm for bss( ) with time complexity would imply an -time algorithm for 3- .the best separable state problem has a number of other applications ( see e.g. ) , including the estimation of the ground - state energy of mean - field quantum hamiltonians and estimating the minimal min - entropy of quantum channels . in entanglement theory, it has been studied under the name of optimization of entanglement witnesses ( see e.g. 
) .is a hermitian operator which has positive trace on all separable states and a negative trace on a particular entangled state , thus witnessing the fact that the state is entangled . ]it turns out that the problem bss( ) is also intimately connected to quantum merlin - arthur games with multiple merlins .the class is a quantum analog of and is formed by all languages that can be decided in quantum polynomial - time by a verifier who is given a quantum system of polynomially many qubits as a proof ( see e.g. ) .the class , in turn , is a variant of in which two proofs , not entangled with one another , are given to the verifier .the properties of and its relation to have recently been in the center of interest in quantum complexity theory . as shown in ,the optimal acceptance probability of a protocol can be expressed as a bss( ) instance .thus a better understanding of the latter would also shed light on the properties of .* a quasipolynomial - time algorithm for separability : * our main result is a quasipolynomial - time algorithm for , for two different choices of the norm : [ main ] and can be solved in time .the norm can be seen as a restricted version of the trace norm .the latter can be written as where is the identity matrix , and is of special importance in quantum information theory as it is directly related to the optimal probability for distinguishing two equiprobable states and with a quantum measurement . , which we write .when the state is , the probabilities of the outcomes are and .the optimal bias of distinguishing two states and is then given by .] in analogy with this interpretation of the trace norm , we define the locc norm as where is the convex set of matrices such that there is a two - outcome measurement that can be realized by locc .is convex , closed , symmetric about the origin and has nonempty interior .therefore it is the unit ball for a norm whose corresponding dual norm is equal to . ]the optimal bias in distinguishing and by any locc protocol is then .we note that in many applications of the separability problem , e.g. 
assessing the usefulness of a quantum state for violating bell s inequalities or for performing quantum teleportation , the locc norm is actually the more relevant quantity to consider .the euclidean , or frobenius norm is the negative exponential of the quantum collision entropy , and is often of interest in quantum information theory because its quadratic nature makes it especially easy to work with .the algorithm for testing separability , which we present and analyze in more detail in section [ proofs ] , is very simple and searches for symmetric extensions of the state using semidefinite programming .the search for symmetric extensions using semidefinite programming as a test of separability has first been proposed by doherty , parillo and spedalieri .* a quasipolynomial - time algorithm for best separable state : * the same method used to prove theorem [ main ] also results in the following new algorithm for bss( ) : [ main2 ] there is an algorithm solving bss( ) for the hermitian operator in time furthermore , there is an -time algorithm solving bss( ) for any such that is an locc measurement .it is intriguing that the complexity of our algorithm for locc operators matches the hardness result of harrow and montanaro for general operators , which shows that a subexponential - time algorithm of complexity up to for constant and any would imply a algorithm for with clauses .can be taken to be a non - normalized separable state .this , however , does not imply that it can be implemented by locc . ]it is an open question if a similar hardness result could be obtained for locc measurements , which would imply that our algorithm is optimal , assuming requires exponential time .an application of theorem [ main2 ] concerns the estimation of the ground state energy of mean - field hamiltonians .a mean - field hamiltonian consists of a hermitian operator acting on sites ( each formed by a -dimensional quantum system ) defined as , with given by the hermitian matrix which acts as on sites and ( for a fixed two - sites interaction ) and as the identity on the remaining sites .mean - field hamiltonians are often used in condensed - matter physics as a substitute for a given local hamiltonian , since they are easier to analyze and in many cases provide a good approximation to the true model .an important property of quantum many - body hamiltonians is their ground - state energy , i.e. their minimal eigenvalue .a folklore result in condensed - matter physics , formalized e.g. in , is that the computation of the ground - state energy of a mean - field hamiltonian is equivalent to the minimization of over separable states with .theorem [ main2 ] then readily implies a -time algorithm for the problem . before , the best - known algorithm over an -net in the set of product states . ]scaled as * monogamy of entanglement and locc norm : * we say that a bipartite state is -extendible if there is a state that is permutation - symmetric in the systems with .the sets of -extendible states provide a sequence of approximations to the set of separable states . in the limit of large ,the approximation becomes exact because a state is separable if , and only if , it is -extendible for every ( see e.g. ) .this result is a manifestation of a property of quantum correlations known as _ monogamy of entanglement _ : a quantum system can not be equally entangled with an arbitrary number of other systems , i.e. entanglement is a non - shareable property of quantum states . 
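for concreteness , the search for symmetric extensions mentioned above can be phrased as a feasibility semidefinite program . the following is a sketch of the standard ( ppt - free ) dps relaxation written in generic notation , not a formula quoted from this paper :

\begin{align*}
\text{find}\quad & \tilde{\rho}_{A B_1 \cdots B_k} \succeq 0 , \qquad \operatorname{tr}\,\tilde{\rho}_{A B_1 \cdots B_k} = 1 , \\
\text{such that}\quad & \operatorname{tr}_{B_2 \cdots B_k}\,\tilde{\rho}_{A B_1 \cdots B_k} = \rho_{AB} , \\
& (\mathbb{1}_A \otimes P_\pi)\,\tilde{\rho}_{A B_1 \cdots B_k}\,(\mathbb{1}_A \otimes P_\pi)^{\dagger} = \tilde{\rho}_{A B_1 \cdots B_k} \quad \text{for every permutation } \pi \text{ of } B_1 , \ldots , B_k ,
\end{align*}

where $P_\pi$ is the unitary that permutes the $B$ systems . if a feasible extension exists the state is $k$-extendible ; if the sdp is infeasible , the state is certified entangled . roughly speaking , the sdp variable is a matrix of dimension $|A|\,|B|^k$ , so taking $k$ of order $\log|A|/\epsilon^2$ ( as suggested by theorem [ monogamy ] ) gives a run time consistent with the quasipolynomial bound of theorem [ main ] .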
in a quantitative manner ,quantum versions of the de finetti theorem imply that for any -extendible state : .-partite quantum state invariant under exchange of the systems , there is a measure on quantum states on system such that .] moreover , this bound is close to tight , as there are -extendible states that are -away from the set of separable states .unfortunately , for many applications this error estimate exponentially large in the number of qubits of the state is too big to be useful .the key result behind theorems [ main ] and [ main2 ] is the following de finetti - type result , which shows that a significant improvement is possible if we are willing to relax our notion of distance of two quantum states : [ monogamy ] let be -extendible . then in it was shown that , so we also have a similar bound for the euclidean norm , namely a direct implication of theorem [ monogamy ] concerns data - hiding states .every state that can be well - distinguished from separable states by a global measurement , yet is almost completely indistinguishable from a separable state by locc measurements is a so - called data - hiding state : it can be used to hide a bit of information ( whether the prepared state is or the closest separable state to in locc norm ) that is not accessible by locc operations alone .the bipartite antisymmetric state of sufficiently high dimension is an example of a data hiding state , as are random mixed states with high probability ( given an appropriate choice of the dimensions and the rank of the state ) .theorem [ monogamy ] shows that highly extendible states that are far away in trace norm from the set of separable states must necessarily be data - hiding . * quantum merlin - arthur games with multiple merlins : * a final application of theorem [ monogamy ] concerns the complexity class quantum merlin - arthur ( ) , the quantum analogue of ( or more precisely of ) .it is natural to ask how robust the definition of is and a few results are known in this direction : for example , it is possible to amplify the soundness and completeness parameters to exponential accuracy , even without enlarging the proof size .also , the class does not change if we allow a first round of logarithmic - sized quantum communication from the verifier to the prover . from theorem [ main2 ]we get a new characterization of , which at first sight might appear to be strictly more powerful : we show to be equal to the class of languages that can be decided in polynomial time by a verifier who is given unentangled proofs and can measure them using any quantum polynomial - time implementable locc protocol among the proofs .this answers an open question of aaronson _ et al .we hope this characterization of proves useful in devising new verifying systems . in order to formalize our result , let be a class of two - outcome measurements and consider the classes , defined in analogy to as follows : [ defqma2 ] a language is in if there is a uniform family of polynomial - sized quantum circuits that , for every input , can implement a two outcome measurement from the class such that * _ completeness : _if , there exist witnesses , each of qubits , such that * _ soundness : _ if , then for any states we call . 
by a _ uniform family _ , we mean that there should be a classical algorithm which , upon given the input length and the string , outputs a description of the quantum circuit implementing the measurement in time .let be the class of two outcome povms such that , the povm element corresponding to _accept _ , is a ( non - normalized ) separable operator .harrow and montanaro showed that for any , i.e. two proofs are just as powerful as proofs and one can restrict the verifier s action to without changing the expressive power of the class .we define in an analogous way , but now the verifier can only measure the proofs with a locc measurement .then we have , [ qma2 ] for , in particular , a preliminary step in the direction of theorem [ qma2 ] appeared in , where a similar result was shown for , a variant of in which the verifier is restricted to implement only local measurements on the proofs and jointly post - process the outcomes classically .is also called since the verifier is basically restricted to perform a _ bell test _ on the proofs . ]it is an open question whether eq .( [ qma2exact ] ) remains true if we consider instead of .if this turns out to be the case , then it would imply an optimal conversion of into in what concerns the proof length ( under a plausible complexity - theoretic assumption ) .for it follows from ( based on the protocol for 3- with variables of ) that unless there is a subexponential - time quantum algorithm for 3- , then there is a constant such that for every , recently chen and drucker showed that a variant of the 3- protocol from can be implemented with only local measurements , showing is in . ] that 3- is in it is an intriguing open question if one could also obtain a protocol with the same total proof length ( ) , which would imply that the reduction from to given in theorem [ qma2 ] can not be improved , unless there is a subexponential time quantum algorithm for . we will now give a characterization of in terms of protocols for multiple provers with a restriction on the euclidean norm of the verifiers measurements .let be defined as above , with the class of measurements for which , but with such a restriction imposed only on the _ no _ instances of the language .a language belongs to if there is a uniform family of quantum circuits that , for every , can implement a two - outcome measurement such that * _ completeness : _if , there exist witnesses , each of qubits , such that * _ soundness : _ if , then and for any states then we also have [ lowqma ] for , it is an open question whether theorems [ qma2 ] and [ lowqma ] hold for nonconstant , say for .our methods fail to achieve this because the quadratic blowup in the proof size inherent to our proofs prevents us from applying the reduction recursively more than a constant number of times . *existence of disentanglers * : an interesting approach to the vs. question concerns the existence of disentangler superoperators , defined as follows : a superoperator is an -disentangler in the norm if * is -close to a separable state for every , and * for every separable state , there is a such that is -close to . 
as noted in , the existence of an efficiently implementable -disentangler in trace norm ( for sufficiently small and ) would imply .watrous has conjectured that this is not the case and that for every , any -disentangler ( in trace - norm ) requires .theorem [ monogamy ] readily implies that the locc - norm analog of watrous conjecture fails : there _ is _ an efficient disentangler in locc norm .indeed , let and .define the superoperator , with and for all , as then is a -disentangler in locc norm . *a lower bound on conditional mutual information : * the main technical tool we use for obtaining theorem [ monogamy ] is a new lower bound on the quantum conditional mutual information of tripartite quantum states , which might be of independent interest .the conditional mutual information is defined as where is the von neumann entropy .then we have the following analog of pinsker s inequality ] : [ boundcmi ] for every , theorem [ boundcmi ] leads to a new result concerning the entanglement measure _ squashed entanglement _ , defined as an immediate corollary of theorem [ boundcmi ] is then [ squashed ] for every , in particular , this implies that squashed entanglement is _ faithful _ , meaning it is strictly positive on all entangled states .this had been a long - standing conjecture in entanglement theory .we now give complete proofs of our theorems with the exception of theorem [ boundcmi ] , for which we give a brief outline of the proof strategy . a complete proof can be found in .we begin with a brief proof of theorem [ monogamy ] , which itself is the key for the complexity - theoretic results .this theorem is a simple combination of corollary [ squashed ] and the following monogamy relation for squashed entanglement : for every bipartite state : this and corollary [ squashed ] give the proof .we prove the statement for the locc norm .the euclidean norm case follows by the same argument , replacing each application of theorem [ monogamy ] by eq .( [ boundeuclidean ] ) .the idea of the algorithm , which is also the basic idea of the algorithm from , is to formulate the search for a -extension of as a semidefinite program ( sdp ) .if is separable then such an extension exists because separable states have a -extension for every .otherwise if , no such extension exists by theorem [ monogamy ] .we only have to make sure that the precision of the algorithm solving the sdp is good enough , which we now analyze in detail .consider the following semidefinite program , with the maximally mixed state , and , we introduced as we require a non - negligible bound on the minimum eigenvalue of the state .observe that has a -extension precisely when the solution of ( [ sdpproblem ] ) is 1 , in which case the extension is obtained by symmetrizing the parts of , i.e. by replacing with the operator where the sum is over permutations .we now consider the approximate case .define \}\ ] ] as the set of feasible points and its -interior , i.e. the use of frobenius norm in the definition of is completely independent of the norm in the theorem statement .rather , it ensures the ellipsoid algorithm solves problem ( [ sdpproblem ] ) up to additive error in time as long as is nonempty ( see e.g. and references therein ) .we claim that is nonempty when and . 
before proving this ,let us show how it implies that we can solve the weak - membership problem for separability by solving ( [ sdpproblem ] ) .suppose first that is separable .convexity of implies that is also separable , so we know there is a symmetric extension of .the ellipsoid algorithm applied to problem ( [ sdpproblem ] ) will therefore return a number bigger than .suppose now that is -away from .then is -away from . by theorem [ monogamy ] , any state that is -close to in locc norm does not have a -extension .from this we can get that the solution of the sdp ( [ sdpproblem ] ) will be smaller than .indeed suppose it were not the case and that the solution was larger than ( for sufficiently small ) .then because we are guaranteed to be at most away from the exact solution of ( [ sdpproblem ] ) , this would imply there is a positive semidefinite matrix such that for every $ ] and .we can symmetrize the systems in to obtain a semidefinite positive matrix , symmetric under the exchange of the systems and such that and . defining , we find to be -extendible with , so .but this is a contradiction , since we found before that the -ball around does not contain any -extendible state . because , the computational cost of solving the ellipsoid algorithm with accuracy is we now prove that is nonempty .this follows from the fact that , where the maximally mixed state .indeed , it is clear that for every moreover , which immediately implies that .let be the set of -extendible states .let us first analyze the case in which is such that is locc .then the inclusion and theorem [ monogamy ] give hence choosing we can compute an -error additive approximation to bss( ) by solving the semidefinite program given by maximizing over -extendible states , whose time - complexity is .this proves the first part of the theorem . to obtain the bound for general , note that by the cauchy - schwarz inequalitytherefore then choosing we can obtain an -error additive approximation to bss( ) by solving a sdp of time - complexity we start by proving eq .( [ qma2exact ] ) .consider a protocol in given by the locc measurement .we construct a protocol that can simulate it : the verifier asks for a proof of the form where ( each register consists of qubits ) and .he then symmetrizes the systems obtaining the state and measures in the subsystems .let us analyze the completeness and soundness of the protocol . for completeness, the prover can send , for states such that .thus the completeness parameter of the protocol is at least . for soundness, we note that by theorem [ monogamy ] .thus , as the soundness parameter for the protocol can only be away from .indeed , for every symmetric in the systems , eq . ( [ qma2loccequalqma ] ) follows from the protocol above . 
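both directions of the argument lean on the elementary symmetrization step used above ( and again in the simulation below ) : a state on systems a , b_1 , ... , b_k is replaced by its average over all permutations of the b factors . a small numpy sketch of that step follows ; the function name and the dense - matrix representation are our own illustrative assumptions , not part of the original protocols .

```python
import numpy as np
from itertools import permutations
from math import factorial

def symmetrize_B(rho, dA, dB, k):
    """replace rho on A (x) B_1 (x) ... (x) B_k by (1/k!) * sum_pi  P_pi rho P_pi^dagger,
    the average over all permutations pi of the B factors."""
    dims = [dA] + [dB] * k
    T = rho.reshape(dims + dims)                   # one tensor index per ket and per bra factor
    out = np.zeros_like(T)
    for perm in permutations(range(k)):
        ket = [0] + [1 + p for p in perm]          # permute the B kets ...
        bra = [k + 1] + [k + 2 + p for p in perm]  # ... and the B bras in the same way
        out = out + T.transpose(ket + bra)
    return (out / factorial(k)).reshape(rho.shape)
```

the output is exactly invariant under exchange of the b systems , and all of its a : b_i marginals coincide , which is the property exploited in the proofs .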
given a protocol in with each proof of size we can simulate it in as follows : the verifier asks for proofs , the first proof consisting of registers , each of size qubits and , and all the other proofs of size qubits .then he symmetrizes the systems and traces out all of them except the first .finally he applies the original measurement from the to the resulting state .the completeness of the protocol is unaffected by the simulation .for the soundness let be an arbitrary state sent by the prover ( after symmetrizing ) .let be the verification measurement from the protocol .then the equality in the second line follows since we can assume that the states belong to the verifier and adding local states does not change the minimum locc - distance to separable states .since for going from to we had to blow up one of the proof s size only by a quadratic factor , we can repeat the same protocol a constant number of times and still get each proof of polynomial size . in the end ,the completeness parameter of the procedure is the same as the original one for , while the soundness is smaller than , which can be taken to be a constant away from by choosing sufficiently small . to reduce the soundness back to the original value we then use the standard amplification procedure for ( see e.g. ) , which works in this case since the verification measurement is locc . the proof is very similar to the proof of theorem [ qma2 ] , so we only comment on the differences .the strategy for simulating a protocol in is the same as before : the verifier asks for a proof of the form where ( each register consists of qubits ) and .he then symmetrizes the systems to obtain the state , and measures in the subsystems .the completeness of the protocol is the same as that of the original , since the prover can send . for analyzing the soundness of the protocol, let be the closest separable state to in euclidean norm .( [ boundeuclidean ] ) gives then , by the cauchy - schwarz inequality , the proof for for is completely analogous to the proof of theorem [ qma2 ] .a last point to argue is the converse relation , namely that is contained in .this follows from the error reduction protocol of marriot and watrous .indeed , they showed how any protocol in can be transformed into a protocol with proof size equal to the original proof size and soundness .this means that for no " instances the associated measurement must be such that , from which follows that the protocol is in .the proof of theorem [ boundcmi ] begins by first chaining together three inequalities ( lemmas [ nonlockability ] , [ almostmonogamy ] and [ lowerboundnorm ] below ) , each of which is a new result in entanglement theory and is of independent interest .a recursive step ( lemma [ recursion ] below ) completes the proof .these same lemmas appear in with complete proofs ; here we only outline these proofs .the first step involves an entanglement measure called the _regularized relative entropy of entanglement _ , defined as where is the _ relative entropy of entanglement _ , and where is the quantum _ relative entropy_. 
a distinctive property of the relative entropy of entanglement among entanglement measures is the fact that it is not `` lockable , '' meaning that after discarding a small part of the state , can only drop by an amount proportional to the number of qubits traced out .indeed , as shown in , while the same is true for , we prove the following stronger version : for every , [ nonlockability ] this lemma follows by combining the inequality ( [ nonlockable ] ) with an optimal protocol for the following multipartite quantum data compression problem .consider many copies of a pure state can be expressed as the partial trace of a pure state on a larger system .] on whose restriction to is .suppose these states are shared between two parties : a sender , who holds , and a receiver , who holds , while is inaccessible to both .the _ state redistribution problem _ asks the sender to use quantum communication to transfer the system to the receiver , while asymptotically preserving the overall global quantum state .a protocol for state redistribution was given in achieving the optimal communication rate of , providing an operational interpretation for quantum conditional mutual information .the proof of lemma [ nonlockability ] is obtained by carefully using the state redistribution protocol to apply the inequality ( [ nonlockable ] ) to a tensor - power state in the most efficient way .next , we recall a recent operational interpretation of in the context of quantum hypothesis testing .suppose alice and bob are given either copies of an entangled state , or an arbitrary separable state across .then we define to be the optimal error exponent for distinguishing between these two situations , using only measurements from the class .specifically , let where the minimization is over all measurements identifying with asymptotically unit probability : the main result of gives the following equality i.e. the regularized relative entropy of entanglement is the optimal distinguishability rate when trying to distinguish many copies of an entangled state from ( arbitrary ) separable states , in the case where there is no restrictions on the measurements available .define in analogy to , using only measurements that can be implemented by _one - way _ locc , i.e. by any protocol formed by local operations and classical communication _ only _ from bob to alice .then we have : for every , [ almostmonogamy ] the lemma follows by using eq .( [ dallequalser ] ) and further developing the connection with hypothesis testing in the form of a new monogamy - like inequality for : this inequality is proved by using measurements that achieve and to construct a global measurement distinguishing from separable states at a sufficiently good rate .we define in analogy to the locc norm , the one - way locc norm , in which only measurements implementable by are allowed .then next step is to convert the entropic bound on obtained from lemmas [ nonlockability ] and [ almostmonogamy ] into a lower bound in terms of the minimum distance to the set of separable states : for every , [ lowerboundnorm ] this follows from a combination of von neumann s minimax theorem and azuma s inequality , since separable states satisfy a martingale property when they are subject to local measurements .so far , lemmas 1,2 and 3 combine to give we now consider the family of norms , which quantify distinguishability with respect to measurements that can be implemented by rounds of locc . 
in particular , they satisfy and . theorem 1 follows by recursive application of the following technical lemma , which is proved in : assume that then [ recursion ]

we thank s. aaronson , m. berta , a. harrow , l. ioannou and a. winter for helpful discussions . fb and jy thank the institut mittag - leffler , where part of this work was done , for its hospitality .

h. kobayashi , k. matsumoto , and t. yamakami . quantum merlin - arthur proof systems : are multiple merlins more helpful to arthur ? in _ lecture notes in computer science _ , volume 2906 , page 189 . springer , 2003 .
|
we present a quasipolynomial - time algorithm for solving the weak membership problem for the convex set of separable , i.e. non - entangled , bipartite density matrices . the algorithm decides whether a density matrix is separable or whether it is -away from the set of separable states in time where and are the local dimensions , and the distance is measured with either the euclidean norm , or with the so - called locc norm . the latter is an operationally motivated norm giving the optimal probability of distinguishing two bipartite quantum states , each shared by two parties , using any protocol formed by quantum local operations and classical communication ( locc ) between the parties . we also obtain improved algorithms for optimizing over the set of separable states and for computing the ground - state energy of mean - field hamiltonians . the techniques we develop are also applied to quantum merlin - arthur games , where we show that multiple provers are not more powerful than a single prover when the verifier is restricted to locc protocols , or when the verification procedure is formed by a measurement of small euclidean norm . this answers a question posed by aaronson _ et al . _ ( theory of computing * 5 * , 1 , 2009 ) and provides two new characterizations of the complexity class , a quantum analog of . our algorithm uses semidefinite programming to search for a symmetric extension , as first proposed by doherty , parrilo and spedalieri ( phys . rev . a , 69 , 022308 , 2004 ) . the bound on the runtime follows from an improved de finetti - type bound quantifying the monogamy of quantum entanglement . this result , in turn , follows from a new lower bound on the quantum conditional mutual information and the entanglement measure squashed entanglement .
|
* theorem 1 . * let be continuous functionals on a compact set of a metric space , let be a number , and consider the following extremal problems and let be a solution of the problem ( [ jmax ] ) and a solution of ( [ jbmax ] ) . then
\[
\left\{
\begin{array}{cc}
j\left( \beta\right) < 0 , & \beta>\beta_{\max},\\
j\left( \beta\right) > 0 , & \beta<\beta_{\max},\\
j\left( \beta\right) = 0 , & \beta=\beta_{\max},
\end{array}
\right.
\label{betamax}
\]
so is the only solution of the equation , the functionals and take their maxima at the same point , and the function is continuous in every compact segment ] .

* theorem 2 . * let and be functionals on a set of a metric space . suppose that for every the extremal problems have their solutions . let . then
\[
\left\{
\begin{array}{cc}
j\left( \beta\right) < 0 , & \beta>\beta_{\max},\\
j\left( \beta\right) > 0 , & \beta<\beta_{\max},\\
j\left( \beta\right) = 0 , & \beta=\beta_{\max},
\end{array}
\right.
\]
so is the only solution of the equation , and the functionals and take their maxima at the same point .

* proof . * to prove the theorem we multiply the numerator and denominator of the fraction by and repeat the proof of the previous theorem .

the corollary of the theorems is the following procedure for solving the problem ( [ jmax ] ) . on the first step we solve the problem ( [ jbmax ] ) for an arbitrary ; then we calculate the function and solve the scalar equation ( [ jbeta ] ) . by calculating for any given we can see the direction in which the root of the equation is situated . then any iteration scheme can be applied to find the root with the necessary accuracy . let us consider some problems where the above algorithm can be applied .

problem 1 . : : let ] , , , , , . + we have now + the solution of the problem is one of the following three values and ( in the case when the last belongs to ] , , , the problem is equivalent to for this problem we construct the auxiliary problem let be a solution of the last problem , a solution of the equation for a fixed . next let be a solution of the equation then is a solution of the initial problem .

problem 4 . : : let , where is a solid sphere of a hilbert space , , , , + it is clear that the solution of the problem ( [ jbmax ] ) now has the form therefore , so the problem of maximization of has been transformed into the solution of the nonlinear equation ( in the unknown value + let us show that the curves and intersect . we have
\[
\begin{gathered}
= \\
= \lim\limits_{\beta\rightarrow\infty}r\frac{\left\vert w_{0}-\beta w\right\vert ^{2}-\left\vert \beta w\right\vert ^{2}}{\left\vert w_{0}-\beta w\right\vert + \left\vert \beta w\right\vert } = \\
= r\lim\limits_{\beta\rightarrow\infty}\frac{\left\langle w_{0}-\beta w , w_{0}-\beta w\right\rangle -\left\langle \beta w,\beta w\right\rangle }{ \left\vert w_{0}-\beta w\right\vert + \left\vert \beta w\right\vert } = \\
= r\lim\limits_{\beta\rightarrow\infty}\frac{\left\vert w_{0}\right\vert ^{2}-\beta\left\langle w_{0},w\right\rangle -\beta\left\langle w , w_{0}\right\rangle }{ \left\vert w_{0}-\beta w\right\vert + \left\vert \beta w\right\vert } = \\
= -r\operatorname{sgn}\beta\operatorname{re}\left\langle w_{0},\tilde{w}\right\rangle , \ \tilde{w}\equiv w/\left\vert w\right\vert .
\end{gathered}
\]
so , under the condition the curves intersect for positive .
under the condition they intersect for . so , in the first case and in the second . the cause of this is that the functional takes some positive values in the first case and only non - positive values in the second . + substituting the asymptotes ( [ asimp ] ) into the equation ( [ urav ] ) , we get a priori estimates of the maximal value of
\[
\left\{
\begin{array}{cc}
\frac{h_{0}-r\operatorname{re}\left\langle w_{0},\tilde{w}\right\rangle }{ h - r\operatorname{re}\left\langle w,\tilde{w}\right\rangle } , & h_{0}+r\left\vert w_{0}\right\vert > 0,\\
\frac{h_{0}+r\operatorname{re}\left\langle w_{0},\tilde{w}\right\rangle }{ h+r\operatorname{re}\left\langle w,\tilde{w}\right\rangle } , & h_{0}+r\left\vert w_{0}\right\vert \leq0 .
\end{array}
\right.
\label{abeta}
\]
note that these estimates are correct only for large values of . + example 1 . : : let , , + , , , . + in this case . the solution of the problem is presented in fig . [ fig : optvect1 ] . the value . graphs of the functions and in ] are presented in fig . [ fig : jb2 ] . the process of asymptotic estimation of the is illustrated by fig . [ fig : y2 ] ; the formula ( [ abeta ] ) for gives
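as a concrete illustration of the two - step procedure justified by the theorems above ( solve the auxiliary problem ( [ jbmax ] ) for a trial value of the parameter , then drive the scalar function to zero using the sign information of ( [ betamax ] ) ) , here is a minimal numerical sketch . the instance ( numerator 1 + 2x and denominator 1 + x^2 on the segment [ 0 , 2 ] ) , the grid that stands in for the compact set , and the bisection tolerance are illustrative assumptions , not taken from the problems considered above .

```python
import numpy as np

# hypothetical instance: numerator n(x) = 1 + 2x, denominator d(x) = 1 + x**2 > 0 on [0, 2]
xs = np.linspace(0.0, 2.0, 20001)           # dense grid standing in for the compact set
n, d = 1.0 + 2.0 * xs, 1.0 + xs ** 2

def J(beta):
    """value of the auxiliary problem max_x [ n(x) - beta * d(x) ]."""
    return np.max(n - beta * d)

# theorem 1: J(beta) > 0 below beta_max and < 0 above it, so bisection finds the root
lo, hi = 0.0, 10.0                          # J(lo) > 0 and J(hi) < 0 for this instance
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if J(mid) > 0 else (lo, mid)

beta_max = 0.5 * (lo + hi)
x_star = xs[np.argmax(n - beta_max * d)]    # maximizer of the auxiliary problem at beta_max
print(beta_max, x_star)                     # about 1.618 and 0.618 for this toy instance
```

for this toy instance the maximal ratio is the golden ratio , attained near x = 0.618 , so the printed values give a quick check that the reduction behaves as the theorems predict .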
|
we propose an algorithm that reduces the problem of maximizing a fraction of two functionals to an equivalent procedure consisting of the maximization of the difference between the functionals and the solution of an equation in a single scalar unknown . to illustrate the algorithm we solve some problems of the described type . * key words : * extremal problem , iteration scheme
|
it is a pleasure to be part of the slac summer institute again , not simply because it is one of the great traditions in our field , but because this is a moment of great promise for particle physics . i look forward to exploring many opportunities with you over the course of our two weeks together . my first task in talking about nature s greatest puzzles , the title of this year s summer institute , is to deconstruct the premise a little bit . `` about 500 years ago man s curiosity took a special turn toward detailed experimentation with matter , '' wrote viki weisskopf . it was the beginning of science as we know it today . instead of reaching directly at the whole truth , at an explanation for the entire universe , its creation and present form , science tried to acquire partial truths in small measure , about some definable and reasonably separable groups of phenomena . `` science developed only when men began to restrain themselves not to ask general questions , such as : what is matter made of ? how was the universe created ? what is the essence of life ? they asked limited questions , such as : how does an object fall ? how does water flow in a tube ? etc . instead of asking general questions and receiving limited answers , they asked limited questions and found general answers . '' an important part of what we might do in these two weeks together is to think about how we actually construct science , how we construct understanding , and how we present the acts of doing science to other people . galileo , the icon of the moment when we humans found the courage to reject authority and learned to interrogate nature by doing experiments , expressed his approach in this way : _ io stimo più il trovar un vero , benché di cosa leggiera , che l disputar lungamente delle massime questioni senza conseguir verità nissuna . _ ( i esteem more the finding of one truth , though of a slight matter , than disputing at length about the greatest questions without attaining any truth . ) we have built up science over these past five hundred years not so much by focusing on the majestic questions as by thinking about small questions that we have a chance to answer , and then trying to weave the answers to those questions together into an understanding that will give us insight into the largest questions .
by focusing on `` small things , '' with an eye to their larger implications , galileo achieved far more than the philosophers and theologians who surrounded him in florence and venice , and who , by their authority , asserted answers to the `` greatest questions . '' a great shame of the race of physics professors is that going through galileo s motions , _ without _ an eye to their larger implications , too often constitutes freshman physics lab . we owe it to our students to explain _ why _ we require them to reenact galileo s investigations , how we seek to weave the answers to small questions into broader understanding , and what science really is . there is a glorious story here , and we need to convey that glorious story to our students and to the public at large . we owe no less to the future of our science ! i do nt underestimate the value of grand themes as organizing principles and motivational devices , but i want to emphasize the need to balance the grandeur and sweep of the great questions with our prospects for answering them . at every moment , we must decide which questions to address . unimagined progress may flow from small questions . measuring how the conductivity of the atmosphere varies with altitude , victor hess discovered the cosmic radiation , one of the wellsprings of particle physics and the subject of great puzzle no . 9 at this xxxii slac summer institute . hess did not set out to found particle physics , nor even to explore the great beyond , but merely to pursue a puzzling observation . so it s entirely possible that by paying close attention to a _ well - chosen small thing , _ we may be able to change the world . i am insisting with weisskopf and galileo and many others on the importance of small questions because their role in the making of science is so poorly understood . introducing _ time _ magazine s top eighteen ( not just ten ! ) list of america s best in science and medicine , michael lemonick wrote in 2001 , `` the questions scientists are tackling now are a lot narrower than those that were being asked 100 years ago . as john horgan pointed out in his controversial 1997 best seller , _ the end of science , _ we ve already made most of the fundamental discoveries : that the blueprint for most living things is carried in a molecule called dna ; that the universe began with a big bang ; that atoms are made of protons , electrons and neutrons ; that evolution proceeds by natural selection .
'' horgan s assertion that most of the great questions have already been answered is a relatively puerile form of millennial madness . perhaps this misperception lingers because when we scientists talk about our work we do nt always situate our immediate goals within a larger picture that would give an image of what we re trying to learn , what we re trying to understand . but the notion that science s best days are behind us will pass , if it has nt already . i m more troubled by the breezy claim ( `` more and more about less and less '' ) that we scientists today address narrower questions than our ancestors did a century ago . this is preposterously false ; it has nothing to do with the way science is actually done . ever since galileo , what we call science has advanced precisely by asking , and answering , limited questions , seeking small facts , and synthesizing an ever - more - comprehensive understanding of nature . it is vexing to hear this misconception from a distinguished science writer . it is even more vexing because the writer s father was a legendary princeton physics professor and a particle physicist . we are failing to communicate that science is , in its essence , weaving together the answers to small questions , and we must do better ! now let us turn for a moment to the list of `` greatest puzzles '' that will command our attention for these two weeks : 1 . where and what is dark matter ? 2 . how massive are neutrinos ? 3 . what are the implications of neutrino mass ? 4 . what are the origins of mass ? 5 . why is there a spectrum of fermion masses ? 6 . why is gravity weak ? 7 . is nature supersymmetric ? 8 . why is the universe made of matter and not antimatter ? 9 . where do ultrahigh - energy cosmic rays come from ? 10 . did the universe inflate at birth ? to their credit , the organizers have given you ten `` greatest puzzles '' that are not all great questions . some of them are small questions that might grow , in the spirit of hess s studies of the atmosphere , into great answers . i think it s important to recognize that `` top - ten '' lists are always subjective in some way : they suit a certain moment , a certain purpose , a certain institution , a certain prejudice . it s also true that the list of `` greatest puzzles '' changes with time .
to me , one of the most inspiring things about the progress of science is the way in which questions that were , not so long ago , `` metaphysical '' , that could nt be addressed as scientific questions , have become scientific questions . i give you two that in former times were used exclusively to torture graduate students on their qualifying exams : what would happen if the mass of the proton or the mass of the electron changed a little bit ? + what would happen if the fine structure constant changed a little bit ? when i was on the receiving end of those questions , i had little patience for them . to tell the truth , i really hated them , because the world was nt that way , so why think about it ? now that i ve lost some of the certainty of youth , i ve come to understand that these were much better questions than my teachers realized . let s recast them slightly , as : why is the proton mass 1836 times the electron mass ? + what accounts for the different strengths of the strong , weak , and electromagnetic interactions ? not so long ago , these were metaphysical questions beyond the reach of science : masses and coupling strengths were givens . but now we can see how the values of masses and coupling strengths might arise ; we recognize these questions as scientific questions . as we ll recall in a few paragraphs , we understand where the proton mass comes from . we have a framework for inquiring into the origin of the electron mass . we know , through renormalization group analysis , that coupling constants evolve with energy ; we can make a picture in which the coupling constants have the low - energy values we measure because they evolve from a common value at a high energy , the unification scale . we can imagine how , if the world were a little different , the couplings would have changed . so these turn out to be not such annoying questions , not mere instruments of torture , but questions that we can answer scientifically . soon , we will be able at least to sketch plausible storylines , if not to tell the full stories . similar progressions from apparently arbitrary givens to answerable scientific questions appear all over the map of science . some questions remain unanswered for so long that we might be tempted to forget that they are questions . one that has been much on my mind of late is , `` why are charged - current weak interactions left - handed ? '' nearly everyone in this room was born , or at least born as a physicist , after the 1957 discovery of parity violation in the weak interactions . it s fair to say that , whereas our ancestors were shaken by the asymmetry between left - handed and right - handed particles , we have grown up with it . [ i estimate that i have written down more than ten thousand left - handed doublets to this point in my career . ]
so it would not be astonishing if the question had lost its edge for us .but i hope you will agree that the distinction between left - handed and right - handed particles is one of the most puzzling aspects of the natural world .it suggests the following _ exercise . _what other profound questions have been with us for so long that they are less prominent in `` top - ten '' lists than they deserve to be ? if new questions come within our reach and long - standing questions slip from our consciousness , some formerly great questions now seem to us the wrong questions .a famous example , developed in detail by lincoln wolfenstein last year , is kepler s quest to understand why the sun should have exactly six planetary companions in the observed orbits .kepler sought a symmetry principle that would give order to the universe following the platonic - pythagorean tradition .perhaps , he thought , the six orbits were determined by the five regular solids of geometry , or perhaps by musical harmonies .we now know that the sun holds in its thrall more than six planets , not to mention the asteroids , periodic comets , and planetini , nor all the moons around kepler s planets .but that is not why kepler s problem seems ill - conceived to us ; we just do not believe that it should have a simple answer .neither symmetry principles nor stability criteria make it inevitable that those six planets should orbit our sun precisely as they do .i think this example holds two lessons for us : first , it is very hard to know in advance which aspects of the physical world will have simple , beautiful , informative explanations , and which we shall have to accept as `` complicated . ''second , and here kepler is a particularly inspiring example , we may learn very great lessons indeed while pursuing challenging questions that in the end do not have illuminating answers .sometimes we answer a great question before we recognize it as a scientific question . a recent exampleis , `` what sets the mass of the proton ? '' and its corollary , `` what accounts for the visible mass of the universe ? 
'' hard on the heels of the discovery of asymptotic freedom , quantum chromodynamics provided the insight : the mass of the proton is given mostly by the kinetic energy of three extremely light quarks and the energy stored up in the gluon field that confines them in a small space .almost before most people realized that qcd had made the question answerable , we had in our hands the conceptual answer and an essentially complete _ a priori _ calculation .i do not have a lot of patience for debates about the problem of knowledge ; for the most part , i would rather do science than talk about how to do it .nevertheless , at this time when we anticipate a great flowering of our subject , we should examine our habits and think a little bit about how other people do science and how they see us .two interesting characters , bob laughlin and david pines , have published a broadside proclaiming the end of reductionism ( `` the science of the past '' ) , which they identify with particle physics , and the triumph of emergent behavior , the study of complex adaptive systems ( `` the physics of the next century '' ) .the idea of emergent behavior , which they advertise as being rich in its applications to condensed matter physics in particular , is that there are phenomena in nature , or regularities , or even very precise laws , that you can not recognize by starting with the lagrangian of the universe .these include situations that arise in many - body problems , but also situations in which a simple perturbation - theory analysis is not sufficient to see what will happen .my first response to laughlin & pines is that they have profoundly misconstrued the way we work .what is quark confinement in qcd , the theory of the strong interactions , if not emergent behavior ?you could do perturbation theory for a very long time and not discover the phenomenon of confinement .this notion of emergence is ubiquitous in particle physics . as qcd becomesstrongly coupled , new phenomena emerge not only confinement , but also chiral symmetry breaking and the appearance of goldstone bosons that we would nt have anticipated by staring at the lagrangian .[ this is , by the way , one of the reasons that we should force ourselves to pay attention to heavy - ion collisions at high energies ; the very lack of simplicity may push us into realms of qcd where we ca nt guess the answers by simple analysis . 
]the `` little higgs '' approach to electroweak symmetry breaking is another example of important features that are not apparent in the lagrangian in any simple sense .a graceful description of the consequences of these phenomena entails new degrees of freedom and a new effective theory .laughlin and pines advocate the search for `` higher organizing principles '' ( perhaps universal ) , relatively independent of the fundamental theory .i give them credit for emphasizing that many different underlying theories may lead to identical observational consequences .but they turn a blind eye to the idea that in many important physical settings , the detailed structure and parameters of the lagrangian are decisive .they campaign as well for the synthesis of principles through experiment , which i also recognize as part of the way we do particle physics .i believe that the best practice of particle physics of physics in general embraces both reductionist and emergentist approaches , in the appropriate settings .overall , i am left with the impression that laughlin & pines are giving a war to which no one should come , because the case for their revolutionary intellectual movement is founded on misperception and false choices .perhaps the best way for us to be heard is to listen more closely , try to understand the approaches we have in common , and occasionally to use their language to describe what we do .it is important for us to seek the respect and understanding of our colleagues who do other physics , in other ways .one question of scientific style remains : when we understand a phenomenon as emergent , will that stand as a final verdict , or does emergence represent a stage in our understanding that will be supplanted as we gain control over our theories and the methods by which we elaborate their consequences ? and does one perspective or another limit our ability to advance our understanding ?i would like to bring these introductory remarks to a close by pointing you toward some meta - questions that i hope you will think about during the course of the summer institute .i call them to your attention because some wise people ( including wise people from our own community , and even wise people from stanford , california ) have been pondering them as questions that might be moving toward scientific questions , to which we may hope to find scientific answers. : : is this the best of all possible worlds ?pangloss s assertion , though burdened with ironical baggage , carries with it the daring suggestion that other worlds are thinkable . according to an enduring dream that has probably infected all of us from time to time, the theory of the world might prove to be so restrictive that things have to turn out the way we observe them .is this really the way the world works , or not ?are the elements of our standard model the quarks and leptons and gauge groups and coupling constants inevitable , at least in a probabilistic sense , or did it just happen this way ? : : is nature simple or complex ? and if we take the sophisticate s view that it is both , which aspects will have beautiful `` simple '' explanations and which explanations will remain complicated ? 
: : are nature s laws the same at all times and places ?yes , of course they are , to good approximation _ in our experience ._ otherwise science would have had to confront a universe that is in some manner capricious .but _ all _ times and _ all _ places is a very strong conclusion , for which we can not have decisive evidence .many people have been thinking about multiple universes in which there may be different incarnations of the basic structures . : : can one theoretical structure account for `` everything , '' or should we be content with partial theories useful in different domains ? can we really expect to have a theory that applies from the lowest energies to the highest , from the smallest distances to the greatest ?all these questions are a bit wooly and may even be undecidable ; they could generate a lot of blather and not lead to any telling insights .but we would be mistaken to pretend they are not there .so i urge you to spend a little of your time at the summer institute thinking about what constitutes a scientific explanation . to work toward your own understanding of the galilean relationship between small questions and sweeping insights , and to practice presenting the significance of your work to the wider world, please complete the following _ exercise ._ explain in a paragraph or two how your current research project relates to great questions about nature or is otherwise irresistibly fascinating .be prepared to present your answer to a science writer at a ssi social event .before i move on to explore some themes that bind together the questions that our organizers have given us ( and some other topics ) , i want to emphasize again that we stand on the threshold of a great flowering of experimental particle physics and of dramatic progress in theory especially that part of theory that engages with experiment .we particle physicists are impatient and ambitious people , and so we tend to regard the decade just past as one of consolidation , as opposed to stunning breakthroughs .but an objective look at the headlines of the past ten years gives us a very impressive list of discoveries .it is important that we know this for ourselves , and that we convey our sense of achievement and promise to others .* the electroweak theory has been elevated from a very promising description to a _ law of nature ._ it is quite remarkable that in a short time we have gone from a conjectured electroweak theory to one that is established as a real quantum field theory , tested as a quantum field theory at the level of one per mille in many many observables .this achievement is truly the work of many hands ; it has involved experiments at the pole , the study of , , and interactions , and supremely precise measurements such as the determination of .* electroweak experiments have observed what we may reasonably interpret as the influence of the higgs boson in the vacuum .* experiments using neutrinos generated by cosmic - ray interactions in the atmosphere , by nuclear fusion in the sun , and by nuclear fission in reactors , have established neutrino flavor oscillations : and . * aided by experiments on heavy quarks , studies of , investigations of high - energy , , and collisions , and by developments in lattice field theory , we have made remarkable strides in understanding quantum chromodynamics as the theory of the strong interactions . 
* the top quark , a remarkable apparently elementary fermion with the mass of an osmium atom ,was discovered in collisions .* direct violation has been observed in decay .* experiments at asymmetric - energy factories have established that -meson decays do not respect invariance . *the study of type - ia supernovae and detailed thermal maps of the cosmic microwave background reveal that we live in an approximately flat universe dominated by dark matter and energy . *a `` three - neutrino '' experiment has detected the interactions of tau neutrinos .* many experiments , mainly those at the highest - energy colliders , indicate that quarks and leptons are structureless on the 1-tev scale . we have learned an impressive amount in ten years , and i find quite striking the diversity of experimental and observational approaches that have brought us new knowledge , as well as the richness of the interplay between theory and experiment .now i want to talk about five themes that weave together the great questions and small that we will be talking about during these two weeks .i spoke at the beginning of the hour about the decade of discovery just achieved .i believe that the decade ahead will be a real golden age of exploration and discovery .* we will make a thorough exploration of the 1-tev energy scale ; search for , find , and study the higgs boson or its equivalent ; and probe the mechanism that hides electroweak symmetry .decisive progress will come from our ( anti)proton - proton colliders , notably the large hadron collider at cern , but we envisage a tev - scale electron - positron linear collider to give us a second look , through a different lens .* we will continue to challenge the standard model s attribution of violation to a phase in the quark mixing matrix , in experiments that examine decays and rare decays or mixing of strange and charmed particles .fixed - target experiments , as well as and colliders , will contribute .* new accelerator - generated neutrino beams , together with reactor experiments and the continued study of neutrinos from natural sources , will consolidate our understanding of neutrino mixing .double - beta - decay searches may confirm the majorana nature of neutrinos . and do not dismiss the possibility that three neutrinos will not suffice to explain all observations ! * the top quark will become an important window into the nature of electroweak symmetry breaking , rather than a mere object of experimental desire .single - top production and the top quark s coupling to the higgs sector will be informative .hadron colliders will lead the way , with the lc opening up additional detailed studies . *the study of new phases of matter and renewed attention to hadronic physics will deepen our appreciation for the richness of qcd , and might even bring new ideas to the realm of electroweak symmetry breaking .heavy - ion collisions have a special role to play here , but collisions , fixed - target experiments , and and colliders all are contributors . *planned discoveries and programmatic surveys have their ( important ! ) place , but exploration breaks the mold of established ideas and can recast our list of urgent questions overnight .the lhc , not to mention a whole range of experiments down to tabletop scale , will make the coming decade one of the great voyages into the unknown . 
among the objectives we have already prepared in great theoretical detail are extra dimensions , new strong dynamics , supersymmetry , and new forces and constituents .any one of these would give us a new continent to explore .* proton decay remains the most promising path to establish the existence of extended families that contain both quarks and leptons .vast new underground detectors will be required to push the sensitivity frontier .* we will learn much more about the composition of the universe , perhaps establishing the nature of some of the dark matter .observations of type ia supernovae , the cosmic microwave background , and the large - scale structure of the universe will extend our knowledge of the fossil record .underground searches may give evidence of relic dark matter .collider experiments will establish the character of dark - matter candidates and will make possible a more enlightened reading of the fossil record .these few items constitute a staggeringly rich prospectus for search and discovery and for enhanced understanding .exploiting all these opportunities will require many different instruments , as well as the toil and wit of many physicists . fred gilman will offer a roadmap to the future at the end of the school , but it is plain that one of our great challenges is to think clearly about the diversity of our experimental initiatives , and about scale diversity of those initiatives .it is relatively easy to write the major headlines of the program we would like to see . but how do we create the institutions that year after year make important measurements ?how do we create the next set of greatest puzzles ?that , it seems to me , is a very significant issue for people who will be part of our field over the next thirty years .i leave you with a list of advances that i believe can happen over the next decade or so .i put up my list for the same reason , i think , that the organizers of the school gave you their list because then you can object to it , and make your own ! 
we will understand electroweak symmetry breaking , observe the higgs boson , measure neutrino masses and mixings , establish majorana neutrinos through the observation of neutrinoless double - beta decay , thoroughly explore violation in decays , exploit rare decays ( , , ) , observe the neutron s permanent electric dipole moment , and pursue the electron s electric dipole moment , use top as a tool , observe new phases of matter , understand hadron structure
quantitatively , uncover the full implications of qcd , observe proton decay , understand the baryon excess of the universe , catalogue the matter and energy of the universe , measure the equation of state of the dark energy , search for new macroscopic forces , determine the gauge symmetry that unifies the strong , weak , and electromagnetic interactions , detect neutrinos from the universe , learn how to quantize gravity , learn why empty space is nearly weightless , test the inflation hypothesis , understand discrete symmetry violation , resolve the hierarchy problem , discover new gauge forces , directly detect dark - matter particles , explore extra spatial dimensions , understand the origin of the large - scale structure of the universe , observe gravitational radiation , solve the strong problem , learn whether supersymmetry operates on the tev scale , seek tev - scale dynamical symmetry breaking , search for new strong dynamics , explain the highest - energy cosmic rays , formulate the problem of identity ,
and learn the right questions to ask ! the first theme is one on which i am rather confident that we will make enormous progress over the next decade . that is the problem of understanding the everyday , the stuff of the world around us . it pertains to basic questions : why are there atoms ? why is there chemistry ? why are stable structures possible ? and even , does knowing the answers to those questions give us an insight into what makes life possible ? those are the general questions that we are seeking to answer when we look for the origin of electroweak symmetry breaking . i think that the best way to make the connection is to consider what the world would be like if there were no mechanism , like the higgs mechanism , for electroweak symmetry breaking . it s important to look at the problem in this way , because in the public presentations of the aspiration of particle physics we hear too often that the goal of the lhc or a linear collider is to check off the last missing particle of the standard model , this year s holy grail of particle physics , the higgs boson . _ the truth is much less boring than that ! _ what we re trying to accomplish is much more exciting , and asking what the world would have been like without the higgs mechanism is a way of getting at that excitement . first , it s clear that quarks and leptons would remain massless , because mass terms are not permitted in our left - handed world if the electroweak symmetry remains manifest . we ve done nothing to qcd , so that would still confine the ( massless ) color - triplet quarks into color - singlet hadrons , with very little change in the masses of those stable structures . in particular , the nucleon mass would be essentially unchanged , but the proton would outweigh the neutron because the down quark now does not outweigh the up quark , and that change will have its own consequences . an interesting , and slightly subtle point is that , even in the absence of a higgs mechanism , the electroweak symmetry is broken by qcd , precisely by one of the emergent phenomena we have just discussed in [ subsec : toe ] . as we approach low energy in qcd , confinement occurs and the chiral symmetry that treated the massless left - handed and right - handed quarks as separate objects is broken . the resulting communication between the left - handed and right - handed worlds engenders a breaking of the electroweak symmetry . the trouble is that the scale of electroweak symmetry breaking is measured by the pseudoscalar decay constant of the pion , so the amount of mass acquired by the and is set by , not by what we know to be the electroweak scale : it is off by a factor of 2500 . but the fact is that the electroweak symmetry is broken , so the world without a higgs mechanism but with strong - coupling qcd is a world in which the becomes .
because the and have masses , the weak - isospin force , which we might have taken to be a confining force in the absence of symmetry breaking , is not confining . beta decay is very rapid , because the gauge bosons are very light . the lightest nucleus is therefore one neutron ; there is no hydrogen atom . there s been some analysis of what would happen to big - bang nucleosynthesis in this world ; that work suggests that some light elements such as helium would be created . because the electron is massless , the bohr radius of the atom is infinite , so there is nothing we would recognize as an atom , there is no chemistry as we know it , there are no stable composite structures like the solids and liquids we know . i invite you to explore this scenario in even greater detail . [ to do so is at least as challenging as trying to understand the world we do live in . ] the point is to see how very different the world would be , if it were not for the mechanism of electroweak symmetry breaking whose inner workings we intend to explore and understand in the next decade . what we are really trying to get at , when we look for the source of electroweak symmetry breaking , is why we do nt live in a world so different , why we live in the world we do . i think that s a glorious question . it s one of the deepest questions that human beings have ever tried to engage , and _ you _ will answer this question . what could the answer be ? as far as we can tell , because we have an _ effective field theory _ description , the agent of electroweak symmetry breaking represents a novel fundamental interaction at an energy of a few hundred gev . as we parametrize it in the standard electroweak theory , and we contrive the higgs potential , it is not a gauge force but a completely new kind of interaction . we do not know what that force is . what could it be ?
it could be the higgs mechanism of the standard model , which is built in analogy to the ginzburg landau description of superconductivity .maybe it is a new gauge force .one very appealing possibility at least until you get into the details is that the solution to electroweak symmetry breaking will be like the solution to the model for electroweak symmetry breaking , the superconducting phase transition .the superconducting phase transition is first described by the ginzburg landau phenomenology , but then in reality is explained by the bardeen cooper schrieffer theory that comes from the gauge theory of quantum electrodynamics .maybe , then , we will discover a mechanism for electroweak symmetry breaking almost as economical as the qcd mechanism we discussed above .one line that people have investigated again and again is the possibility that there are new constituents still to be discovered that interact by means of forces still to be discovered , and when we learn how to calculate the consequences of that theory we will find our analogue of the bcs theory .it could even be that there is some truly emergent description at this level of the electroweak phase transition , a residual force that arises from the strong dynamics among the weak gauge bosons .we know that if we take the mass of the higgs boson to very large values , beyond a tev in the lagrangian of the electroweak theory , the scattering among gauge bosons becomes strong , in the sense that scattering becomes strong on the gev scale .resonances form among pairs of gauge bosons , multiple production of gauge bosons becomes commonplace , and that resonant behavior could be what hides the electroweak symmetry .we ll also hear during these two weeks about the possibility that electroweak symmetry breaking is the echo of extra spacetime dimensions .we do nt know , and we intend to find out during the next decade which path nature has taken .one very important step toward understanding the new force is to find the higgs boson and to learn its properties .i ve said before in public , and i say again here , that the higgs boson will be discovered whether it exists or not . that is a statement with a precise technical meaning .there will be ( almost surely ) a spin - zero object that has effectively more or less the interactions of the standard - model higgs boson , whether it is an elementary particle that we put into to the theory or something that emerges from the theory .such an object is required to get good high - energy behavior of the theory . if something will be found , what is it ?how many are there ?is its spin - parity what we expect ( ) in the electroweak theory ?does it generate mass for the gauge bosons and alone , or does it generate mass for the gauge bosons and the fermions ?how does it interact with itself ? there will be a party on the day the higgs boson is discovered , but it will mark the beginning of a lot of work !the second theme has to do with the cast of characters , the basic constituents of matter , the quarks and leptons .it involves the question , `` what makes a top quark a top quark , an electron an electron , a neutrino a neutrino ? what distinguishes these objects ? 
'' now , maybe this is a kepler - style question that we should nt be asking , but it is a tantalizing question in any event . what do i mean by this more precisely ? i mean , what sets the masses and mixings of the quarks and leptons ? this has to do with the famous ckm matrix of quark mixings , which our colleagues here and elsewhere are measuring so assiduously . these elements arise , in the standard model , in the course of electroweak symmetry breaking with values set by those famous arbitrary yukawa couplings , whose values we do nt know except by experiment . what is violation really trying to tell us ? one of the things i am most confused about is what discrete symmetries mean , when they are exact and when they are broken . are parity violation and violation intrinsic defects or essential features of the laws of nature , or do they represent spontaneously broken symmetries ? neutrino oscillations ( flavor - changing transitions , more generally ) give us a new look at the meaning of identity , because they , too , have to do with fermion masses and identities . neutrino masses can be generated in the old ways , through yukawa couplings , and in new ways as well , so they may give us a new take on the problem , and add richness to it . we often hear that neutrino mass is evidence for physics beyond the standard model . i m here to tell you that _ all fermion masses , starting with the electron mass , are evidence for physics beyond the standard model . _ the reason is this : while in the electroweak theory a little box pops up and says , `` write the electron mass here , '' nothing in the electroweak theory either now or at any time in the future is going to tell us how to calculate that number . it s not that the calculation is technically challenging , it is that the electroweak theory has nothing to say about fermion mass .
all of these masses are profoundly mysterious .neutrino masses could present an additional mystery , because neutrinos can be their own antiparticle , which means there are other ways of generating neutrino masses .there is a real enigma here , one that we need to get our minds around .maybe we havent figured out what the pattern is because there is more to see in the pattern .perhaps it will only become apparent when we take into account the masses of superpartners or other kinds of matter .it s worth remembering that when mendeleev made his periodic table , he constructed it out of the chemical elements that had been discovered by chemists .the chemicals discovered by chemists are the chemicals that have chemistry ; and so mendeleev did nt know about helium , neon , argon , krypton , xenon .if you had tried to see the pattern , you would have made real progress filling in the missing elements , but without the noble gases that we now think of as the last column , you would nt have had the clues necessary to build up , in a systematic way , the properties of the elements , or to guess what lies behind the periodic table .perhaps we need to see something more an analogue of the noble gases before we can understand what lies behind the pattern .i m less confident that in ten years we will get to the bottom of this theme , because i really think that we are at the stage of developing for ourselves what this question is .we know very well what are the measurements we d like to make in physics , charm and strange physics , and neutrino physics which elements of the mixing matrices we would like to fill in and which relationships we would like to test .but i do nt think we ve done a satisfactory job yet of constructing what the big question is , and what the properties of the fermions are trying to tell us .i think it is very important that we try to think of the quarks and leptons together , to see what additional insights a common analysis might bring , and to try to understand what the question really is here . among the extensions to the standard model that might give us clues into the larger pattern there is , of course , supersymmetry .in common with many extensions to the standard model , supersymmetry brings us dark matter candidates .supersymmetry is very highly developed .it has a number of very important consequences if it is true .first , if the top quark is heavy and a few other things happen in the right way , then supersymmetry predicts the condensation that gives rise to the hiding of electroweak symmetry .it can generate , by the running of masses , the shape of the higgs potential .it predicts a light higgs mass , less than some number in the neighborhood of 130 , 140 , 150 gev . that s consistent with the current indications from precision electroweak measurements .it predicts cosmological cold dark matter , which seems to be a good thing to have . it might lead to an understanding of the excess of matter over antimater in the universe . and , in a unified theory , it explains the ( relative ) values of the standard - model coupling constants . 
to see that , we have to move on to the next theme . the quarks have strong interactions , as you all know , and the leptons do nt . could we have a world made only of quarks , or only of leptons ? there are many strong reasons for believing that quarks and leptons must have something to do with each other , despite their different behavior under the strong interactions . what do they have in common ? they are all spin- particles , structureless at the current limits of resolution . the six quarks match the six leptons . what motivates us to think of a world in which the quarks and leptons are not just unrelated sets that match by chance , but have a deep connection ? the simplest way to express it , i think , is to go back to a puzzle of very long standing , why atoms are so very nearly neutral . this is one of the best measured numbers close to zero in all of experimental science : atoms are neutral to one part in . if there is no connection between quarks and leptons , since quarks make up the proton , then the balance of the proton and electron charge is just a remarkable coincidence . it seems impossible for any thinking person to be satisfied with coincidence as an explanation . some principle must relate the charges of the quarks and the leptons . what is it ? a fancier way of saying it , and more or less equivalent , is that for the electroweak theory to make sense up to arbitrarily high energies , the symmetries on which it is based must survive quantum corrections . the way we say that is that the theory must be free of anomalies , quantum corrections that break the gauge symmetry on which the theory is based . in our left - handed world , that is only possible if weak - isospin pairs of color - triplet quarks accompany weak - isospin pairs of color - singlet leptons . for these reasons , it is nearly irresistible to consider a unified theory that puts quarks and leptons into a single extended family . once you ve done that , it s a natural implication that protons should decay . although it s a natural implication , it may not be unavoidable , because we do nt know which quarks go with which leptons . if you look at the tables chiseled in marble out in the hallway to celebrate the nobel prize of 1976 , you will see that the up and down quarks go with the electron and its neutrino . we have no experimental basis for that arrangement , it just reflects the order in which we met the particles . for all we know , the first generation of quarks goes with the third generation of neutrinos . supersymmetry is interesting in this context because it sets an experimental target that s not so far away , an order of magnitude or two away : perhaps that target provides enough stimulus if we can think of how to build a massive , low - background apparatus at finite cost to go the next order of magnitude or two in sensitivity , perhaps to find evidence for proton decay , which would be the definitive proof of the connection between quarks and leptons . coupling constants unify in the unified theory . at some high scale , whose value we might discover in some future theory , all the couplings have a certain value . the differing values we see at low energy for the associated with weak hypercharge , the associated with weak isospin , and the associated with color come about because of the different evolution given by the different gauge groups and the spectrum of particles between up there and down here .
in this sense we can explain why the strong interactions are strong on a certain scale . one way of thinking about the masses of the quarks and leptons is to imagine that the pattern just looks weird to us because we are examining the fermion masses at low energies . masses run with momentum scale in a way analogous to the running of coupling constants . so possibly , if we look at very high energies , we will see a rational pattern that relates one mass to another through clebsch gordan coefficients or some other symmetry factors . there are examples of this . one of the nice fantasy studies for the linear collider is measuring masses of superpartners well enough at low energies to have the courage to extrapolate them over fourteen or fifteen orders of magnitude in energy , to see how they come together . we particle physicists have neglected gravity all these years , and for good reason . if we calculate a representative process , kaon decay into a pion plus a graviton for example , it s easy to estimate that the emission of a graviton is suppressed by . the planck mass ( ) is a big number because newton s constant is small in the appropriate units . a dimensional estimate for the branching fraction is . it will be a long time before the single - event sensitivity of any kaon experiment reaches this level ! and that s why we have been able to safely neglect gravity most of the time . all of us have great respect for the theory of gravity , because it was given to us by einstein and newton and the gods , whereas we know the people who made the electroweak theory , and so it s natural to think that gravity must be true . but from the experimental point of view , we know very little about gravity at short distances . down to a few tenths of a millimeter , elegant experiments using torsion oscillators and microcantilevers exclude a deviation from newton s inverse - square law with strength comparable to gravity s . the techniques and the bounds are very impressive ! but at shorter distances , the constraints deteriorate rapidly , so nothing prevents us from considering changes to gravity even on a small but macroscopic scale . even after this new generation of experiments , we have only tested our understanding of gravity through the inverse - square law up to energies of 10 mev ( yes , _ milli_-electron volts ) , some fourteen orders of magnitude below the energies at which we have studied qcd and the electroweak theory . that does nt mean that a deviation from the inverse - square law is just around the corner , but experiment plainly leaves an opening for gravitational surprises . indeed , it is an open possibility that at _ larger _ distances than we have observed astronomically gravity might deviate from the inverse - square law . there is a huge field over which gravity might be different from newton s law , and we would nt have discovered it yet . now , in spite of the fact that we have had good reason to neglect gravity in our daily calculations of feynman diagrams , we have also been keenly aware that gravity is not always negligible . in more or less any interacting field theory , and certainly in one like the electroweak theory , where the higgs field has a nonzero value that fills all of space , all of space has some energy density . in the electroweak theory , that energy density turns out to be really large .
if you calculate it , you find that the contribution of the higgs field s vacuum expectation value to the energy density of the universe is , where is the higgs - boson mass and is the scale of electroweak symmetry breaking .a vacuum energy density corresponds to a cosmological constant in einstein s equations .we ve known for a very long time that there is not much of a cosmological constant , that the vacuum energy has to less than about , a very little number .it corresponds to or . even in the blackest heart, there is not much dark energy !but if we use the current lower limit on the higgs - boson mass , , to estimate the vacuum energy in the electroweak theory , we find .that is wrong by no less than fifty - four orders of magnitude !this mismatch has been known for about three decades .that long ago , tini veltman was concerned that something fundamental was missing from our conception of the electroweak theory . for many of us ,the vacuum energy problem has been a chronic dull headache for all this time .this raises an interesting point about how science is done , and how science progresses .we could , all of us , have said , `` the electroweak theory is wrong , let s put it aside .'' think of all that we would nt know , if we had followed that course . we ca nt forget about deep problems like the vacuum energy conflict , but we have to have the sense to put them aside , to defer consideration until the right moment . in the simplest terms , the question is , `` why is empty space so nearly massless ? '' that is a puzzle that has been with us repeatedly in the history of physics , and it is one that is particularly pointed now. maybe now should be the time that we return to the vacuum energy problem . over the last few years, we have a new wrinkle to the vacuum energy puzzle , the evidence within a certain framework of analysis for a nonzero cosmological constant , respecting the bounds cited a moment ago .that discovery recasts the problem in two important ways .first , instead of looking for a principle that would forbid a cosmological constant , perhaps a symmetry principle that would set it exactly to zero , now we have to explain a tiny cosmological constant !whether we do that in two steps or one step remains to be seen .second , from the point of view of the dialogue among observation and experiment and theory , now it looks as if we have access to some new stuff whose properties we can measure . maybe that will give us the clue that we need to solve this old problem .we now come to the question of how we separate the electroweak scale from higher scales .this is a realm in which we havent neglected gravity all along , because we have wanted to think of the electroweak theory as a truly useful effective theory , and we have known that we live in a world in which the electroweak scale is nt the only scale .we have taken note of the planck scale , and there may be a unification scale for strong , weak , and electromagnetic interactions ; for all we know , there are intermediate scales , where flavor properties are determined and masses are set .we know that the higgs - boson mass must be less than a tev , but the scalar mass communicates quantum - mechanically with the other scales that may range all the way up to .how do we keep the higgs - boson mass from being polluted by the higher scales ? 
that s the essence of the hierarchy problem .we ve dealt with this , for twenty - five years or so , by extending the standard model .maybe the higgs boson is a composite particle , maybe we have broken supersymmetry that tempers the quadratic divergences in the running of the higgs - boson mass , maybe .now , because of the observation that we havent tested gravity up to very high energies , it has become fashionable to turn the question around and ask why the planck scale is so much bigger than the electroweak scale , rather than why the electroweak scale is so low .in other words , why is gravity so weak ?that line of investigation has given rise to new thinking , part of it connected with a new conception of spacetime .what is in play here , again , is a question so old that , for a long time , we had forgotten that it was a question : is spacetime really three - plus - one dimensional ?what is our evidence for that ?how well do we know that there are not other , extra , dimensions ? what must be the character of those extra dimensions , and the character of our ability to investigate them , for them to have escaped our notice ?could extra dimensions be present ?what is their size ? what is their shape ?what influence do they exert on our world ?( because if they have no effect , it almost does nt matter that they exist . )are the extra dimensions where fermion masses are set , or electroweak symmetry is broken , or what ?how can we map them ?how can we attack the question of extra dimensions experimentally ?i will give you just two examples of new ways of thinking that are stimulated by the notion that additional dimensions have eluded detection .these are both probably wrong , and that hardly matters , because they are mind - expanding . perhaps , in contrast to the strong and electroweak gauge forces , gravity can propagate in the extra dimensions in all dimensions , because it is universal .when we inspect the world on small enough scales , we will see gravity leaking into the extra dimensions .then by gauss s law , the gravitational force will not be an inverse - square law , but will be proportional to , where is the number of extra dimensions .that would mean that , as we extrapolate to smaller distances , or higher energies , gravity will not follow the newtonian form forever , as we conventionally suppose .below a certain distance scale , it will start evolving more rapidly ; its strength will grow faster .therefore it might join the other forces at a much lower energy than the planck scale we have traditionally assumed . that could change our perception of the hierarchy problem entirely . that s a way we had nt thought about the problem before. it has stimulated a lot of research into how we might detect extra dimensions .perhaps extra dimensions offer a new way to try to understand fermion masses .one of the great challenges beyond the fact that we do nt have a clue how to calculate fermion masses is that the fermion masses have such wildly different values . in units of ,the mass of the top quark is , the mass of the electron is a few , and so on .how can a reasonable theory generate such big differences ?suppose , for simplicity , that spacetime has one additional dimension .in that extra dimension , wave packets correspond to left - handed and right - handed fermions . 
for reasons to be supplied by a future theory , each wave packet rides on a different rail ( is centered on a different value of the new coordinate , ) . it is the overlap between a left - handed wave packet , a right - handed wave packet , and the higgs field , assumed to vary little with the new coordinate , that sets the masses of the fermions . if the wave packets are gaussian ( how else could they be ? ) then they need only be offset by a little in order for the overlap integral to change by a lot . i do nt know whether this story can possibly be right , but it is very different from any other story we have told ourselves about fermion masses . for that reason , i think it is an important opening . other extra - dimensional delights may present themselves , provided that gravity is intrinsically strong but spread out into many dimensions . tiny black holes might be formed in high - energy collisions . we might have the possibility of detecting the exchange or emission of gravitons not as individual gravitons , but as towers of them . [ freeman dyson asserts that we do nt need a quantum theory of gravity because single graviton emission can never be detected . we would say that he is mistaken , but the dialogue reveals an interesting contrast of styles and world - views . ] at all events , gravity is here to stay in particle physics . it s been present for years as a headache , in the form of the hierarchy problem and in the challenge of the vacuum energy problem . now it is perhaps presenting itself as an opportunity ! as i intimated in [ subsec : natq ] , i have been concerned for some time with the prevailing narrow view of the goals of our science . it is troubling , to be sure , when we read in the popular press that the sole object of our endeavors is to find ( to check off , if you will ) the higgs boson , the holy grail ( at least for this month ) of particle physics . what is more troubling to me , the shorthand of the higgs search narrows the discourse within our own community . in response , i have begun to evolve a visual metaphor , the double simplex , for what we know , for what we hope might be true , and for the open questions raised by our current understanding . while i have a deep respect for the refiner s fire that is mathematics , i believe that we should be able to explain the essence of our ideas in languages other than equations . i interpolated a brief animated overview of the double simplex at this point in my lecture . for a preliminary exposition in a pedagogical setting , see ref . a more complete explanation of the aims of particle physics through the metaphor of the double simplex is in preparation . i ve given you my view of how our puzzles and opportunities and clues fit together , of how we might think about our field and its evolution . the organizers have given you their picture , with ten themes for ten days of our school . to encourage lively participation and debate , i issued a challenge to propose the best eleventh question ; the winning entry reads : _ to what extent is poincar symmetry exact ?
_ looking back on the history of science , discovering that different symmetries are not exact has ushered in a new era .poincar symmetry is particularly interesting because it is currently considered the most sacred geometry .moreover , its evolution to the form we learn about today has marked great revolution in physics , in the past .yasaman s trophy , a bottle of california s finest sparkling wine , bears the autographs of nobel laureates martin perl and burton richter ; slac notables jonathan dorfan , persis drell , sid drell , and vera lth ; high energy physics advisory panel chair fred gilman ; slac summer institute organizers joanne hewett , john jaros , tune kamae , and charles prescott ; and my own .even more precious was the opportunity need we say , obligation to present and defend the best eleventh question in an eleven - minute talk at that day s afternoon discussion session .padova student marco zanetti and colorado state / ucsd student thomas topel received special commendations for their questions on the nature of time and the mechanism that breaks the strong electroweak symmetry .their prizes are copies of peter galison s recent book , _einstein s clocks , poincare s maps : empires of time . _thanks and congratulations to all who entered the challenge !fermilab is operated by universities research association inc . under contractde - ac02 - 76ch03000 with the u.s .department of energy .i gratefully acknowledge the warm hospitality of cern th , where i prepared the final form of these notes .i thank tom appelquist , andreas kronfeld , and marvin weinstein for insightful comments on emergent phenomena .my enthusiastic thanks go to the organizers and participants in the xxxii slac summer institute for a very enjoyable and educational fortnight . for a capsule history , see h. pleijel , presentation speech for the 1936 nobel physics prize , _ in nobel lectures in physics ( 1922 1941 ) _( world scientific , singapore , 1998 ) , pp .for accounts of a renactment , ninety years after hess , see g. snow and r. j. wilkes , `` cosmic ray balloon flights at snowmass 2001 , '' http://faculty.washington.edu/~wilkes/salta/balloon/ ; mike perricone , `` balloon flight launches cosmic ray education project , '' http://www.fnal.gov/pub/ferminews/ferminews01-07-27/p3.html r. s. chivukula , `` the origin of mass in qcd , '' presented at this slac summer institute , , slides available at http://www-conf.slac.stanford.edu/ssi/2004/lec_notes/chivukula/default.htm .d. pines & r. b. laughlin , _ proc .* 97 , * 28 - 31 ( 2000 ) , http://www.pnas.org/cgi/content/full/97/1/28 .l. susskind , `` the anthropic landscape of string theory , '' .n. arkani - hamed , `` the last word on nature s greatest puzzles , '' presented at this slac summer institute , available at http://www-conf.slac.stanford.edu/ssi/2004/lec_notes/arkani-hamed/default.htm .w. j. marciano , `` precision ew measurements and the higgs mass , '' presented at this slac summer institute , , slides available at http://www-conf.slac.stanford.edu/ssi/2004/lec_notes/marciano1/default.htm and http://www-conf.slac.stanford.edu/ssi/2004/lec_notes/marciano2/default.htm .g. gratta , `` experimental neutrino oscillations , '' presented at this slac summer institute , available at http://www-conf.slac.stanford.edu/ssi/2004/lec_notes/gratta/default.htm and http://www-conf.slac.stanford.edu/ssi/2004/lec_notes/gratta2/default.htm .b. 
kayser , `` theory basics , '' presented at this slac summer institute , available at http://www-conf.slac.stanford.edu/ssi/2004/lec_notes/kayser1/default.htm and http://www-conf.slac.stanford.edu/ssi/2004/lec_notes/kayser2/default.htm. a. refregier , `` weak lensing : a probe of dark matter and dark energy , '' presented at this slac summer institute , available at http://www-conf.slac.stanford.edu/ssi/2004/lec_notes/refregier/default.htm .m. weinstein , phys .d * 8 * , 2511 ( 1973 ) .v. agrawal , s. m. barr , j. f. donoghue and d. seckel , phys .lett . * 80 * , 1822 ( 1998 ) [ ] .v. agrawal , s. m. barr , j. f. donoghue and d. seckel , phys .d * 57 * , 5480 ( 1998 ) [ ] .c. j. hogan , rev .* 72 * , 1149 ( 2000 ) [ ] .j. j. yoo and r. j. scherrer , phys .d * 67 * , 043517 ( 2003 ) [ ] .r. n. mohapatra , `` physics of neutrino mass , '' presented at this slac summer institute , new j. phys .* 6 * , 82 ( 2004 ) [ ] , slides available at http://www-conf.slac.stanford.edu/ssi/2004/lec_notes/mohapatra/default.htm .m. trodden , `` baryogenesis and leptogenesis , '' presented at this slac summer institute , , slides available at http://www-conf.slac.stanford.edu/ssi/2004/lec_notes/trodden/default.htm .b. c. allanach , g. a. blair , s. kraml , h. u. martyn , g. polesello , w. porod and p. m. zerwas , .n. arkani - hamed , s. dimopoulos and g. r. dvali , phys .b * 429 * , 263 ( 1998 ) [ ] .e. g. adelberger , b. r. heckel and a. e. nelson , ann .nucl . part .sci . * 53 * , 77 ( 2003 ) [ ] .s. smullin `` tests of short distance gravity , '' presnted at this slac summer institute , http://www-conf.slac.stanford.edu/ssi/2004/lec_notes/smullin/default.htm t. g. rizzo , `` pedagogical introduction to extra dimensions , '' presented at this slac summer institute , slides available at http://www-conf.slac.stanford.edu/ssi/2004/lec_notes/rizzo/default.htm. g. landsberg , `` collider searches for extra dimensions , '' presented at this slac summer institute , , slides available at http://www-conf.slac.stanford.edu/ssi/2004/lec_notes/landsberg/default.htm .n. arkani - hamed and m. schmaltz , phys .d * 61 * , 033005 ( 2000 ) [ ] .s. giddings , `` gravity and strings , '' presented at this slac summer institute , available at http://www-conf.slac.stanford.edu/ssi/2004/lec_notes/giddings/default.htm .a short introduction to the double simplex appears in 1.3 of c. quigg , `` beyond the standard model in many directions , '' lectures presented at the 2003 latin - american school of high - energy physics , . f. gilman , `` road map to the future , '' presented at this slac summer institute , available at http://www-conf.slac.stanford.edu/ssi/2004/lec_notes/gilman/default.htm .
|
opening lecture at the 2004 slac summer institute .
|
directed graph is a directed acyclic graph ( dag ) or acyclic digraph if does not contain a directed cycle . in this paper, we consider a generic optimization problem over a directed graph with acyclic constraints , which require the selected subgraph to be a dag .let us consider a complete digraph .let be the number of nodes in digraph , a decision variable matrix associated with the arcs , where is related to arc , the 0 - 1 ( adjacency ) matrix with if , otherwise , the sub - graph of defined by , and let be the collection of all acyclic subgraphs of .then , we can write the optimization problem with acyclic constraints as where is a function of . acyclic constraints ( or dag constraints ) appear in many network structured problems .the maximum acyclic subgraph problem ( mas ) is to find a subgraph of with maximum cardinality while the subgraph satisfies acyclic constraints .mas can be written in the form of with .although exact algorithms were proposed for a superclass of cubic graphs and for general directed graphs , most of the works have focused on approximations or inapproximability of either mas or the minimum feedback arc set problem ( fas ) .fas of a directed graph is a subgraph of that creates a dag when the arcs in the feedback arc set are removed from .note that mas is closely related to fas and is dual to the minimum fas . finding a feedback arc set with minimum cardinality is -complete in general .however , minimum fas is solvable in polynomial time for some special graphs such as planar graphs and reducible flow graphs , and a polynomial time approximation scheme was developed for a special case of minimum fas , where exactly one arc exists between any two nodes ( called tournament ) .dags are also extensively studied in bayesian network learning .given observational data with features , the goal is to find the true unknown underlying network of the nodes ( features ) while the selected arcs ( dependency relationship between features ) do not create a cycle .in the literature , approaches are classified into three categories : ( 1 ) score - based approaches that try to optimize a score function defined to measure fitness , ( 2 ) constraint - based approaches that test conditional independence to check existence of arcs between nodes ( 3 ) and hybrid approaches that use both constraint and score - based approaches .although there are many approaches based on the constraint - based or hybrid approaches , our focus is solving by means of score - based approaches . for a detailed discussion of constraint - based and hybrid approaches and models for undirected graphs ,the reader is referred to aragam and zhou and han _ et al ._ . 
for estimating the true network structure by a score - based approach, various functions have been used as different functions give different solutions and behave differently .many works focus on penalized least squares , where penalty is used to obtain sparse solutions .popular choices of the penalty term include bic , -penalty , -penalty , and concave penalty .lam and bacchus use minimum - description length as a score function , which is equivalent to bic .chickering propose a two - phase greedy algorithm , called greedy equivalence search , with the norm penalty .van de geer and bhlmann study the properties of the norm penalty and show positive aspects of using regularization .raskutti and uhler use a variant of the norm .they use cardinality of the selected subgraph as the score function where the subgraphs not satisfying the markov assumption are penalized with a very large penalty .aragam and zhou introduce a generalized penalty , which includes the concave penalty , and develop a coordinate descent algorithm . use the norm penalty and propose a tabu search based greedy algorithm for reduced arc sets by neighborhood selection in the pre - processing step . with any choice of a score function , optimizing the score function is computationally challenging , because the number of possible dags of grows super exponentially in the number of nodes and learning bayesian networksis also shown to be -complete .many heuristic algorithms have been developed based on greedy hill climbing or coordinate descent , or enumeration when the score function itself is the main focus .there also exist exact solution approaches based on mathematical programming .one of the natural approaches is based on cycle prevention constraints , which are reviewed in section [ section_mip_formulation ] .the model is covered in han _ as a benchmark for their algorithm , but the mip based approach does not scale . studied mip models for minimum fas based on triangle inequalities and set covering models .several works have been focused on the polyhedral study of the acyclic subgraph polytopes . in general ,mip models have gotten relatively less attention due to the scalability issue . in this paper, we propose an mip model and iterative algorithms based on the following well - known property of dags .[ property_dag_topological_order ] a directed graph is a dag if and only if it has a topological order .suppose that is the adjacency matrix of an acyclic graph .then , by sorting the nodes of acyclic graph based on the topological order , we can create a lower triangular matrix from , where row and column indices of the lower triangular matrix are in the topological order .then , any arc in the lower triangular matrix can be used without creating a cycle . by considering all arcs in the lower triangular matrix, we can optimize in without worrying to create a cycle .this is an advantage compared to arc - based search , where acyclicity needs to be examined whenever an arc is added .although the search space of topological orders is very large , a smart search strategy for a topological order may lead to a better algorithm than the existing arc - based search methods .the proposed mip assigns node orders to all nodes and add constraints to satisfy property [ property_dag_topological_order ] . 
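as a concrete illustration of property [ property_dag_topological_order ] , the short python sketch below computes a topological order with kahn s algorithm and checks that permuting the adjacency matrix by that order ( reversed here , to match the lower - triangular convention used above ) leaves no nonzero entries on or above the diagonal . the example graph , the arc convention , and the helper name are our own choices for illustration , not taken from the paper .

import numpy as np
from collections import deque

def topological_order(adj):
    # kahn s algorithm : returns a topological order , or None if the graph has a cycle
    n = len(adj)
    indeg = [int(sum(adj[i][j] for i in range(n))) for j in range(n)]
    queue = deque(j for j in range(n) if indeg[j] == 0)
    order = []
    while queue:
        i = queue.popleft()
        order.append(i)
        for j in range(n):
            if adj[i][j]:
                indeg[j] -= 1
                if indeg[j] == 0:
                    queue.append(j)
    return order if len(order) == n else None

# adjacency convention assumed here : g[i][j] = 1 means the graph contains arc (i, j)
g = np.array([[0, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])

order = topological_order(g)      # e.g. [0, 1, 2, 3]
rev = order[::-1]                 # list the nodes in reverse topological order
p = g[np.ix_(rev, rev)]           # adjacency matrix with rows and columns permuted
assert not np.any(np.triu(p))     # strictly lower triangular : no selected arc can close a cycle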
the iterative algorithms search over the topological order space by moving from one topological order to another order .the first algorithm uses the gradient to find a better topological order and the second algorithm uses historical choice of arcs to define the score of the nodes . with the proposed mip model and algorithms for, we consider a gaussian bayesian network learning problem with penalty for sparsity , which is discussed in detail in section [ section_reg_network ] . out of many possible models in the literature , we pick the -penalized least square model from recently published work of han _ , which solves the problem using a tabu search based greedy algorithm .the algorithm is one of the latest algorithms based on arc search and is shown to be scalable when is large .further , their score function , penalized least squares , is convex and can be solved by standard mathematical optimization packages . hence , we select the score function from han __ and use their algorithm as benchmark . in the computational experiment , we compare the performance of the proposed mip model and algorithms against the algorithm in han __ and other available mip models for synthetic and real instances .our contributions are summarized in the following . 1 .we consider a general optimization problem with acyclic constraints and propose an mip model and iterative algorithms for the problem based on the notion of topological orders .the proposed mip model has significantly less constraints than the other mip models in the literature , while maintaining the same order of the number of variables .the computational experiment shows that the proposed mip model outperforms the other mip models when the subgraph is sparse .the iterative algorithms based on topological orders outperform when the subgraph is dense .they are more scalable than the benchmark algorithm of han _ when the subgraph is dense . in section [ section_mip_formulation ] , we present the new mip model along with two mip models in the literature . in section [ section_fast_algorithm ] ,we present two iterative algorithms based on different search strategies for topological orders .the gaussian bayesian network learning problem with -penalized least square is introduced and computational experiment are presented in sections [ section_reg_network ] and [ sec_experiment ] , respectively . in the rest of the paper , we use the following notation . 1. = index set of the nodes 2 . = index set of the nodes excluding node , 3 . 4 . topological order given , we define to denote that the order of node is . for example ,given three nodes , and topological order , we have , , and . 
with this notation , if , then we can add an arc from to .in this section , we present three mip models for .the first and second models , denoted as and , respectively , are models in the literature for similar problems with acyclic constraints .the third model , denoted as , is the new model we propose based on property [ property_dag_topological_order ] .a popular mathematical programming based approach for solving is the cutting plane algorithm , which is well - known for the traveling salesman problem formulation .let be the set of all possible cycles and be the set of the arcs defining a cycle .let be a function that counts the number of selected arcs in from .then , can be solved by which can be formulated as an mip .note that has exponentially many constraints due to the cardinality of .therefore , it is not practical to pass all cycles in to a solver .instead , the cutting plane algorithm starts with an empty active cycle set and iteratively adds cycles to .that is , the algorithm iteratively solves with the current active set , detects cycles from the solution , and adds the cycles to .the algorithm terminates when there is no cycle detected from the solution of .one of the drawbacks of the cutting plane algorithm based on is that in the worst case we can add all exponentially many constraints .in fact , han _ et al ._ study the same model and concluded that the cutting plane algorithm does not scale .baharev _ et al ._ recently presented mip models for the minimum feedback arc set problem based on linear ordering and triangular inequalities , where the acyclic constraints presented were previously used for cutting plane algorithms for the linear ordering problem . for any , we can write the following mip model based on triangular inequalities presented in .note that is not defined for all and . instead of having a full matrix of binary variables ,the formulation only uses lower triangle of the matrix using the fact that .we can also use this technique to any of the mip models presented in this paper .however , for ease of explanation , we will use the full matrix , while the computational experiment is done with the reduced number of binary variables .therefore , the cutting plane algorithm with should be more scalable than the implementation in han _ et al . _ , which has twice more binary variables .baharev _ et al ._ also provides a set covering based mip formulation .the idea is similar to . in the set covering formulation , each row and column represents a cycle and an arc , respectively .similar to , existence of exponentially many cycles is a drawback of the formulation and baharev _ et al . _ use the cutting plane algorithm .next , we propose an mip model based on property [ property_dag_topological_order ] .although uses significantly less constraints than , still has constraints which grows rapidly in .on the other hand , the mip model we propose has variables and constraints .in addition to , let us define decision variable matrix .
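to make the cutting - plane scheme reviewed above concrete , here is a minimal python sketch for a ( weighted ) maximum acyclic subgraph instance : start with no cycle constraints , solve the current mip , look for a directed cycle among the selected arcs , add the corresponding cycle - prevention cut , and repeat until the selected subgraph is acyclic . the tiny instance , the unit weights , and the use of the open - source pulp / cbc solver are illustrative assumptions , not the setup of the paper ; a practical implementation would separate many cycles per round rather than one .

import pulp

def find_cycle(arcs, n):
    # depth - first search ; returns the arcs of one directed cycle , or None if acyclic
    succ = {i: [] for i in range(n)}
    for i, j in arcs:
        succ[i].append(j)
    color = {i: 0 for i in range(n)}   # 0 = unvisited , 1 = on stack , 2 = finished
    parent = {}
    def dfs(u):
        color[u] = 1
        for v in succ[u]:
            if color[v] == 0:
                parent[v] = u
                cyc = dfs(v)
                if cyc:
                    return cyc
            elif color[v] == 1:        # back arc u -> v closes the cycle v -> ... -> u -> v
                cyc, w = [(u, v)], u
                while w != v:
                    cyc.append((parent[w], w))
                    w = parent[w]
                return cyc
        color[u] = 2
        return None
    for s in range(n):
        if color[s] == 0:
            cyc = dfs(s)
            if cyc:
                return cyc
    return None

n = 4
weight = {(i, j): 1.0 for i in range(n) for j in range(n) if i != j}   # invented weights
prob = pulp.LpProblem("max_acyclic_subgraph", pulp.LpMaximize)
x = {a: pulp.LpVariable("x_%d_%d" % a, cat="Binary") for a in weight}
prob += pulp.lpSum(weight[a] * x[a] for a in weight)
while True:
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    chosen = [a for a in x if x[a].value() > 0.5]
    cycle = find_cycle(chosen, n)
    if cycle is None:
        break                          # the selected subgraph is a dag : stop
    prob += pulp.lpSum(x[a] for a in cycle) <= len(cycle) - 1   # cycle prevention cut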
|
we propose a mixed integer programming ( mip ) model and iterative algorithms based on topological orders to solve optimization problems with acyclic constraints on a directed graph . the proposed mip model has a significantly lower number of constraints compared to popular mip models based on cycle elimination constraints and triangular inequalities . the proposed iterative algorithms use gradient descent and iterative reordering approaches , respectively , for searching topological orders . a computational experiment is presented for the gaussian bayesian network learning problem , an optimization problem minimizing the sum of squared errors of regression models with l1 penalty over a feature network with application of gene network inference in bioinformatics .
|
it is well known that if very fast , rapidly oscillating signals propagate in a real medium , they undergo the dispersion phenomenon . various frequency components of a signalpropagate with different phase velocities , and they are differently dumped . as a result ,the shape of the signal is distorted during propagation the signal in the medium .naturally , this phenomenon is practically important only at very short times and very high frequencies ( of the order of hz and above in the assumed model ) . in now classical works sommerfeld and brillouin have shown that in the lorentz model of a dispersive medium , apart of the main signal two small precursors are formed . in the asymptotic description of the total fieldthese precursors are interpreted as contributions to the field resulting from two different pairs of saddle points . for the sommerfeld precursor pertinent simple saddle points vary outside some disc in a complex frequency plane . as the space - timecoordinate , to be defined later , takes the initial value equal unity , those points merge at infinity to form one saddle point of infinite order .as grows up to infinity , they separate into two simple saddle points that move symmetrically with respect to the imaginary axis towards corresponding branch points located in the left and the right half - plane , respectively . in the case of brillouin precursor ,two other simple saddle points vary inside a smaller disc . as the coordinate grows from unity , they move toward each other along the imaginary axis , coalesce into one saddle point of the second order on the axis , and then again split into simple saddle points that depart from the axis and move , symmetrically with respect to this axis , towards corresponding branch points in the left and the right half - plane , respectively .the location of the saddle points affects local oscillations and dumping of the precursor .it depends on the space - time coordinate and is governed by the saddle point equation . in this paperwe confine our attention to the brillouin precursor , also called a second precursor ( as opposed to the first , sommerfeld precursor ) . fundamental work on this precursor is due to brillouin . because of limitations of asymptotic methods then available ( now referred to as non - uniform methods ), brillouin could not correctly describe the precursor s dynamics for values of corresponding to the coalescence of simple saddle points into one saddle point of a higher order . with the development of advanced , uniform asymptotic techniques , complete description of the precursor now got feasible ( kelbert and sazonov , and oughstun and sherman ) . in the latter monograph , in addition to the delta function pulse , the unit step - function modulated signal and the rectangular modulated signal , the authors also studied an initial signal with finite rate of growth . in their model , however , the envelope of the initial signal is described by everywhere smooth function of time , tending to zero as time goes to minus infinity ( , sec .4.3.4 ) . in the present paperwe consider more realistic excitation which is caused by an abruptly switched modulated sine signal , vanishing identically for time and being non - zero for . at the derivative of the signal s envelope suffers a step discontinuity . as increases , the envelope grows with a finite speed , asymptotically tending to its maximum value . 
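before setting up the exact problem , a minimal numerical sketch of this kind of excitation may help fix ideas . the envelope below is the hyperbolic tangent described in the next section , and the product form , the carrier frequency , and the growth parameter are simple illustrative assumptions consistent with that description , not the values or the exact expression used later in the paper .

import numpy as np

omega_c = 2.0 * np.pi * 1.0e15   # carrier angular frequency ( illustrative value only )
beta = 5.0e14                    # envelope growth - rate parameter ( illustrative value only )

t = np.linspace(-5.0e-15, 5.0e-14, 4001)
envelope = np.where(t >= 0.0, np.tanh(beta * t), 0.0)   # vanishes identically for t < 0
signal = envelope * np.sin(omega_c * t)
# at t = 0 the signal itself is continuous , but the envelope s derivative jumps from 0 to beta ,
# which is the step discontinuity referred to above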
in the following sections we construct uniform asymptotic representation for the brillouin precursor resulting from this sort of excitation , and show how the speed of growth in the initial signal affects the form of the precursor .we also illustrate the results with numerical examples .we consider a one dimensional electromagnetic problem of propagation in a lorentz medium .the medium is characterized by the frequency - dependent complex index of refraction where so called plasma frequency of the medium , is a damping constant and is a characteristic frequency .any electromagnetic field in the medium satisfies the maxwell equations where is a real function and is a real constant ( hereafter assumed to be equal 1 ) . by fourier transforming the equations with respect to and assuming that the fields depend on one spatial coordinate only , we obtain the following equations for transforms of the respected fields where is the unit vector directed along -axis and is the fourier transform of .it then follows that , and are mutually perpendicular .moreover , if is known then is also known , and vice versa .it is also true for the electromagnetic field components , which are the inverse fourier transforms of and .therefore , the knowledge of the electric ( magnetic ) field is sufficient to determine the full electromagnetic field . to make the calculations as simple as possible ,it is advisable that the ( or ) axis be directed to coincide with the electric or magnetic field .assume that in the plane an electromagnetic signal is turned on at the moment .for it oscillates with a fixed frequency and its envelope is described by a hyperbolic tangent function .suppose the selected cartesian component ( say -component ) of one of these fields in the plane is given by the parameter determines how fast the envelope of the signal grows .this initial electromagnetic disturbance excites a signal outside the plane . inwhat follows we will be interested in the field propagating in the half - space .the problem under investigation can be classified as a mixed , initial - boundary value problem for the maxwell equations .the exact solution for this specific form of the initial signal is described by the contour integral defined in the complex frequency plane . here , , \hspace{.8cm}\end{aligned}\ ] ] the complex phase function given by =i\o[n(\o)-\t],\ ] ] and is the beta function defined via the psi function as .\ ] ] is a fourier transform of the initial signal envelope .the dimensionless parameter defines a space - time point in the field , and is the speed of light in vacuum .the contour is the line , where is a constant greater than the abscissa of absolute convergence for the function in square brackets in ( [ e4 ] ) and ranges from negative to positive infinity .our goal is twofold .first , we shall seek an asymptotic formula for the second ( brillouin ) precursor that results from the excitation .in other words , we shall find near saddle points contribution to the uniform asymptotic expansion of the total field .second , we shall examine how the speed parameter in ( [ e2 ] ) affects the form of the brillouin precursor .our derivation of the asymptotic formula for the brillouin precursor is based on the technique developed by chester et al. 
for two simple saddle points coalescing into one saddle point of the second order .the technique is also conveniently described in and .the locations in the complex -plane of the saddle points in ( [ e3 ] ) are determined from the saddle point equation at these points the first derivative of the phase function vanishes .we are interested in the near saddle points , varying in the domain . as increases from 1 to a value denoted by , the near saddle points and approach each other along the imaginary axis from below and from above , respectively ( ) .they coalesce to form a second order saddle point at .finally , as tends to infinity they depart from the axis and symmetrically approach the points in the right and in the left complex half plane , respectively . if is eliminated from ( [ e8 ] ) then the equation can be represented in the form of an eighth degree polynomial in on its left hand side , and zero on its right hand side .it does not seem to be possible to solve the equation exactly . in what followswe shall employ the solution to ( [ e8 ] ) which was obtained numerically .alternatively , a simple approximate solution found in could be used here at the expense of accuracy in resulting numerical examples . the first step in the procedure is to change the integration variable in ( [ e3 ] ) to a new variable , so that the map in some disk containing the saddle points ( but not any other saddle points ) is conformal and at the same time the exponent takes the simplest , polynomial form notice that has two simple saddle points that can coalesce into one saddle point of the second order , corresponding to .from we infer that for to be conformal , should correspond to , and should correspond to .then , where is a short notation for . in casethe saddle points merge to form one saddle point of the second order , one has , and the relevant formula for is ^{1/3}.\ ] ] by using correspondence in ( [ e14 ] ) one finds that and are equal to .\ ] ] the equation ( [ e19 ] ) for has three complex roots .only one root corresponds to a regular branch of the transformation ( [ e14 ] ) leading to the conformal map . to find the proper value of we first note from ( [ e17 ] ) that can take one of the three values : , or corresponding to three different branches of the transformation ( [ e14 ] ) .it can be readily verified that for both and are real valued and , .then it follows from ( [ e19 ] ) that . on the other hand , if then implies that in the present case ^\ast ] .it is now seen that rhs of ( [ e14 ] ) equals , where .hence for , .we now take advantage of the fact that as given by ( [ e16 ] ) tends in the limit to ( [ e17 ] ) as . because and for , and and as , we conclude that and for , i.e. ^{1/3}\;e^{i\alpha},\ ] ] where if or , respectively . 
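since the saddle-point equation is solved numerically in this work, a minimal python sketch of one way to locate the near saddle points may be useful. it is not the authors' code: the medium constants are brillouin's usual choice, the restriction to the imaginary axis is only meaningful before the coalescence, and the bracketing intervals are rough assumptions.

import numpy as np
from scipy.optimize import brentq

# brillouin's choice of lorentz-medium parameters (assumed values)
b2, w0, delta = 20.0e32, 4.0e16, 0.28e16   # b^2 [1/s^2], omega_0 [1/s], delta [1/s]

# with phi(w, theta) = i*w*(n(w) - theta), the saddle condition phi' = 0 reads
# n(w) + w*n'(w) = theta.  on the imaginary axis w = i*y this becomes the real
# equation d/dy [ y*N(y) ] = theta, with N(y) = n(i*y) given below.
def N(y):
    return np.sqrt(1.0 + b2 / (y**2 + 2.0*delta*y + w0**2))

def F(y, h=1.0e9):
    return N(y) + y * (N(y + h) - N(y - h)) / (2.0*h)   # = d/dy [ y*N(y) ]

for theta in (1.1, 1.3, 1.45):
    y_upper = brentq(lambda y: F(y) - theta, 1.0e10, 10.0*w0)           # upper near saddle
    y_lower = brentq(lambda y: F(y) - theta, -10.0*w0, -2.0*delta/3.0)  # lower near saddle
    print(f"theta = {theta:4.2f}:  near saddle points at  {y_upper:+.3e}j  and  {y_lower:+.3e}j")

# as theta grows towards its critical value (about 1.5 for these constants) the two roots
# approach each other; beyond the coalescence they leave the imaginary axis and this
# simple bracketing fails.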
with the new variable of integration the integral ( [ e3 ] )can be written down in the form where and \ ; \dot{\o}(s).\ ] ] the contour is an infinite arc in the left complex half - plane , symmetrical with respect to the real axis , running upwards and having rays determined by the angles and as its asymptotes .the domain is the image of under ( [ e14 ] ) .the term , standing for the integral of defined over the parts of outside , is exponentially smaller than itself .we now represent in the canonical form provided the function is regular , the last term in ( [ e22 ] ) vanishes at the saddle points , and its contribution to the asymptotic expansion is smaller than that from the first two terms .indeed , it can be shown that integration by parts of the last term leads to an integral of similar form as ( [ e16 ] ) multiplied by . to determine and we substitute in ( [ e22 ] ) and thus find by using ( [ e22 ] ) and( [ e14 ] ) in ( [ e16 ] ) , and extending the integration contour in the resulting integrals to , we find that the leading term of the asymptotic expansion of as is given by +{\lambda}^{-2/3}c_1(\t ) \hbox{ai}^{\prime}[{\lambda}^{2/3}\g(\t)^2]\right).\ ] ] it is defined through the airy function and its derivative , as given by ( ) plots of both functions for real are shown in fig . 1 .the expansion holds for any , including .this special case corresponds to coalescing of the two simple saddle points into one saddle point of the second order . in other wordsthe expansion is uniform in , and hence in .it is seen that for , i.e. for , the algebraic order of in is .this behavior is characteristic of an integral with a saddle point of the second order . for separated from zero the airy function and its derivative can be replaced by their asymptotic expansions ( ) ,\ ] ] \ ] ] as , and \left(1+o\left[(-x)^{-2}\right]\right)\right.}\\ & & \left . { } + o\left[(-x)^{-3/2}\right]\right\ } , \hspace{3in}\nonumber\end{aligned}\ ] ] \left(1+o\left[(-x)^{-2}\right]\right)\right.}\\ & & \left . { } + o\left[(-x)^{-3/2}\right]\right\ } \hspace{3in}\nonumber\end{aligned}\ ] ] as .by using these expansions in ( [ e30 ] ) we arrive at the following non - uniform asymptotic representation of the precursor if , and if .we see from the above formulas that for sufficiently distant from ( for brillouin s choice of medium parameters ) , the representation ( [ e30 ] ) reduces to a simple saddle point contribution from if , and to a sum of simple saddle point contributions from and if . in this mannerit is confirmed that the saddle point does not contribute when .this is a direct consequence of the fact that the original contour of integration in ( [ e3 ] ) can not be deformed to a descent path from imaginary .the algebraic order of in is now because in this case separate simple saddle points contribute to the expansion . from ( [ e34 ] ) and ( [ e35 ] )it is also seen that these formulas are non - applicable at ( i.e. ) , where . on the other handthe uniform expansion ( [ e30 ] ) remains valid for any ( and ) .in particular it provides a smooth transition between the cases of small and large .if , then it can be readily seen that , , and similarly . in this case( [ e35 ] ) can be written down in a more compact form .\ ] ] fig .2 shows the dynamics of the brillouin precursor in a lorentz medium as given by its uniform representation ( [ e30 ] ) and non - uniform ones ( [ e34 ] ) and ( [ e35 ] ) . throughout this work the brillouin s choice of medium parameters is assumed . 
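the large-argument expansions of the airy function and its derivative that are substituted at this point did not survive the extraction; to leading order, the standard forms (presumably what the garbled display above encodes) are

\mathrm{Ai}(x) \sim \frac{e^{-\frac{2}{3}x^{3/2}}}{2\sqrt{\pi}\,x^{1/4}}, \qquad
\mathrm{Ai}'(x) \sim -\frac{x^{1/4}\,e^{-\frac{2}{3}x^{3/2}}}{2\sqrt{\pi}}, \qquad x \to +\infty,

\mathrm{Ai}(-x) \sim \frac{\sin\!\left(\frac{2}{3}x^{3/2} + \frac{\pi}{4}\right)}{\sqrt{\pi}\,x^{1/4}}, \qquad
\mathrm{Ai}'(-x) \sim -\frac{x^{1/4}\cos\!\left(\frac{2}{3}x^{3/2} + \frac{\pi}{4}\right)}{\sqrt{\pi}}, \qquad x \to +\infty .

inserting the exponentially decaying pair into the uniform expansion reproduces the monotone representation valid before the coalescence, while the oscillatory pair yields the two-saddle-point representation valid well beyond it.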
, and ._ ] for the function takes positive values and the precursor is described by a monotonically changing function .adversely , for the argument in both functions takes negative values which leads to oscillatory behavior of the precursor .this reflects the behavior of both airy function and its derivative for positive and negative values of their argument ( see fig 1 ) ., and ( bottom ) . here , and ._,title="fig : " ] , and ( bottom ) . here, and ._,title="fig : " ] an important question arises on how the rate parameter affects the form of the brillouin precursor .as the parameter in ( [ e30 ] ) increases starting from relatively small values , the shape of the precursor remains virtually unchanged while its magnitude grows .this tendency is no longer valid if enters a transitory interval . in that interval the shape of the precursor changes and its magnitude rapidly increases . above transitory interval , further increase of leaves the shape and the magnitude of the precursor virtually constant .the form of brillouin precursor for below ( ) and above ( ) the transitory interval is shown in fig .explanation of this behavior lies in the properties of the coefficients and in ( [ e30 ] ) , which are -dependent . the coefficients , in turn , determine the weight with which airy function and its derivative contribute to the precursor . and against the speed parameter at . here , and ._ ] first , consider the case of . in fig .4 the coefficients of , respectively , and in the parentheses in ( [ e30 ] ) multiplied by are plotted against .the value of is chosen to be slightly below .for relatively small the term proportional to dominates over the term proportional to . in this casethe ratio of both terms remains unchanged in a wide interval of variation .the magnitude of the precursor increases with growth up to the moment where the contribution from changes sign and rapidly grows until finally it settles down at a virtually constant level . at the same timethe contribution from decreases to another constant level and is very small compared to the other term . at this stagethe shape and the magnitude of the precursor are approximately determined by the special form of the function which appears in and , which is a limiting case of as , i.e. for the initial signal with a unit step function envelope .now consider the case of where the precursor becomes oscillatory .the envelope of the oscillations can be conveniently approximated with the help of ( [ e36 ] ) by }\left| \left(\frac{-2\pi}{{\lambda}\phi^{''}(\o_{2})}\right)^{1/2 } g(\o_2;\b)\right|,\ ] ] provided is sufficiently large . .calculated at different values of .here , and ._ ] in fig . 5 the magnitude of the precursor envelopeis plotted against the parameter for different values of .it is seen again that after fast growth of the envelope magnitude at relatively small values of , which occurs with approximately the same rate for all , the magnitude reaches a saturation level at higher values of . since the saturation appears earlier at larger values of , the precursor envelope has a tendency to become narrower with growing .additionally , one observes that with growing the first extremum moves towards larger values of . 
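the switch from a monotone to an oscillatory precursor noted above is simply the sign change of the argument fed to the airy functions; a few lines of python (illustration only, with arbitrary sample points) make this concrete.

import numpy as np
from scipy.special import airy

# Ai and Ai' decay monotonically for positive arguments and oscillate for negative ones,
# mirroring the behaviour of the precursor below and above the coalescence value of theta.
x = np.array([4.0, 2.0, 0.5, -0.5, -2.0, -4.0, -8.0])
ai, aip, _, _ = airy(x)
for xi, a, ap in zip(x, ai, aip):
    print(f"x = {xi:6.2f}   Ai(x) = {a:+.4e}   Ai'(x) = {ap:+.4e}")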
this is a direct consequence of the fact that the first extremum of the airy function occurs at a more negative value of its argument than the first extremum of the derivative of the airy function, which has the additional effect of narrowing the precursor shape. in this paper we have derived uniform and non-uniform asymptotic representations for the brillouin precursor in a lorentz medium, excited by an incident signal with a finite rise time and a well-defined startup time. with these representations we analyzed how the speed parameter affects the form and magnitude of the precursor. the results can be helpful, e.g., in applications involving triggering devices that work with signal amplitudes close to the noise level. the problem of the smooth transition from the brillouin precursor to the main signal is not considered in this paper.
|
propagation of a brillouin precursor in a dispersive lorentz medium is considered. the precursor is excited by a sine-modulated initial signal whose envelope is described by a hyperbolic tangent function. the purpose of the paper is to show how the rate of growth of the initial signal affects the form of the brillouin precursor. a uniform asymptotic approach, pertinent to coalescing saddle points, is applied in the analysis. the results are illustrated with numerical examples. _ key words: _ lorentz medium, dispersive propagation, brillouin precursor, uniform asymptotic expansions
|
ognitive radio ( cr ) , since the name was coined by mitola in his seminal work , has drawn intensive attentions from both academic and industrial communities .generally speaking , there are three basic operation models for crs , namely , _ interweave _ , _ overlay _ , and _ underlay _( see , e.g. , and references therein ) .the interweave method is also known as _ opportunistic spectrum access _ ( osa ) , originally outlined in and later introduced by darpa , where the cr is allowed to transmit over the spectrum allocated to an existing primary radio ( pr ) system only when all pr transmissions are detected to be off . in contrast to interweave , the overlay and underlay methods allow the cr to transmit concurrently with prs at the same frequency .the overlay method utilizes an interesting `` cognitive relay '' idea , . for this method, the cr transmitter is assumed to know perfectly all the channels in the coexisting pr and cr links , as well as the pr messages to be sent .thereby , the cr transmitter is able to forward pr messages to the pr receivers so as to compensate for the interference due to its own messages sent concurrently to the cr receiver . in comparison with overlay ,the underlay method requires only the channel gain knowledge from the cr transmitter to the pr receivers , whereby the cr is permitted to transmit regardless of the on / off status of pr transmissions provided that its resulted signal power levels at all pr receivers are kept below some predefined threshold , also known as the _ interference - temperature _ constraint , . from implementation viewpoints , interweave and underlay methods could be more favorable than overlay for practical cr systems . in a wireless environment, channels are usually subject to space - time - frequency variation ( fading ) due to multipath propagation , mobility , and location - dependent shadowing . as such , _ dynamic resource allocation _ ( dra ) becomes crucial to crs for optimally deploying their transmit strategies , where the transmit power , bit - rate , bandwidth , and antenna beam are dynamically allocated based upon the channel state information ( csi ) of the pr and cr systems ( see , e.g. , - ) . in this paper , we are particularly interested in the case where the cr terminal is equipped with multi - antennas so that it can deploy joint transmit precoding and power control , namely _ cognitive beamforming _( cb ) , to effectively balance between avoiding interference at the pr terminals and optimizing performance of the cr link . in ,various cb schemes have been proposed considering the cr transmit power constraint and a set of interference power constraints at the pr terminals , under the assumption that the cr transmitter knows perfectly all the channels over which it interferes with pr terminals . in this work , however , we propose a _ practical _cb scheme , which does not require any prior knowledge of the cr - to - pr channels . 
instead , by exploiting the time - division - duplex ( tdd ) operation mode of the pr link and the channel reciprocities between the cr and pr terminals ,the proposed cb scheme utilizes a new idea so - called _ effective interference channel _ ( eic ) , which can be efficiently estimated at the cr terminal via periodically observing the pr transmissions .thereby , the proposed learning - based cb scheme eliminates the overhead for pr terminals to estimate the cr - to - pr channels and then feed them back to the cr , and thus makes the cb implementable in practical systems .furthermore , the proposed learning - based cb scheme with the eic creates a new operation model for crs , where the cr is able to transmit with prs at the same time and frequency over the detected available spatial dimensions , thus named as _ opportunistic spatial sharing _ ( oss ) . on the one hand ,oss , like the underlay method , utilizes the spectrum more efficiently than the interweave method by allowing the cr to transmit concurrently with prs . on the other hand, oss can further improve the cr transmit spectral efficiency over the underlay method by exploiting additional side information on pr transmissions , which is extractable from the observed eic ( more details will be given later in this paper ) . therefore , oss is a more superior operation model for crs than both underlay and interweave methods in terms of the spectrum utilization efficiency .the main results of this paper constitute two parts , which are summarized as follows : * first , we consider the ideal case where the cr s estimate on the eic is _ perfect _ or noiseless .for this case , we derive the conditions under which the eic is sufficient for the proposed cb scheme to cause no adverse effects on the concurrent pr transmissions .in addition , we show that when the pr link is equipped with multi - antennas but only communicates over a subspace of the total available spatial dimensions , the learning - based cb scheme with the eic leads to a capacity gain over the conventional zero - forcing ( zf ) scheme even with the exact cr - to - pr channel knowledge , via exploiting side information on pr transmit dimensions extracted from the eic . *second , we consider the practical case with _imperfect _ estimation of eic due to finite learning time .we propose a _ two - phase _protocol for crs to implement learning - based cb .the first phase is for the cr to observe the pr signals and estimate the eic , while the second phase is for the cr to transmit data with cb designed via the estimated eic .we present two algorithms for crs to estimate the eic , under different assumptions on the availability of the noise power knowledge at the cr terminal . furthermore , due to imperfect channel estimation , the proposed cb scheme results in leakage interference at the pr terminals , which leads to an interesting _ learning - throughput tradeoff _ ,i.e. 
, different choices of time allocation between cr s channel learning and data transmission correspond to different tradeoffs between pr transmission protection and cr throughput maximization .we formulate the problem to determine the optimal time allocation for estimating the eic to maximize the effective throughput of the cr link , subject to the cr transmit power constraint and the interference power constraints at the pr terminals ; and derive the solution via applying convex optimization techniques .the rest of this paper is organized as follows .section [ sec : system model ] presents the cr system model .section [ sec : effective channel ] introduces the idea of eic .section [ sec : beamforming ] studies the cb design based on the eic under perfect channel learning .section [ sec : tradeoff ] considers the case with imperfect channel learning , presents algorithms for estimating the eic , and studies the learning - throughput tradeoff for the cr link .section [ sec : simulation results ] presents numerical results to corroborate the proposed studies .finally , section [ sec : conclusion ] concludes the paper ._ notation _ : scalar is denoted by lower - case letter , e.g. , , and bold - face lower - case letter is used for vector , e.g. , , and bold - face upper - case letter is for matrix , e.g. , . for a matrix , , , , , , , and denote its trace , rank , determinant , inverse , pseudo inverse , transpose , and conjugate transpose , respectively . denotes a diagonal matrix with diagonal elements given by . for a matrix , and denote the maximum and minimum eigenvalues of , respectively . and denote the identity matrix and the all - zero matrix , respectively , with proper dimensions . for a positive semi - definite matrix , denoted by , denotes a square - root matrix of , i.e. , , which is assumed to be obtained from the eigenvalue decomposition ( evd ) of : if the evd of is expressed as , then . denotes the euclidean norm of a complex vector . denotes the space of matrices with complex entries .the distribution of a circular symmetric complex gaussian ( cscg ) vector with mean vector and covariance matrix is denoted by , and stands for `` distributed as '' . i ] , ; s are the additive noises assumed to be independent cscg random vectors with zero - mean and covariance matrix denoted by .denote the cardinality of the set as .it is reasonable to assume that pr will transmit , with a constant probability , during a certain time period .mathematically , we may use =\alpha_j ] . note that , where a strict inequality occurs when there are guard ( silent ) intervals between alternate pr tdd transmissions . also note thatif , there will be no active pr transmissions in the observed frequency band .define as , where , if and otherwise .obviously , s are random variables with =\alpha_j i ] , , but ={{\mbox{\boldmath{ } } } } ] and ^t q i i ] and ={\mbox{\boldmath{ } } } _ { \rm cr} ] and where s and are given in section [ sec : effective channel ] . from (* appendix i ) , we know that the first order perturbation to due to the finite number of samples and the additive noise b g u c u g b b g y z u c u z y g b c b g y y g b c w a g \cal a s s \cal a g a w c w i 0 i 0 0 i i 0 w c w w x i ] for any constant matrix ; is due to the definitions of and ; and is approximately true since is usually a large number . 
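as a concrete illustration of the eic-based design described in the earlier sections, the following python sketch shows the two-phase idea in miniature: estimate the covariance of the observed pr transmissions, take its dominant eigenvectors as the learned effective interference channel, and beamform in the orthogonal complement. it is a toy example under idealized assumptions (random i.i.d. channels, a single pr data stream, perfect synchronization), not the algorithm actually proposed in the paper.

import numpy as np

rng = np.random.default_rng(0)
Mt, d, N = 4, 1, 200        # cr transmit antennas, pr signal dimensions, learning samples

# unknown pr-to-cr channel (reciprocal to the cr-to-pr channel under tdd), drawn at random here
G = (rng.standard_normal((Mt, d)) + 1j*rng.standard_normal((Mt, d))) / np.sqrt(2)

# phase 1: observe pr transmissions plus noise and form the sample covariance
S = (rng.standard_normal((d, N)) + 1j*rng.standard_normal((d, N))) / np.sqrt(2)
noise = 0.1*(rng.standard_normal((Mt, N)) + 1j*rng.standard_normal((Mt, N))) / np.sqrt(2)
Y = G @ S + noise
Q = Y @ Y.conj().T / N                          # sample covariance of the observations

# learned eic: the d dominant eigenvectors of the sample covariance
w_eig, V = np.linalg.eigh(Q)
F_hat = V[:, np.argsort(w_eig)[::-1][:d]]

# phase 2: project the cr beamformer onto the orthogonal complement of the learned eic
P = np.eye(Mt) - F_hat @ F_hat.conj().T         # projector onto the estimated null space
h_cr = (rng.standard_normal(Mt) + 1j*rng.standard_normal(Mt)) / np.sqrt(2)   # cr link channel
w = P @ h_cr
w = w / np.linalg.norm(w)                       # unit-norm cognitive beamformer

# leakage interference towards the pr side shrinks as the learning time N grows
print("leakage power |G^H w|^2 =", float(np.linalg.norm(G.conj().T @ w)**2))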
from , we have by noting , from ( [ eq : lambda min ] ) and ( [ eq : lambda max ] ) it follows that using ( [ eq : ij bar ] ) , ( [ eq : i bar final ] ) , and ( [ eq : inequality ] ) , the upper bound on given in ( [ eq : ub ij bar ] ) is obtained .from ( [ eq : f z ] ) , it is known that in each section ] , is differentiable . for boundary points of each section, it can be verified that . therefore , is differentiable at all the points . for a given , is obtained by solving the optimization problem in ( [ eq : optimize f z ] ) , which can be easily verified to be a convex optimization problem .thus , the duality gap for this optimization problem is zero and can be equivalently obtained as the optimal value of the following min - max optimization problem : where the summations are taken over , and is the optimal dual variable for a given .in fact , it can be shown that is just the water level given in ( [ eq : water level ] ) corresponding to the total power .denote as any constant in $ ] .let , , and be the optimal for , , and , respectively . for , we have where the inequality is due to the fact that is not the optimal dual solution for .therefore , thus , is a concave function .x. kang , y. c. liang , a. nallanathan , h. garg , and r. zhang , `` optimal power allocation for fading channels in cognitive radio networks : ergodic capacity and outage capacity , '' _ ieee trans .wireless commun .940 - 950 , feb .2009 .y. chen , g. yu , z. zhang , h. h. chen , and p. qiu , `` on cognitive radio networks with opportunistic power control strategies in fading channels , '' _ ieee .wireless commun .7 , no . 7 ,2752 - 2761 , jul . 2008 .r. zhang , s. cui , and y .- c .liang , `` on ergodic sum capacity of fading cognitive multiple - access and broadcast channels , '' _ to appear in ieee trans .theory_. available [ online ] at arxiv:0806.4468 .q. h. spencer , a. l. swindlehurst , and m. haardt , `` zero - forcing methods for downlink spatial multiplexing in multiuser mimo channels , '' _ ieee trans .sig . process ._ , vol . 52 , no .461 - 471 , feb . 2004 .f. gao , y. zeng , a. nallanathan , and t .- s .ng , `` robust subspace blind channel estimation for cyclic prefixed mimo odfm systems : algorithm , identifiability and performance analysis , '' _ ieee j. sel .areas commun .378 - 388 , feb . 2008 .
|
this paper studies the transmit strategy for a secondary link, or the so-called cognitive radio (cr) link, under opportunistic spectrum sharing with an existing primary radio (pr) link. it is assumed that the cr transmitter is equipped with multiple antennas, whereby transmit precoding and power control can be jointly deployed to balance between avoiding interference at the pr terminals and optimizing performance of the cr link. this operation is named _ cognitive beamforming _ (cb). unlike prior studies on cb that assume perfect knowledge of the channels over which the cr transmitter interferes with the pr terminals, this paper proposes a _ practical _ cb scheme utilizing a new idea of the _ effective interference channel _ (eic), which can be efficiently estimated at the cr transmitter from its observed pr signals. somewhat surprisingly, this paper shows that the learning-based cb scheme with the eic improves the cr channel capacity over the conventional scheme even with exact cr-to-pr channel knowledge, when the pr link is equipped with multiple antennas but only communicates over a subspace of the total available spatial dimensions. moreover, this paper presents algorithms for the cr to estimate the eic over a finite learning time. due to channel estimation errors, the proposed cb scheme causes leakage interference at the pr terminals, which leads to an interesting _ learning-throughput tradeoff _ phenomenon for the cr, pertinent to its time allocation between channel learning and data transmission. this paper derives the optimal channel learning time to maximize the effective throughput of the cr link, subject to the cr transmit power constraint and the interference power constraints for the pr terminals. _ index terms: _ cognitive beamforming, cognitive radio, effective interference channel, learning-throughput tradeoff, multi-antenna systems, spectrum sharing.
|
in this article , we discuss a graph - based approach for testing spatial point patterns . in statistical literature , the analysis of spatial point patterns in natural populations has been extensively studied and have important implications in epidemiology , population biology , and ecology .we investigate the patterns of one class with respect to other classes , rather than the pattern of one - class with respect to the ground .the spatial relationships among two or more groups have important implications especially for plant species .see , for example , , , and .our goal is to test the spatial pattern of complete spatial randomness against spatial segregation or association .complete spatial randomness ( csr ) is roughly defined as the lack of spatial interaction between the points in a given study area .segregation is the pattern in which points of one class tend to cluster together , i.e. , form one - class clumps . in association , the points of one class tend to occur more frequently around points from the other class . for convenience and generality ,we call the different types of points as classes " , but the class can be replaced by any characteristic of an observation at a particular location . for example , the pattern of spatial segregation has been investigated for species ( ) , age classes of plants ( ) and sexes of dioecious plants ( ) .we use special graphs called proximity catch digraphs ( pcds ) for testing csr against segregation or association . in recent years , introduced a random digraph related to pcds ( called class cover catch digraphs ) in and extended it to multiple dimensions . , , , and demonstrated relatively good performance of it in classification . in this article , we define a new class of random digraphs ( called pcds ) and apply it in testing against segregation or association .a pcd is comprised by a set of vertices and a set of ( directed ) edges .for example , in the two class case , with classes and , the points are the vertices and there is an arc ( directed edge ) from to , based on a binary relation which measures the relative allocation of and with respect to points . by construction , in our pcds , points further away from points will be more likely to have more arcs directed to other points , compared to the points closer to the points .thus , the relative density ( number of arcs divided by the total number of possible arcs ) is a reasonable statistic to apply to this problem . to illustrate our methods ,we provide three artificial data sets , one for each pattern .these data sets are plotted in figure [ fig : deldata ] , where points are at the vertices of the triangles , and points are depicted as squares .observe that we only consider the points in the convex hull of points ; since in the current form , our proposed methodology only works for such points .hence we avoid using a real life example , but use these artificial pattern realizations for illustrative purposes . under segregation ( left ) the relative density of our pcd will be larger compared to the csr case ( middle ) , while under association ( right ) the relative density will be smaller compared to the csr case .the statistical tool we utilize is the asymptotic theory of -statistics . 
properly scaled ,we demonstrate that the relative density of our pcds is a -statistic , which have asymptotic normality by the general central limit theory of -statistics .the digraphs introduced by , whose relative density is also of the -statistic form , the asymptotic mean and variance of the relative density is not analytically tractable , due to geometric difficulties encountered . however , the pcd we introduce here is a parametrized family of random digraphs , whose relative density has tractable asymptotic mean and variance .ceyhan and priebe introduced an ( unparametrized ) version of this pcd and another parametrized family of pcds in and , respectively . used the domination number ( which is another statistic based on the number of arcs from the vertices ) of the second parametrized family for testing segregation and association .the domination number approach is appropriate when both classes are comparably large . used the relative density of the same pcd for testing the spatial patterns .the new parametrized family of pcds we introduce has more geometric appeal , simpler in distributional parameters in the asymptotics , and the range of the parameters is bounded . using the delaunay triangulation of the observations, we will define the parametrized version of the proximity maps of in section [ sec : tau - factor - pcd ] for which the calculations regarding the distribution of the relative density are tractable .we then can use the relative density of the digraph to construct a test of complete spatial randomness against the alternatives of segregation or association which are defined explicitly in sections [ sec : spat - pattern ] and [ sec : null - and - alt ] .we will calculate the asymptotic distribution of the relative density for these digraphs , under both the null distribution and the alternatives in sections [ sec : asy - norm - null ] and [ sec : asy - norm - alt ] , respectively .this procedure results in a consistent test , as will be shown in section [ sec : consistency ] .the finite sample behaviour ( in terms of power ) is analyzed using monte carlo simulation in section [ sec : monte - carlo ] .the pitman asymptotic efficiency is analyzed in section [ sec : pitman ] .the multiple - triangle case is presented in section [ sec : mult - tri ] and the extension to higher dimensions is presented in section [ sec : ncs - higher - d ] .all proofs are provided in the appendix .for simplicity , we describe the spatial point patterns for two - class populations .the null hypothesis for spatial patterns have been a controversial topic in ecology from the early days ( ) .but in general , the null hypothesis consists of two random pattern types : complete spatial randomness or random labeling . under _ complete spatial randomness _( csr ) for a spatial point pattern where denotes the area functional , we have * given events in domain , the events are an independent random sample from a uniform distribution on ; * there is no spatial interaction .furthermore , the number of events in any planar region with area follows a poisson distribution with mean , whose probability mass function is given by where is the intensity of the poisson distribution . under _ random labeling _, class labels are assigned to a fixed set of points randomly so that the labels are independent of the locations .thus , random labeling is less restrictive than csr . 
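a csr pattern of the kind just described is straightforward to simulate, which can help fix ideas; the sketch below (unit-square window and an arbitrary intensity, both assumptions) generates a poisson number of events and then places them uniformly.

import numpy as np

rng = np.random.default_rng(42)
lam, area = 100.0, 1.0                      # intensity and window area (arbitrary choices)

# csr in the unit square: a poisson number of events, then i.i.d. uniform locations
n_events = rng.poisson(lam * area)
points = rng.random((n_events, 2))

# the count in any sub-region is poisson with mean lam times the sub-region's area
in_left_half = int(np.sum(points[:, 0] < 0.5))
print(n_events, "events in total,", in_left_half, "in the left half of the window")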
butconditional on a set of points from csr , both processes are equivalent .we only consider a special case of csr as our null hypothesis in this article .that is , only points are assumed to be uniformly distributed over the convex hull of points .the alternative patterns fall under two major categories called _ association _ and _ segregation_. _ association _ occurs if the points from the two classes together form clumps or clusters .that is , association occurs when members of one class have a tendency to attract members of the other class , as in symbiotic species , so that the will tend to cluster around the elements of .for example , in plant biology , points might be parasitic plants exploiting points .as another example , and points might represent mutualistic plant species , so they depend on each other to survive . in epidemiology, points might be contaminant sources , such as a nuclear reactor , or a factory emitting toxic gases , and points might be the residence of cases ( incidences ) of certain diseases caused by the contaminant , e.g. , some type of cancer ._ segregation _ occurs if the members of the same class tend to be clumped or clustered ( see , e.g. , ) .many different forms of segregation are possible .our methods will be useful only for the segregation patterns in which the two classes more or less share the same support ( habitat ) , and members of one class have a tendency to repel members of the other class .for instance , it may be the case that one type of plant does not grow well in the vicinity of another type of plant , and vice versa .this implies , in our notation , that are unlikely to be located near any elements of .see , for instance , ( ) . in plant biology, points might represent a tree species with a large canopy , so that , other plants ( points ) that need light can not grow around these trees . as another interesting but contrived example , consider the arsonist who wishes to start fires with maximum duration time ( hence maximum damage ) , so that he starts the fires at the furthest points possible from fire houses in a city .then points could be the fire houses , while points will be the locations of arson cases .we consider _ completely mapped data _ ,i.e. , the locations of all events in a defined space are observed rather than sparsely sampled data ( only a random subset of locations are observed ) .in general , in a random digraph , there is an arc between two vertices , with a fixed probability , independent of other arcs and vertex pairs .however , in our approach , arcs with a shared vertex will be dependent .hence the name _ data - random digraphs_. let be a measurable space and consider a function , where represents the power set of .then given , the _ proximity map _ associatesa _ proximity region _ with each point .the region is defined in terms of the distance between and . if is a set of -valued random variables , then the , are random sets . if the are independent and identically distributed , then so are the random sets , .define the data - random proximity catch digraph with vertex set and arc set by where point catches " point .the random digraph depends on the ( joint ) distribution of the and on the map .the adjective _ proximity _ for the catch digraph and for the map comes from thinking of the region as representing those points in `` close '' to ( and ) .the _ relative density _ of a digraph of order ( i.e. 
, number of vertices is ) , denoted , is defined as where denotes the set cardinality functional ( ) .thus represents the ratio of the number of arcs in the digraph to the number of arcs in the complete symmetric digraph of order , namely . if , then the relative density of the associated data - random proximity catch digraph , denoted , is a u - statistic , where with being the indicator function .we denote as henceforth for brevity of notation . although the digraph is not symmetric ( since does not necessarily imply ) , is defined as the number of arcs in between vertices and , in order to produce a symmetric kernel with finite variance ( ) .the random variable depends on and explicitly and on implicitly . the expectation n\ge 2 ] simplifies to = \frac{1}{2n(n-1 ) } { \mathbf{var}\,[}h_{12 } ] + \frac{n-2}{n(n-1 ) } { \mathbf{cov}\,[}h_{12},h_{13 } ] \leq 1/4.\end{aligned}\ ] ] a central limit theorem for -statistics ( ) yields ) \stackrel{\mathcal{l}}{\longrightarrow } { \mathcal{n}}(0,{\mathbf{cov}\,[}h_{12},h_{13}])\end{aligned}\ ] ] provided that > 0 ] , depends on only and .thus , we need determine only ] in order to obtain the normal approximation ,{\mathbf{var}\,[}\rho_n]\right ) = { \mathcal{n}}\left(\frac{{\mathbf{e}\,[}h_{12}]}{2},\frac{{\mathbf{cov}\,[}h_{12},h_{13}]}{n}\right ) \mbox { for large }.\end{aligned}\ ] ] we define the -factor central similarity proximity map briefly .let and let be three non - collinear points. denote the triangle including the interior formed by the points in as . for ] ,the _ -factor _central similarity proximity region is defined to be the triangle with the following properties : * has an edge parallel to such that and where is the euclidean ( perpendicular ) distance from to , * has the same orientation as and is similar to , * is at the center of mass of . note that ( i ) implies the -factor " , ( ii ) implies similarity " , and ( iii ) implies central " in the name , _ -factor central similarity proximity map_. notice that implies that and implies that for all . for and ] , gets larger ( in area ) as gets further away from the edges ( or equivalently gets closer to the center of mass , ) in the sense that as increases ( or equivalently decreases .hence for points in , the further the points away from the vertices ( or closer the points to in the above sense ) , the larger the area of .hence , it is more likely for such points to catch other points , i.e. , have more arcs directed to other points . therefore ,if more points are clustered around the center of mass , then the digraph is more likely to have more arcs , hence larger relative density .so , under segregation , relative density is expected to be larger than that in csr or association . on the other hand , in the case of association ,i.e. 
, when points are clustered around points , the regions tend to be smaller in area , hence , catch less points , thereby resulting in a small number of arcs , or a smaller relative density compared to csr or segregation .see , for example , figure [ fig : deldata - j=1 ] with 3 points , and 20 points for segregation ( top left ) , csr ( middle left ) and association ( bottom right ) .the corresponding arcs in the -factor central similarity pcd with are plotted in the right in figure [ fig : deldata - j=1 ] .the corresponding relative density values ( for ) are .1395 , .2579 , and .0974 , respectively .furthermore , for a fixed , gets larger ( in area ) as increases .so , as increases , it is more likely to have more arcs , hence larger relative density for a given realization of points in .we first describe the null and alternative patterns we consider briefly , and then provide the asymptotic distribution of the relative density for these patterns .there are two major types of asymptotic structures for spatial data ( ) . in the first, any two observations are required to be at least a fixed distance apart , hence as the number of observations increase , the region on which the process is observed eventually becomes unbounded .this type of sampling structure is called increasing domain asymptotics " . in the second type ,the region of interest is a fixed bounded region and more and more points are observed in this region . hence the minimum distance between data points tends to zero as the sample size tends to infinity .this type of structure is called infill asymptotics " , due to .the sampling structure for our asymptotic analysis is infill , as only the size of the type process tends to infinity , while the support , the convex hull of a given set of points from type process , is a fixed bounded region . for statistical testing for segregation and association ,the null hypothesis is generally some form of _ complete spatial randomness _ ; thus we consider if it is desired to have the sample size be a random variable , we may consider a spatial poisson point process on as our null hypothesis .we first present a geometry invariance " result that will simplify our calculations by allowing us to consider the special case of the equilateral triangle .* theorem 1 : * let be three non - collinear points . for let , the uniform distribution on the triangle .then for any ] , then /2=p(x_2 \in { n_{cs}^{\tau}}(x_1)) ] .we define two simple classes of alternatives , and with , for segregation and association , respectively .see also figure [ fig : seg - alt ] . for ,let denote the edge of opposite vertex , and for let denote the line parallel to through . 
then define be the model under which and be the model under which .the shaded region in figure [ fig : seg - alt ] is the support for segregation for a particular value ; and its complement is the support for the association alternative with .thus the segregation model excludes the possibility of any occurring near a , and the association model requires that all occur near a .the in the definition of the association alternative is so that yields under both classes of alternatives .we consider these types of alternatives among many other possibilities , since relative density is geometry invariant for these alternatives as the alternatives are defined with parallel lines to the edges .[ ] * remark : * these definitions of the alternatives are given for the standard equilateral triangle .the geometry invariance result of theorem 1 from section [ sec : geo - inv ] still holds under the alternatives , in the following sense . if , in an arbitrary triangle , a small percentage where of the area is carved away as forbidden from each vertex using line segments parallel to the opposite edge , then under the transformation to the standard equilateral triangle this will result in the alternative .this argument is for segregation with ; a similar construction is available for the other cases . by detailed geometric probability calculations provided in the appendix ,the mean and the asymptotic variance of the relative density of the -factor proximity catch digraph can be calculated explicitly .the central limit theorem for -statistics then establishes the asymptotic normality under the uniform null hypothesis .these results are summarized in the following theorem .* theorem 2 : * for ] for the asymptotic variance .in fact , the exact distribution of is , in principle , available by successively conditioning on the values of the . alas , while the joint distribution of is available , the joint distribution of , and hence the calculation for the exact distribution of , is extraordinarily tedious and lengthy for even small values of .figure [ fig : normskewcs ] indicates that , for , the normal approximation is accurate even for small ( although kurtosis and skewness may be indicated for ) .figure [ fig : csnormskew1 ] demonstrates , however , that the smaller the value of the more severe the skewness of the probability density .asymptotic normality of the relative density of the proximity catch digraph can be established under the alternative hypotheses of segregation and association by the same method as under the null hypothesis .let ] for ] yields asymptotic normality for all , while under the segregation alternatives only yields this universal asymptotic normality .the relative density of the central similarity proximity catch digraph is a test statistic for the segregation / association alternative ; rejecting for extreme values of is appropriate since under segregation , we expect to be large ; while under association , we expect to be small .using the test statistic which is the normalized relative density , the asymptotic critical value for the one - sided level test against segregation is given by against segregation , the test rejects for and against association , the test rejects for . for the example patterns in figure [ fig : deldata - j=1 ] , , and -1.361 , respectively .* theorem 4 : * the test against which rejects for and the test against which rejects for are consistent for ] , and ._ based on the pae analysis , we suggest , for large and small , choosing large ( i.e. 
, ) for testing against segregation . _ notice that , , } { \mbox{pae}}^a(\tau ) \approx .4566 ] is given by provided that with ,\end{aligned}\ ] ] where and are given by equations ( [ eq : csasymean ] ) and ( [ eq : csasyvar ] ) , respectively . by an appropriate application of jensen s inequality, we see that therefore , the covariance iff both and hold , so asymptotic normality may hold even when ( provided that ) .similarly , for the segregation ( association ) alternatives where of the area around the vertices of each triangle is forbidden ( allowed ) , we obtain the above asymptotic distribution of with being replaced by , by , by , and by . likewise for association .thus in the case of , we have a ( conditional ) test of which once again rejects against segregation for large values of and rejects against association for small values of .the segregation ( with , i.e. , ) , null , and association ( with , i.e. , ) realizations ( from left to right ) are depicted in figure [ fig : deldata ] with .for the null realization , the p - value for all values relative to the segregation alternative , also for all values relative to the association alternative .for the segregation realization , we obtain for all . for the association realization ,we obtain for all and at . note that this is only for one realization of .we repeat the null and alternative realizations times with and and estimate the significance levels and empirical power .the estimated values are presented in table [ tab : cs - mt - asy - emp - val ] . with ,the empirical significance levels are all greater than .05 and less than .10 for against both alternatives , much larger for other values .this analysis suggests that is not large enough for normal approximation . with ,the empirical significance levels are around .1 for for segregation , and around but slightly larger than .05 for . based on this analysis , we see that , against segregation , our test is liberal less liberal for larger in rejecting for small and moderate , against association it is slightly liberal for small and moderate , and large values ._ for both alternatives , we suggest the use of large values . _observe that the poor performance of relative density in one - triangle case for association does not persist in multiple triangle case .in fact , for the multiple triangle case , gets to be more appropriate for testing against association compared to testing against segregation . 
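to make the machinery behind these monte carlo experiments concrete, a small self-contained python sketch is given below. the assumptions are flagged in the comments: it works in a single triangle (the standard equilateral one), it uses a crude rejection-sampled pattern in place of the paper's delta-parametrized segregation alternative, and it estimates the null mean and standard deviation by simulation rather than using the closed-form expressions.

import numpy as np

# vertices and center of mass of the standard equilateral triangle
A, B, C = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3)/2])
T = [A, B, C]
MC = (A + B + C) / 3.0

def cross2(u, v):
    return u[0]*v[1] - u[1]*v[0]

def in_triangle(p, v0, v1, v2):
    d = [cross2(v1 - v0, p - v0), cross2(v2 - v1, p - v1), cross2(v0 - v2, p - v2)]
    return all(x >= -1e-12 for x in d) or all(x <= 1e-12 for x in d)

def dist_to_line(p, a, b):
    return abs(cross2(b - a, p - a)) / np.linalg.norm(b - a)

# edge regions: the sub-triangles obtained by joining the center of mass to the vertices
EDGES = [((B, C), A), ((A, C), B), ((A, B), C)]   # (edge endpoints, opposite vertex)

def catches(x, y, tau):
    # y in N_CS^tau(x)?  the region is similar to T, with the same orientation, centred at x,
    # and scaled so that the distance from x to the edge parallel to e(x) is tau*d(x, e(x))
    for (e0, e1), opp in EDGES:
        if in_triangle(x, e0, e1, MC):
            break
    h = dist_to_line(opp, e0, e1)                # height of T over the edge e(x)
    r = 3.0 * tau * dist_to_line(x, e0, e1) / h  # similarity ratio
    v = [x + r * (vert - MC) for vert in T]
    return in_triangle(y, *v)

def relative_density(pts, tau):
    n = len(pts)
    arcs = sum(catches(pts[i], pts[j], tau)
               for i in range(n) for j in range(n) if i != j)
    return arcs / (n * (n - 1))

def uniform_in_triangle(n, rng):
    u = rng.random((n, 2))
    flip = u.sum(axis=1) > 1
    u[flip] = 1.0 - u[flip]
    return A + np.outer(u[:, 0], B - A) + np.outer(u[:, 1], C - A)

rng = np.random.default_rng(7)
tau, n, n_mc = 0.5, 30, 100

# monte carlo null distribution of the relative density under csr
null = np.array([relative_density(uniform_in_triangle(n, rng), tau) for _ in range(n_mc)])

# a crude "segregated" sample: uniform points kept only if they are not too close to a vertex
seg = []
while len(seg) < n:
    p = uniform_in_triangle(1, rng)[0]
    if min(np.linalg.norm(p - v) for v in T) > 0.35:
        seg.append(p)
rho_seg = relative_density(np.array(seg), tau)
z = (rho_seg - null.mean()) / null.std(ddof=1)
print(f"null mean {null.mean():.4f}  sd {null.std(ddof=1):.4f}  segregated rho {rho_seg:.4f}  z {z:+.2f}")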
&.7 & .8 & .9 & 1.0 + + & .496 & .366 & .302 & .242 & .190 & .103 & .102 & .092 & .095 & .091 + & .393 & .429 & .464 & .512 & .551 & .578 & .608 & .613 & .611 & .604 + & .726 & .452 & .322 & .310 & .194 & .097 & .081 & .072 & .063 & .067 + & .452 & .426 & .443 & .555 & .567 & .667 & .721 & .809 & .857 & .906 + + & 0.246 & 0.162 & 0.114 & 0.103 & 0.097 & 0.092 & 0.095 & 0.093 & 0.095 & 0.090 + & 0.829 & 0.947 & 0.982 & 0.988 & 0.995 & 0.995 & 0.997 & 0.998 & 0.997 & 0.997 + & 0.255 & 0.117 & 0.077 & 0.067 & 0.052 & 0.059 & 0.061 & 0.054 & 0.056 & 0.058 + & 0.684 & 0.872 & 0.953 & 0.991 & 0.999 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 + the conditional test presented here is appropriate when are fixed , not random .an unconditional version requires the joint distribution of the number and relative size of delaunay triangles when is , for instance , a poisson point pattern .alas , this joint distribution is not available ( ) .the pae analysis is given for .for , the analysis will depend on both the number of triangles as well as the sizes of the triangles .so the optimal values with respect to these efficiency criteria for are not necessarily optimal for , so the analyses need to be updated , conditional on the values of and . under the segregation alternative ,the pae of is given by under association alternative the pae of is similar .the pae curves for ( as in figure [ fig : deldata ] ) are similar to the ones for the case ( see figures [ fig : cs - pae - curves ] ) hence are omitted .some values of note are , }{\mbox{pae}}_j^s(\tau ) = 1 ] with . based on the pitman asymptotic efficiency analysis ,we suggest , _ for large and small , choosing large for testing against segregation and small against association_. however , _ for moderate and small , we suggest large values for association _ due to the skewness of the density of .the extension of to for is straightforward .let be points in general position .denote the simplex formed by these points as .( a simplex is the simplest polytope in having vertices , edges and faces of dimension . ) for ] , the _ -factor _central similarity proximity region is defined to be the simplex with the following properties : * has a face parallel to such that where is the euclidean ( perpendicular ) distance from to , * has the same orientation as and is similar to , * is at the center of mass of .note that implies that . 
for ,define for all .theorem 1 generalizes , so that any simplex in can be transformed into a regular polytope ( with edges being equal in length and faces being equal in area ) preserving uniformity .delaunay triangulation becomes delaunay tesselation in , provided no more than points being cospherical ( lying on the boundary of the same sphere ) .in particular , with , the general simplex is a tetrahedron ( 4 vertices , 4 triangular faces and 6 edges ) , which can be mapped into a regular tetrahedron ( 4 faces are equilateral triangles ) with vertices .asymptotic normality of the -statistic and consistency of the tests hold for .in this article , we investigate the mathematical and statistical properties of a new proximity catch digraph ( pcd ) and its use in the analysis of spatial point patterns .the mathematical results are the detailed computations of means and variances of the -statistics under the null and alternative hypotheses .these statistics require keeping good track of the geometry of the relevant neighborhoods , and the complicated computations of integrals are done in the symbolic computation package , maple .the methodology is similar to the one given by .however , the results are simplified by deliberate choices we make .for example , among many possibilities , the proximity map is defined in such a way that the distribution of the domination number and relative density is geometry invariant for uniform data in triangles , which allows the calculations on the standard equilateral triangle rather than for each triangle separately . in various fields , there are many tests available for spatial point patterns . an extensive survey is provided by kulldorff who enumerates more than 100 such tests , most of which need adjustment for some sort of inhomogeneity ( ) .he also provides a general framework to classify these tests .the most widely used tests include pielou s test of segregation for two classes ( ) due to its ease of computation and interpretation and ripley s and functions ( ) . the first proximity map similar to the -factor proximity map in literature is the spherical proximity map ; see , e.g. , . a slight variation of is the arc - slice proximity map where is the delaunay cell that contains ( see ( ) )furthermore , ceyhan and priebe introduced the ( unparametrized ) central similarity proximity map in ( ) and another family of pcds in ( ) . the spherical proximity map is used in classification in the literature , but not for testing spatial patterns between two or more classes .we develop a technique to test the patterns of segregation or association .there are many tests available for segregation and association in ecology literature .see ( ) for a survey on these tests and relevant references .two of the most commonly used tests are pielou s test of independence and ripley s test based on and functions . 
however , the test we introduce here is not comparable to either of them .our test is a conditional test conditional on a realization of ( number of delaunay triangles ) and ( the set of relative areas of the delaunay triangles ) and we require the number of triangles is fixed and relatively small compared to .furthermore , our method deals with a slightly different type of data than most methods to examine spatial patterns .the sample size for one type of point ( type points ) is much larger compared to the the other ( type points ) .this implies that in practice , could be stationary or have much longer life span than members of .for example , a special type of fungi might constitute points , while the tree species around which the fungi grow might be viewed as the points .the sampling structure for our asymptotic analysis is infill asymptotics ( ) .moreover , our statistic that can be written as a -statistic based on the locations of type points with respect to type points .this is one advantage of the proposed method : most statistics for spatial patterns can not be written as -statistics .the -statistic form avails us the asymptotic normality , once the mean and variance is obtained by detailed geometric calculations .the null hypothesis we consider is considerably more restrictive than current approaches , which can be used much more generally . in particular, we consider the completely spatial randomness pattern on the convex hull of points .based on the asymptotic analysis and finite sample performance of relative density of -factor central similarity pcd , we recommend large values of ( ) should be used , regardless of the sample size for segregation . for association , we recommend large values of ( ) for small to moderate sample sizes , and small values of ( ) . however , in a practical situation , we will not know the pattern in advance .so as an automatic data - based selection of to test csr against segregation or association , one can start with , and if the relative density is found to be smaller than that under csr ( which is suggestive of association ) , use any ] ) regardless of the sample size .however , for large ( say , ] where & = & p\bigl(\{x_2,x_3\ } \subset { n_{cs}^{\tau}}(x_1)\bigr)+2\,p\bigl(x_2 \in { n_{cs}^{\tau}}(x_1 ) , x_3 \in { \gamma}_1(x_1,{n_{cs}^{\tau}})\bigr)\\ & & + p\bigl(\{x_2,x_3\ } \subset { \gamma}_1(x_1,{n_{cs}^{\tau}})\bigr ) = p^{\tau}_{2n}+2\,p^{\tau}_m+p^{\tau}_{2g}.\end{aligned}\ ] ] hence = \bigl(p^{\tau}_{2n}+2\,p^{\tau}_m+p^{\tau}_{2g}\bigr)-[2\,\mu(\tau)]^2. ] .there are four cases regarding and one case for .see figure [ fig : g1-ncs - cases-1 ] for the prototypes of these four cases of where , for , the explicit forms of are each case corresponds to the region in figure [ regions - for - ncs ] , where [ ht ] [ ht ] the explicit forms of , are as follows : \times [ 0,q_3(x)]\},\\ r_2 & = & \{(x , y)\in [ 0,s_1 ] \times [ q_3(x),\ell_{am}(x)]\cup [ s_1,1/2 ] \times [ q_3(x),q_2(x)]\},\\ r_3 & = & \{(x , y)\in [ s_1,1/2]\times [ q_2(x),q_1(x)]\},\\ r_4 & = & \{(x , y)\in [ s_1,1/2 ] \times [ q_1(x),\ell_{am}(x)]\}.\end{aligned}\ ] ] by symmetry , and where .hence , next , by symmetry , and for , where . for , where . for , where + . 
for , where furthermore , by symmetry , and where can be calculated with the same region of integration with integrand being replaced by .then + hence =\frac { { \tau}^{4}(2\,{\tau}^{5}-{\tau}^{4}-5\,{\tau}^{3}+12\,{\tau}^{2}+28\,\tau+8)}{15\,(\tau+1)(2\,\tau+1)(\tau+2)}.\ ] ] therefore , for , it is trivial to see that .under the alternatives , i.e. , is a -statistic with the same symmetric kernel as in the null case .the mean = { \mathbf{e}\,_}{{\varepsilon}}[h_{12}]/2 ] . ] , is less than ( greater than ) the mean under the alternative , ] .likewise , detailed analysis of indicates that under association for all and $ ] .we direct the reader to the technical report for the details of the calculations .hence the desired result follows for both alternatives .in the multiple triangle case ,=\frac{1}{n\,(n-1)}\sum\hspace*{-0.1 in}\sum_{i < j \hspace*{0.25 in } } \hspace*{-0.1 in } \,{\mathbf{e}\,[}h_{ij}]=\\ \frac{1}{2}{\mathbf{e}\,[}h_{12}]={\mathbf{e}\,[}i(a_{12 } ) ] = p\bigl(a_{12}\bigr)=p\bigl(x_2 \in { n_{cs}^{\tau}}(x_1)\bigr).\end{aligned}\ ] ] but , by definition of , a.s . if and are in different triangles .so by the law of total probability letting , we get where is given by equation ( [ eq : csasymean ] ) .furthermore , the asymptotic variance is -{\mathbf{e}\,[}h_{12}]{\mathbf{e}\,[}h_{13}]\\ & = & p\bigl(\{x_2,x_3\ } \subset { n_{cs}^{\tau}}(x_1)\bigr)+2\,p\bigl(x_2 \in { n_{cs}^{\tau}}(x_1 ) , x_3 \in { \gamma}_1(x_1,{n_{cs}^{\tau}})\bigr)\\ & & + p\bigl(\{x_2,x_3\ } \subset { \gamma}_1(x_1,{n_{cs}^{\tau}})\bigr)-4\,(\mu(\tau , j))^2.\end{aligned}\ ] ] then for , we have similarly , , hence , so conditional on , if then .
|
we discuss a graph-based approach for testing spatial point patterns. this approach falls under the category of data-random graphs, which have been introduced and used for statistical pattern recognition in recent years. our goal is to test complete spatial randomness against segregation and association between two or more classes of points. to attain this goal, we use a particular type of parametrized random digraph called a proximity catch digraph (pcd), which is based on the relative positions of the data points from the various classes. the statistic we employ is the relative density of the pcd. when scaled properly, the relative density of the pcd is a u-statistic. we derive the asymptotic distribution of the relative density using the standard central limit theory of u-statistics. the finite sample performance of the test statistic is evaluated by monte carlo simulations, and the asymptotic performance is assessed via pitman's asymptotic efficiency, thereby yielding the optimal parameters for testing. furthermore, the methodology discussed in this article is also valid for data in multiple dimensions. _ keywords: _ random graph; proximity catch digraph; delaunay triangulation; relative density; complete spatial randomness; segregation; association. this work was partially supported by an office of naval research grant and a defense advanced research projects agency grant.
|
we have shown that the process of resolving community structure in complex networks can be viewed as a problem in data compression . by drawing out the relationship between module detection and optimal coding we are able to ground the concept of network modularity in the rigorous formalism provided by information theory .enumerating the modules in a network is an act of description ; there is an inevitable tradeoff between capturing most of the network structure at the expense of needing a long description with many modules , and omitting some aspects of network structure so as to allow a shorter description with fewer modules .our information - theoretic approach suggests that there is a natural scale on which to describe the network , thereby balancing this tradeoff between under- and over - description .the main purpose of this manuscript is to propose a new theoretical approach to community detection , and thus we have not extensively explored methods of optimizing the computational search procedure .nonetheless , we have partitioned networks of sizes up to nodes with a simple simulated annealing approach . while many interesting real - world networks are smaller than this , it is our hope that the approach can be used for even larger networks with other optimization methods such as the greedy search technique presented in ref . . for many networks ,our cluster - based compression method yields somewhat different results than does the modularity approach developed by newman and colleagues .the differences reflect alternative perspectives on what community structure might be .if one views community structure as statistical deviation from the null model in which the degree sequence is held constant but links are otherwise equiprobable among all nodes , the modularity optimization method by definition provides the optimal partitioning .if one views community structure as the regularities in a network s topology that allow the greatest compression of the network s structure , our approach provides a useful partitioning .the choice of which to pursue will depend on the questions that a researcher wishes to ask . in this paper, we have concentrated on finding communities of nodes that are positively clustered by the links among them . while this is a common goal in community detection , the sort of information that we wish to extract about network topology may vary from application to application . by choosing an appropriate encoder, one can identify other aspects of structure , such as hub versus periphery distinction illustrated in our alternative partitioning of the karate club network .when we abstract the problem of finding pattern in networks to a problem of data compression , the information - theoretic view described here provides a general basis for how to get the most information out of a network structure .we thank ben althouse for generating the network used in fig .[ fig3 ] and mark newman for constructive comments on the manuscript .this work was supported by the national institute of general medical sciences models of infectious disease agent study program cooperative agreement 5u01gm07649 .
|
to understand the structure of a large - scale biological , social , or technological network , it can be helpful to decompose the network into smaller subunits or modules . in this article , we develop an information - theoretic foundation for the concept of modularity in networks . we identify the modules of which the network is composed by finding an optimal compression of its topology , capitalizing on regularities in its structure . we explain the advantages of this approach and illustrate them by partitioning a number of real - world and model networks . many objects in nature , from proteins to humans , interact in groups that compose social , technological , or biological systems . the groups form a distinct intermediate level between the microscopic and macroscopic descriptions of the system , and group structure may often be coupled to aspects of system function including robustness and stability . when we map the interactions among components of a complex system to a network with nodes connected by links , these groups of interacting objects form highly connected modules that are only weakly connected to one other . we can therefore comprehend the structure of a dauntingly complex network by identifying the modules or communities of which it is composed . when we describe a network as a set of interconnected modules , we are highlighting certain regularities of the network s structure while filtering out the relatively unimportant details . thus a modular description of a network can be viewed as a lossy compression of that network s topology , and the problem of community identification as a problem of finding an efficient compression of the structure . this view suggests that we can approach the challenge of identifying the community structure of a complex network as a fundamental problem in information theory . we provide the groundwork for an information - theoretic approach to community detection , and explore the advantages of this approach relative to other methods for community detection . figure [ fig1 ] illustrates our basic framework for identifying communities . we envision the process of describing a complex network by a simplified summary of its module structure as a communication process . the link structure of a complex network is a random variable ; a signaler knows the full form of the network , and aims to convey much of this information in a reduced fashion to a signal receiver . to do so , the signaler encodes information about as some simplified description . she sends the encoded message through a noiseless communication channel . the signal receiver observes the message , and then `` decodes '' this message , using it to make guesses about the structure of the original network . there are many different ways to describe a network by a simpler description . which of these is best ? the answer to this question of course depends on what you want to do with the description . nonetheless , information theory offers an appealing general answer to this question . given some set of candidate descriptions , the best description of a random variable is the one that tells the most about that is , the one that maximizes the mutual information between description and network . since we are interested in identifying community structure , we will explore descriptions that summarize the structure of a network by enumerating the communities or modules within , and describing the relations among them . 
in this paper , we will consider one particular method of encoding the community structure of . more generally one could and indeed should consider alternative `` encoders , '' so as to choose one best suited for the problem at hand . we consider an unweighted and undirected network of size with links , which can be described by the adjacency matrix we choose the description for modules , where is the module assignment vector , , and is the module matrix . the module matrix describes how the modules given by the assignment vector are connected in the actual network . module has nodes and connects to module with links ( see fig . [ fig1 ] ) . to find the best assignment we now maximize the mutual information over all possible assignments of the nodes into modules by definition , the mutual information , where is the information necessary to describe and the conditional information is the information necessary to describe given ( see fig . [ fig1 ] ) . we therefore seek to to minimize . this is equivalent to constructing an assignment vector such that the set of network estimates in fig . [ fig1 ] is as small as possible . given that the description assigns nodes to modules , ,\ ] ] where the parentheses denote the binomial coefficients and the logarithm is taken in base 2 . each of the binomial coefficients in the first product gives the number of different modules that can be constructed with nodes and links . each of the binomial coefficients in the second product gives the number of different ways module and can be connected to one another . + in fig . [ fig2 ] we apply our cluster - based compression method to the dolphin social network reported by lusseau _ et al_. . our method selects a division that differs by only one node from the division along which the actual dolphin groups were observed to split . because it is computationally infeasible to check all possible partitions of even modestly - sized networks , we use simulated annealing with the heat - bath algorithm to search for the partition that maximizes the mutual information between the description and the original network . we have confirmed the results for the networks in the figures with exhaustive searches in the vicinity of the monte carlo solutions . we compare our results with the partition obtained by using the modularity approach introduced by newman and girvan in ref . ; that technique has been widely adopted because of its appealing simplicity , its performance in benchmark tests , and the availability of powerful numerical techniques for dealing with large networks . given a partitioning into modules , the modularity is the sum of the contributions from each module where is the number of links between nodes in the -th module , the total degree in module , and is the total number of links in the network . when we maximize the modularity , we are not just minimizing the number of links between modules . instead we find a configuration which maximizes the number of links within modules in the actual network , minus the expected number of links within comparable modules in a random network with the same degree sequence . or equivalently , we aim to divide the network such that the number of links within modules is higher than expected . this approach works beautifully for networks where the modules are similar in size and degree sequence . however , when the dolphin network in fig . 
[ fig2 ] is partitioned using the modularity approach , the network ends up being divided very differently from the empirically observed fission of the dolphin group . why ? because of the denominator in the second term of the definition of modularity ( eq . [ moularity ] ) , the choice of partition is highly sensitive to the total number of links in the system . by construction , the benefit function that defines modularity favors groups with similar total degree , which means that the size of a module depends on the size of the whole network . the dolphin network by partitioned with our cluster - based compression ( solid line ) and based on the modularity ( dashed line ) . the stars and circles represent the two observed groups of dolphins . the right branch of the dashed line represents a split based on maximizing the modularity , which is different from the left branch solution based on the spectral analysis approximation presented in ref . . the edge - betweenness algorithm presented in ref . splits the network in the same way as our cluster - based compression method . ] to compare quantitatively the performance of our cluster - based compression method with modularity - based approaches , we conducted the benchmark tests described in refs . . in these tests , 128 nodes are divided into four equally sized groups with average degree 16 . as the average number of links from each node to nodes in other groups increases , it becomes harder and harder to identify the underlying group structure . table [ table ] presents the results from both methods using the simulated - annealing scheme described above for the numerical search ; we obtained comparable results for networks with up to nodes . when the groups are of equal size and similar total degree , both methods perform very well , on par with the best results reported in refs . . when the groups vary in size or in total degree , as was the case in the dolphin network , the modularity approach has more difficulty resolving the community structure ( table [ table ] ) . we merged three of the four groups in the benchmark test to form a series of test networks each with one large group of 96 nodes and one small group with 32 nodes . these asymmetrically - sized networks are harder for either approach to resolve , but cluster - based compression recovers the underlying community structure more often than does modularity , by a sizable margin . finally we conducted a set of benchmark tests using networks composed of two groups each with 64 nodes , but with different average degrees of 8 and 24 links per node . for these networks , we use , and cluster - based compression again recovers community structure more often than does modularity . + 1.0 ' '' '' symmetric & & & + ' '' '' _ compression _ & 0.99 ( .01 ) & 0.97 ( .02 ) & 0.87 ( .08 ) + ' '' '' _ modularity _ & 0.99 ( .01 ) & 0.97 ( .02 ) & 0.89 ( .05 ) + ' '' '' node asymmetric & & & + ' '' '' _ compression _ & 0.99 ( .01 ) & 0.96 ( .04 ) & 0.82 ( .10 ) + ' '' '' _ modularity _ & 0.85 ( .04 ) & 0.80 ( .03 ) & 0.74 ( .05 ) + ' '' '' link asymmetric & & & + ' '' '' _ _ compression__. ] & 1.00 ( .00 ) & 1.00 ( .00 ) & 1.00 ( .01 ) + ' '' '' _ modularity _ & 1.00 ( .01 ) & 0.96 ( .03 ) & 0.74 ( .10 ) + partitioning into an optimal number of modules . the network in panel a consists of 40 journals as nodes from four different fields : multidisciplinary physics ( squares ) , chemistry ( circles ) , biology ( stars ) , and ecology ( triangles ) . 
the 189 links connect nodes if at least one article from one of the journals cites an article in the other journal during 2004 . we have selected the 10 journals with the highest impact factor in the four different fields , but disregarded journals classified in one or more of the other fields . panel b shows the minimum description length for the network in panel a partitioned into 1 to 5 different modules . the optimal partitioning into four modules is illustrated by the lines in panel a. ] next , we address a model selection challenge . in some special cases we will know a priori how many modules compose our sample network , but in general the task of resolving community structure is twofold . we must determine the number of modules in the network , and then we need to partition the nodes into that number of modules . the catch is that we can not determine the optimal number of modules without also considering the assignments of nodes so these problems need to be solved simultaneously . below , we provide a solution grounded in algorithmic information theory . looking back at fig . [ fig1 ] , the encoder seeks to find a compression of the network so that the decoder can make the best possible estimate of the actual network . one approach would be to have the encoder partition the network into modules , one for each node . this ensures that the decoder can reconstruct the network completely , but under this approach nothing is gained either in compression or module identification . therefore the encoder must balance the amount of information necessary to describe the network in modular form , as given by the signal in fig . [ fig1 ] , and the uncertainty that remains once the decoder receives the modular description , as given by the size of the set of network estimates in fig . [ fig1 ] . this is an optimal coding problem and can be resolved by the minimum description length ( mdl ) principle . the idea is to exploit the regularities in the structure of the actual network to summarize it in condensed form , without overfitting it . what do we mean by overfitting in this context ? figure [ fig3 ] illustrates . we want to choose a set of modules for the journal citation network in fig . [ fig3 ] such that if we were to repeat the experiment next year , each journal would likely be assigned to the same module again . if we overfit the data , we may capture more of a specific year s data , but unwittingly we also capture noise that will not recur in next year s data . to minimize the description length of the original network , we look for the number of modules that minimizes that the length of the modular description plus the `` conditional description length '' , where the conditional description length is the amount of additional information that would be needed to specify exactly to a receiver who had already decoded the description . that is , we seek to minimize the sum where is the length in bits of the signal and is number of bits needed to specify which of the network estimates implied by the signal is actually realized . the description length is easy to calculate in this discrete case and is given by where the first and second term give the size necessary to encode the assignment vector and the module matrix , and is given in eq . [ zentropy ] . figure [ fig3]b shows the description length with the journal network partitioned into one to five modules . four modules yield the minimum description length and we show the corresponding partition in fig . [ fig3]a . 
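to make the model-selection step concrete, the sketch below computes a total description length for a given partition: the cost of the modular description itself plus the conditional information needed to pin down the actual network among all networks consistent with that description. it is a reconstruction in the spirit of the expressions quoted above (the printed equations are partly illegible), so the exact form of the first two terms and all function and variable names should be read as our assumptions rather than the paper's notation.

import math
from itertools import combinations

def log2_binom(N, k):
    # log2 of the binomial coefficient C(N, k); assumes 0 <= k <= N
    return (math.lgamma(N + 1) - math.lgamma(k + 1) - math.lgamma(N - k + 1)) / math.log(2.0)

def description_length(adj, assignment, m):
    # adj: symmetric 0/1 adjacency as a list of lists; assignment: node -> module (0..m-1)
    n = len(assignment)
    l_total = sum(adj[i][j] for i in range(n) for j in range(i + 1, n))
    sizes = [assignment.count(mod) for mod in range(m)]
    links = [[0] * m for _ in range(m)]
    for i in range(n):
        for j in range(i + 1, n):
            if adj[i][j]:
                a, b = sorted((assignment[i], assignment[j]))
                links[a][b] += 1
    # cost of the modular description: assignment vector plus module matrix
    L_Y = n * math.log2(m) + 0.5 * m * (m + 1) * (math.log2(l_total) if l_total > 1 else 0.0)
    # conditional cost: how many networks are consistent with that description
    L_Z_given_Y = 0.0
    for i in range(m):
        pairs = sizes[i] * (sizes[i] - 1) // 2
        L_Z_given_Y += log2_binom(pairs, links[i][i])       # within-module wirings
    for i, j in combinations(range(m), 2):
        L_Z_given_Y += log2_binom(sizes[i] * sizes[j], links[i][j])  # between-module wirings
    return L_Y + L_Z_given_Y

minimizing this quantity jointly over the number of modules m and the assignment vector implements the mdl principle described in the text.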
this cluster - based compression assigns 39 of the 40 journals into the proper categories , but places the central hub _ physical review letters _ ( prl ) in the chemistry cluster . this may seem like a mistake , given that prl has 9 links to physics and only 8 to chemistry . indeed , a partitioning based on the modularity score places prl among the physics journals . but whatever its subject matter , the structural role that prl plays in the unweighted journal network is that of a chemistry journal . like most of the chemistry journals , and unlike its compatriots in physics , prl is closely linked to biology and somewhat connected to ecology . we can also partition the network into two , three , or five modules , but doing so yields a longer total description length . when we compress the network into two components , physics clusters together with chemistry and biology clusters together with ecology . when we split into three components , ecology and biology separate but physics and chemistry remain together in a single module . when we try to split the network into five modules , we get essentially the same partition as with four , only with the singly connected journal _ conservation biology _ split off by itself into its own partition . one might not even consider that singleton to be a valid module . to get a sense of how different methods handle the model selection problem , we compared the performance of our cluster - based compression method with the modularity - based approach . instead of looking for the best assignment given the correct number of modules as in table [ table ] , we look at the performance of each method at estimating the correct number of modules . our results are summarized in table [ table2 ] . both cluster - based compression and modularity exhibit thresholds beyond which they are unable with high probability to reconstruct the underlying module structure that generated the data . beyond this threshold , the compression method tends to underestimate the number of groups . by contrast , the modularity tends to overestimate the number of groups . others have observed similar model selection bias by the modularity approach ; in a completely random network , the modularity - based approach typically detects multiple and therefore statistically insignificant modules . when the clusters are symmetric in size and degree , both methods reach the resolution threshold at approximately the same point . however , when the groups have unequal numbers of nodes or unequal degree distributions , the cluster - based compression method is able to successfully reconstruct the underlying structure of networks that the modularity approach can not recover ( table [ table2 ] ) . + 1.0 ' '' '' symmetric & & & + ' '' '' _ compression _ & 1.00 ( 4.00 ) & 1.00 ( 4.00 ) & 0.14 ( 1.93 ) + ' '' '' _ modularity _ & 1.00 ( 4.00 ) & 1.00 ( 4.00 ) & 0.70 ( 4.33 ) + ' '' '' node asymmetric & & & + ' '' '' _ compression _ & 1.00 ( 2.00 ) & 0.80 ( 1.80 ) & 0.06 ( 1.06 ) + ' '' '' _ modularity _ & 0.00 ( 4.95 ) & 0.00 ( 4.97 ) & 0.00 ( 5.29 ) + ' '' '' link asymmetric & & & + ' '' '' _ compression _ & 1.00 ( 2.00 ) & 1.00 ( 2.00 ) & 1.00 ( 2.00 ) + ' '' '' _ modularity _ & 0.00 ( 3.10 ) & 0.00 ( 4.48 ) & 0.00 ( 5.55 ) + let us look back at the journal network in fig . [ fig3 ] , and recall that we can not partition this network into more than four modules without creating at least one module that has a majority of its links to nodes in other modules . 
because of this concept of what a module is , we might be interested only in those clusters with more links within than between clusters ( in eq . [ descriptiony ] ) . however , choosing modules in that way will not necessarily maximize mutual information . in many cases we get a higher mutual information by selecting modules such that hubs are clustered together and peripheral nodes are clustered together . when this is true , we can describe the network structure more efficiently by clustering nodes with similar roles instead of clustering nodes that are closely connected to one another . the mixture model approach provides an alternative method of identifying aspects of network structure beyond positive assortment . in our examples where we want to find modules with more links within modules than between them , we impose a `` link constraint '' , penalizing solutions with more links between than within in the simulated annealing scheme . zachary s karate club network partitioned into two modules based on the maximum mutual information with ( panel a ) and without ( panel b ) the link constraint . the partitioning with more links within modules than between modules in panel a clusters closely connected nodes together and the unconstrained partitioning in panel b clusters nodes with similar roles together . ] to visualize the different ways of partitioning a network , we split zachary s classic karate club network with ( panel a ) and without ( panel b ) the link constraint ( fig . [ fig4 ] ) . in panel a the partitioning corresponds exactly to the empirical outcome that was observed by zachary , but in panel b the 5 members with the highest degrees are clustered together . in the first case the compression capitalizes on the high frequency of ties between members of the same subgroup and the relatively few connections between the groups . in the second case the compression takes advantage of the very high number of links between the five largest hubs and the peripheral members , and the very few connections between the peripheral members . the compression with the hubs in one cluster and the peripheral nodes in the other cluster is in this case more efficient .
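the partition searches above use simulated annealing with a heat-bath rule; the skeleton below is a generic single-move, metropolis-style annealer rather than the authors' implementation, and for brevity it minimizes a stand-in objective (the number of between-module links, optionally penalized when a module violates the link constraint). in an actual application the cost callable would be the description length or the negative mutual information discussed above; all names here are ours.

import random, math

def cut_cost(adj, assignment, link_constraint=False, penalty=10.0):
    # stand-in objective: number of between-module links, plus an optional
    # penalty for modules with more external than internal links
    n = len(assignment)
    cost = float(sum(adj[i][j] for i in range(n) for j in range(i + 1, n)
                     if assignment[i] != assignment[j]))
    if link_constraint:
        for mod in set(assignment):
            internal = sum(adj[i][j] for i in range(n) for j in range(i + 1, n)
                           if assignment[i] == mod and assignment[j] == mod)
            external = sum(adj[i][j] for i in range(n) for j in range(n)
                           if assignment[i] == mod and assignment[j] != mod)
            if external > internal:
                cost += penalty
    return cost

def anneal(adj, m, steps=20000, t0=1.0, cooling=0.9995, cost=cut_cost):
    # metropolis-style simulated annealing over assignments of n nodes into m modules
    n = len(adj)
    assignment = [random.randrange(m) for _ in range(n)]
    best = list(assignment); c = cost(adj, assignment); best_c = c; t = t0
    for _ in range(steps):
        node, new_mod = random.randrange(n), random.randrange(m)
        old_mod = assignment[node]
        if new_mod != old_mod:
            assignment[node] = new_mod
            c_new = cost(adj, assignment)
            if c_new <= c or random.random() < math.exp(-(c_new - c) / t):
                c = c_new
                if c < best_c:
                    best_c, best = c, list(assignment)
            else:
                assignment[node] = old_mod  # reject the move
        t *= cooling
    return best, best_c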
|
coronal mass ejections play a major role in space weather .these impulsive clouds of magnetized plasma have their origin in the solar corona and can reach earth within a few days ( e.g. * ? ? ?* ; * ? ? ?* ) fast cmes can have transit times to 1 au of less than a day ( e.g. * ? ? ?* ) . using numerical models , such as enlil , to propagate cmes that have been characterized using coronagraph observations , leads to errors in cme arrival time predictions at earth that lie in the range of to hrs ( e.g. * ?* ; * ? ? ?note that for a selected sample of cmes obtained errors of hrs .the reasons for these large forecasting errors are diverse .firstly , the observations are limited .currently , only the lasco c2 and c3 coronagraphs onboard the _ solar and heliospheric observat- ory _ ( soho ) and the cor1 and cor2 coronagraphs onboard the _ ahead _ spacecraft of the twin satellite mission _solar terrestrial relations observatory _( stereo ; * ? ? ?* ) can be used to operationally forecast the arrival times of earth - directed cmes .secondly , the structures , shapes , orientations , sizes , directions and speeds of cmes are highly variable , i.e. it is quite difficult to describe all cmes by a single propagation model .since the launch of stereo , methods have been developed that exploit the observations made by the heliospheric imagers ( hi ; * ? ? ?* ) , providing additional views of cmes propagating all the way out to 1 au and beyond . many of those methods assume a specified geometry for the cme frontal shape ( usually a circle subtending a fixed angular width at the sun , which encompasses , in one limit , a point ) , a constant propagation speed and a fixed direction of motion . with these assumptions , it is possible to fit the time - elongation profile of the cme front to derive estimates of its launch time , radial speed and propagation direction from which its arrival time and speed at a specific target in interplanetary space , usually earth , can be predicted . applied such forecasting methods to 24 cmes observed by stereo / hi and found a mean difference between the forecasted and detected arrival times at 1 au of hrs and a mean difference between forecasted and detected arrival speeds of km s .the main drawback of these fitting methods is the constant speed assumption that systematically overestimates the arrival speed , especially of cmes that are actually decelerating .the propagation speed of cmes tends to approach that of the ambient solar wind , i.e. fast events tend to decelerate and slow ones tend to accelerate ( e.g. * ? ? ?this is ongoing through the stereo / hi1 field of view , which extends from to elongation , the latter of which corresponds to radial distances of r . in order to be able to account for such an evolution in cme speed ,the drag - based model was developed ( dbm ; * ? ? ?* ) * and * has already been used in a multitude of studies ( e.g. * ? ? ?* ; * ? ? ?? * ; * ? ? ?* ; * ? ? ?* ; * ? ? ?as an extension to the dbm , developed the ellipse evolution model ( elevo ) .it assumes an elliptically shaped cme front and also can be used to predict arrival times and speeds at specified locations in space .the disadvantage of elevo and dbm is that they rely on coronagraph data , which allows cmes to be observed out to a maximum heliocentric distance of only r .heliospheric imagery provides the possibility of tracking a cme out to a much larger distance leading to the chance of achieving better reliability of its derived kinematics . 
in this study, we present a new method of exploiting single spacecraft hi observations , either from stereo or from any other future mission carrying such instrumentation , such as the wide - field imager for _ solar probe plus _ ( wispr ; * ? ? ? * ) or the heliospheric imager ( solohi ; * ? ? ?* ) aboard _ solar orbiter _ : the ellipse conversion method ( elcon ) .elcon converts the observed elongation angle ( the angle between the sun - observer line and the line of sight ) into a radial distance from sun center , assuming an elliptical cme front propagating along a fixed direction .the cme width , propagation direction and the ellipse aspect ratio are free parameters .the combination of the elcon method , the dbm fitting , and the elevo model allows use of hi observations to forecast cme arrival time without compromising arrival speed ; this combination forms the new forecasting utility elevohi .in order to introduce and test elevohi , we analyze a sample of 21 cmes observed by hi on the stereo a spacecraft between the years 2008 and 2012 . for each event , the in situ arrival time and speed of the cme at 1 au is available , either from its passage over _ wind _ or over stereo b. the sample covers slow events during the solar minimum period as well as fast cmes during solar maximum .the list was compiled by to test existing hi fitting methods used for forecasting cme arrival times and speeds .while the event list of contains 24 cmes , this study excludes three of them , for different reasons .event number 3 ( from * ? ? ?* ) was the trailing edge of a cme .we exclude this event since our study focuses on predicting the arrival of the cme leading edge .event number 4 was imaged from stereo b , while in this study we consider only cmes that were imaged from stereo a. finally , we exclude event number 11 , which is the same cme as event number 12 , but in situ detected by stereo b.the ellipse evolution model ( elevo ) was developed by and assumes an elliptical cme leading edge with a predefined half - width and aspect ratio .it makes use of the dbm and is able to provide forecasts of arrival time and speed at any target in the inner heliosphere .altogether , elevo needs 8 input parameters : the ellipse angular half - width , , and inverse aspect ratio , , the propagation direction , , the start time , , and initial speed , , the latter two at the radial distance , , the mean background solar wind speed , , and the drag parameter , . using the newly developed ellipse conversion method ( elcon ,see appendix ) in combination with fixed- fitting ( fpf ; * ? ? ?* ; * ? ? ?* ) , as discussed below , it is possible to estimate all of these parameters ( barring and ) based on heliospheric imager observations only , consistent with all the requirements of elevo ; we call this hi - based alternative to elevo , elevohi .the required parameters can be provided by a combination of elcon and dbm fitting .it is possible to derive from the fpf technique , which is an easy and fast approach applied to hi elongation profiles , i.e. no other data is needed . 
using only stereo / hi data, we need to assume and .figure [ fig : fchart ] shows the forecasting scheme of elevohi .the blue ellipses show the different components of the method , the grey parts show the parameters required by , and obtained from , each of these components .starting at the top of the flowchart , we acquire the time - elongation profile , , from hi observations .fpf analysis of the time - elongation profile provides an estimate of as input for elcon , which then converts the elongation profile to radial distance assuming an elliptical geometry ( see appendix ) .as noted previously , and , also required as input to elcon , must be assumed . the distance profile produced by elconis then fitted using the dbm building the derivative of both yields the speed profile .the required parameters are then input into elevo , which forecasts the arrival time , , and the arrival speed , , of the cme .as noted above , we term this entire procedure elevohi .the first step in forecasting cme arrival time and speed using elevohi is to track the time - elongation profile of the cme in the ecliptic plane .for this purpose the satplot tool is very convenient . with this tool we can extract the cme track from a time - elongation map ( commonly called a j - map ; * ? ? ? * ) and , additionally , a fpf analysis can be performed .time - elongation profiles for all cmes , presented by and used in this article , are extracted using the satplot tool .similar to the three conversion methodologies , well used for interpreting stereo / hi data , based on fixed- , harmonic mean ( hm ; * ? ? ?* ; * ? ? ?* ) and self - similar expansion ( sse ; * ? ? ?* ) geometries , elcon can be used to convert elongation to radial distance .the mathematical derivation of the elcon method is given in the appendix . by applying equation [ eq : rell ] from the appendix ,the observed elongation angle along with a cme is detected , , can be converted into the radial distance , , of the cme apex from sun center .it is assumed that the line - of - sight from the observer forms the tangent to the leading edge of the cme , similar to the hm and sse conversion methods .figure [ fig : elcon ] illustrates two elliptically shaped cme fronts with different inverse aspect ratios , namely ( blue ellipse ) and ( orange ellipse ) . in this depiction , both cme fronts are observed at the same elongation angle , , in the same propagation direction , .moreover , both have the same angular half - width , . from this figureit is clear that the time of impact of the cme at the in - situ observatory would not only depend upon its radial speed , but would also clearly depend critically upon the cme s inverse aspect ratio ( as well , of course , as the angular offset between the in situ observatory and the cme apex ) .moreover , it is also clear , that for a cme where ( where is the semi - major axis and is the semi - minor axis ) it is particularly important to have an accurate propagation direction in order to achieve an accurate arrival time . for converting the elongation of the cme in the stereo / hi observations to radial distance , we need to input the assumed half - width , , and inverse aspect ratio , , and the propagation direction , . here, we obtain the latter parameter by fitting the time - elongation profile from stereo / hi using the fpf method . the fixed- fitting method ( fpf ; * ? ? ?* ; * ? ? 
?* ) is a commonly used , fast and easy method to predict arrival times of cmes from heliospheric imagery .although it has some major disadvantages , in particular the point - like shape and constant radial speed assumed for the cme , it does not , in fact , show a larger error than more sophisticated methods assuming an extended shape for the cme front ( see * ? ? ?* ) . to obtain the propagation direction from fpf , ,whose results we use here , fitted the time - elongation track up to elongation .note that fpf uses the same data as elcon , i.e. no additional data is needed .this is a big advantage of fpf over other potential methods , such as the graduated cylindrical shell model ( gcs ; * ? ? ?* ) for determining the input value of for elcon .the disadvantage when using fpf is , that we have to assume the input parameters and for elcon .other methods , such as gcs , could potentially provide estimates of and .the equation to fit the time - elongation profile is given by where is the distance between sun center and the observer .figure [ fig : fpf ] shows an example of stereo / hi time - elongation profile ( diamonds ) manually tracked using satplot ( from * ? ? ?* their event number 7 ) .the fpf , hmf and ssef methods can also be applied within satplot , the solid line shows the best fpf fit to the tracked points .the event displayed is event number 5 in table [ tab : results ] .having obtained from fpf as an input parameter for the elcon conversion method ( as previously noted , we use values from * ? ? ?* ) , we can apply elcon to the hi time - elongation profile . as noted above, we need to assume and .elcon yields the radial distance profile of the ellipse apex from sun center , , and the corresponding speed profile as input for dbm fitting . after converting the stereo / hi elongations to radial distances by applying elcon, we fit the elcon time - distance profile using the dbm developed by .the dbm considers the influence of the drag force acting on solar wind transients during their propagation through interplanetary space .it is based on the assumption that , beyond a distance of about 15 r from the sun , the driving lorentz force can be neglected and the drag force can be considered as the predominant force affecting the propagation of a cme . under these circumstances ,the equation of motion of a cme can be expressed as + w t + r_{\rm init } , \label{eq : dbm}\ ] ] where is the radial distance from sun - center , is the drag parameter ( usually ranging from km to km ) , and are the initial speed and distance , respectively , and is the background solar wind speed .the sign is positive when and negative when . implemented a dbm fitting technique , in which a solar wind model proposed by was used to provide parameter as an input to the model .in contrast to that work , our version of the dbm fitting produces the best - fit value of as a quasi - output , as its value is constrained by input in situ measurements ( as discussed below ) .the mean , , minimum , , and maximum , , values of the in situ solar wind speed at the detecting spacecraft ( either stereo or _wind _ ) , over the same time range as the remote observations , are used to define a range of possible values for . 
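the drag-based equation of motion quoted above has a closed-form solution, r(t) = +-(1/gamma) ln[1 +- gamma (v_init - w) t] + w t + r_init, with the plus sign when the cme is faster than the ambient wind and the minus sign otherwise. a minimal fit of this kinematic profile to the elcon time - distance points might look as follows; the choice of scipy's curve_fit, the starting values, the bounds and all variable names are our own and not part of the original pipeline.

import numpy as np
from scipy.optimize import curve_fit

def dbm_distance(t, r_init, v_init, gamma, w):
    # drag-based-model apex distance; t in s since t_init, distances in km,
    # speeds in km/s, gamma in 1/km
    sign = 1.0 if v_init >= w else -1.0
    return sign / gamma * np.log(1.0 + sign * gamma * (v_init - w) * t) + w * t + r_init

def dbm_speed(t, v_init, gamma, w):
    sign = 1.0 if v_init >= w else -1.0
    return (v_init - w) / (1.0 + sign * gamma * (v_init - w) * t) + w

def fit_dbm(t, r, w, r_init):
    # fit v_init and gamma for a fixed ambient wind speed w and start distance r_init;
    # t: seconds since the chosen t_init, r: elcon apex distances in km
    model = lambda tt, v_init, gamma: dbm_distance(tt, r_init, v_init, gamma, w)
    popt, _ = curve_fit(model, t, r, p0=(800.0, 1e-7),
                        bounds=([100.0, 1e-9], [3500.0, 3e-6]))
    rms = float(np.sqrt(np.mean((model(np.asarray(t), *popt) - np.asarray(r)) ** 2)))
    return {"v_init": popt[0], "gamma": popt[1], "w": w, "rms": rms}

repeating this fit for each candidate ambient wind speed and keeping the solution with the smallest residual reproduces the selection step described next.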
using these values , five different dbm fitsare performed , for , , , .the value of that yields the fit with the smallest residuals , defines the value of used .the drag parameter , , is also output from the dbm .this parameter is a combination of various properties of the cme and can be expressed as , where is the dimensionless drag coefficient , is the cross section area of the cme , is the solar wind density , and is the cme mass . note that is , however , fitted as a single parameter .figure [ fig : dbmf ] shows an example of a time - distance profile in units of au ( upper panel ) , resulting from the application of elcon to a hi1/hi2 time - elongation profile of cme number 5 from ( black crosses ) ; the lower panel shows the corresponding speed profile of the cme apex , which is obtained by differentiating the elcon time - distance profile .the light blue vertical lines in the lower panel mark the standard deviation in the velocity resulting from a measurement error of ( for hi1 ) and ( for hi2 ) in elongation .these elongation errors , similar to the measurements themselves , are converted to a distance error ( and subsequently to a speed error ) using elcon .the errors in the time - distance profile are so small as to be not visible .the blue curve in the upper panel of figure [ fig : dbmf ] represents the dbm time - distance fit .the dbm speed profile , the blue line in the lower panel , is obtained by differentiating the dbm time - distance fit .one difficulty , when applying the dbm fit to the elcon output , is defining the starting time , , and the corresponding starting distance , , of the fit .the dbm only considers forces akin to `` aerodynamic '' drag , so it is only really valid over the altitude regime covered by the hi observations .note that cor2 data is included in satplot as well as hi data .the best value of for the dbm fit is chosen as that point that yields the best overall fit , i.e. that which gives the smallest residuals .this varies for each event , and for every combination of assumed angular width and assumed aspect ratio .the average starting point for the dbm fit in our sample of cmes lies at r .depending on and , the value of is taken from the elcon speed - profile corresponding to .the mean value of the fitting residuals over all 21 cme fitted lies between 1.5 and 1.8 r . by applying the dbm fitting procedure , we acquire all input parameters to conduct the last step of elevohi , namely to run elevo , which provides the forecast of arrival time and speed . the elevo model can be used to predict cme arrival times and speeds at any specified point in interplanetary space ( usually the location of a spacecraft making in situ solar wind measurements ) .elevo assumes the same elliptical geometry as elcon and includes the dbm to simulate the propagation of the cme beyond the extent of the observations . in the past, elevo has been run based on coronagraph data . to run elevo, one needs to know the following cme parameters : , , , , and as well as and .the latter five parameters are , as explained above , gained through the combination of elcon and dbm fitting . the propagation direction , , results from fpf .use of fpf means that and have to be assumed .figure [ fig : elevo ] shows an example of an elevo run .different times during the propagation of the cme are plotted in different colors .all parameters input to elevo for this cme ( event number 20 in table [ tab : results ] ) are written in the upper right and lower left corners of the figure . 
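the final elevo step is essentially geometric: at each time the elliptical front is placed with its apex at the dbm distance, and the heliocentric distance of the front along the direction of the in situ target is read off; the cme has arrived once that distance reaches the target. the sketch below solves the ray - ellipse intersection directly rather than using the closed-form off-axis correction from the appendix; the parameterization (semi-axis b along the propagation direction, semi-axis a perpendicular to it, both supplied as functions of the apex distance) is our assumption and would have to be tied to the half width and aspect ratio as in the paper.

import numpy as np

def front_distance(r_apex, a, b, delta):
    # heliocentric distance of the elliptical front at angle delta (rad) from the
    # apex direction; semi-axis b along the propagation direction, semi-axis a
    # perpendicular to it, so the ellipse center sits at r_apex - b from the sun.
    # returns nan if the ray at angle delta misses the front (|delta| too large)
    xc = r_apex - b
    A = (np.cos(delta) / b) ** 2 + (np.sin(delta) / a) ** 2
    B = -2.0 * xc * np.cos(delta) / b ** 2
    C = (xc / b) ** 2 - 1.0
    disc = B ** 2 - 4.0 * A * C
    if disc < 0.0:
        return float("nan")
    return (-B + np.sqrt(disc)) / (2.0 * A)  # outer root = leading edge

def arrival(times, r_apex, delta, r_target, a_of_r, b_of_r):
    # step through the dbm apex profile and return the first time at which the
    # front along the target direction (angular offset delta) reaches r_target;
    # a_of_r, b_of_r give the semi-axes as functions of the apex distance
    for t, r in zip(times, r_apex):
        d = front_distance(r, a_of_r(r), b_of_r(r), delta)
        if np.isfinite(d) and d >= r_target:
            return t, d
    return None, None

for a self-similarly expanding front one might, for example, take b_of_r = lambda r: c1 * r and a_of_r = lambda r: c2 * r, with c1 and c2 fixed by the assumed half width and inverse aspect ratio; the arrival speed then follows from differencing front_distance along the target direction.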
for the time of the red colored front, the speed of the cme apex and the cme speed in the direction of the in situ observatory are also marked on the panel .in our application of elevohi to 21 of the 24 cmes previously analyzed by , we have used three different inverse aspect ratios ( ) to test and assess the performance of elevohi .note that corresponds to the sse ( circular ) geometry with the same angular width .as noted above , as input propagation direction for each cme , we use the corresponding fpf values from , who tracked the cmes up to about elongation . note that the fixed- geometry corresponds to the sse and elcon geometries with .for all cmes , we use the same value of the half - width ( ) . in reality , of course , every cme is different and one would not expect them to have the same half - width , or indeed aspect ratio . as we are only introducing and testing elevohi here, we have decided to keep our analysis as simple as possible and have hence fixed the half - width to . following studies will no doubt use different values for the half - width . as mentioned above , using the graduated cylindrical shell model ( gcs ; * ? ? ?* ) , based on coronagraph or even hi data , it is possible to derive and individually . in table[ tab : taball ] in the appendix the resulting fitting parameters for each event and the three half - widths tested are given .llrcrrrrrr 1 & 2008 apr 29 13:21 & 430 & & 2.42 & 106 & 5.92&82 & 10.25 & 56 + 2 & 2008 jun 6 15:35 & 403 & & 1.22 & 11 & 1.72 & 7 & 2.38 & 3 + 3 & 2008dec 31 01:45 & 447 & & 6.48 & & 5.98 & & 1.82 & + 4 & 2009 feb 18 10:00 & 350 & & 9.53 & & 8.73 & & 7.23 & + 5 & 2010 apr 5 07:58 & 735 & & 6.13 & & 6.13 & & 9.13 & + 6 & 2010 apr 11 12:14 & 431 & & 6.62 & 20 & 10.78 & & 16.12 & + 7 & 2010 may 28 01:52 & 370 & & & 22 & & 16 & & 9 + 8 & 2010 jun 20 23:02 & 400 & & 2.20 & 18 & 3.12 & 9 & 5.12 & + 9 & 2010 aug 3 17:05 & 581 & & 8.15 & & 8.15 & & 9.15 & + 10 & 2011 feb 18 00:48 & 497 & & & 28 & & 34 & & 32 + 11 & 2011 aug 4 21:18 & 413 & & 16.43 & 50 & 12.43 & 84 & 9.60 & 117 + 12 & 2011 sep 9 11:46 & 489 & & 7.77 & 78 & 16.47 & & 0.60 & 219 + 13 & 2011 oct 24 17:38 & 503 & & 11.37 & & 8.70 & & 9.38 & + 14 & 2012 jan 22 05:28 & 415 & & 8.00 & 16 & 3.00 & 54 & & 89 + 15 & 2012 jan 24 14:36 & 638 & & 5.95 & & 2.95 & 27 & 0.95 & 61 + 16 & 2012 mar 7 03:28 & 501 & & 16.68 & & 12.68 & 24 & 8.68 & 80 + 17 & 2012 mar 8 10:24 & 679 & & 5.43 & 135 & 2.77 & 198 & 0.77 & 259 + 18 & 2012 mar 12 08:28 & 489 & & 12.98 & 92 & 13.32 & 85 & 14.48 & 71 + 19 & 2012 apr 23 02:14 & 383 & & 3.18 & 21 & & 87 & & 36 + 20 & 2012 jun 16 19:34 & 494 & & 8.10 & & 4.10 & 24 & 1.27 & 46 + 21 & 2012 jul 14 17:38 & 617 & & 2.32 & 14 & 2.78 & & 0.95 & 12 + in order to assess the ability of elevohi to improve the accuracy of predicting cme arrival times and speeds , we compare the outcome to the commonly used hi fitting methods , fpf , harmonic mean fitting ( hmf ; * ? ? ? * ) and self - similar expansion fitting ( ssef ; * ? ? ?hmf assumes a circular cme front with a half - width of , which means that the circle is always attached to the sun .ssef assumes a circular cme frontal shape as well but the half - width is variable . 
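for reference, the single-spacecraft conversions that elevohi is compared against have simple closed forms: fixed-phi and harmonic mean are the lambda -> 0 and lambda -> 90 degree limits of the self-similar expansion conversion, and the circular (f = 1) special case of elcon coincides with sse. in the sketch below, eps is the observed elongation, phi the angle between the observer - sun line and the propagation direction, d the observer's heliocentric distance, and lam the half width (all angles in radians); these are the standard expressions from the hi literature, written in our own notation.

import numpy as np

def fp_distance(eps, phi, d):
    # fixed-phi: point-like cme propagating at angle phi from the observer - sun line
    return d * np.sin(eps) / np.sin(eps + phi)

def sse_distance(eps, phi, d, lam):
    # self-similar expansion: circular front of half width lam, line of sight
    # tangent to the circle; returns the apex distance
    return d * np.sin(eps) * (1.0 + np.sin(lam)) / (np.sin(eps + phi) + np.sin(lam))

def hm_distance(eps, phi, d):
    # harmonic mean: circle anchored at the sun, i.e. the lam = 90 deg limit of sse
    return sse_distance(eps, phi, d, np.pi / 2.0)

# example: elongation 30 deg seen from 0.96 au, propagation 60 deg from the
# observer - sun line, sse half width 45 deg (distances in au, angles in rad)
eps, phi, d = np.radians(30.0), np.radians(60.0), 0.96
print(fp_distance(eps, phi, d), hm_distance(eps, phi, d),
      sse_distance(eps, phi, d, np.radians(45.0)))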
for the events in the list , have set .figure [ fig : forecomp ] shows the resulting forecasts of the arrival times and speeds of all 21 cmes considered in this study , for the three different values for used , along with the fpf , hmf and ssef predictions .the upper panel shows the differences in arrival time ( ) , where values indicate that a cme was predicted to arrive earlier than it actually arrived in situ ( at 1 au ) and values indicate that a cme was predicted to arrive after it actually arrived .while the fpf , hmf and ssef techniques ( plotted using yellow , orange and red circles , respectively ) tend to predict cme arrival too early ( hrs , hrs , hrs ) , elevohi ( light , medium and dark blue bars ) nearly always predicts cme arrival too late .the best elevohi arrival time forecast is found using , with hrs ( light blue bars ) . using ( equivalent to the sse geometry ) results in hrs ( medium blue bars ) , using leads to hrs ( dark blue bars ) .the lower panel of figure [ fig : forecomp ] shows the equivalent plot for the arrival speed ( ) . for our sample of cmes ,at least , elevohi provides a substantial improvement in forecasting cme speed when compared to the fpf , hmf and ssef techniques , for which km s , km s and km s .the mean difference between the forecasted and observed in situ arrival speeds is least for , with km s . using , we find km s and using gives km s .table [ tab : results ] lists all analyzed events and quotes and for each of the three inverse aspect ratios used in this study .figure [ fig : histogram ] shows frequency distributions of ( upper panel ) and ( lower panel ) for elevohi .the blue , grey , and white areas ( the latter bounded by a dashed line ) represent , , and , respectively .regardless of which aspect ratio is used , all arrival time forecasts lie within the range hrs hrs , compared to fpf ( hrs hrs ) , hmf ( hrs hrs ) and ssef ( hrs hrs ) .the minimum and maximum values of for the elevohi speed prediction are and km s , respectively . because of the simplistic assumption of constant speed invoked by fpf , hmf and ssef , the values of from these methods are much greater , ranging from to km s for fpf , from to km s for hmf and from to km s for ssef .the average root mean square values of and from elevohi , over all aspect ratios used , are hrs and km s , respectively .the corresponding values for fpf , hmf and ssef are hrs and km s , hrs and km s , and hrs and km s , respectively .since the analyzed events cover an interval extending from 2008 until 2012 , this study includes both periods of low and high solar activity .forecasts are , however , more accurate during times of low solar activity .table [ tab : split ] shows the mean values and standard deviations of and of the elevohi , fpf , hmf and ssef forecasts for events 110 ( 2008early 2011 ) and events 1121 ( 20112012 ) .compared to solar maximum , we find the elevohi arrival time forecast to be more accurate during low solar activity by about hrs .the arrival speed forecast shows the same behavior , were the difference between solar minimum and maximum is km s .this behavior is even larger for fpf , hmf and ssef especially for the arrival speed forecasts . compared the performance of the dbm model with that of the `` wsa - enlil+cone '' model .they found that the latter yielded a mean arrival time error of hrs . in order to make their dbm forecast ,three different combinations of and were used .dbm yielded a mean arrival time error of hrs , over all combinations . 
splitting their sample of 50 cmes by solar activity , revealed ( as we also see here ) a smaller arrival time error during solar minimum conditions and a larger error during solar maximum . for four cmes in our event list ( events 1 , 3 , 4 , and 6 ) , we find that the resulting drag parameter , , is higher than usual ( km ) . for one event , km , which may be a consequence of the dbm fit starting too close to the sun .however , for this event , the fit does not converge if one assumes a later value for .nevertheless , the forecasted arrival time and speed errors are not significantly worse than for other cmes .there are several possible reasons why this fit might yield an `` unphysically '' high value for the drag parameter .one such possibility may be the difference between the assumed and true background solar wind speed acting on the cme during its propagation .note that the value of the solar wind speed ultimately used by elevo to provide time / speed estimates is that value ( from the range delimited by the minimum and maximum solar wind speed over the course of the hi observations ) that gives the best dbm fit .forecasting the arrival speed of cmes plays a major role in the prediction of geomagnetic storm intensities .the strength of a geomagnetic storm is quantified through the use of several geomagnetic indices , one being the disturbance storm time ( ) index . and have developed models to derive from solar wind parameters , in particular the component of the magnetic field vector and the solar wind speed .determining the magnetic field orientation within cmes , prior to their arrival at earth , particularly , is one of the most important topics in space weather research and operational space weather .while such forecasting of is as yet unachievable in practice , elevohi does appear to provide a reliable speed forecast that could be used to model .in order to assess this approach , we have calculated the index for a cme ( event number 20 in our list ) that had a shock arrival speed of km s , and that resulted in a moderate geomagnetic storm with a minimum of nt .we used the imf component measured in situ by _ wind _ and a variety of different values of cme arrival speeds ( within the errors of elevohi and the fpf method ) to model using the method of obrien & mcpherron .use of the measured in situ arrival speed , results in a modeled minimum of nt . adding the mean arrival speed error over all cmes , km s , from elevohi with , to the in situ cme speed for this interval , yields a minimum value for of nt ; adding instead the standard deviation of km s gives nt .adding km s , the mean arrival speed error from fpf , to the in situ cme speed , gives nt ; adding the fpf standard deviation of 301 km s results in nt .correctly predicting cme arrival speed at earth is of great importance for modeling the intensities of geomagnetic storms this issue appears to be very well addressed using elevohi .a very important factor for space weather forecasting is the prediction lead time .this is the time between when the prediction is performed until the impact of the cme .the prediction lead times of elevohi for the cmes under study lie in the range of hrs , which is similar to that quoted by as we use a subset of those events . 
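the dst estimates quoted above follow the injection - decay formulation of obrien & mcpherron, in which the pressure-corrected index dst* is driven by the solar wind electric field v bs and relaxes with a vbs-dependent decay time. the sketch below integrates that model with the commonly quoted coefficient values; those values, the explicit euler time stepping and the variable names are assumptions on our part and should be checked against the original formulation before quantitative use.

import numpy as np

def obrien_mcpherron_dst(v, bz, pdyn, dt_hours=1.0, dst0=0.0):
    # v: solar wind speed [km/s], bz: imf bz [nT], pdyn: dynamic pressure [nPa],
    # all on a regular time grid of spacing dt_hours; returns dst [nT]
    v, bz, pdyn = map(np.asarray, (v, bz, pdyn))
    bs = np.where(bz < 0.0, -bz, 0.0)                    # southward field only
    vbs = v * bs * 1e-3                                  # convection electric field [mV/m]
    q = np.where(vbs > 0.49, -4.4 * (vbs - 0.49), 0.0)   # ring-current injection [nT/h]
    tau = 2.40 * np.exp(9.74 / (4.69 + vbs))             # decay time [h]
    dst_star = np.empty_like(vbs, dtype=float)
    x = dst0
    for i in range(len(vbs)):
        x = x + dt_hours * (q[i] - x / tau[i])           # explicit euler step
        dst_star[i] = x
    return dst_star + 7.26 * np.sqrt(pdyn) - 11.0        # pressure-corrected dst [nT]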
using a shorter hi track to extend the prediction lead time ,would likely lead to some increase in forecasting errors .however , this still needs to be investigated .lrrrrr & & & & & + & & & & & + & & & & & + fpf & & & & & + hmf & & & & & + ssef & & & & & +we have introduced the new hi - based cme forecasting utility , elevohi , which assumes an elliptically shaped cme front , that adjusts , through drag , to the background solar wind speed during its propagation . included within elevohiis the newly presented conversion method , elcon , which converts the time - elongation profile of the cme obtained from heliospheric imagery into a time - distance profile assuming an elliptical cme geometry ; the resultant time - distance profile is used as input into the subsequent stage of elevohi , the dbm fit . as a last stage of elevohi ,all of the resulting parameters are input in the ellipse evolution model ( elevo ; * ? ? ?* ) , which then predicts arrival times and speeds .we have assessed the efficacy of the new elevohi procedure by forecasting the arrival times and speeds of 21 cmes , previously analyzed by , which were detected in situ at 1 au . in our implementation of elevohi ,the cme propagation directions were provided by the fixed- fitting ( fpf ) method .elevohi predictions of arrival times and speeds were compared to the output of other single - spacecraft hi - based methods , specifically fpf , harmonic mean fitting ( hmf ) and self - similar expansion fitting ( ssef ) . we have found that elevohi performs somewhat better at forecasting cme arrival time than the fpf , hmf and ssef methods . applying elevohi with ,results in a mean error between predicted and observed arrival times of hrs ; the equivalent values for fpf , hmf and ssef are hrs , hrs and hrs , respectively .hence , while fpf , hmf and ssef tend to forecast the cme arrival too early ( cf . * ? ? ?* ) , elevohi has a tendency to predict their arrival too late .a substantial improvement is shown in forecasting the arrival speed . the mean error between the modeled and observed arrival speeds for elevohi ( ) is km s , whereas for fpf , hmf and ssef km s , km s and km s , respectively .this improvement has a direct impact on the accuracy of predicting the intensity of geomagnetic storms at earth . demonstrated that the drag force is dominant at heliocentric distances of 1550 r .below this distance , the lorentz force still influences cme kinematics .our study supports this conclusion ; we find dbm applicable beyond a mean heliocentric distance , , of r .as pointed out by , it is quite likely that the fpf method ( as well as hmf and ssef ) performs better if it is applied to data starting at a larger heliocentric distance as well .elevohi requires the presence of a heliospheric imager instrument providing a side view of the sun - earth line .these data must be available in near real - time , with an acceptable quality , to be able to use them for predicting cme arrival .the necessity of hosting heliospheric imagers at l4 and/or l5 is obvious when one compares the efficacy of using hi data to forecast cme arrival to methods using coronagraph data .for example , of the coronagraph - driven dbm and wsa - cone model enlil arrival time forecasts lie within the range of hrs . 
using elevohi , with the benefit of hi observations ,this value is improved to around hrs .our study shows that there is no significant difference between the three aspect ratios used .a follow up study may discover the most appropriate curvature ( and indeed angular width ) to select by comparing the results to multiple longitudinally separated spacecraft detecting the cme arrival .we use the ellipse conversion method ( elcon ) , to calculate the distance of the cme apex to sun center , , as a function of elongation , , which is the angle between the sun - observer line and the line of sight , assumed to be the tangent to the elliptical cme front .the elongation of the cme front , , is available from stereo / hi imagery .we also know the distance of the observer to the sun , .the cme propagation direction , , can be obtained by fixed- fitting ( see section [ sec : input ] ) , the inverse ratio of the semi - axes , , and the cme half - width , , need to be assumed . figure [ fig : meth ] illustrates the elcon geometry . by applying the sine rule to the yellow trianglewe find similarly , we find is given as using equations [ eq : romega ] and [ eq : romegamc ] , and by applying the sine rule to the orange triangle , we can make the following ansatz : the angles and can be expressed as and furthermore , the distances of the two tangent points to the ellipse center , and , can be expressed as and by equating the expressions for in equations [ eq : ansatzc ] we can solve for the semi - minor axis as with note that . in the case of , we have to replace by .now we are able to calculate using the second equation in ansatz [ eq : ansatzc ] and find the distance of the cme apex from sun center : to calculate the arrival time of the cme front at any location in interplanetary space , we need to account for the offset between the direction of the cme apex and the direction of the location of interest , the so - called off - axis correction .the distance from sun center of the cme front at an angular offset from its axis , , as presented by in their equation 12 , is given by & & & & & & + 1 & & 2008 apr 26 23:05 & 30.2 & 953 & 556 & 3.06 + 2 & & 2008 jun 2 14:38 & 21.2 & 373 & 424 & 2.5 + 3 & & 2008 dec 27 10:34 & 10.6 & 714 & 404 & 2.6 + 4 & & 2009 feb 13 08:32 & 6.9 & 373 & 292 & 3.07 + 5 & & 2010 apr3 16:42 & 35.8 & 1145 & 589 & 0.76 + 6 & & 2010 apr 8 04:01 & 3.6 & 989 & 480 & 6.83 + 7 & & 2010 may 24 02:08 & 22 & 465 & 375 & 0.93 + 8 & & 2010 jun 16 22:39 & 18.6 & 358 & 492 & 0.22 + 9 & & 2010 aug 1 15:34 & 40.3 & 810 & 526 & 0.86 + 10 & & 2011 feb 15 03:11 & 11.7 & 847 & 516 & 1.43 + 11 & & 2011 aug 2 10:34 & 21.8 & 562 & 377 & 0.23 + 12 & & 2011 sep 7 05:12 & 25.8 & 608 & 361 & 0.04 + 13 & & 2011 oct 22 09:40 & 27.1 & 669 & 352 & 0.21 + 14 & & 2012 jan 19 19:28 & 19.8 & 889 & 294 & 0.23 + 15 & & 2012 jan 23 07:33 & 39.6 & 2237 & 470 & 0.43 + 16 & & 2012 mar 5 07:39 & 17.4 & 1086 & 406 & 0.41 + 17 & & 2012 mar 7 02:00 & 16.6 & 1480 & 606 & 0.26 + 18 & & 2012 mar 10 19:17 & 12.3 & 1883 & 489 & 0.45 + 19 & & 2012 apr 19 19:05 & 13.3 & 815 & 366 & 0.75 + 20 & & 2012 jun 14 15:49 & 15.4 & 1349 & 381 & 0.37 + 21 & & 2012 jul 12 20:47 & 26.2 & 1079 & 373 & 0.14 + & & & & & & + 1 & & 2008 apr 26 23:05 & 30.3 & 952 & 556 & 3.08 + 2 & & 2008 jun 2 14:38 & 21.7 & 379 & 424 & 1.63 + 3 & & 2008 dec 27 10:34 & 10.6 & 717 & 404 & 2.01 + 4 & & 2009 feb 13 14:34 & 19.8 & 355 & 292 & 3.31 + 5 & & 2010 apr 3 16:06 & 35.8 & 1143 & 589 & 0.7 + 6 & & 2010 apr 8 04:01 & 3.6 & 989 & 480 & 5.99 + 7 & & 2010 may 24 
02:08 & 22.2 & 468 & 375 & 1.01 + 8 & & 2010 jun 16 22:39 & 18.8 & 360 & 492 & 0.18 + 9 & & 2010 aug 1 15:34 & 40.5 & 822 & 526 & 0.55 + 10 & & 2011 feb 15 05:12 & 20.7 & 720 & 516 & 0.72 + 11 & & 2011 aug 2 10:34 & 22.0 & 570 & 377 & 0.12 + 12 & & 2011 sep 7 10:44 & 38.1 & 1448 & 423 & 1.59 + 13 & & 2011 oct 22 09:40 & 27.2 & 674 & 352 & 0.15 + 14 & & 2012 jan 19 19:28 & 20.2 & 908 & 294 & 0.18 + 15 & & 2012 jan 23 07:33 & 40.4 & 2318 & 470 & 0.37 + 16 & & 2012 mar 5 07:39 & 17.8 & 1114 & 406 & 0.32 + 17 & & 2012 mar 7 02:00 & 17.8 & 1519 & 606 & 0.19 + 18 & & 2012 mar 10 19:17 & 12.3 & 1888 & 489 & 0.41 + 19 & & 2012 apr 19 20:36 & 18.8 & 625 & 348 & 0.16 + 20 & & 2012 jun 14 16:20 & 15.6 & 1438 & 381 & 0.46 + 21 & & 2012 jul 12 17:45 & 6.9 & 1396 & 422 & 0.23 + & & & & & & + 1 & & 2008 apr 26 23:05 & 30.3 & 951 & 556 & 3.10 + 2 & & 2008 jun 2 14:38 & 22.1 & 383 & 424 & 1.03 + 3 & & 2008 dec 27 10:34 & 10.6 & 719 & 454 & 4.89 + 4 & & 2009 feb 13 14:34 & 19.8 & 357 & 292 & 2.09 + 5 & & 2010 apr 3 15:06 & 26.7 & 2052 & 589 & 1.26 + 6 & & 2010 apr 8 04:01 & 3.6 & 989 & 480 & 5.51 + 7 & & 2010 may 24 02:08 & 22.4 & 469 & 375 & 1.07 + 8 & & 2010 jun 16 22:39 & 18.9 & 361 & 492 & 0.17 + 9 & & 2010 aug 1 15:34 & 40.7 & 830 & 526 & 0.39 + 10 & & 2011 feb 15 05:12 & 20.8 & 725 & 516 & 0.49 + 11 & & 2011 aug 2 10:34 & 22.1 & 574 & 377 & 0.06 + 12 & & 2011 sep 7 05:12 & 26.1 & 623 & 391 & 0.06 + 13 & & 2011 oct 22 11:41 & 34.4 & 738 & 352 & 0.21 + 14 & & 2012 jan 19 19:28 & 20.4 & 921 & 294 & 0.15 + 15 & & 2012 jan 23 07:33 & 40.9 & 2377 & 470 & 0.33 + 16 & & 2012 mar 5 05:39 & 9.1 & 1054 & 406 & 0.19 + 17 & & 2012 mar 7 02:00 & 17.2 & 1546 & 606 & 0.15 + 18 & & 2012 mar 10 19:17 & 12.3 & 1892 & 489 & 0.38 + 19 & & 2012 apr 19 19:05 & 13.5 & 832 & 366 & 0.53 + 20 & & 2012 jun 14 15:49 & 15.8 & 1399 & 381 & 0.27 + 21 & & 2012 jul 12 17:45 & 6.9 & 1409 & 422 & 0.2 +
|
in this study , we present a new method for forecasting arrival times and speeds of coronal mass ejections ( cmes ) at any location in the inner heliosphere . this new approach enables the adoption of a highly flexible geometrical shape for the cme front with an adjustable cme angular width and an adjustable radius of curvature of its leading edge , i.e. the assumed geometry is elliptical . using , as input , stereo heliospheric imager ( hi ) observations , a new elliptic conversion ( elcon ) method is introduced and combined with the use of drag - based model ( dbm ) fitting to quantify the deceleration or acceleration experienced by cmes during propagation . the result is then used as input for the ellipse evolution model ( elevo ) . together , elcon , dbm fitting , and elevo form the novel elevohi forecasting utility . to demonstrate the applicability of elevohi , we forecast the arrival times and speeds of 21 cmes remotely observed from stereo / hi and compare them to in situ arrival times and speeds at 1 au . compared to the commonly used stereo / hi fitting techniques ( fixed- , harmonic mean , and self - similar expansion fitting ) , elevohi improves the arrival time forecast by about hours to hours and the arrival speed forecast by km s to km s , depending on the ellipse aspect ratio assumed . in particular , the remarkable improvement of the arrival speed prediction is potentially beneficial for predicting geomagnetic storm strength at earth .
|
supergranules are cellular flow structures observed in the solar photosphere with typical diameters of about 30 Mm and lifetimes of about one day . they cover the entire surface of the sun and are intimately involved with the structure and evolution of the magnetic field in the photosphere . the magnetic structures of the chromospheric network form at the boundaries of these cells , and magnetic elements are shuffled about the surface as the cells evolve . the diffusion of magnetic elements by the evolving supergranules has long been associated with the evolution of the sun 's magnetic field [ ] . supergranules were discovered by . while these cellular flows were quickly identified as convective features [ ] , the difficulty of detecting any associated thermal features consistent with that identification ( i.e. hot cell centers ) has made this identification somewhat problematic [ ] . the rotation of the supergranules has added further mystery to their nature . cross - correlated the equatorial doppler velocity patterns and found that the supergranules rotated more rapidly than the plasma at the photosphere and that even faster rotation rates were obtained when longer ( 24-hour vs. 8-hour ) time intervals were used . he attributed this behavior to a surface shear layer [ proposed by and and modeled by ] in which larger , longer - lived cells extend deeper into the more rapidly rotating layers . used data from mount wilson observatory to find the rotation rate at different latitudes and noted that the rotation rates for the doppler pattern were some 4% faster than the spectroscopic rate and , more interestingly , some 2% faster than the magnetic features and sunspots . more recently , used a 2d fourier transform method to find that the doppler pattern rotates more rapidly than the shear layer itself and that larger features do rotate more rapidly than the smaller features . they suggested that supergranules have wave - like characteristics with a preference for prograde propagation . in a previous paper [ ] we showed that this `` super - rotation '' of the doppler pattern could be attributed to projection effects associated with the doppler signal itself . as the velocity pattern rotates across the field of view , its line - of - sight component is modulated in a way that essentially adds another half wave and gives a higher rotation rate that is a function of wavenumber . in that paper we took a fixed velocity pattern ( which had spatial characteristics that matched the soho / mdi data ) and rotated it rigidly to show this `` super - rotation '' effect . while this indicated that the doppler projection effect should be accounted for , the fixed pattern could not account for all the variations reported by . furthermore , when `` dividing out '' the line - of - sight modulation , he still saw prograde and retrograde moving components . in this paper we report on our analyses of simulated data in which the supergranules are advected by a differential rotation that varies with both latitude and depth . the data is designed to faithfully mimic the soho / mdi data that was analyzed in and , and the analyses are reproductions of those done in earlier studies . the full - disk doppler images from soho / mdi [ ] are obtained at a 1-minute cadence to resolve the temporal variations associated with the p - mode oscillations . we
[ c.f . and ] have temporally filtered the images to remove the p - mode signal by using a 31-minute long tapered gaussian with a fwhm of 16 minutes on sets of 31 images that were de - rotated to register each to the central image . series of these filtered images were formed at a 15-minute cadence over the 60-day mdi dynamics runs in 1996 and 1997 . this filtering process effectively removes the p - mode signal and leaves behind the doppler signal from flows with temporal variations longer than about 16 minutes . supergranules , with typical wavenumbers of about 110 , are very well resolved in this data ( at disk center wavenumbers of about 1500 are resolved ) . while granules are not well resolved , they do appear in the data as pixel - to - pixel and image - to - image `` noise , '' as a convective blue shift ( due to the correlation between brightness and updrafts ) , and as resolved structures for the largest members . the simulated data are constructed in the manner described in , , , and from vector velocities generated by an input spectrum of complex spectral coefficients for the radial , poloidal , and toroidal components . to simulate the observed line - of - sight velocity , the three vector velocity components are calculated on a grid with 1024 points in latitude and 2048 points in longitude . a doppler velocity image is constructed by determining the longitude and latitude at a point on the image , finding the vector velocity at that point using bi - cubic interpolation , and then projecting that vector velocity onto the line - of - sight . the line - of - sight velocities at an array of 16 points within each pixel are determined and the average taken to simulate the integration over a pixel in the acquisition of the actual mdi doppler data . with the current simulations we have added two changes that were not included in our previous work on individual doppler images . first , we add velocity `` noise '' at each pixel . this represents the contribution from the spatially unresolved granules that , nonetheless , have temporal variability that is not filtered out by the 31-minute temporal filter . this noise has a center - to - limb variation due to the foreshortened area covered by each pixel and is randomly varied from pixel to pixel and from one doppler image to the next . the noise level is determined by matching the initial drop in correlation from one image to the next that is seen in the mdi data . secondly , we treat the instrumental blurring in a more realistic manner . previously we took the doppler velocity image and convolved it with an mdi point - spread - function . we now make red and blue intensity images from our doppler velocity image and a simple limb darkened intensity image , convolve those with an mdi point - spread - function , and construct a blurred doppler velocity image from the difference divided by the sum . this process yields a doppler velocity image that is virtually indistinguishable from an mdi doppler velocity image .
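as a rough illustration of the projection and sub - pixel averaging step described above , the sketch below builds a line - of - sight velocity image from a prescribed vector field on the visible hemisphere . it is not the pipeline used in this paper : it omits the spectral - coefficient input , the granule noise , and the psf blurring , it assumes b0 = 0 , and the toy flow field and its amplitude are our own choices .

```python
import numpy as np

def doppler_image(vfield, n=128, nsub=2):
    """project a (v_r, v_east, v_north) field onto the line of sight for an n x n
    disk image, averaging nsub x nsub sub-pixel samples per pixel (b0 = 0 assumed)."""
    img = np.full((n, n), np.nan)
    centers = (np.arange(n) + 0.5) / n * 2.0 - 1.0                 # pixel centers, in solar radii
    offsets = ((np.arange(nsub) + 0.5) / nsub - 0.5) * (2.0 / n)   # sub-pixel offsets
    for j, y0 in enumerate(centers):
        for i, x0 in enumerate(centers):
            samples = []
            for dy in offsets:
                for dx in offsets:
                    x, y = x0 + dx, y0 + dy
                    if x * x + y * y >= 1.0:
                        continue                                   # off the visible disk
                    z = np.sqrt(1.0 - x * x - y * y)
                    lat = np.arcsin(y)
                    lon = np.arctan2(x, z)
                    vr, ve, vn = vfield(lon, lat)
                    # project onto the +z direction (towards the observer)
                    samples.append(vr * np.cos(lat) * np.cos(lon)
                                   - ve * np.sin(lon)
                                   - vn * np.sin(lat) * np.cos(lon))
            if samples:
                img[j, i] = np.mean(samples)
    return img

def toy_flow(lon, lat):
    """a single horizontal harmonic standing in for the supergranule pattern (m/s)."""
    return 0.0, 100.0 * np.cos(110 * lon) * np.cos(lat), 0.0

img = doppler_image(toy_flow)
```

even this stripped - down version shows the key point exploited later : a purely horizontal flow produces no doppler signal at disk center and a signal that grows toward the limb , which is the line - of - sight modulation discussed in the introduction .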
the velocity pattern is evolved in time by introducing changes to the spectral coefficients based on two processes : the advection by an axisymmetric zonal flow ( differential rotation ) and random processes that lead to the finite lifetime of the cells . the advection by the differential rotation is governed by an advection equation where is a vector velocity component and is the differential rotation profile . we represent as a series of spherical harmonic components and project this advection equation onto a single spherical harmonic , which gives a series of coupled equations for the evolution of the spectral coefficients . solid - body rotation simply introduces a phase variation for each coefficient . differential rotation couples the change in one spectral coefficient to spectral coefficients with wavenumbers and for differential rotation dependent on and . the finite lifetimes of the cells are simulated by introducing random perturbations to the spectral coefficient amplitudes and phases . the size of these perturbations increases with wavenumber to give shorter lifetimes to smaller cells . several analysis programs were applied to both the mdi data and the simulated data . convection spectra for individual images were obtained using the methods described by and : the doppler signal due to the motion of the observer is removed , the convective blue shift signal is identified and removed , the data is mapped to heliographic coordinates , the axisymmetric flow signals due to differential rotation and meridional circulation are identified and removed , and the remaining signal is projected onto spherical harmonics . the averaged spectra from the 1996 mdi dynamics run and from our 10-day simulated data run are nearly perfectly matched at all wavenumbers . this match is obtained by adjusting the input spectrum for the simulated data . this spectrum contains two lorentzian - like spectral components : a supergranule component centered on with a width of about 100 and a granule component centered on with a width of about 4000 . the mdi spectrum is well matched with just these two components , without the addition of a mesogranule component [ ] . in fact , we find a distinct dip in the spectrum at wavenumbers that should be representative of mesogranules . this dip is also seen in spectra of the mdi high - resolution data [ ] . additional analyses are applied to the data after it has been mapped onto heliographic coordinates . longitudinal strips of this data centered on latitudes from south to north were cross - correlated with corresponding strips from later images , as was done by and . the longitudinal shift of the cross - correlation peak gives the rotation rate , while the height of this peak is associated with cell lifetimes . these strips were also fourier analyzed in longitude to get spectral coefficients , and those coefficients were fourier analyzed in time over 10-day intervals , as was done by , to get rotation rates as functions of wavenumber .
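the coefficient evolution and the strip cross - correlation described above can be illustrated with a one - dimensional analogue on a single latitude circle . this is a schematic toy rather than the spherical - harmonic implementation used in the paper ; the rotation rate , the lifetime scaling with wavenumber , and the spectrum shape are stand - in assumptions .

```python
import numpy as np

rng = np.random.default_rng(1)
nlon, mmax = 2048, 300
omega = 2.86e-6                  # rad/s, roughly the equatorial rotation rate (assumed)
dt = 8 * 3600.0                  # 8-hour lag, as in the cross-correlation analyses
m = np.arange(mmax + 1)

# toy input spectrum: a supergranule-like bump around m ~ 110
amp = np.exp(-0.5 * ((m - 110.0) / 50.0) ** 2)
coeff = amp * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, mmax + 1))

def advance(c, dt):
    """advect by solid-body rotation and decorrelate high-m modes faster (finite lifetimes)."""
    tau = 2.0e5 * 110.0 / np.maximum(m, 1)          # lifetime in seconds, decreasing with m (toy)
    eps = np.sqrt(np.clip(dt / tau, 0.0, 1.0))
    noise = (rng.normal(size=mmax + 1) + 1j * rng.normal(size=mmax + 1)) / np.sqrt(2.0)
    return np.sqrt(1.0 - eps**2) * c * np.exp(-1j * m * omega * dt) + eps * amp * noise

def pattern(c):
    """real-space pattern on the latitude circle from the positive-m coefficients."""
    spec = np.zeros(nlon, dtype=complex)
    spec[:mmax + 1] = c
    return (np.fft.ifft(spec) * nlon).real

p0 = pattern(coeff)
p1 = pattern(advance(coeff, dt))

# circular cross-correlation to recover the apparent rotation rate
xc = np.fft.ifft(np.fft.fft(p1) * np.conj(np.fft.fft(p0))).real
lag = int(np.argmax(xc))
lag = lag - nlon if lag > nlon // 2 else lag
print("apparent rate:", lag * 2.0 * np.pi / nlon / dt, "rad/s  (input:", omega, ")")
```

the recovered rate should be close to the input omega ; residual differences come from the random decorrelation term , and letting omega depend on latitude and wavenumber is the analogue of the depth - dependent differential rotation imposed in the simulation .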
fig . 2a shows the equatorial rotation rate as a function of wavenumber for the simulated data , while fig . 2b shows the diagram . these should be compared to fig . 4 of and fig . 8 of , respectively . we have produced simulated data in which the cellular structures ( supergranules ) are advected by differential rotation and evolve by uncorrelated random changes . when we compare results from analyses of this data with those from analyses of the mdi data , we find that the simulated data exhibits the same characteristics as the mdi data : the visual structures , the power spectra , the rotation characteristics , and the evolution rates all match . while some of these characteristics have been attributed to wave - like properties [ c.f . and ] , our simulated data is simply advected by a zonal flow ( differential rotation ) with speeds that never exceed those determined from helioseismology [ ] . the differential rotation we impose does , however , have a dependence on wavenumber . if we assume that the rotation rate of cells with diameters , , reflects the rotation rate at a depth , , then the surface shear layer indicated by our differential rotation has a thickness of about 20 Mm , somewhat thinner than the 30 Mm suggested by helioseismic inversions [ ] . we would like to thank nasa for its support of this research through a grant from the heliophysics guest investigator program to nasa marshall space flight center and the university of texas arlington . we would also like to thank the soho / mdi team for the critical role they played in producing the raw mdi data and john beck in particular for implementing the temporal averaging of that data to remove the p - mode noise .
scherrer , p. h. , bogart , r. s. , bush , r. i. , hoeksema , j. t. , kosovichev , a. g. , schou , j. , rosenberg , w. , springer , l. , tarbell , t. d. , title , a. , wolfson , c. j. , zayer , i. , and the mdi engineering team 1995 , , 162 , 129
schou , j. , antia , h. m. , basu , s. , bogart , r. s. , bush , r. i. , chitre , s. m. , christensen - dalsgaard , j. , di mauro , m. p. , dziembowski , w. a. , eff - darwich , a. , gough , d. o. , haber , d. a. , hoeksema , j. t. , howe , r. , korzennik , s. g. , kosovichev , a. g. , larsen , r. m. , pijpers , f. p. , scherrer , p. h. , sekii , t. , tarbell , t. d. , title , a. m. , thompson , m. j. , & toomre , j. 1998 , , 505 , 390
|
we produce a 10-day series of simulated doppler images at a 15-minute cadence that reproduces the spatial and temporal characteristics seen in the soho / mdi doppler data . our simulated data contains a spectrum of cellular flows with just two necessary components : a granule component that peaks at wavenumbers of about 4000 and a supergranule component that peaks at wavenumbers of about 110 . we include the advection of these cellular components by a differential rotation profile that depends on latitude and wavenumber ( depth ) . we further mimic the evolution of the cellular pattern by introducing random variations to the amplitudes and phases of the spectral components at rates that reproduce the level of cross - correlation as a function of time and latitude . our simulated data do not include any wave - like characteristics for the supergranules , yet they accurately reproduce the rotation characteristics previously attributed to wave - like behavior .
|